# Reading 18: Dimensionality Reduction

*For the class on Monday, April 8th*

## Reading assignments

Read the following sections of [ICVG20] (note that many subsections are skipped):

Chap. 7 “Dimensionality and Its Reduction”

Sec. 7.1 “The Curse of Dimensionality”

Sec. 7.2 “The Data Sets Used in This Chapter”

Sec. 7.3 “Principal Component Analysis”

Sec. 7.3.2 “The Application of PCA”

*(You can skip all other subsections under 7.3)*

*(Skip Sec. 7.4)*

Sec. 7.5 “Manifold Learning”

Sec. 7.5.1 “Locally Linear Embedding”

*(first paragraph only; rest is optional)*

*(You can skip all other subsections under 7.5)*

## Questions

**Hint:** Submit your answer on Canvas. Due at noon, Monday, April 8th.

List anything from your reading that confuses you, and explain why it confuses you.

**You are strongly encouraged to think about what questions you have about the reading**, but if you really have no questions at all, please briefly summarize what you have learned from this reading assignment.

PCA is a way to transform the input features.

What is the constraint on the transformation? (Can it be any possible transformation, or of only a certain kind?)

What does the transformation aim to achieve? (After the transformation, what does the first principal component correspond to?)
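As a concrete illustration of both questions, here is a minimal NumPy sketch (the toy data and variable names are illustrative, not from the reading) showing that the PCA transformation is an orthogonal linear map, and that its first principal component is the direction of maximum variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data whose variance lies mostly along one direction.
x = rng.normal(size=200)
data = np.column_stack([x, 0.3 * x + 0.05 * rng.normal(size=200)])

# PCA via eigendecomposition of the covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order].T         # rows are principal components

# The constraint: the transformation is orthogonal (a rotation/reflection),
# so components @ components.T is the identity matrix.
assert np.allclose(components @ components.T, np.eye(2))

# The aim: after projecting, the first coordinate (the first principal
# component) carries the largest share of the variance.
projected = centered @ components.T
print(projected.var(axis=0))  # variance is largest in the first column
```

The eigenvector with the largest eigenvalue is exactly the first principal component; the remaining components follow in decreasing order of variance explained.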

## Discussion Preview

**Note:** We will discuss the following in class. They are included here so that you have a chance to think about them before class.
You need *not* submit your answers as part of this assignment.

Dimensionality reduction is sometimes called “embedding,” especially in modern ML. If we think of PCA as a standard technique of dimensionality reduction, then we can think of embedding as a more general and flexible way to achieve a similar goal. We will discuss the concept of “embedding” and some of its examples.
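To seed this discussion, here is a hedged sketch (assuming scikit-learn is available; the synthetic 3-D point cloud is my own, not from the reading) that maps the same data to two dimensions with PCA and with a nonlinear embedding, locally linear embedding from the reading's Sec. 7.5.1:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)

# A 3-D point cloud lying near a curved 2-D surface.
t = rng.uniform(-2.0, 2.0, size=300)
X = np.column_stack([np.sin(t),
                     rng.uniform(0.0, 1.0, size=300),
                     np.sign(t) * (np.cos(t) - 1.0)])

# PCA: a linear projection onto 2 dimensions.
pca_2d = PCA(n_components=2).fit_transform(X)

# LLE: a nonlinear embedding that tries to preserve local neighborhoods.
lle_2d = LocallyLinearEmbedding(n_components=2,
                                n_neighbors=10).fit_transform(X)

print(pca_2d.shape, lle_2d.shape)  # both assign each point 2-D coordinates
```

Both methods "embed" each point into a lower-dimensional space; the difference is that PCA is restricted to a linear map, while manifold-learning embeddings may unroll curved structure.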