# Reading 17: Density Estimation and Clustering

*For the class on Wednesday, April 3rd*

## Reading assignments

Read the following sections of [ICVG20] (note that many subsections are skipped):

- Sec. 4.4.1 “Gaussian Mixture Model”
- Sec. 4.4.3 “The Basics of the Expectation Maximization Algorithm”
- Sec. 6.1.1 “Kernel Density Estimation”
- Sec. 6.3.1 “Gaussian Mixture Model”
- Sec. 6.4 “Finding Clusters in Data”
- Sec. 6.4.1 “General Aspects of Clustering and Unsupervised Learning”
- Sec. 6.4.2 “Clustering by Sum-of-Squares Minimization: K-Means”
- Sec. 6.4.5 “Clustering Procedurally: Hierarchical Clustering”
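The readings cover the Gaussian mixture model and the EM algorithm. As a concrete reference point while reading Sec. 4.4.3, here is a minimal numpy sketch (not part of the assignment) of EM for a two-component 1-D Gaussian mixture, run on synthetic data of my own choosing:

```python
import numpy as np

# Synthetic 1-D data drawn from two Gaussians (illustrative only; not from [ICVG20]).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.6, 150), rng.normal(2.0, 0.6, 150)])

# Initial guesses for mixture weights, means, and standard deviations.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibilities r[i, k] = posterior probability that x[i]
    # was generated by component k, given the current parameters.
    lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = lik / lik.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances from the weighted data.
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

After a few dozen iterations the estimated means should land near the true component means (-2 and 2), which is the alternating E-step/M-step behavior the reading describes.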

## Questions

**Hint:** Submit your answer on Canvas. Due at noon, Wednesday, April 3rd.

List anything from your reading that confuses you, and explain why it confuses you.

**You are strongly encouraged to think about what questions you have about the reading.** If you really have no questions at all, please briefly summarize what you have learned from this reading assignment.

How can a good density estimate (of a set of unlabeled data points) help with the clustering task? Use Figure 6.1 (page 246) of [ICVG20] as an example in your explanation.
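As a concrete intuition for the density-estimation question (a sketch of my own, not a substitute for the book's Figure 6.1): on synthetic 1-D data from two separated groups, a kernel density estimate has one mode per group, and the low-density valley between the modes suggests a natural cluster boundary.

```python
import numpy as np

# Two well-separated synthetic 1-D "clusters" (illustrative data of my own choosing).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3.0, 0.5, 100), rng.normal(3.0, 0.5, 100)])

def kde(x, samples, h=0.5):
    """Gaussian kernel density estimate at points x with bandwidth h."""
    diffs = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

grid = np.linspace(-6, 6, 241)
density = kde(grid, data)

# The two modes of the estimated density sit near the two cluster centers.
left_mode = grid[grid < 0][np.argmax(density[grid < 0])]
right_mode = grid[grid >= 0][np.argmax(density[grid >= 0])]
```

Here `left_mode` and `right_mode` come out near -3 and +3, the centers the data was generated from; assigning points to the nearest mode recovers the two groups.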

## Discussion Preview

**Note:** We will discuss the following questions in class. They are included here so that you have a chance to think about them before class. You need *not* submit your answers as part of this assignment.

We will compare different clustering methods and the general principles behind them.

We will discuss a few ways that we can evaluate clustering methods.