So far in the semester, we have looked at supervised learning algorithms. We now turn to the setting where we must learn with little or no labeled data. Most of this lecture focuses on the expectation-maximization (EM) algorithm and its variants.
We will end this part by examining the connection between EM and the popular K-means algorithm.
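To preview that connection: K-means can be viewed as a "hard-assignment" limit of EM for a spherical Gaussian mixture, where the E-step assigns each point entirely to its nearest center and the M-step recomputes each center as the mean of its assigned points. The sketch below is illustrative only (the function name `kmeans` and the toy data are our own, not from the lecture), assuming NumPy is available:

```python
import numpy as np

def kmeans(X, k, n_iters=50, seed=0):
    """Minimal K-means: the hard-assignment limit of EM for a
    spherical Gaussian mixture with shared, vanishing variance."""
    rng = np.random.default_rng(seed)
    # Initialize centers at k distinct data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # "E-step": hard-assign each point to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        z = dists.argmin(axis=1)
        # "M-step": recompute each center as the mean of its points
        for j in range(k):
            if np.any(z == j):
                centers[j] = X[z == j].mean(axis=0)
    return centers, z

# Toy data: two well-separated blobs in 2D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
centers, z = kmeans(X, k=2)
```

In full (soft) EM, the E-step would instead compute posterior responsibilities for each mixture component, and the M-step would take responsibility-weighted means; the hard argmin above is what collapses EM into K-means.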
Links and Resources
- The Naive Bayes Model, Maximum-Likelihood Estimation, and the EM Algorithm: notes by Michael Collins