This lecture covers algorithms we can use when we do not have full supervision for our structured output.
Lecture and readings
- Lecture slides
- [video] (The EM algorithm)
- Some notes on the EM algorithm: Radford M. Neal and Geoffrey E. Hinton, "A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants," in Learning in Graphical Models, pp. 355-368, Springer, 1998.
- (*) Chun-Nam John Yu and Thorsten Joachims, Learning Structural SVMs with Latent Variables, ICML 2009.
- (*) Noah A. Smith and Jason Eisner, Contrastive Estimation: Training Log-Linear Models on Unlabeled Data, ACL 2005.
- (*) Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester and Deva Ramanan, Object Detection with Discriminatively Trained Part-Based Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010.
- (*) Ming-Wei Chang, Dan Goldwasser, Dan Roth and Vivek Srikumar, Discriminative Learning over Constrained Latent Representations, NAACL 2010.
- (*) Ming-Wei Chang, Vivek Srikumar, Dan Goldwasser and Dan Roth, Structured Output Learning with Indirect Supervision, ICML 2010.
- (*) Ariadna Quattoni, Michael Collins and Trevor Darrell, Conditional Random Fields for Object Recognition, NIPS 2005.
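The EM algorithm covered in the lecture can be illustrated with a minimal sketch: fitting a two-component one-dimensional Gaussian mixture, where each point's component assignment is the unobserved latent variable. The data, initialization, and iteration count below are purely illustrative, not part of any of the readings.

```python
import numpy as np

# Minimal EM sketch for a two-component 1-D Gaussian mixture.
# The component assignment of each point is the latent variable.
# All data and initial parameter choices are illustrative.

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200),
                    rng.normal(3.0, 1.0, 200)])

def gaussian_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Initial guesses for mixing weights, means, and variances.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    lik = pi[None, :] * gaussian_pdf(x[:, None], mu[None, :], var[None, :])
    resp = lik / lik.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from the expected assignments.
    n_k = resp.sum(axis=0)
    pi = n_k / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    var = (resp * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / n_k

print(np.sort(mu))  # the estimated means should approach the true -2 and 3
```

Because the E-step uses soft (probabilistic) assignments rather than hard ones, each iteration is guaranteed not to decrease the data log-likelihood, which is the property the Neal and Hinton reading generalizes.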