This lecture gives an overview of methods for mapping words into vector spaces.
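As a concrete (toy) illustration of such a mapping, the sketch below builds word vectors with a classic count-based pipeline: collect co-occurrence counts, reweight them with positive pointwise mutual information (PPMI), and reduce dimensionality with a truncated SVD. The corpus, window size, and dimensionality here are illustrative choices, not taken from any of the readings; the starred Levy and Goldberg (2014) paper below shows that skip-gram with negative sampling implicitly factorizes a closely related shifted-PMI matrix.

```python
import numpy as np

# Toy corpus; real pipelines use corpora with billions of tokens.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 word window.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[idx[w], idx[sent[j]]] += 1.0

# Positive PMI: max(0, log p(w, c) / (p(w) p(c))).
total = counts.sum()
pw = counts.sum(axis=1, keepdims=True) / total
pc = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(counts / total / (pw * pc))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

# Truncated SVD yields dense k-dimensional word vectors.
k = 3
U, S, _ = np.linalg.svd(ppmi)
vectors = U[:, :k] * S[:k]  # one row per vocabulary word

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words that share contexts ("cat"/"dog") tend to get similar vectors.
print(cosine(vectors[idx["cat"]], vectors[idx["dog"]]))
```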
Readings
- Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. “Efficient estimation of word representations in vector space.” arXiv preprint arXiv:1301.3781 (2013).
- Pennington, Jeffrey, Richard Socher, and Christopher Manning. “GloVe: Global vectors for word representation.” In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543. 2014.
- Mikolov, Tomas, Wen-tau Yih, and Geoffrey Zweig. “Linguistic regularities in continuous space word representations.” In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 746-751. 2013.
- Levy, Omer, and Yoav Goldberg. “Linguistic regularities in sparse and explicit word representations.” In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pp. 171-180. 2014.
- Bengio, Yoshua, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. “A neural probabilistic language model.” Journal of Machine Learning Research 3, no. Feb (2003): 1137-1155.
- Collobert, Ronan, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. “Natural language processing (almost) from scratch.” Journal of Machine Learning Research 12, no. Aug (2011): 2493-2537.
- (*) Levy, Omer, and Yoav Goldberg. “Neural word embedding as implicit matrix factorization.” In Advances in Neural Information Processing Systems, pp. 2177-2185. 2014.
- (*) Faruqui, Manaal, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. “Retrofitting word vectors to semantic lexicons.” In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1606-1615. 2015.
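Several of the readings above (Mikolov, Yih, and Zweig 2013; Levy and Goldberg 2014) evaluate embeddings with vector-offset analogies of the form a : b :: c : ?, answered by the nearest neighbor of b - a + c under cosine similarity (the additive 3COSADD method). Below is a minimal sketch with hand-picked toy vectors; real evaluations use trained embeddings and large analogy test sets.

```python
import numpy as np

# Toy 3-dimensional vectors, hand-picked for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "apple": np.array([0.1, 0.5, 0.2]),  # distractor word
}

def analogy(a, b, c, vecs):
    """Answer a : b :: c : ? with the word nearest to b - a + c,
    excluding the three query words, as in 3COSADD."""
    target = vecs[b] - vecs[a] + vecs[c]
    best, best_sim = None, -np.inf
    for word, v in vecs.items():
        if word in (a, b, c):
            continue
        sim = target @ v / (np.linalg.norm(target) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "king", "woman", vectors))  # expected: "queen"
```

Levy and Goldberg (2014) also compare this additive objective with a multiplicative variant (3COSMUL), which they report works better, especially for sparse explicit representations.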