In this lecture, we will revisit word embeddings, now that we have encountered recurrent neural networks, and look at contextualized word representations. A brief illustrative sketch of contextualized token vectors follows the readings below.
Lectures and readings
- (*) Peters, Matthew, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. “Deep Contextualized Word Representations.” In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2227-2237. 2018.
- (*) Peters, Matthew, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. “Dissecting Contextual Word Embeddings: Architecture and Representation.” In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1499-1509. 2018.
- (*) Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” arXiv preprint arXiv:1810.04805 (2018).
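These readings share the idea that a word's vector should depend on the sentence it appears in. As a rough, self-contained sketch of that idea (not code from any of the papers; the two-layer bidirectional LSTM and all sizes and token ids are illustrative, in the spirit of ELMo's biLM), the snippet below contrasts a static embedding lookup, which assigns a word the same vector in every sentence, with contextual vectors from a BiLSTM, which differ by context:

```python
# Minimal sketch: static embeddings vs. contextual (BiLSTM) token vectors.
# All dimensions, layer counts, and token ids below are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, emb_dim, hidden_dim = 1000, 64, 64

embed = nn.Embedding(vocab_size, emb_dim)               # static: one vector per word type
bilstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                 bidirectional=True, batch_first=True)   # produces context-dependent states

# Token id 7 appears in two different contexts.
sent_a = torch.tensor([[3, 7, 12, 5]])
sent_b = torch.tensor([[9, 2, 7, 4]])

# Static lookup: identical vectors regardless of context.
print(torch.equal(embed(sent_a)[0, 1], embed(sent_b)[0, 2]))   # True

# Contextual representation: each position's vector also reflects its
# left and right context, so the two occurrences of token 7 differ.
ctx_a, _ = bilstm(embed(sent_a))   # shape (1, 4, 2 * hidden_dim)
ctx_b, _ = bilstm(embed(sent_b))
print(torch.equal(ctx_a[0, 1], ctx_b[0, 2]))                    # False
```

ELMo additionally trains the bidirectional LSTM as a language model and lets downstream tasks learn a weighted combination of its layers, while BERT replaces the recurrent encoder with a deep bidirectional Transformer pre-trained with masked-token prediction.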