Vivek Srikumar and Christopher D. Manning
Advances in Neural Information Processing Systems, 2014.
[Spotlight presentation]
Abstract
In recent years, distributed representations of inputs have led to performance gains in many applications by allowing statistical information to be shared across inputs. However, the predicted outputs (labels, and more generally structures) are still treated as discrete objects even though outputs are often not discrete units of meaning. In this paper, we present a new formulation for structured prediction where we represent individual labels in a structure as dense vectors and allow semantically similar labels to share parameters. We extend this representation to larger structures by defining compositionality using tensor products to give a natural generalization of standard structured prediction approaches. We define a learning objective for jointly learning the model parameters and the label vectors and propose an alternating minimization algorithm for learning. We show that our formulation outperforms structural SVM baselines in two tasks: multiclass document classification and part-of-speech tagging.
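The core scoring idea can be sketched compactly: each label y gets a dense vector a_y, and the score of pairing input features φ(x) with y is the bilinear form w · (φ(x) ⊗ a_y), which reduces to φ(x)ᵀ W a_y once w is reshaped into a matrix W. The Python sketch below illustrates this scoring function together with a toy alternating update; the multiclass setup, dimensions, learning rate, and plain subgradient steps are all illustrative assumptions for this page, not the paper's actual structural-SVM formulation or released code.

```python
import numpy as np

# A minimal sketch, assuming a multiclass setting with hinge loss.
# All dimensions and hyperparameters below are illustrative.
rng = np.random.default_rng(0)
n_features, n_labels, dim = 50, 10, 8

W = rng.normal(scale=0.1, size=(n_features, dim))  # model parameters
A = rng.normal(scale=0.1, size=(n_labels, dim))    # one dense vector per label


def scores(phi_x, W, A):
    """Score of every label: phi(x)^T W a_y.

    This is the bilinear form that w . (phi(x) ⊗ a_y) reduces to
    when the parameter vector w is reshaped into the matrix W.
    """
    return phi_x @ W @ A.T


def hinge_step(phi_x, y, W, A, lr=0.01, update="W"):
    """One subgradient step on the multiclass margin loss.

    Alternating minimization: update W with the label vectors A fixed,
    or A with W fixed. Both arrays are modified in place.
    """
    s = scores(phi_x, W, A)
    s[np.arange(n_labels) != y] += 1.0  # cost-augmented inference
    y_hat = int(np.argmax(s))
    if y_hat == y:
        return  # margin satisfied, zero subgradient
    if update == "W":
        # d(loss)/dW = phi(x) ⊗ (a_{y_hat} - a_y)
        W -= lr * np.outer(phi_x, A[y_hat] - A[y])
    else:
        # Only the rows for y_hat and y receive gradient: W^T phi(x)
        g = phi_x @ W
        A[y_hat] -= lr * g
        A[y] += lr * g


# Toy alternating training loop on random data (illustrative only).
X = rng.normal(size=(200, n_features))
Y = rng.integers(0, n_labels, size=200)
for epoch in range(10):
    which = "W" if epoch % 2 == 0 else "A"  # alternate which block is updated
    for phi_x, y in zip(X, Y):
        hinge_step(phi_x, y, W, A, update=which)
```

Because the loss is convex in W for fixed A (and vice versa), each phase of the alternation is a standard convex problem, which is what makes the alternating scheme natural here; the paper extends the same tensor-product scoring from single labels to larger structures.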
Links
- Link to paper
- Poster
- Summary slides (spotlight presentation at the conference)
- Software
- Supplementary material
- See on Google Scholar
Bib Entry
@inproceedings{srikumar2014learning,
  author    = {Srikumar, Vivek and Manning, Christopher D.},
  title     = {{Learning Distributed Representations for Structured Output Prediction}},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2014}
}