Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), 2012.
Abstract
This paper presents a model that extends semantic role labeling. Existing approaches independently analyze relations expressed by verb predicates or those expressed as nominalizations. However, sentences express relations via other linguistic phenomena as well. Furthermore, these phenomena interact with each other, thus restricting the structures they articulate. In this paper, we use this intuition to define a joint inference model that captures the inter-dependencies between verb semantic role labeling and relations expressed using prepositions. The scarcity of jointly labeled data presents a crucial technical challenge for learning a joint model. The key strength of our model is that we use existing structure predictors as black boxes. By enforcing consistency constraints between their predictions, we show improvements in the performance of both tasks without retraining the individual models.
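The following is a minimal, illustrative sketch of the "black-box predictors plus consistency constraints" idea described in the abstract. Every name, label, score, and the toy consistency rule below are assumptions made for illustration only; the paper's actual model operates over full verb SRL and preposition relation structures, whereas this sketch simply searches over pairs of candidate labels.

# Illustrative sketch only: combines two hypothetical black-box predictors by
# picking the highest-scoring pair of outputs that passes a consistency check.
# Labels, scores, and the rule are invented for this example, not taken from the paper.

from itertools import product

def joint_inference(verb_candidates, prep_candidates, consistent):
    """Return the highest-scoring (verb label, preposition label) pair that
    satisfies the consistency check; fall back to the independent argmaxes
    if no consistent pair exists."""
    best, best_score = None, float("-inf")
    for (v_label, v_score), (p_label, p_score) in product(verb_candidates, prep_candidates):
        if not consistent(v_label, p_label):
            continue  # skip joint assignments that violate the constraint
        score = v_score + p_score
        if score > best_score:
            best, best_score = (v_label, p_label), score
    if best is None:  # no consistent pair: keep the independent predictions
        best = (max(verb_candidates, key=lambda c: c[1])[0],
                max(prep_candidates, key=lambda c: c[1])[0])
    return best

# Hypothetical scored outputs from two independently trained predictors for a
# phrase like "moved to Chicago": the verb-argument label of "to Chicago" and
# the relation expressed by the preposition "to".
verb_candidates = [("ARG2", 0.45), ("ARGM-LOC", 0.40)]
prep_candidates = [("Destination", 0.50), ("Recipient", 0.30)]

# Toy consistency rule: a Destination preposition must align with a
# location- or direction-like verb argument.
def consistent(v_label, p_label):
    if p_label == "Destination":
        return v_label in {"ARGM-LOC", "ARGM-DIR"}
    return True

print(joint_inference(verb_candidates, prep_candidates, consistent))
# Independently, the predictors would choose ("ARG2", "Destination"), which the
# rule forbids; the joint search instead returns ("ARGM-LOC", "Destination"),
# the kind of correction the consistency constraints are meant to produce.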
Bib Entry
@inproceedings{clarke2012nlp-curator,
  author    = {Clarke, James and Srikumar, Vivek and Sammons, Mark and Roth, Dan},
  title     = {{An NLP Curator (or: How I Learned to Stop Worrying and Love NLP Pipelines)}},
  booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
  year      = {2012}
}