Tao Li, Vivek Gupta, Maitrey Mehta and Vivek Srikumar
EMNLP 2019.

Abstract

While neural models show remarkable accuracy on individual predictions, their internal beliefs can be inconsistent across examples. In this paper, we formalize such inconsistency as a generalization of prediction error. We propose a learning framework for constraining models using logic rules to regularize them away from inconsistency. Our framework can leverage both labeled and unlabeled examples and is directly compatible with off-the-shelf learning schemes without model redesign. We instantiate our framework on natural language inference, where experiments show that enforcing invariants stated in logic can help make the predictions of neural models both accurate and consistent.
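
To make the framework concrete, the sketch below shows one way a logic rule could be relaxed into a differentiable penalty and added to a standard training loss, in the spirit of the recipe described in the abstract. It is only an illustration: the symmetry rule, the product t-norm relaxation, and all names in the code (symmetry_consistency_loss, contradiction_idx, lam) are assumptions made here, not the authors' released implementation.

  # Minimal PyTorch sketch, assuming a 3-way NLI label space and a
  # product t-norm relaxation of the symmetry rule
  #   Contradict(P, H) -> Contradict(H, P).
  import torch
  import torch.nn.functional as F

  def symmetry_consistency_loss(logits_ph, logits_hp, contradiction_idx=2):
      # Under the product t-norm, the implication A -> B is penalized by
      # max(0, log p_A - log p_B), which is zero whenever p_B >= p_A.
      log_p = F.log_softmax(logits_ph, dim=-1)[:, contradiction_idx]  # p(contradict | P, H)
      log_q = F.log_softmax(logits_hp, dim=-1)[:, contradiction_idx]  # p(contradict | H, P)
      return torch.clamp(log_p - log_q, min=0.0).mean()

  # Hypothetical training step: the penalty uses no gold labels, so it can
  # also be computed on unlabeled premise-hypothesis pairs.
  # loss = F.cross_entropy(logits_ph, labels) \
  #        + lam * symmetry_consistency_loss(logits_ph, logits_hp)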

Bib Entry

 
  @inproceedings{li2019logic-driven,
    author    = {Li, Tao and Gupta, Vivek and Mehta, Maitrey and Srikumar, Vivek},
    title     = {{A Logic-Driven Framework for Consistency of Neural Models}},
    booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    year      = {2019}
  }