Tao Li and Vivek Srikumar
ACL 2019.
Abstract
Today, the dominant paradigm for training neural networks
involves minimizing task loss on a large dataset. Using world
knowledge to inform a model while retaining the ability to
perform end-to-end training remains an open question. In this
paper, we present a novel framework for introducing declarative
knowledge to neural network architectures in order to guide
training and prediction. Our framework systematically compiles
logical statements into computation graphs that augment a
neural network without extra learnable parameters or manual
redesign. We evaluate our modeling strategy on three tasks:
machine comprehension, natural language inference, and text
chunking. Our experiments show that knowledge-augmented networks
can strongly improve over baselines, especially in low-data
regimes.
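To make the compilation idea concrete, below is a minimal sketch of how one logical rule could augment a network's output under a soft-logic relaxation, written in PyTorch. This is not the paper's exact formulation or API; the function name augment_logit and the constraint strength rho are illustrative. The sketch only mirrors the abstract's claim that the augmentation adds no extra learnable parameters.

import torch

# Illustrative rule: A(x) -> B(x). Under a soft relaxation, a high truth
# value a in [0, 1] for the antecedent A pushes up the logit of the
# consequent B. rho is a fixed hyperparameter, so the augmentation
# introduces no learnable parameters.
def augment_logit(z_b: torch.Tensor, a: torch.Tensor, rho: float = 1.0) -> torch.Tensor:
    """Return the logit of B after applying the rule A -> B with strength rho."""
    return z_b + rho * a

# Toy usage: the network alone is slightly against B (logit -0.3), but the
# antecedent A is observed to hold (a = 1.0), so the rule nudges B upward.
z_b = torch.tensor([-0.3])
a = torch.tensor([1.0])
print(torch.sigmoid(augment_logit(z_b, a, rho=2.0)))  # larger than sigmoid(-0.3)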
Bib Entry
@inproceedings{li2019augmenting,
  author    = {Li, Tao and Srikumar, Vivek},
  title     = {Augmenting Neural Networks with First-order Logic},
  booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
  year      = {2019}
}