Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study
Mattia Medina Grespan, Ashim Gupta and Vivek Srikumar
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Abstract
Symbolic knowledge can provide crucial inductive bias for training neural models, especially in low-data regimes. A successful strategy for incorporating such knowledge involves relaxing logical statements into sub-differentiable losses for optimization. In this paper, we study the question of how best to relax logical expressions that represent labeled examples and knowledge about a problem; we focus on sub-differentiable t-norm relaxations of logic. We present theoretical and empirical criteria for characterizing which relaxation would perform best in various scenarios. In our theoretical study, driven by the goal of preserving tautologies, the Łukasiewicz t-norm performs best. However, in our empirical analysis on the text chunking and digit recognition tasks, the product t-norm achieves the best predictive performance. We analyze this apparent discrepancy, and conclude with a list of best practices for defining loss functions via logic.
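To make the abstract's comparison concrete, the sketch below shows the product and Łukasiewicz t-norm relaxations of Boolean logic over truth values in [0, 1], and why the goal of preserving tautologies favors Łukasiewicz: it keeps the excluded-middle tautology a OR NOT a = 1, while the product relaxation does not. This is an illustrative sketch, not code from the paper; the function names are my own.

```python
# Illustrative t-norm relaxations of logic over truth values in [0, 1].
# Function names are hypothetical, not from the paper's implementation.

def neg(a):
    # Standard negation shared by both relaxations: NOT(a) = 1 - a
    return 1.0 - a

def product_and(a, b):
    # Product t-norm: AND(a, b) = a * b
    return a * b

def product_or(a, b):
    # Dual t-conorm of the product t-norm (probabilistic sum)
    return a + b - a * b

def lukasiewicz_and(a, b):
    # Łukasiewicz t-norm: AND(a, b) = max(0, a + b - 1)
    return max(0.0, a + b - 1.0)

def lukasiewicz_or(a, b):
    # Łukasiewicz t-conorm: OR(a, b) = min(1, a + b)
    return min(1.0, a + b)

# Excluded middle, a OR NOT a, under each relaxation (a = 0.25):
a = 0.25
print(lukasiewicz_or(a, neg(a)))  # 1.0 -> tautology preserved
print(product_or(a, neg(a)))      # 0.8125 -> not a tautology
```

When such relaxed expressions are used as training losses (e.g., by maximizing the relaxed truth value of a constraint), the choice of t-norm changes the loss surface, which is the discrepancy the paper analyzes.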
Bib Entry
@inproceedings{medina-grespan2021evaluating,
  author    = {Medina-Grespan, Mattia and Gupta, Ashim and Srikumar, Vivek},
  title     = {{Evaluating Relaxations of Logic for Neural Networks: A Comprehensive Study}},
  booktitle = {Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)},
  year      = {2021}
}