Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP): System Demonstrations, 2018.
Abstract
Neural network models have gained unprecedented popularity in natural language processing due to their state-of-the-art performance and their flexible end-to-end training scheme. Despite these advantages, their lack of interpretability hinders the deployment and refinement of the models. In this work, we present a flexible visualization library for creating customized visual analytic environments, in which the user can investigate and interrogate the relationships among the input, the model internals (i.e., attention), and the output predictions, which in turn sheds light on the model's decision-making process.
Links
- Link to paper
- See on Google Scholar
Bib Entry
@inproceedings{liu2018visuala,
  author    = {Liu, Shusen and Li, Tao and Li, Zhimin and Srikumar, Vivek and Pascucci, Valerio and Bremer, Peer-Timo},
  title     = {{Visual Interrogation of Attention-Based Models for Natural Language Inference and Machine Comprehension}},
  booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP): System Demonstrations},
  year      = {2018}
}