IEEE Transactions on Visualization and Computer Graphics, 25(1), 2019.
Abstract
With the recent advances in deep learning, neural network models have obtained state-of-the-art performance on many linguistic tasks in natural language processing. However, this rapid progress also brings enormous challenges. The opaque nature of neural network models leads to systems that are hard to debug and mechanisms that are difficult to interpret. Here, we introduce a visualization system that, through a tight yet flexible integration between visualization elements and the underlying model, allows a user to interrogate the model by perturbing the input, internal state, and prediction while observing changes in other parts of the pipeline. We use the natural language inference problem as an example to illustrate how a perturbation-driven paradigm can help domain experts assess the potential limitations of a model, probe its inner states, and interpret and form hypotheses about fundamental model mechanisms such as attention.
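The perturbation-driven idea described above can be illustrated with a minimal, self-contained sketch: perturb an internal state (here, a toy premise-to-hypothesis attention matrix) and observe how the prediction shifts. The model, weights, and data below are hypothetical stand-ins for illustration only, not the NLIZE system or its underlying NLI model.

    # Minimal sketch of perturbation-driven interrogation of an internal state.
    # All components here are toy stand-ins, not the NLIZE system.
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def toy_nli_forward(premise, hypothesis, W_out, attention_override=None):
        """Toy attention-based NLI scorer over fixed token embeddings.

        premise:    (m, d) premise token embeddings
        hypothesis: (n, d) hypothesis token embeddings
        W_out:      (2*d, 3) output weights for {entail, neutral, contradict}
        attention_override: optional (n, m) matrix replacing the computed attention
        Returns (class_probs, attention) so the caller can inspect internal state.
        """
        scores = hypothesis @ premise.T            # (n, m) alignment scores
        attention = softmax(scores, axis=-1)       # each hypothesis token attends over premise
        if attention_override is not None:
            attention = attention_override
        aligned = attention @ premise              # (n, d) premise summary per hypothesis token
        features = np.concatenate([hypothesis.mean(0), aligned.mean(0)])  # (2*d,)
        return softmax(features @ W_out), attention

    # Hypothetical inputs: 4 premise tokens, 3 hypothesis tokens, 8-dim embeddings.
    premise = rng.normal(size=(4, 8))
    hypothesis = rng.normal(size=(3, 8))
    W_out = rng.normal(size=(16, 3))

    probs, attn = toy_nli_forward(premise, hypothesis, W_out)

    # Perturbation: suppress one attention cell, renormalize, and re-run the
    # rest of the pipeline to see how much the prediction moves.
    perturbed = attn.copy()
    perturbed[0, 0] = 0.0
    perturbed /= perturbed.sum(axis=-1, keepdims=True)
    probs_perturbed, _ = toy_nli_forward(premise, hypothesis, W_out,
                                         attention_override=perturbed)

    print("original prediction :", np.round(probs, 3))
    print("perturbed prediction:", np.round(probs_perturbed, 3))
    print("L1 change in output :", np.abs(probs - probs_perturbed).sum())

In the paper's interactive setting, this observe-after-perturb loop is driven from linked visualization views rather than a script; the sketch only captures the underlying cause-and-effect probing of an attention state.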
Links
- Link to paper
- See on Google Scholar
Bib Entry
@article{liu2019nlize,
  author  = {Liu, Shusen and Li, Zhimin and Li, Tao and Srikumar, Vivek and Pascucci, Valerio and Bremer, Peer-Timo},
  title   = {{NLIZE: A Perturbation-Driven Visual Interrogation Tool for Analyzing and Interpreting Natural Language Inference Models}},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  year    = {2019},
  volume  = {25}
}