Explaining Natural Language Inference with Factual and Template Memory Networks
Abstract
In the era of artificial intelligence, neural models have emerged as a powerful tool for tackling a wide range of tasks. However, these models are commonly regarded as black-box systems, making it difficult to understand their internal workings. The natural language explanation task seeks to elucidate the decisions of a black-box system by generating human-understandable explanations. This task is important for natural language understanding systems in high-stakes domains such as medicine and law. While numerous existing studies are capable of performing the task, they rely on end-to-end training, which leaves them black-box systems in their own right.
In this work, we focus on the natural language explanation task for natural language inference. The task aims to explain, in text, the relationship between two sentences, labeled as entailment, contradiction, or neutral. We propose a memory network that utilizes factual knowledge obtained through weakly supervised reasoning and template knowledge extracted by rules and heuristics. Experiments show that our approach achieves state-of-the-art performance on the e-SNLI dataset. Our analyses further verify the roles of both the factual and template memories.
