Explainable Conversational Question Answering over Heterogeneous Sources
Demo
Please use our code for comparison purposes, rather than this demo. The demo only showcases the general workflow of EXPLAIGNN and how the intermediate results enhance explainability. It runs on a CPU rather than a GPU, so the pipeline had to be adjusted slightly for efficiency (smaller heterogeneous answering graphs, fewer iterations, ...).
Description
In conversational question answering (ConvQA), users express their information needs through a series of utterances with incomplete context. Typical ConvQA methods rely on a single source (a knowledge base (KB), a text corpus, or a set of tables), and thus cannot benefit from the increased answer coverage and redundancy of multiple sources. Our method EXPLAIGNN overcomes these limitations by integrating information from a mixture of sources, together with user-comprehensible explanations for answers. It constructs a heterogeneous graph from entities and evidence snippets retrieved from a KB, a text corpus, web tables, and infoboxes. This large graph is then iteratively reduced via graph neural networks that incorporate question-level attention, until the best answers and their explanations are distilled. Experiments show that EXPLAIGNN improves performance over state-of-the-art baselines. A user study demonstrates that the derived answers are understandable to end users.
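The iterative reduction step described above can be sketched in simplified form. The snippet below is a minimal illustration, not the actual EXPLAIGNN implementation: the function name, the dot-product attention, and the single mixing step standing in for a GNN layer are all assumptions made for clarity. It scores evidence nodes by question-level attention, mixes in attention-weighted graph context, keeps the top fraction, and repeats until only the best candidates remain.

```python
import numpy as np

def iterative_graph_reduction(question_vec, node_vecs, node_labels,
                              shrink_factor=0.5, min_nodes=2):
    """Illustrative sketch (not the real EXPLAIGNN code): iteratively
    reduce a heterogeneous answering graph via question-level attention."""
    keep = np.arange(len(node_vecs))
    while len(keep) > min_nodes:
        # question-level attention: softmax over question-node similarity
        sims = node_vecs[keep] @ question_vec
        attn = np.exp(sims - sims.max())
        attn /= attn.sum()
        # mix each node's score with attention-weighted graph context;
        # this single step is a crude stand-in for a GNN layer
        context = attn @ node_vecs[keep]
        scores = node_vecs[keep] @ question_vec + node_vecs[keep] @ context
        # shrink the graph: keep only the top-scoring fraction of nodes
        k = max(min_nodes, int(len(keep) * shrink_factor))
        keep = keep[np.argsort(scores)[::-1][:k]]
    return [node_labels[i] for i in keep]

# toy usage: evidence "a" is most similar to the question and survives
question = np.array([1.0, 0.0])
nodes = np.array([[0.9, 0.1], [0.1, 0.9], [0.8, 0.2], [0.0, 1.0]])
labels = ["a", "b", "c", "d"]
print(iterative_graph_reduction(question, nodes, labels))
```

In the real pipeline, the surviving nodes serve a dual purpose: they are both the answer candidates and the evidence shown to the user as an explanation, which is why the graph is shrunk gradually rather than filtered in one pass.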