Demo


Please use our code for comparison purposes, and not this demo. The demo only showcases the general workflow of EXPLAIGNN, and how the intermediate results enhance explainability. It runs on a CPU rather than a GPU, so the pipeline was adjusted slightly for efficiency (smaller heterogeneous answering graphs, fewer iterations, ...).
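As a rough illustration of such adjustments, the hypothetical settings below contrast a demo configuration with the full pipeline; the parameter names and values are invented for this sketch and do not come from the EXPLAIGNN code.

    # Hypothetical settings contrasting the CPU demo with the full GPU pipeline.
    # Parameter names and values are invented for illustration only.
    DEMO_CONFIG = {
        "device": "cpu",
        "max_graph_evidences": 100,  # smaller heterogeneous answering graphs
        "gnn_iterations": 2,         # fewer reduction iterations
    }
    FULL_CONFIG = {
        "device": "cuda",
        "max_graph_evidences": 500,
        "gnn_iterations": 3,
    }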

Description

In conversational question answering (ConvQA), users express their information needs through a series of utterances with incomplete context. Typical ConvQA methods rely on a single source (a knowledge base (KB), a text corpus, or a set of tables), and thus cannot benefit from the increased answer coverage and redundancy that multiple sources offer. Our method EXPLAIGNN overcomes these limitations by integrating information from a mixture of sources and providing user-comprehensible explanations for its answers. It constructs a heterogeneous graph from entities and evidence snippets retrieved from a KB, a text corpus, web tables, and infoboxes. This large graph is then iteratively reduced via graph neural networks that incorporate question-level attention, until the best answers and their explanations are distilled. Experiments show that EXPLAIGNN improves answering performance over state-of-the-art baselines, and a user study demonstrates that the derived answers are understandable to end users.
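To make the iterative reduction concrete, below is a minimal Python sketch of the core loop: score evidence snippets against the question and repeatedly prune the graph to the best-attended nodes. Note that `Evidence`, the `encode` stand-in encoder, and all parameter values are hypothetical; this is illustrative scaffolding, not the trained GNN pipeline from the paper.

    # A minimal, self-contained sketch of the iterative graph reduction idea
    # (NOT the authors' implementation; EXPLAIGNN uses trained graph neural
    # networks). `Evidence`, `encode`, and all values below are hypothetical.
    import numpy as np
    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        text: str                # evidence snippet
        source: str              # "kb" | "text" | "table" | "infobox"
        entities: list = field(default_factory=list)  # connected entity mentions

    def encode(text: str, dim: int = 64) -> np.ndarray:
        """Stand-in embedding; a real system would use a trained encoder."""
        rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
        vec = rng.standard_normal(dim)
        return vec / np.linalg.norm(vec)

    def question_attention(q_vec: np.ndarray, ev_vecs: np.ndarray) -> np.ndarray:
        """Softmax over question-evidence similarities (question-level attention)."""
        scores = ev_vecs @ q_vec
        weights = np.exp(scores - scores.max())
        return weights / weights.sum()

    def iterative_reduce(question: str, evidences: list, iterations: int = 3,
                         shrink: float = 0.5) -> list:
        """Repeatedly keep only the best-attended evidences, shrinking the graph."""
        q_vec = encode(question)
        graph = list(evidences)
        for _ in range(iterations):
            ev_vecs = np.stack([encode(ev.text) for ev in graph])
            attention = question_attention(q_vec, ev_vecs)
            keep = max(1, int(len(graph) * shrink))
            graph = [graph[i] for i in np.argsort(-attention)[:keep]]
        return graph  # survivors double as answer evidence and explanation

    if __name__ == "__main__":
        evidences = [
            Evidence("Avatar was directed by James Cameron.", "kb"),
            Evidence("Avatar premiered in London in December 2009.", "text"),
            Evidence("Avatar | Director | James Cameron", "infobox"),
        ]
        for ev in iterative_reduce("Who directed Avatar?", evidences, iterations=2):
            print(ev.source, "->", ev.text)

In the actual pipeline, the entities and evidences that survive the final iteration yield both the ranked answers and the explanation shown to the user.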

EXPLAIGNN

This page accompanies our SIGIR 2023 full paper "Explainable Conversational Question Answering over Heterogeneous Sources via Iterative Graph Neural Networks".
[GitHub link to EXPLAIGNN code] [Directly download EXPLAIGNN code]

Paper

"Explainable Conversational Question Answering over Heterogeneous Sources via Iterative Graph Neural Networks", Philipp Christmann, Rishiraj Saha Roy, and Gerhard Weikum. In SIGIR '23, Taipei, Taiwan, 23 - 27 July 2023.
[Preprint] [Code] [Slides] [Video] [User study]

Contact

For feedback and clarifications, please contact:
To learn more about our group, please visit our website: https://qa.mpi-inf.mpg.de/.