Natural Language QA Approaches using Reasoning with External Knowledge
Question answering (QA) in natural language (NL) has been an important aspect
of AI from its early days. Winograd's ``councilmen'' example in his 1972 paper
and McCarthy's Mr. Hug example of 1976 highlight the role of external
knowledge in NL understanding. While Machine Learning has been the go-to
approach in NL processing as well as NL question answering (NLQA) for the last
30 years, there has recently been a growing thread of NLQA research in which
external knowledge plays an important role. The challenges inspired by
Winograd's councilmen example, and recent developments such as the Rebooting AI
book, various NLQA datasets, research on knowledge acquisition in the NLQA
context, and their use in various NLQA models have brought the issue of NLQA
using ``reasoning'' with external knowledge to the forefront. In this paper, we
present a survey of recent work on this topic. We believe our survey will help
establish a bridge between multiple fields of AI, especially between (a) the
traditional fields of knowledge representation and reasoning and (b) the field
of NL understanding and NLQA.
Comment: 6 pages, 3 figures, Work in Progress
A Survey on Explainability in Machine Reading Comprehension
This paper presents a systematic review of benchmarks and approaches for
explainability in Machine Reading Comprehension (MRC). We present how the
representation and inference challenges evolved and the steps which were taken
to tackle these challenges. We also present the evaluation methodologies to
assess the performance of explainable systems. In addition, we identify
persisting open research questions and highlight critical directions for future
work.