Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering
In this paper, we propose a novel end-to-end neural architecture for ranking
candidate answers that adapts a hierarchical recurrent neural network and a
latent topic clustering module. With our proposed model, a text is encoded into
a vector representation from the word level up to the chunk level to
effectively capture its entire meaning. In particular, thanks to the
hierarchical structure, our model shows only a small performance degradation on
longer text comprehension, whereas other state-of-the-art recurrent neural network models
suffer from it. Additionally, the latent topic clustering module extracts
semantic information from target samples. This clustering module is useful for
any text-related task, as it allows each data sample to find its nearest topic
cluster, thus helping the neural network model analyze the entire data. We
evaluate our models on the Ubuntu Dialogue Corpus and on a consumer
electronics domain question answering dataset related to Samsung products. The
proposed model shows state-of-the-art results for ranking question-answer
pairs.
Comment: 10 pages, Accepted as a conference paper at NAACL 201
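The latent topic clustering idea sketched in this abstract can be illustrated with a minimal NumPy example; the function name, dimensions, and softmax-weighted memory lookup below are an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np

def latent_topic_clustering(encoded, topics):
    """Attach latent-topic information to an encoded text vector.

    encoded: (d,) vector representation of a text
    topics:  (K, d) topic cluster memory (randomly initialised here)
    Returns the input concatenated with a similarity-weighted topic vector,
    so each sample is softly assigned to its nearest topic clusters.
    """
    scores = topics @ encoded                 # (K,) similarity to each topic
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over the K topics
    topic_vec = weights @ topics              # (d,) weighted topic summary
    return np.concatenate([encoded, topic_vec])

rng = np.random.default_rng(0)
enc = rng.normal(size=8)
mem = rng.normal(size=(4, 8))
out = latent_topic_clustering(enc, mem)
print(out.shape)  # (16,)
```

In the paper's setting, the enriched vector would then be fed to the ranking layer; here the topic memory is fixed, whereas in the actual model it would be learned end-to-end.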
Natural language processing
Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems - text summarization, information extraction, information retrieval, etc., including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.
Training Datasets for Machine Reading Comprehension and Their Limitations
Neural networks are a powerful model class to learn machine reading comprehension (RC), yet they crucially depend on the availability of suitable training datasets. In this thesis we describe methods for data collection, evaluate the performance of established models, and examine a number of model behaviours and dataset limitations. We first describe the creation of a data resource for the science exam QA domain, and compare existing models on the resulting dataset. The collected questions are plausible – non-experts can distinguish them from real exam questions with 55% accuracy – and using them as additional training data leads to improved model scores on real science exam questions. Second, we describe and apply a distant supervision dataset construction method for multi-hop RC across documents. We identify and mitigate several dataset assembly pitfalls – a lack of unanswerable candidates, label imbalance, and spurious correlations between documents and particular candidates – which often leave shallow predictive cues for the answer. Furthermore, we demonstrate that selecting relevant document combinations is a critical performance bottleneck on the datasets created. We thus investigate Pseudo-Relevance Feedback, which leads to improvements over TF-IDF-based document combination selection both in retrieval metrics and in answer accuracy. Third, we investigate model undersensitivity: model predictions do not change when given adversarially altered questions in SQuAD 2.0 and NewsQA, even though they should. We characterise the affected samples, and show that the phenomenon is related to a lack of structurally similar but unanswerable samples during training: data augmentation reduces the adversarial error rate, e.g. from 51.7% to 20.7% for a BERT model on SQuAD 2.0, and also improves robustness in other settings.
Finally, we explore efficient formal model verification via Interval Bound Propagation (IBP) to measure and address model undersensitivity, and show that using an IBP-derived auxiliary loss can improve verification rates, e.g. from 2.8% to 18.4% on the SNLI test set.
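The core step of Interval Bound Propagation mentioned above can be illustrated for a single affine layer: an axis-aligned input box is pushed through y = Wx + b in closed form. This is a generic sketch of the technique, not the thesis's verification code.

```python
import numpy as np

def ibp_linear(lower, upper, W, b):
    """Propagate the input interval [lower, upper] through y = W @ x + b.

    Represent the box by its center and radius; the output radius is
    |W| @ radius, the worst-case spread under any x in the box.
    """
    center = (lower + upper) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius
    return out_center - out_radius, out_center + out_radius

W = np.array([[1.0, -2.0],
              [0.5,  0.5]])
b = np.array([0.0, 1.0])
lo, hi = ibp_linear(np.array([0.0, 0.0]), np.array([1.0, 1.0]), W, b)
# For x in [0,1]^2: y1 = x1 - 2*x2 lies in [-2, 1]; y2 = 0.5*x1 + 0.5*x2 + 1 lies in [1, 2]
```

Chaining this rule through every layer (with monotone activations applied elementwise to the bounds) yields sound output bounds; a prediction is verified as robust when the bounds rule out any label flip inside the input box.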
Robust Dialog Management Through A Context-centric Architecture
This dissertation presents and evaluates a method of managing spoken dialog interactions with a robust attention to fulfilling the human user’s goals in the presence of speech recognition limitations. Assistive speech-based embodied conversation agents are computer-based entities that interact with humans to help accomplish a certain task or communicate information via spoken input and output. A challenging aspect of this task involves open dialog, where the user is free to converse in an unstructured manner. With this style of input, the machine’s ability to communicate may be hindered by poor reception of utterances, caused by a user’s inadequate command of a language and/or faults in the speech recognition facilities. Since a speech-based input is emphasized, this endeavor involves the fundamental issues associated with natural language processing, automatic speech recognition and dialog system design. Driven by Context-Based Reasoning, the presented dialog manager features a discourse model that implements mixed-initiative conversation with a focus on the user’s assistive needs. The discourse behavior must maintain a sense of generality, where the assistive nature of the system remains constant regardless of its knowledge corpus. The dialog manager was encapsulated into a speech-based embodied conversation agent platform for prototyping and testing purposes. A battery of user trials was performed on this agent to evaluate its performance as a robust, domain-independent, speech-based interaction entity capable of satisfying the needs of its users.
Question Generation for French: Collating Parsers and Paraphrasing Questions
This article describes a question generation system for French. The transformation of declarative sentences into questions relies on two different syntactic parsers and named entity recognition tools. This makes it possible to further diversify the questions generated and to possibly alleviate the problems inherent to the analysis tools. The system also generates reformulations for the questions based on variations in the question words, inducing answers with different granularities, and nominalisations of action verbs. We evaluate the questions generated for sentences extracted from two different corpora: a corpus of newspaper articles used for the CLEF Question Answering evaluation campaign and a corpus of simplified online encyclopedia articles. The evaluation shows that the system is able to generate a majority of good and medium quality questions. We also present an original evaluation of the question generation system using the question analysis module of a question answering system.
Human Mobility Question Answering (Vision Paper)
Question answering (QA) systems have attracted much attention from the
artificial intelligence community as they can learn to answer questions based
on the given knowledge source (e.g., images in visual question answering).
However, the research into question answering systems with human mobility data
remains unexplored. Mining human mobility data is crucial for various
applications such as smart city planning, pandemic management, and personalised
recommendation systems. In this paper, we aim to fill this gap and introduce a
novel task, that is, human mobility question answering (MobQA). The aim of the
task is to let the intelligent system learn from mobility data and answer
related questions. This task represents a paradigm shift in mobility
prediction research and further facilitates research on human mobility
recommendation systems. To better support this novel research topic, this
vision paper also proposes an initial design of the dataset and a potential
deep learning model framework for the introduced MobQA task. We hope that this
paper will provide novel insights and open new directions in human mobility
research and question answering research.
DCQA: Document-Level Chart Question Answering towards Complex Reasoning and Common-Sense Understanding
Visually-situated languages such as charts and plots are omnipresent in
real-world documents. These graphical depictions are human-readable and are
often analyzed in visually-rich documents to address a variety of questions
that necessitate complex reasoning and common-sense responses. Despite the
growing number of datasets that aim to answer questions over charts, most only
address this task in isolation, without considering the broader context of
document-level question answering. Moreover, such datasets lack adequate
common-sense reasoning information in their questions. In this work, we
introduce a novel task named document-level chart question answering (DCQA).
The goal of this task is to conduct document-level question answering by
first extracting charts or plots from the document via document layout
analysis (DLA) and subsequently performing chart question answering (CQA). The newly
developed benchmark dataset comprises 50,010 synthetic documents integrating
charts in a wide range of styles (6 styles in contrast to 3 for PlotQA and
ChartQA) and includes 699,051 questions that demand a high degree of reasoning
ability and common-sense understanding. In addition, we present a powerful
question-answer generation engine that employs table data, a rich
color set, and basic question templates to produce a vast array of reasoning
question-answer pairs automatically. Based on DCQA, we devise an OCR-free
transformer for document-level chart-oriented understanding, capable of DLA and
answering complex reasoning and common-sense questions over charts in an
OCR-free manner. Our DCQA dataset is expected to foster research on
understanding visualizations in documents, especially in scenarios that
require complex reasoning over charts in visually rich documents. We
implement and evaluate a set of baselines, and our proposed method achieves
comparable results.
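The template-based question-answer generation engine described in this abstract might look like the following minimal sketch. The table format, templates, and function name are hypothetical stand-ins for the paper's engine, which also uses color information and richer reasoning templates.

```python
def generate_qa_pairs(table):
    """Generate QA pairs from chart table data via simple templates.

    table: {series_name: {category: value}}, e.g. extracted from a bar chart.
    Returns a list of (question, answer) string pairs: one lookup question
    per data point, plus one comparison (max) question per series.
    """
    pairs = []
    for series, points in table.items():
        for category, value in points.items():
            pairs.append((f"What is the value of {series} for {category}?",
                          str(value)))
        top = max(points, key=points.get)  # category with the largest value
        pairs.append((f"Which category has the highest {series}?", top))
    return pairs

table = {"sales": {"Q1": 120, "Q2": 150}}
qa = generate_qa_pairs(table)
# Yields two lookup questions plus one "highest" comparison question
```

Applying a small set of such templates over every chart's underlying table is how a corpus on the order of hundreds of thousands of question-answer pairs can be produced automatically.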