Natural Language Interfaces to Data
Recent advances in NLU and NLP have resulted in renewed interest in natural
language interfaces to data, which provide an easy mechanism for non-technical
users to access and query the data. While early systems evolved from keyword
search and focused on simple factual queries, the complexity of both the input
sentences and the generated SQL queries has grown over time. More recently,
there has also been considerable focus on conversational interfaces for data
analytics, empowering a broad range of non-technical users with quick
insights into the data. There are three main challenges in natural language
querying (NLQ): (1) identifying the entities involved in the user utterance,
(2) connecting the different entities in a meaningful way over the underlying
data source to interpret user intents, and (3) generating a structured query in
the form of SQL or SPARQL.
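To make these three steps concrete, the following is a deliberately simplified, hypothetical sketch of a rule-based pipeline over a toy movie schema; the value index, the intent heuristic, and the SQL template are illustrative assumptions, not taken from any system surveyed here.

```python
# A toy sketch of the three NLQ steps: (1) identify entities in the utterance,
# (2) connect them to the underlying schema, (3) generate a structured query.
# Toy schema: movies(title, director, year).
# Semantic index mapping data values to their (table, column) location.
VALUE_INDEX = {"christopher nolan": ("movies", "director")}

def parse(utterance: str) -> str:
    text = utterance.lower()
    # (1) entity identification via the value index
    matches = {value: loc for value, loc in VALUE_INDEX.items() if value in text}
    if len(matches) != 1:
        raise ValueError("this toy example expects exactly one matched entity")
    (value, (table, column)), = matches.items()
    # (2) interpret intent over the schema: ask for titles when "movie" is mentioned
    target = "title" if "movie" in text else "*"
    # (3) template-based SQL generation
    return f"SELECT {target} FROM {table} WHERE {column} = '{value}'"

print(parse("Show me movies directed by Christopher Nolan"))
# SELECT title FROM movies WHERE director = 'christopher nolan'
```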
There are two main approaches for interpreting a user's NLQ. Rule-based
systems make use of semantic indices, ontologies, and KGs to identify the
entities in the query, understand the intended relationships between those
entities, and utilize grammars to generate the target queries. With the
advances in deep learning (DL)-based language models, there have been many
text-to-SQL approaches that try to interpret the query holistically using DL
models. Hybrid approaches that utilize both rule-based techniques and DL
models are also emerging, combining the strengths of both.
Conversational interfaces are the next natural step to one-shot NLQ by
exploiting query context between multiple turns of conversation for
disambiguation. In this article, we review the background technologies that are
used in natural language interfaces, and survey the different approaches to
NLQ. We also describe conversational interfaces for data analytics and discuss
several benchmarks used for NLQ research and evaluation.
Comment: The full version of this manuscript, as published by Foundations and Trends in Databases, is available at http://dx.doi.org/10.1561/190000007
Learning to Map Natural Language to Executable Programs Over Databases
Natural language is a fundamental form of information and communication and is becoming the next frontier in computer interfaces. As the amount of data available online has increased exponentially, so has the need for Natural Language Interfaces (NLIs; in this thesis, NLI does not refer to natural language inference) that connect users to data through natural language, significantly improving the accessibility and efficiency of information access for many users beyond data experts. All consumer-facing software will one day have a dialogue interface, and this is the next vital leap in the evolution of search engines. Such intelligent dialogue systems should understand the meaning of language grounded in various contexts and generate effective language responses in different forms for information requests and human-computer communication.

Developing these intelligent systems is challenging due to (1) limited benchmarks to drive advancements, (2) alignment mismatches between natural language and formal programs, (3) lack of trustworthiness and interpretability, (4) context dependencies in both human conversational interactions and the target programs, and (5) joint language understanding between dialog questions and NLI environments (e.g., databases and knowledge graphs). This dissertation presents several datasets, neural algorithms, and language models to address these challenges and to develop deep learning technologies for conversational natural language interfaces (more specifically, NLIs to databases, or NLIDB).

First, to drive advancements towards neural-based conversational NLIs, we design and propose several complex and cross-domain NLI benchmarks and introduce several accompanying datasets. These datasets enable training large deep learning models, with evaluation performed on unseen databases (e.g., about course arrangement). To perform well on these tasks, systems must generalize not only to new SQL queries but also to unseen database schemas. Furthermore, in real-world applications, users often access information through multi-turn interactions with the system, asking a sequence of related questions. Users may explicitly refer to or omit previously mentioned entities and constraints, and may introduce refinements, additions, or substitutions to what has already been said. Some of these benchmarks therefore require systems to model dialog dynamics and generate natural language explanations for user verification. The full dialogue interaction, including the system's responses, is also important, as it supports clarifying ambiguous questions, verifying returned results, and notifying users of unanswerable or unrelated questions. A robust dialogue-based NLI system that can engage with users by forming its own responses has thus become an increasingly necessary component of the query process.

Moreover, this thesis presents scalable algorithms designed to parse complex and sequential questions into formal programs (e.g., mapping questions to SQL queries that can execute against databases). We propose a novel neural model that utilizes type information from knowledge graphs to better understand rare entities and numbers in natural language questions. We also introduce a neural model based on syntax tree neural networks, which was the first methodology proposed for generating complex programs from language.
Finally, language modeling creates contextualized vector representations of words by training a model to predict the next word given context words, and such representations are the basis of deep learning for NLP. Recently, pre-trained language models such as BERT and RoBERTa have achieved tremendous success in many natural language processing tasks such as text understanding and reading comprehension. However, most language models are pre-trained only on free text such as Wikipedia articles and books. Given that language in semantic parsing is usually related to formal representations such as logic forms and SQL queries and has to be grounded in structural environments (e.g., databases), we propose better language models for NLIs that enforce such compositional interpolation. To demonstrate that they can better jointly understand dialog questions and NLI environments (e.g., databases and knowledge graphs), we show that these language models achieve new state-of-the-art results on seven representative tasks in semantic parsing, dialogue state tracking, and question answering. Our proposed pre-training method is also considerably more effective than prior work.
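As a rough illustration of what grounding a question in a structural environment can look like at the input level, the following minimal sketch linearizes a question together with a toy database schema into a single sequence for an encoder; the marker tokens and the schema are hypothetical, and the exact serialization used in the thesis may differ.

```python
# A minimal, hypothetical sketch of serializing a question plus a database
# schema into one input sequence for a pre-trained encoder.
def linearize(question: str, schema: dict) -> str:
    parts = [question]
    for table, columns in schema.items():
        parts.append(f"<table> {table} " + " ".join(f"<col> {col}" for col in columns))
    return " ".join(parts)

schema = {"student": ["id", "name", "age"], "enrolled": ["student_id", "course_id"]}
print(linearize("How many students are older than 20?", schema))
# How many students are older than 20? <table> student <col> id <col> name <col> age ...
```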
Look before you Hop: Conversational Question Answering over Knowledge Graphs Using Judicious Context Expansion
Fact-centric information needs are rarely one-shot; users typically ask follow-up questions to explore a topic. In such a conversational setting, the user's inputs are often incomplete, with entities or predicates left out, and may contain ungrammatical phrases. This poses a huge challenge to question answering (QA) systems that typically rely on cues in full-fledged interrogative sentences. As a solution, we develop CONVEX: an unsupervised method that can answer incomplete questions over a knowledge graph (KG) by maintaining conversation context using entities and predicates seen so far and automatically inferring missing or ambiguous pieces for follow-up questions. The core of our method is a graph exploration algorithm that judiciously expands a frontier to find candidate answers for the current question. To evaluate CONVEX, we release ConvQuestions, a crowdsourced benchmark with 11,200 distinct conversations from five different domains. We show that CONVEX: (i) adds conversational support to any stand-alone QA system, and (ii) outperforms state-of-the-art baselines and question completion strategies.
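The following is a toy sketch, not the authors' CONVEX implementation, of the general idea: conversation context is kept as a set of KG entities, and a one-hop frontier around them is scored against keywords from the (possibly incomplete) follow-up question. The triples, keywords, and scoring function are illustrative assumptions.

```python
# Toy sketch of context-based frontier expansion over a mini knowledge graph.
from collections import defaultdict

# Hypothetical KG as (subject, predicate, object) triples.
TRIPLES = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "release_year", "2010"),
    ("Christopher Nolan", "born_in", "London"),
]

def build_index(triples):
    index = defaultdict(list)
    for s, p, o in triples:
        index[s].append((p, o))
    return index

def answer_followup(context_entities, question_keywords, index):
    """Expand one hop from the context entities and rank neighbours by keyword overlap."""
    candidates = []
    for entity in context_entities:
        for predicate, obj in index[entity]:
            score = sum(kw in predicate for kw in question_keywords)
            candidates.append((score, obj, predicate))
    return max(candidates, default=None)

index = build_index(TRIPLES)
context = {"Inception"}  # entities established in earlier turns
# Follow-up "And who directed it?" arrives without repeating the movie name.
print(answer_followup(context, ["directed"], index))
# (1, 'Christopher Nolan', 'directed_by')
```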
TAPAS: Weakly Supervised Table Parsing via Pre-training
Answering natural language questions over tables is usually seen as a
semantic parsing task. To alleviate the collection cost of full logical forms,
one popular approach focuses on weak supervision consisting of denotations
instead of logical forms. However, training semantic parsers from weak
supervision poses difficulties, and in addition, the generated logical forms
are only used as an intermediate step prior to retrieving the denotation. In
this paper, we present TAPAS, an approach to question answering over tables
without generating logical forms. TAPAS trains from weak supervision, and
predicts the denotation by selecting table cells and optionally applying a
corresponding aggregation operator to such selection. TAPAS extends BERT's
architecture to encode tables as input, initializes from an effective joint
pre-training of text segments and tables crawled from Wikipedia, and is trained
end-to-end. We experiment with three different semantic parsing datasets, and
find that TAPAS outperforms or rivals semantic parsing models by improving
state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with
the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model
architecture. We additionally find that transfer learning, which is trivial in
our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the
state-of-the-art.
Comment: Accepted to ACL 202
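As a usage illustration, the Hugging Face transformers port of TAPAS exposes the cell-selection-plus-aggregation interface described above; the checkpoint name and the toy table below are assumptions made for this sketch, not part of the paper.

```python
# Usage sketch of TAPAS via Hugging Face transformers: predict a denotation by
# selecting table cells and an aggregation operator, without a logical form.
import pandas as pd
import torch
from transformers import TapasTokenizer, TapasForQuestionAnswering

model_name = "google/tapas-base-finetuned-wtq"  # assumed public checkpoint
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# TAPAS expects every table cell as a string.
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2,148,000", "3,645,000"]})
queries = ["Which city has the larger population?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn the logits into selected cell coordinates and an aggregation operator index.
coordinates, aggregation_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print(coordinates, aggregation_indices)
```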
Complex Knowledge Base Question Answering: A Survey
Knowledge base question answering (KBQA) aims to answer a question over a
knowledge base (KB). Early studies mainly focused on answering simple questions
over KBs and achieved great success. However, their performance on complex
questions is still far from satisfactory. Therefore, in recent years,
researchers have proposed a large number of novel methods that address the
challenges of answering complex questions. In this survey, we review recent
advances in KBQA with a focus on solving complex questions, which usually
contain multiple subjects, express compound relations, or involve numerical
operations. In detail, we begin by introducing the complex KBQA task and
relevant background. Then, we describe benchmark datasets for the complex KBQA
task and their construction process. Next, we present two
mainstream categories of methods for complex KBQA, namely semantic
parsing-based (SP-based) methods and information retrieval-based (IR-based)
methods. Specifically, we illustrate their procedures with flow designs and
discuss their major differences and similarities. After that, we summarize the
challenges that these two categories of methods encounter when answering
complex questions, and explicate advanced solutions and techniques used in
existing work. Finally, we conclude and discuss several promising directions
related to complex KBQA for future research.
Comment: 20 pages, 4 tables, 7 figures. arXiv admin note: text overlap with arXiv:2105.1164
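To illustrate the SP-based idea at a toy scale, the sketch below parses a compound, numerical question into a small executable logical form and runs it over a dictionary-backed KB; the KB, logical-form operators, and question are invented for illustration and are not taken from the survey.

```python
# Toy SP-based pipeline: question -> logical form -> execution over a mini KB.
KB = {
    ("France", "capital"): "Paris",
    ("Paris", "population"): 2_100_000,
    ("Berlin", "population"): 3_600_000,
}

def execute(logical_form):
    op, *args = logical_form
    if op == "lookup":        # simple relation lookup
        subject, predicate = args
        return KB[(subject, predicate)]
    if op == "argmax":        # numerical operation over several subjects
        predicate, subjects = args
        return max(subjects, key=lambda s: execute(("lookup", s, predicate)))
    raise ValueError(f"unknown operator: {op}")

# "Which of Paris and Berlin has the larger population?" would be parsed into:
logical_form = ("argmax", "population", ["Paris", "Berlin"])
print(execute(logical_form))  # Berlin
```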