Proceedings of QG2010: The Third Workshop on Question Generation
These are the peer-reviewed proceedings of "QG2010, The Third Workshop on Question Generation". The workshop included a special track for "QGSTEC2010: The First Question Generation Shared Task and Evaluation Challenge".
QG2010 was held as part of The Tenth International Conference on Intelligent Tutoring Systems (ITS2010).
Encyclopaedic question answering
Open-domain question answering (QA) is an established NLP task which enables users to search for specific pieces of information in large collections of texts. Instead of using keyword-based queries and a standard information retrieval engine, QA systems allow the use of natural language questions and return the exact answer (or a list of plausible answers) with supporting snippets of text. In the past decade, open-domain QA research has been dominated by evaluation fora such as TREC and CLEF, where shallow techniques relying on information redundancy have achieved very good performance. However, this performance is generally limited to simple factoid and definition questions because the answer is usually explicitly present in the document collection. Current approaches are much less successful in finding implicit answers and are difficult to adapt to more complex question types which are likely to be posed by users. In order to advance the field of QA, this thesis proposes a shift in focus from simple factoid questions to encyclopaedic questions: list questions composed of several constraints. These questions have more than one correct answer which usually cannot be extracted from one small snippet of text. To correctly interpret the question, systems need to combine classic knowledge-based approaches with advanced NLP techniques. To find and extract answers, systems need to aggregate atomic facts from heterogeneous sources as opposed to simply relying on keyword-based similarity. Encyclopaedic questions promote QA systems which use basic reasoning, making them more robust and easier to extend with new types of constraints and new types of questions. A novel semantic architecture is proposed which represents a paradigm shift in open-domain QA system design, using semantic concepts and knowledge representation instead of words and information retrieval.
The architecture consists of two phases: analysis, responsible for interpreting questions and finding answers, and feedback, responsible for interacting with the user. This architecture provides the basis for EQUAL, a semantic QA system developed as part of the thesis, which uses Wikipedia as a source of world knowledge and employs simple forms of open-domain inference to answer encyclopaedic questions. EQUAL combines the output of a syntactic parser with semantic information from Wikipedia to analyse questions. To address natural language ambiguity, the system builds several formal interpretations containing the constraints specified by the user and addresses each interpretation in parallel. To find answers, the system then tests these constraints individually for each candidate answer, considering information from different documents and/or sources. The correctness of an answer is not proved using a logical formalism; instead, a confidence-based measure is employed. This measure reflects the validation of constraints from raw natural language, automatically extracted entities and relations, and available structured and semi-structured knowledge from Wikipedia and the Semantic Web. When searching for and validating answers, EQUAL uses the Wikipedia link graph to find relevant information. This method achieves good precision and allows only pages of a certain type to be considered, but is affected by the incompleteness of the existing markup targeted towards human readers. In order to address this, a semantic analysis module which disambiguates entities is developed to enrich Wikipedia articles with additional links to other pages. The module increases recall, enabling the system to rely more on the link structure of Wikipedia than on word-based similarity between pages. It also allows authoritative information from different sources to be linked to the encyclopaedia, further enhancing the coverage of the system.
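The confidence-based constraint-validation idea described above can be sketched roughly as follows. This is an illustrative toy, not EQUAL's actual interface: the constraint checkers, evidence structure, and threshold are all invented for the example.

```python
# Sketch of confidence-based constraint validation in the spirit of EQUAL:
# each constraint is checked per candidate answer against evidence gathered
# from several sources, and per-constraint confidences are averaged rather
# than proved in a logical formalism. All names here are hypothetical.

def answer_confidence(candidate, constraints, evidence):
    """Average confidence over all constraint checks for one candidate."""
    scores = [check(candidate, evidence) for _name, check in constraints]
    return sum(scores) / len(scores) if scores else 0.0

def rank_candidates(candidates, constraints, evidence, threshold=0.5):
    """Keep candidates whose averaged confidence clears the threshold."""
    scored = [(answer_confidence(c, constraints, evidence), c)
              for c in candidates]
    return [c for conf, c in sorted(scored, reverse=True) if conf >= threshold]
```

Each `check` would in practice consult a different source (raw text, the Wikipedia link graph, structured infobox data); averaging lets partial evidence from several sources accumulate into a single score.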
The viability of the proposed approach was evaluated in an independent setting by participating in two competitions at CLEF 2008 and 2009. In both competitions, EQUAL outperformed standard textual QA systems as well as semi-automatic approaches. Having established a feasible way forward for the design of open-domain QA systems, future work will attempt to further improve performance by taking advantage of recent advances in information extraction and knowledge representation, as well as by experimenting with formal reasoning and inferencing capabilities.
EThOS - Electronic Theses Online Service, United Kingdom
On the integration of conceptual hierarchies with deep learning for explainable open-domain question answering
Question Answering, with its potential to make human-computer interactions more intuitive, has had a revival in recent years with the influx of deep learning methods into natural language processing and the simultaneous adoption of personal assistants such as Siri, Google Now, and Alexa. Unfortunately, Question Classification, an essential element of question answering which classifies questions based on the class of the expected answer, has been overlooked. Although the task was explicitly developed for use in question answering systems, its more advanced form, which classifies questions into between fifty and a hundred question classes, had developed into an independent task with no application in question answering.
The work presented in this thesis bridges this gap by making use of fine-grained question classification for answer selection, arguably the most challenging subtask of question answering and hence the de facto standard measure of question answering performance. Using question classification in a downstream task required significant improvements to question classification itself, which were achieved in this work by integrating linguistic information and deep learning through what we call Types, a novel method of representing Concepts.
Our work on a purely rule-based system for fine-grained Question Classification using Types achieved an accuracy of 97.2%, close to a 6-point improvement over the previous state of the art, and has remained the state of the art in question classification for over two years. The integration of these question classes with a deep learning model for Answer Selection resulted in MRR and MAP scores which outperform the current state of the art by between 3 and 5 points on both versions of a standard test set.
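For reference, the MRR and MAP scores cited above are computed from ranked relevance judgements as follows. This is a generic sketch of the standard definitions, not the thesis's own evaluation harness; each ranking is a list of 0/1 labels in ranked order.

```python
# Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP) over a set
# of questions, each represented by its ranked list of 0/1 relevance labels.

def mrr(rankings):
    """Mean reciprocal rank of the first relevant answer per question."""
    total = 0.0
    for labels in rankings:
        for rank, rel in enumerate(labels, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(rankings)

def average_precision(labels):
    """Precision averaged over the positions of relevant answers."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / max(hits, 1)

def mean_ap(rankings):
    """MAP: average precision, averaged over all questions."""
    return sum(average_precision(l) for l in rankings) / len(rankings)
```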
Advanced techniques for personalized, interactive question answering
Using a computer to answer questions has been a human dream since the beginning of the digital era. A first step towards the achievement of such an ambitious goal is to deal with natural language to enable the computer to understand what its user asks. The discipline that studies the connection between natural language and the representation of its meaning via computational models is computational linguistics. According to such discipline, Question Answering can be defined as the task that, given a question formulated in natural language, aims at finding one or more concise answers in the form of sentences or phrases.

Question Answering can be interpreted as a sub-discipline of information retrieval with the added challenge of applying sophisticated techniques to identify the complex syntactic and semantic relationships present in text. Although it is widely accepted that Question Answering represents a step beyond standard information retrieval, allowing a more sophisticated and satisfactory response to the user's information needs, it still shares a series of unsolved issues with the latter.
First, in most state-of-the-art Question Answering systems, the results are created independently of the questioner's characteristics, goals and needs. This is a serious limitation in several cases: for instance, a primary school child and a History student may need different answers to the question: When did the Middle Ages begin?

Moreover, users often issue queries not as standalone but in the context of a wider information need, for instance when researching a specific topic. Although it has recently been proposed that providing Question Answering systems with dialogue interfaces would encourage and accommodate the submission of multiple related questions and handle the user's requests for clarification, interactive Question Answering is still at its early stages.
Furthermore, an issue which still remains open in current Question Answering is that of efficiently answering complex questions, such as those invoking definitions and descriptions (e.g. What is a metaphor?). Indeed, it is difficult to design criteria to assess the correctness of answers to such complex questions.

These are the central research problems addressed by this thesis, and they are solved as follows.
An in-depth study on complex Question Answering led to the development of classifiers for complex answers. These exploit a variety of lexical, syntactic and shallow semantic features to perform textual classification using tree-kernel functions for Support Vector Machines.
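A tree-kernel function of this kind measures similarity between parse trees by counting shared tree fragments. The following is a toy version of a subset-tree kernel in the spirit of Collins and Duffy's recursion; the tuple-based tree encoding and the example trees are illustrative, not the thesis's actual feature representation.

```python
# Toy subset-tree kernel: trees are (label, [children]) tuples, with
# leaves encoded as (word, []). The kernel sums, over all pairs of
# internal nodes, the number of common subtree fragments rooted there,
# with a decay factor lam down-weighting larger fragments.

def internal_nodes(tree):
    """Yield every internal (non-leaf) node of the tree."""
    _label, children = tree
    if children:
        yield tree
        for child in children:
            yield from internal_nodes(child)

def c_value(t1, t2, lam=0.5):
    """Weighted count of common fragments rooted at t1 and t2."""
    l1, ch1 = t1
    l2, ch2 = t2
    # Productions must match: same label and same sequence of child labels.
    if l1 != l2 or [c[0] for c in ch1] != [c[0] for c in ch2]:
        return 0.0
    prod = lam
    for a, b in zip(ch1, ch2):
        if not a[1] and not b[1]:          # both children are leaves
            prod *= 1.0
        else:                              # recurse into matching children
            prod *= 1.0 + c_value(a, b, lam)
    return prod

def tree_kernel(t1, t2, lam=0.5):
    """Similarity between two parse trees, usable inside an SVM."""
    return sum(c_value(a, b, lam)
               for a in internal_nodes(t1) for b in internal_nodes(t2))
```

An SVM with such a kernel can classify candidate answers directly from their parse trees, without manually flattening syntax into feature vectors.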
The issue of personalization is solved by the integration of a User Modelling component within the Question Answering model. The User Model is able to filter and re-rank results based on the user's reading level and interests.
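A User-Model re-ranking step of this sort might look as follows. The scoring function, the 1-10 reading-level scale, and the weights are all hypothetical stand-ins, chosen only to illustrate combining reading-level fit with interest overlap.

```python
# Sketch of User-Model-based re-ranking: each answer carries an estimated
# reading level (hypothetical 1-10 scale) and a list of topics; the user
# profile carries a target reading level and a set of interests.

def rerank(answers, user):
    """Sort answers by a combined reading-level and interest score."""
    def score(ans):
        # 1.0 when the reading level matches exactly, falling off linearly
        level_match = 1.0 - abs(ans["reading_level"] - user["reading_level"]) / 10
        # one point per topic shared with the user's interests
        interest = len(set(ans["topics"]) & user["interests"])
        return level_match + interest
    return sorted(answers, key=score, reverse=True)
```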
The issue of interactivity is approached by the development of a dialogue model and a dialogue manager suitable for open-domain interactive Question Answering. The utility of such a model is corroborated by the integration of an interactive interface, allowing reference resolution and follow-up conversation, into the core Question Answering system, and by its evaluation.
Finally, the models of personalized and interactive Question Answering are integrated in a comprehensive framework, forming a unified model for future Question Answering research.
Using natural language processing for question answering in closed and open domains
With regard to the growth in the amount of social, environmental, and biomedical information available digitally, there is a growing need for Question Answering (QA) systems that can empower users to master this new wealth of information. Despite recent progress in QA, the quality of interpretation and extraction of the desired answer is not adequate. We believe that striving for higher accuracy in QA systems remains an open research problem; that is, no answer is better than a wrong answer. However, there are diverse queries which state-of-the-art QA systems cannot interpret and answer properly.
The problem of interpreting a question in a way that preserves its syntactic-semantic structure is considered one of the most important challenges in this area. In this work we focus on the problems of semantic-based QA systems and analyze the effectiveness of NLP techniques, query mapping, and answer inferencing in both closed (first scenario) and open (second scenario) domains. For this purpose, the architecture of a Semantic-based closed and open domain Question Answering System (hereafter "ScoQAS") over ontology resources is presented with two different prototypes: an ontology-based closed domain and an open domain over Linked Open Data (LOD) resources.
ScoQAS is based on NLP techniques combining semantic-based structure-feature patterns for question classification and creating a question syntactic-semantic information structure (QSiS). The QSiS provides the basis for building constraints that formulate the related terms in syntactic-semantic aspects and for generating a question graph (QGraph), which facilitates inference towards a precise answer in the closed domain. In addition, our approach provides a convenient method to map the formulated information into a SPARQL query template in order to query LOD resources in the open domain.
The main contributions of this dissertation are as follows:
1. Developing ScoQAS architecture integrated with common and specific components compatible with closed and open domain ontologies.
2. Analysing the user’s question and building a question syntactic-semantic information structure (QSiS), which is constituted by several stages of the methodology: question classification, Expected Answer Type (EAT) determination, and constraint generation.
3. Presenting an empirical semantic-based structure-feature pattern for question classification and generalizing heuristic constraints to formulate the relations between the features in the recognized pattern in syntactic and semantic terms.
4. Developing a syntactic-semantic QGraph for representing core components of the question.
5. Presenting an empirical graph-based answer inference in the closed domain.
In a nutshell, a semantic-based QA system is presented, together with experimental results over the closed and open domains. The efficiency of ScoQAS is evaluated using measures such as precision, recall, and F-measure on LOD challenges in the open domain. We focus on quantitative evaluation in the closed domain scenario. Due to the lack of predefined benchmarks in the first scenario, we define measures that demonstrate the actual complexity of the problem and the actual efficiency of the solutions. The results of the analysis corroborate the performance and effectiveness of our approach in achieving a reasonable accuracy.
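The precision, recall, and F-measure used in the evaluation follow the standard definitions, sketched here from true-positive, false-positive, and false-negative counts (a generic helper, not the thesis's evaluation code):

```python
# Standard precision / recall / F1 from confusion counts, with guards
# against division by zero when a class is never predicted or never occurs.

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```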
Selecting and Generating Computational Meaning Representations for Short Texts
Language conveys meaning, so natural language processing (NLP) requires representations of meaning. This work addresses two broad questions: (1) What meaning representation should we use? and (2) How can we transform text to our chosen meaning representation? In the first part, we explore different meaning representations (MRs) of short texts, ranging from surface forms to deep-learning-based models. We show the advantages and disadvantages of a variety of MRs for summarization, paraphrase detection, and clustering. In the second part, we use SQL as a running example for an in-depth look at how we can parse text into our chosen MR. We examine the text-to-SQL problem from three perspectives (methodology, systems, and applications) and show how each contributes to a fuller understanding of the task.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143967/1/cfdollak_1.pd
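As a flavour of the text-to-SQL task, here is a toy, purely pattern-based baseline. The question pattern, table name, and `city` column are invented for the example; the thesis surveys far more capable parsing methods.

```python
# Toy pattern-based text-to-SQL: one hand-written question template is
# mapped onto one SQL template. Real text-to-SQL systems learn this
# mapping; this sketch only illustrates the input/output of the task.
import re

def to_sql(question):
    """Map a 'How many X are in Y?' question to a COUNT query, else None."""
    m = re.match(r"How many (\w+) are in (\w+)\?", question)
    if m:
        table, place = m.groups()
        return f"SELECT COUNT(*) FROM {table} WHERE city = '{place}'"
    return None
```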
Complex question answering: minimizing the gaps and beyond
xi, 192 leaves : ill. ; 29 cm
Current Question Answering (QA) systems have advanced significantly in answering simple factoid and list questions. Such questions are easier to process as they require only small snippets of text as the answers. However, there is a category of questions that represents a more complex information need, which cannot be satisfied easily by simply extracting a single entity or a single sentence. For example, the question: “How was Japan affected by the earthquake?” suggests that the inquirer is looking for information in the context of a wider perspective. We call these “complex questions” and focus on the task of answering them with the intention to minimize the existing gaps in the literature.
The major limitation of the available search and QA systems is that they lack a way of measuring whether a user is satisfied with the information provided. This was our motivation to propose a reinforcement learning formulation of the complex question answering problem. Next, we presented an integer linear programming formulation where sentence compression models were applied to the query-focused multi-document summarization task in order to investigate whether sentence compression improves the overall performance. Both compression and summarization were treated as global optimization problems.
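The global-optimization view of query-focused summarization can be illustrated with a deliberately tiny sketch: choose the subset of sentences that maximizes query-word coverage under a length budget. The thesis uses an integer linear programming solver for this; the exhaustive search below is an invented stand-in that only works on toy inputs.

```python
# Toy global optimization for query-focused summarization: enumerate all
# sentence subsets within a word budget and keep the one covering the
# most query words. An ILP solver replaces this enumeration in practice.
from itertools import combinations

def summarize(sentences, query_words, budget):
    """Return the sentence subset maximizing query coverage within budget."""
    best, best_score = [], -1
    for r in range(len(sentences) + 1):
        for combo in combinations(range(len(sentences)), r):
            length = sum(len(sentences[i].split()) for i in combo)
            if length > budget:
                continue  # violates the global length constraint
            covered = set()
            for i in combo:
                covered |= set(sentences[i].lower().split()) & query_words
            if len(covered) > best_score:
                best_score, best = len(covered), list(combo)
    return [sentences[i] for i in best]
```

Because the objective and the budget apply to the summary as a whole, the choice is genuinely global: a sentence is selected for what it adds to the set, not for its standalone score.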
We also investigated the impact of syntactic and semantic information in a graph-based random walk method for answering complex questions. Decomposing a complex question into a series of simple questions and then reusing the techniques developed for answering simple questions is an effective means of answering complex questions. We proposed a supervised approach for automatically learning good decompositions of complex questions. A complex question often asks about a topic of the user's interest; therefore, the problem of complex question decomposition closely relates to the problem of topic-to-question generation. We addressed this challenge and proposed a topic-to-question generation approach to enhance the scope of our problem domain.
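To make the decomposition idea concrete, here is a hypothetical pattern-based sketch. The thesis learns decompositions with a supervised model; the single hand-written rule and the generated sub-questions below are invented stand-ins.

```python
# Invented pattern-based complex-question decomposition: one question
# shape is split into simpler sub-questions that factoid QA can handle.
import re

def decompose(question):
    """Split a complex question into simpler sub-questions where possible."""
    m = re.match(r"How was (.+?) affected by (.+?)\?", question)
    if m:
        entity, event = m.groups()
        return [f"What is {event}?",
                f"What happened to {entity} during {event}?",
                f"What damage did {event} cause in {entity}?"]
    return [question]  # no rule applies: fall back to the original question
```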