On the voice-activated question answering
[EN] Question answering (QA) is probably one of the most challenging tasks in the field of natural language processing. It requires search engines that are capable of extracting concise, precise fragments of text that contain an answer to a question posed by the user. The incorporation of voice interfaces into QA systems adds a more natural and very appealing perspective for these systems. This paper provides a comprehensive description of current state-of-the-art voice-activated QA systems. Finally, the scenarios that will emerge from the introduction of speech recognition in QA are discussed. © 2006 IEEE. This work was supported in part by Research Projects TIN2009-13391-C04-03 and TIN2008-06856-C05-02. This paper was recommended by Associate Editor V. Marik. Rosso, P.; Hurtado Oliver, LF.; Segarra Soriano, E.; Sanchís Arnal, E. (2012). On the voice-activated question answering. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 42(1):75-85. https://doi.org/10.1109/TSMCC.2010.2089620
Conversations with Documents. An Exploration of Document-Centered Assistance
The role of conversational assistants has become more prevalent in helping
people increase their productivity. Document-centered assistance, for example
to help an individual quickly review a document, has seen less significant
progress, even though it has the potential to tremendously increase a user's
productivity. This type of document-centered assistance is the focus of this
paper. Our contributions are three-fold: (1) We first present a survey to
understand the space of document-centered assistance and the capabilities
people expect in this scenario. (2) We investigate the types of queries that
users will pose while seeking assistance with documents, and show that
document-centered questions form the majority of these queries. (3) We present
a set of initial machine learned models that show that (a) we can accurately
detect document-centered questions, and (b) we can build reasonably accurate
models for answering such questions. These positive results are encouraging,
and suggest that even greater results may be attained with continued study of
this interesting and novel problem space. Our findings have implications for
the design of intelligent systems to support task completion via natural
interactions with documents. Comment: Accepted as full paper at CHIIR 2020; 9 pages + Appendix
Combining information seeking services into a meta supply chain of facts
The World Wide Web has become a vital supplier of information that allows organizations to carry on such tasks as business intelligence, security monitoring, and risk assessments. Having a quick and reliable supply of correct facts is often mission critical. By following design science guidelines, we have explored ways to recombine facts from multiple sources, each with possibly different levels of responsiveness and accuracy, into one robust supply chain. Inspired by prior research on keyword-based meta-search engines (e.g., metacrawler.com), we have adapted existing question answering algorithms for the task of analysis and triangulation of facts. We present a first prototype for a meta approach to fact seeking. Our meta engine sends a user's question to several fact seeking services that are publicly available on the Web (e.g., ask.com, brainboost.com, answerbus.com, NSIR, etc.) and analyzes the returned results jointly to identify and present to the user those that are most likely to be factually correct. The results of our evaluation on the standard test sets widely used in prior research support the following: 1) the value added of the meta approach: its performance surpasses the performance of each supplier; 2) the importance of using fact seeking services as suppliers to the meta engine rather than keyword-driven search portals; and 3) the resilience of the meta approach: eliminating a single service does not noticeably impact the overall performance. We show that these properties make the meta approach a more reliable supplier of facts than any of the currently available stand-alone services.
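To make the triangulation step concrete, here is a minimal sketch (the service callables are hypothetical placeholders, not the live interfaces of ask.com or the other services named above): each service returns candidate answer strings, the strings are normalized, and answers confirmed by several independent services are ranked first.

# Minimal sketch of answer triangulation across several fact-seeking services.
# The service callables below are hypothetical placeholders, not real client APIs.
from collections import Counter
import re

def normalize(answer: str) -> str:
    """Lower-case, strip punctuation and extra whitespace so that near-identical
    answers from different services fall into the same bucket."""
    answer = answer.lower()
    answer = re.sub(r"[^\w\s]", " ", answer)
    return re.sub(r"\s+", " ", answer).strip()

def triangulate(question: str, services) -> list:
    """Send the question to every service and rank candidate answers by the
    number of independent services that returned them."""
    votes = Counter()
    for fetch_answers in services:          # each service returns a list of strings
        seen = set()
        for candidate in fetch_answers(question):
            key = normalize(candidate)
            if key and key not in seen:     # count each service at most once per answer
                votes[key] += 1
                seen.add(key)
    return votes.most_common()

# Toy stand-ins for two remote services.
service_a = lambda q: ["Mount Everest", "K2"]
service_b = lambda q: ["mount everest"]

print(triangulate("What is the highest mountain?", [service_a, service_b]))
# [('mount everest', 2), ('k2', 1)]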
Factoid question answering for spoken documents
In this dissertation, we present a factoid question answering system, specifically tailored for Question Answering (QA) on spoken documents.
This work explores, for the first time, which techniques can be robustly adapted from the usual QA on written documents to the more difficult spoken documents scenario. More specifically, we study new information retrieval (IR) techniques designed for speech, and utilize several levels of linguistic information for the speech-based QA task. These include named-entity detection with phonetic information, syntactic parsing applied to speech transcripts, and the use of coreference resolution.
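As a rough illustration of why approximate matching helps when the recognizer mangles named entities, the sketch below, a simplification using character-level similarity from the Python standard library rather than the phonetic information used in the thesis, scans ASR transcript tokens for near matches to a target entity.

# Sketch: approximate named-entity spotting in a noisy ASR transcript.
# This is a simplification using character similarity; the thesis uses
# phonetic information, which this stand-in only approximates.
from difflib import SequenceMatcher

def find_entity(transcript_tokens, entity, threshold=0.8):
    """Return (index, token, score) for tokens that approximately match entity."""
    hits = []
    target = entity.lower()
    for i, tok in enumerate(transcript_tokens):
        score = SequenceMatcher(None, tok.lower(), target).ratio()
        if score >= threshold:
            hits.append((i, tok, round(score, 2)))
    return hits

asr = "the speech by mister solana in brussels".split()
print(find_entity(asr, "Solana"))   # exact hit on 'solana'
print(find_entity(asr, "Brussel"))  # approximate hit on 'brussels'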
Our approach is largely based on supervised machine learning techniques, with special focus on the answer extraction step, and makes little use of handcrafted knowledge. Consequently, it should be easily adaptable to other domains and languages.
In the work resulting from this thesis, we have promoted and coordinated the creation of an evaluation framework for the task of QA on spoken documents. The framework, named QAst, provides multilingual corpora, evaluation questions, and answer keys. These corpora were used in the QAst evaluations held at the CLEF workshop in 2007, 2008 and 2009, thus helping the development of state-of-the-art techniques for this particular topic.
The presented QA system and all its modules are extensively evaluated on the European Parliament Plenary Sessions English corpus, composed of manual transcripts and automatic transcripts obtained by three different Automatic Speech Recognition (ASR) systems that exhibit significantly different word error rates. This data belongs to the CLEF 2009 track for QA on speech transcripts.
The main results confirm that syntactic information is very useful for learning to rank answer candidates, improving results on both manual and automatic transcripts unless the ASR quality is very low. Overall, the performance of our system is comparable to or better than the state of the art on this corpus, confirming the validity of our approach.
In this thesis, we present a factoid Question Answering (QA) system, specially tuned to work with spoken documents.
In this work we explore, for the first time, which of the techniques usually employed in QA on written documents are robust enough to work in the more difficult scenario of spoken documents. More specifically, we study new Information Retrieval (IR) methods designed to deal with speech, and we use several levels of linguistic information. These include named-entity detection using phonetic information, syntactic parsing applied to speech transcripts, and the use of a coreference detection and resolution subsystem.
Our approach to the problem relies largely on supervised machine learning techniques, focused especially on the answer extraction step, and uses as little handcrafted knowledge as possible. Consequently, the whole QA process can be adapted to other domains or languages with relative ease.
As an additional outcome of the work behind this thesis, we have promoted and coordinated the creation of an evaluation framework for the task of QA on spoken documents. This framework, named QAst (Question Answering on Speech Transcripts), provides a multilingual corpus of spoken documents, sets of evaluation questions, and their correct answers. These data have been used in the QAst evaluations held within the CLEF conferences in 2007, 2008 and 2009, thereby promoting and supporting the development of state-of-the-art techniques for this particular problem.
The QA system we present, and all of its submodules, have been extensively evaluated on the English EPPS corpus (transcripts of the European Parliament Plenary Sessions), which contains manual transcripts of all speeches as well as automatic transcripts obtained with three different Automatic Speech Recognition (ASR) systems. The recognizers have different characteristics and results, allowing a quantitative and qualitative evaluation of the task. These data belong to the QAst 2009 evaluation.
The main results of our work confirm that syntactic information is very useful for automatically learning to score the plausibility of candidate answers, improving previous results on both manual and automatic transcripts unless the ASR quality is very low. Overall, the performance of our system is comparable to or better than other state-of-the-art systems, thus confirming the validity of our approach.
Response Retrieval in Information-seeking Conversations
The increasing popularity of mobile Internet has led to several crucial changes in the way that people use search engines compared with traditional Web search on desktops. On one hand, there is limited output bandwidth with the small screen sizes of most mobile devices. Mobile Internet users prefer direct answers on the search engine result page (SERP). On the other hand, voice-based and text-based conversational interfaces are becoming increasingly popular, as shown by the wide adoption of intelligent assistant services and devices such as Amazon Echo, Microsoft Cortana and Google Assistant around the world. These important changes have triggered several new challenges that search engines have had to adapt to in order to better satisfy the information needs of mobile Internet users. In this dissertation, we investigate several aspects of single-turn answer retrieval and multi-turn information-seeking conversations to handle the new challenges of search on the mobile Internet.
We start from research on single-turn answer retrieval and analyze the weaknesses of existing deep learning architectures for answer ranking. We then propose an attention-based neural matching model with a value-shared weighting scheme and an attention mechanism to improve existing deep neural answer ranking models. Our proposed model achieves state-of-the-art performance for answer sentence retrieval compared with both feature-engineering-based methods and other neural models.
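To make the value-shared weighting idea concrete, the following numpy sketch (toy random embeddings and untrained weights, not the dissertation's implementation) buckets question-answer term similarities by value, shares one weight per bucket, and combines the per-term scores with a question-level attention vector.

# Sketch of value-shared weighting for answer ranking.
# Toy random embeddings and untrained weights; illustration only.
import numpy as np

rng = np.random.default_rng(0)
dim, n_q, n_a, n_bins = 50, 3, 7, 10

Q = rng.normal(size=(n_q, dim))          # question term embeddings (toy)
A = rng.normal(size=(n_a, dim))          # answer term embeddings (toy)

def cosine_matrix(Q, A):
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    return Qn @ An.T                      # shape (n_q, n_a), values in [-1, 1]

sim = cosine_matrix(Q, A)

# Value-shared weighting: bucket similarity values and share one weight per bucket,
# instead of sharing weights by position as a convolution would.
bin_edges = np.linspace(-1.0, 1.0, n_bins + 1)
bin_ids = np.clip(np.digitize(sim, bin_edges) - 1, 0, n_bins - 1)

bin_weights = rng.normal(size=n_bins)     # would be learned during training
per_term = np.zeros(n_q)
for b in range(n_bins):
    per_term += bin_weights[b] * (bin_ids == b).sum(axis=1)

attention = np.ones(n_q) / n_q            # stand-in for question term attention
score = float(attention @ per_term)       # final question-answer matching score
print(score)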
We then move on to study response retrieval in multi-turn information-seeking conversations beyond single-turn interactions. Much research on response selection in conversation systems models the matching patterns between the user input message (with or without context) and response candidates, ignoring external knowledge beyond the dialog utterances. We propose a learning framework on top of deep neural matching networks that leverages external knowledge through pseudo-relevance feedback and QA correspondence knowledge distillation for response retrieval. We also study how to integrate user intent modeling into neural ranking models to improve response retrieval performance. Finally, hybrid models of response retrieval and generation are investigated in order to combine the merits of these two different paradigms of conversation models.
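The pseudo-relevance feedback step can be sketched as follows (plain term-overlap scoring stands in for the deep matching networks used in the dissertation): the current context retrieves top-scoring historical responses, frequent terms from those responses expand the context, and candidate responses are re-scored against the expanded representation.

# Sketch: pseudo-relevance feedback (PRF) for response candidate re-ranking.
# Plain term-overlap scoring stands in for the neural matching network.
from collections import Counter

def tokens(text):
    return text.lower().split()

def overlap_score(query_terms, text):
    t = tokens(text)
    return sum(t.count(term) for term in set(query_terms))

def prf_rerank(context, candidates, history, k=2, n_expansion=3):
    """Expand the context with frequent terms from the top-k historical
    responses, then re-score the candidate responses."""
    ctx = tokens(context)
    top_hist = sorted(history, key=lambda h: overlap_score(ctx, h), reverse=True)[:k]
    expansion = Counter(w for h in top_hist for w in tokens(h))
    expanded = ctx + [w for w, _ in expansion.most_common(n_expansion)]
    return sorted(candidates, key=lambda c: overlap_score(expanded, c), reverse=True)

history = ["reset your password from the account settings page",
           "password resets require a verified email address"]
candidates = ["go to account settings and choose reset password",
              "our office hours are nine to five"]
print(prf_rerank("how do I reset my password", candidates, history))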
Our goal is to develop effective learning models for answer retrieval and information-seeking conversations, in order to improve the effectiveness and user experience when accessing information with a touch screen interface or a conversational interface, as commonly adopted by millions of mobile Internet devices.
Contextual question answering for the health domain
Studies have shown that natural language interfaces such as question answering and conversational systems allow information to be accessed and understood more easily by users who are unfamiliar with the nuances of the delivery mechanisms (e.g., keyword-based search engines) or have limited literacy in certain domains (e.g., unable to comprehend health-related content due to terminology barriers). In particular, the increasing use of the web for health information prompts us to reexamine our existing delivery mechanisms. We present enquireMe, a contextual question answering system that provides lay users with the ability to obtain responses about a wide range of health topics by vaguely expressing their information needs at the start and gradually refining them over the course of an interaction session using natural language. enquireMe allows the users to engage in 'conversations' about their health concerns, a process that can be therapeutic in itself. The system uses community-driven question-answer pairs from the web together with a decay model to deliver the top-scoring answers as responses to the users' unrestricted inputs. We evaluated enquireMe using benchmark data from WebMD and TREC to assess the accuracy of system-generated answers. Despite the absence of complex knowledge acquisition and deep language processing, enquireMe is comparable to state-of-the-art question answering systems such as START as well as the interactive systems from TREC.
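The decay idea can be sketched as follows (a simplified stand-in for enquireMe's scoring, with illustrative weights): terms from earlier turns keep contributing to the conversational context with exponentially decreasing weight, and community question-answer pairs are scored against this decayed context.

# Sketch: exponential decay over conversation turns for contextual QA scoring.
# Weights and scoring are illustrative, not the enquireMe implementation.

def decayed_context(turns, decay=0.5):
    """Map each term to a weight; the most recent turn gets weight 1.0,
    earlier turns are discounted by `decay` per turn."""
    weights = {}
    for age, turn in enumerate(reversed(turns)):      # age 0 == latest turn
        w = decay ** age
        for term in turn.lower().split():
            weights[term] = max(weights.get(term, 0.0), w)
    return weights

def score_qa_pair(question, answer, context_weights):
    text = (question + " " + answer).lower().split()
    return sum(context_weights.get(t, 0.0) for t in set(text))

turns = ["I keep getting headaches", "mostly in the morning"]
ctx = decayed_context(turns)
pair = ("what causes morning headaches", "possible causes include dehydration")
print(round(score_qa_pair(*pair, ctx), 2))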
RealText-asg: A Model to Present Answers Utilizing the Linguistic Structure of Source Question
Recent trends in Question Answering (QA) have led to numerous studies focusing on presenting answers in a form which closely resembles a human-generated answer. These studies have used a range of techniques which use the structure of knowledge, generic linguistic structures and template-based approaches to construct answers as close as possible to a human-generated answer, referred to as human competitive answers. This paper reports the results of an empirical study which uses the linguistic structure of the source question as the basis for a human competitive answer. We propose a typed-dependency-based approach to generate an answer sentence where the linguistic structure of the question is transformed and realized into a sentence containing the answer. We employ the factoid questions from the QALD-2 training question set to extract typed dependency patterns based on the root of the parse tree. Using the identified patterns we generate a rule set which is used to produce a natural language sentence containing the answer extracted from a knowledge source, realized as a linguistically correct sentence. The evaluation of the approach is performed using the QALD-2 testing factoid question set, with 78.84% accuracy. The top-10 patterns extracted from the training dataset were able to cover 69.19% of test questions.
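As a rough illustration of the idea (not the rule set extracted from QALD-2), the sketch below uses spaCy to locate the wh-token in a factoid question, substitutes the extracted answer for it, and realizes the remaining words as a declarative sentence; it assumes the en_core_web_sm model is installed and covers only the simplest subject and attribute patterns.

# Sketch: turn a factoid question plus its answer into an answer sentence
# by substituting the wh-word in the question's parse. Requires spaCy and
# the en_core_web_sm model; covers only very simple question patterns.
import spacy

nlp = spacy.load("en_core_web_sm")

def realize(question: str, answer: str) -> str:
    doc = nlp(question)
    words = []
    for tok in doc:
        if tok.tag_ in ("WP", "WDT", "WRB"):   # who / what / which / when ...
            words.append(answer)               # substitute the answer for the wh-word
        elif tok.is_punct:
            continue                           # drop the question mark
        else:
            words.append(tok.text)
    sentence = " ".join(words)
    return sentence[0].upper() + sentence[1:] + "."

print(realize("Who wrote Hamlet?", "Shakespeare"))
# Shakespeare wrote Hamlet.
print(realize("What is the capital of France?", "Paris"))
# Paris is the capital of France.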
Advanced techniques for personalized, interactive question answering
Using a computer to answer questions has been a human dream since the beginning of the digital era. A first step towards the achievement of such an ambitious goal is to deal with natural language to enable the computer to understand what its user asks.
The discipline that studies the connection between natural language and the representation of its meaning via computational models is computational linguistics. According to this discipline, Question Answering can be defined as the task that, given a question formulated in natural language, aims at finding one or more concise answers in the form of sentences or phrases.
Question Answering can be interpreted as a sub-discipline of information retrieval with the added challenge of applying sophisticated techniques to identify the complex syntactic and semantic relationships present in text. Although it is widely accepted that Question Answering represents a step beyond standard information retrieval, allowing a more sophisticated and satisfactory response to the user's information needs, it still shares a series of unsolved issues with the latter.
First, in most state-of-the-art Question Answering systems, the results are created independently of the questioner's characteristics, goals and needs. This is a serious limitation in several cases: for instance, a primary school child and a History student may need different answers to the question: When did the Middle Ages begin?
Moreover, users often issue queries not as standalone but in the context of a wider information need, for instance when researching a specific topic. Although it has recently been proposed that providing Question Answering systems with dialogue interfaces would encourage and accommodate the submission of multiple related questions and handle the user's requests for clarification, interactive Question Answering is still at its early stages.
Furthermore, an issue which still remains open in current Question Answering is that of efficiently answering complex questions, such as those invoking definitions and descriptions (e.g. What is a metaphor?). Indeed, it is difficult to design criteria to assess the correctness of answers to such complex questions.
These are the central research problems addressed by this thesis, and they are solved as follows.
An in-depth study on complex Question Answering led to the development of classifiers for complex answers. These exploit a variety of lexical, syntactic and shallow semantic features to perform textual classification using tree-kernel functions for Support Vector Machines.
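A minimal sketch of the tree-kernel classification idea follows (a simplification that counts shared grammar productions rather than the full subset-tree kernel used in the thesis): parse trees are read with NLTK and passed to an SVM through a precomputed Gram matrix, with toy trees and labels standing in for real training data.

# Sketch: classify answers with an SVM over a simple tree-overlap kernel.
# The kernel counts shared productions, a crude stand-in for a full
# subset-tree kernel; trees and labels below are toy examples.
from collections import Counter
import numpy as np
from nltk import Tree
from sklearn.svm import SVC

def production_kernel(t1: Tree, t2: Tree) -> float:
    """Number of grammar productions shared by the two parse trees."""
    c1, c2 = Counter(t1.productions()), Counter(t2.productions())
    return float(sum(min(c1[p], c2[p]) for p in c1))

def gram_matrix(trees_a, trees_b):
    return np.array([[production_kernel(a, b) for b in trees_b] for a in trees_a])

train_trees = [Tree.fromstring("(S (NP (DT a) (NN metaphor)) (VP (VBZ is) (NP (DT a) (NN figure))))"),
               Tree.fromstring("(S (NP (NNP Rome)) (VP (VBD fell) (PP (IN in) (NP (CD 476)))))")]
train_labels = [1, 0]          # 1 = definitional answer, 0 = not (toy labels)

clf = SVC(kernel="precomputed")
clf.fit(gram_matrix(train_trees, train_trees), train_labels)

test_tree = [Tree.fromstring("(S (NP (DT an) (NN idiom)) (VP (VBZ is) (NP (DT a) (NN phrase))))")]
print(clf.predict(gram_matrix(test_tree, train_trees)))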
The issue of personalization is solved by the integration of a User Modelling component within the Question Answering model. The User Model is able to filter and re-rank results based on the user's reading level and interests.
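For illustration (a toy stand-in for the User Model, with an ad-hoc syllable counter and made-up weights), the sketch below penalizes answers whose estimated Flesch-Kincaid grade exceeds the user's reading level and rewards overlap with the user's interest terms.

# Sketch: re-rank answers with a toy user model (reading level + interests).
# The syllable counter and weights are crude approximations for illustration.
import re

def count_syllables(word: str) -> int:
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Rough Flesch-Kincaid grade level estimate."""
    words = re.findall(r"[A-Za-z]+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

def rerank(answers, user_grade, interests, alpha=0.1, beta=1.0):
    def score(ans):
        grade_penalty = max(0.0, fk_grade(ans) - user_grade)
        interest_bonus = sum(1 for t in interests if t in ans.lower())
        return beta * interest_bonus - alpha * grade_penalty
    return sorted(answers, key=score, reverse=True)

answers = ["The Middle Ages began after the fall of Rome.",
           "Historiographical consensus situates the commencement of the medieval period circa 476 CE."]
print(rerank(answers, user_grade=5, interests=["rome"]))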
The issue of interactivity is approached by the development of a dialogue model and a dialogue manager suitable for open-domain interactive Question Answering. The utility of this model is corroborated by the integration of an interactive interface, allowing reference resolution and follow-up conversation, into the core Question Answering system, and by its evaluation.
Finally, the models of personalized and interactive Question Answering are integrated in a comprehensive framework forming a unified model for future Question Answering research.
Comparative web search questions
We analyze comparative questions, i.e., questions asking to compare different items, that were submitted to Yandex in 2012. Responses to such questions might be quite different from the simple “ten blue links” and could, for example, aggregate pros and cons of the different options as direct answers. However, changing the result presentation is an intricate decision such that the classification of comparative questions forms a highly precision-oriented task. From a year-long Yandex log, we annotate a random sample of 50,000 questions, 2.8% of which are comparative. For these annotated questions, we develop a precision-oriented classifier by combining carefully hand-crafted lexico-syntactic rules with feature-based and neural approaches, achieving a recall of 0.6 at a perfect precision of 1.0. After running the classifier on the full year log (on average, there is at least one comparative question per second), we analyze 6,250 comparative questions using more fine-grained subclasses (e.g., should the answer be a “simple” fact or rather a more verbose argument) for which individual classifiers are trained. An important insight is that more than 65% of the comparative questions demand argumentation and opinions, i.e., reliable direct answers to comparative questions require more than the facts from a search engine’s knowledge graph. In addition, we present a qualitative analysis of the underlying comparative information needs (separated into 14 categories like consumer electronics or health), their seasonal dynamics, and possible answers from community question answering platforms. © 2020 Copyright held by the owner/author(s). This work has been partially supported by the DFG through the project “ACQuA: Answering Comparative Questions with Arguments” (grants BI 1544/7-1 and HA 5851/2-1) as part of the priority program “RATIO: Robust Argumentation Machines” (SPP 1999). We thank Yandex and Mail.Ru for granting access to the data. The study was partially conducted during Pavel Braslavski’s research stay at the Bauhaus-Universität Weimar in 2018 supported by the DAAD. We also thank Ekaterina Shirshakova and Valentin Dittmar for their help in question annotation.
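The rule component can be illustrated with a small sketch (the patterns are simplified examples of my own, not the rule set from the paper): a handful of lexico-syntactic regular expressions fire only on clearly comparative phrasings, keeping precision high, while anything they miss is left to the feature-based and neural classifiers.

# Sketch: high-precision lexico-syntactic rules for spotting comparative questions.
# Patterns are illustrative examples, not the rule set used in the paper.
import re

COMPARATIVE_PATTERNS = [
    r"\bwhat is the difference between\b.+\band\b",
    r"\bwhich is (?:better|worse|faster|cheaper|healthier)\b.+\bor\b",
    r"\b\w+ vs\.? \w+\b",
    r"\bshould i (?:buy|choose|use)\b.+\bor\b",
]

def is_comparative(question: str) -> bool:
    q = question.lower().strip()
    return any(re.search(p, q) for p in COMPARATIVE_PATTERNS)

for q in ["What is the difference between tea and coffee?",
          "Which is better, Python or Java?",
          "iphone vs android battery life",
          "When did the Middle Ages begin?"]:
    print(is_comparative(q), q)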