15 research outputs found

    Follow-up question handling in the IMIX and Ritel systems: A comparative study

    One of the basic topics of question answering (QA) dialogue systems is how follow-up questions should be interpreted by a QA system. In this paper, we shall discuss our experience with the IMIX and Ritel systems, for both of which a follow-up question handling scheme has been developed and corpora have been collected. These two systems are each other's opposites in many respects: IMIX is multimodal, non-factoid, black-box QA, while Ritel is speech-based, factoid, keyword-based QA. Nevertheless, we will show that they are quite comparable, and that it is fruitful to examine the similarities and differences. We shall look at how the systems are composed, and how real, non-expert users interact with the systems. We shall also provide comparisons with systems from the literature where possible, and indicate where open issues lie and in what areas existing systems may be improved. We conclude that most systems have a common architecture with a set of common subtasks, in particular detecting follow-up questions and finding referents for them. We characterise these tasks using the typical techniques used for performing them, and data from our corpora. We also identify a special type of follow-up question, the discourse question, which is asked when the user is trying to understand an answer, and propose some basic methods for handling it.
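
    The detection subtask highlighted above can be illustrated with a minimal, purely heuristic sketch (hypothetical, not the IMIX or Ritel implementation): a question is flagged as a follow-up when it is very short, contains anaphoric pronouns, or opens with a cue typical of discourse questions. All cue lists and thresholds below are assumptions for illustration.

        # Minimal illustrative sketch of follow-up question detection (not the
        # IMIX/Ritel scheme): flag a question as a follow-up when it shows surface
        # cues such as anaphoric pronouns, elliptical shortness, or an opening that
        # suggests a discourse question about the previous answer.

        ANAPHORIC_CUES = {"it", "he", "she", "they", "that", "this", "there", "then"}
        DISCOURSE_CUES = ("why", "how come", "what do you mean")

        def is_follow_up(question: str, previous_turns: list[str]) -> bool:
            """Heuristic follow-up detector over a dialogue history."""
            if not previous_turns:                      # the first question cannot be a follow-up
                return False
            tokens = question.lower().rstrip("?!. ").split()
            if len(tokens) <= 3:                        # very short questions are usually elliptical
                return True
            if any(tok in ANAPHORIC_CUES for tok in tokens):
                return True                             # unresolved referent points back into the dialogue
            if any(question.lower().startswith(cue) for cue in DISCOURSE_CUES):
                return True                             # discourse question: the user probes the last answer
            return False

        print(is_follow_up("And when was he born?", ["Who painted Guernica?"]))  # True
        print(is_follow_up("Who painted Guernica?", []))                         # False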

    Survey on Evaluation Methods for Dialogue Systems

    In this paper we survey the methods and concepts developed for the evaluation of dialogue systems. Evaluation is a crucial part of the development process. Often, dialogue systems are evaluated by means of human evaluations and questionnaires; however, this tends to be very cost- and time-intensive. Thus, much work has been put into finding methods that reduce the involvement of human labour. In this survey, we present the main concepts and methods. For this, we differentiate between the various classes of dialogue systems (task-oriented dialogue systems, conversational dialogue systems, and question-answering dialogue systems). We cover each class by introducing the main technologies developed for these dialogue systems and then presenting the evaluation methods for that class.
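
    The abstract above does not name specific measures; as a small, hedged illustration, the sketch below computes two metrics often used for the classes it mentions: task success rate for task-oriented systems and mean reciprocal rank for question-answering dialogue systems. Function names and data are invented for the example.

        # Illustrative sketch of two automatic evaluation measures commonly used for
        # the dialogue system classes discussed above (names and data are hypothetical).

        def task_success_rate(dialogues: list[dict]) -> float:
            """Fraction of dialogues in which the user's goal was fulfilled."""
            return sum(d["goal_fulfilled"] for d in dialogues) / len(dialogues)

        def mean_reciprocal_rank(ranked_answer_lists: list[list[bool]]) -> float:
            """MRR over questions; each inner list marks which ranked answers are correct."""
            total = 0.0
            for answers in ranked_answer_lists:
                for rank, correct in enumerate(answers, start=1):
                    if correct:
                        total += 1.0 / rank
                        break
            return total / len(ranked_answer_lists)

        dialogues = [{"goal_fulfilled": True}, {"goal_fulfilled": False}, {"goal_fulfilled": True}]
        qa_runs = [[False, True, False], [True, False, False]]   # correct answer at rank 2, then rank 1
        print(task_success_rate(dialogues))      # 0.666...
        print(mean_reciprocal_rank(qa_runs))     # (0.5 + 1.0) / 2 = 0.75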

    Advanced techniques for personalized, interactive question answering

    Using a computer to answer questions has been a human dream since the beginning of the digital era. A first step towards the achievement of such an ambitious goal is to deal with natural language to enable the computer to understand what its user asks. The discipline that studies the connection between natural language and the representation of its meaning via computational models is computational linguistics. According to this discipline, Question Answering can be defined as the task that, given a question formulated in natural language, aims at finding one or more concise answers in the form of sentences or phrases. Question Answering can be interpreted as a sub-discipline of information retrieval with the added challenge of applying sophisticated techniques to identify the complex syntactic and semantic relationships present in text. Although it is widely accepted that Question Answering represents a step beyond standard information retrieval, allowing a more sophisticated and satisfactory response to the user's information needs, it still shares a series of unsolved issues with the latter. First, in most state-of-the-art Question Answering systems, the results are created independently of the questioner's characteristics, goals and needs. This is a serious limitation in several cases: for instance, a primary school child and a History student may need different answers to the question: When did the Middle Ages begin? Moreover, users often issue queries not as standalone questions but in the context of a wider information need, for instance when researching a specific topic. Although it has recently been proposed that providing Question Answering systems with dialogue interfaces would encourage and accommodate the submission of multiple related questions and handle the user's requests for clarification, interactive Question Answering is still at its early stages. Furthermore, an issue which still remains open in current Question Answering is that of efficiently answering complex questions, such as those invoking definitions and descriptions (e.g. What is a metaphor?). Indeed, it is difficult to design criteria to assess the correctness of answers to such complex questions. These are the central research problems addressed by this thesis, and they are solved as follows. An in-depth study on complex Question Answering led to the development of classifiers for complex answers. These exploit a variety of lexical, syntactic and shallow semantic features to perform textual classification using tree-kernel functions for Support Vector Machines. The issue of personalization is solved by the integration of a User Modelling component within the Question Answering model. The User Model is able to filter and re-rank results based on the user's reading level and interests. The issue of interactivity is approached by the development of a dialogue model and a dialogue manager suitable for open-domain interactive Question Answering. The utility of such a model is corroborated by the integration into the core Question Answering system of an interactive interface allowing reference resolution and follow-up conversation, and by its evaluation. Finally, the models of personalized and interactive Question Answering are integrated in a comprehensive framework forming a unified model for future Question Answering research.
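
    As a hedged sketch of the personalization idea described above (filtering and re-ranking answers against a user's reading level and interests), the snippet below blends a retrieval score with a crude user-fit score. The readability proxy, weights and example data are assumptions, not the thesis implementation.

        # Hypothetical sketch of User-Model-based re-ranking: candidate answers are
        # re-scored by how well their estimated reading level and topics match the
        # user's profile. The scoring weights and readability proxy are illustrative.

        from dataclasses import dataclass

        @dataclass
        class UserModel:
            reading_level: float          # average sentence length the user is comfortable with
            interests: set[str]           # topics the user cares about

        def estimated_reading_level(text: str) -> float:
            """Crude readability proxy: mean words per sentence."""
            sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
            return sum(len(s.split()) for s in sentences) / max(len(sentences), 1)

        def rerank(answers: list[tuple[str, float, set[str]]], user: UserModel) -> list[str]:
            """Each answer is (text, retrieval_score, topics); blend retrieval score with user fit."""
            def user_fit(text: str, topics: set[str]) -> float:
                level_penalty = abs(estimated_reading_level(text) - user.reading_level)
                interest_bonus = len(topics & user.interests)
                return interest_bonus - 0.1 * level_penalty
            scored = [(score + user_fit(text, topics), text) for text, score, topics in answers]
            return [text for _, text in sorted(scored, reverse=True)]

        child = UserModel(reading_level=8.0, interests={"history"})
        answers = [
            ("The Middle Ages began after the fall of Rome in 476 AD.", 0.9, {"history"}),
            ("Periodisation of the medieval era is contested among historiographers, with termini ranging across centuries.", 1.0, {"history"}),
        ]
        print(rerank(answers, child)[0])   # the simpler answer wins for the child profile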

    Robust Dialog Management Through A Context-centric Architecture

    This dissertation presents and evaluates a method of managing spoken dialog interactions with a robust attention to fulfilling the human user’s goals in the presence of speech recognition limitations. Assistive speech-based embodied conversation agents are computer-based entities that interact with humans to help accomplish a certain task or communicate information via spoken input and output. A challenging aspect of this task involves open dialog, where the user is free to converse in an unstructured manner. With this style of input, the machine’s ability to communicate may be hindered by poor reception of utterances, caused by a user’s inadequate command of a language and/or faults in the speech recognition facilities. Since speech-based input is emphasized, this endeavor involves the fundamental issues associated with natural language processing, automatic speech recognition and dialog system design. Driven by Context-Based Reasoning, the presented dialog manager features a discourse model that implements mixed-initiative conversation with a focus on the user’s assistive needs. The discourse behavior must maintain a sense of generality, where the assistive nature of the system remains constant regardless of its knowledge corpus. The dialog manager was encapsulated into a speech-based embodied conversation agent platform for prototyping and testing purposes. A battery of user trials was performed on this agent to evaluate its performance as a robust, domain-independent, speech-based interaction entity capable of satisfying the needs of its users.
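
    The context-centric idea can be sketched schematically (this is not the dissertation's architecture): each context bundles the cues that activate it and the behaviour it contributes, and the manager keeps the active context when a noisy recognition result matches nothing, which is one simple way to tolerate ASR errors. All names and data below are illustrative.

        # Schematic sketch of a context-centric dialog manager in the spirit of
        # Context-Based Reasoning (not the dissertation's actual system): when
        # nothing in a noisy ASR hypothesis matches, the manager stays in the
        # active context and re-prompts instead of switching.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Context:
            name: str
            trigger_words: set[str]
            response: str

        @dataclass
        class DialogManager:
            contexts: list[Context]
            active: Optional[Context] = None

            def step(self, asr_hypothesis: str) -> str:
                tokens = set(asr_hypothesis.lower().split())
                # Score every context by keyword overlap with the (possibly noisy) ASR output.
                scored = [(len(c.trigger_words & tokens), c) for c in self.contexts]
                best_score, best = max(scored, key=lambda pair: pair[0])
                if best_score == 0 and self.active is not None:
                    # Nothing matched: keep the current context and ask again.
                    return f"Sorry, could you rephrase? We were talking about {self.active.name}."
                if best_score > 0:
                    self.active = best          # switch to (or stay in) the best-matching context
                return (self.active or best).response

        dm = DialogManager(contexts=[
            Context("directions", {"where", "find", "room"}, "The lab is on the second floor."),
            Context("schedule", {"when", "time", "open"}, "We are open from 9 am to 5 pm."),
        ])
        print(dm.step("where can I find the room"))   # activates the directions context
        print(dm.step("uh mumble static"))            # no match: re-prompt within that context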

    Factoid question answering for spoken documents

    In this dissertation, we present a factoid question answering system specifically tailored for Question Answering (QA) on spoken documents. This work explores, for the first time, which techniques can be robustly adapted from the usual QA on written documents to the more difficult spoken-documents scenario. More specifically, we study new information retrieval (IR) techniques designed for speech, and utilize several levels of linguistic information for the speech-based QA task. These include named-entity detection with phonetic information, syntactic parsing applied to speech transcripts, and the use of coreference resolution. Our approach is largely based on supervised machine learning techniques, with special focus on the answer extraction step, and makes little use of handcrafted knowledge. Consequently, it should be easily adaptable to other domains and languages. As part of the work of this thesis, we have promoted and coordinated the creation of an evaluation framework for the task of QA on spoken documents. The framework, named QAst (Question Answering on Speech Transcripts), provides multi-lingual corpora, evaluation questions, and answer keys. These corpora have been used in the QAst evaluations held at the CLEF workshops of 2007, 2008 and 2009, thus helping the development of state-of-the-art techniques for this particular topic. The presented QA system and all its modules are extensively evaluated on the European Parliament Plenary Sessions English corpus, composed of manual transcripts and automatic transcripts obtained by three different Automatic Speech Recognition (ASR) systems that exhibit significantly different word error rates. This data belongs to the CLEF 2009 track for QA on speech transcripts. The main results confirm that syntactic information is very useful for learning to rank answer candidates, improving results on both manual and automatic transcripts unless the ASR quality is very low. Overall, the performance of our system is comparable to or better than the state of the art on this corpus, confirming the validity of our approach.
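
    To make the learning-to-rank step concrete, here is a toy, pointwise sketch (not the thesis system, whose features come from syntax, phonetics and coreference): invented surface features feed a perceptron-trained linear model that orders answer candidates.

        # Illustrative sketch of pointwise answer-candidate ranking for QA on
        # transcripts: each candidate gets a few hand-picked features and a linear
        # model learned with perceptron updates orders the candidates.

        def extract_features(question: str, candidate: str, is_named_entity: bool) -> list[float]:
            q_tokens, c_tokens = set(question.lower().split()), set(candidate.lower().split())
            overlap = len(q_tokens & c_tokens) / max(len(q_tokens), 1)
            return [1.0, overlap, float(is_named_entity), float(len(c_tokens) <= 5)]

        def train_perceptron(examples: list[tuple[list[float], int]], epochs: int = 10) -> list[float]:
            """examples: (feature_vector, label) with label 1 for correct answers, 0 otherwise."""
            w = [0.0] * len(examples[0][0])
            for _ in range(epochs):
                for x, y in examples:
                    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
                    if pred != y:
                        w = [wi + (y - pred) * xi for wi, xi in zip(w, x)]
            return w

        def rank(question: str, candidates: list[tuple[str, bool]], w: list[float]) -> list[str]:
            scored = [(sum(wi * xi for wi, xi in zip(w, extract_features(question, c, ne))), c)
                      for c, ne in candidates]
            return [c for _, c in sorted(scored, reverse=True)]

        # Toy training data and usage (invented example).
        train = [(extract_features("who chaired the session", "Hans-Gert Poettering", True), 1),
                 (extract_features("who chaired the session", "the session resumed", False), 0)]
        w = train_perceptron(train)
        print(rank("who chaired the session", [("the session resumed", False),
                                               ("Hans-Gert Poettering", True)], w)[0])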

    Analyse et détection automatique de disfluences dans la parole spontanée conversationnelle

    Extracting information from linguistic data has gained more and more attention over the last decades, in relation to the ever-increasing amount of information that has to be processed on a daily basis, and since the 1990s this interest in information extraction has extended to speech data. Speech data raises problems beyond those encountered with written data, in particular because of many phenomena specific to spoken language (hesitations, repetitions, corrections) and because automatic speech recognition systems applied to the speech signal potentially generate errors. Thus, extracting information from audio data means extracting information while taking into account the "noise" inherent to speech or generated by the recognition system, and it cannot be a simple application of methods that have proven themselves on written data. The use of techniques adapted to the processing of spoken data, taking into account its specificities with respect to both the speech signal and its transcription (manual as well as automatic), is a research theme in full development that raises new scientific challenges related to managing the variability of speech and its spontaneous modes of expression. Furthermore, the robust analysis of telephone conversations has also been the subject of a number of works, in the continuity of which this thesis lies. More specifically, this thesis focuses on the analysis of edit disfluencies and their realisation in conversational data from EDF call centres, using the speech signal and both manual and automatic transcriptions. This work draws on several domains, from the robust analysis of speech data to the analysis and management of aspects related to spoken expression. The aim of the thesis is to propose methods suited to such data that improve the text mining analyses carried out on the transcriptions (treatment of disfluencies). To address these issues, we have finely analysed the behaviour of phenomena characteristic of spontaneous speech (disfluencies) in conversational data from EDF call centres, and we have developed an automatic method for their detection using linguistic, acoustic-prosodic, discursive and para-linguistic features. The contributions of this thesis are structured along three lines of research. First, we propose a characterisation of call centre conversations from the point of view of spontaneous speech and of the phenomena that characterise it. Second, we have developed (i) an enrichment and processing chain for spoken data that is effective at several levels of analysis (linguistic, acoustic-prosodic, discursive, para-linguistic); (ii) a system for the automatic detection of edit disfluencies suited to conversational data, using the speech signal and the transcriptions (manual or automatic). Third, from a "resource" point of view, we have produced a corpus of automatic transcriptions of call centre conversations annotated with edit disfluencies (via a semi-automatic method).
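
    As a minimal, hedged illustration of the detection task (the thesis system combines linguistic, acoustic-prosodic, discursive and para-linguistic features; this sketch uses only two lexical cues), the snippet below tags filled pauses and immediate word repetitions in a transcript. The cue inventory and example are assumptions.

        # Minimal illustrative sketch of disfluency detection on a transcript
        # (not the thesis system): only filled pauses and immediate word
        # repetitions are flagged.

        FILLED_PAUSES = {"euh", "uh", "um", "ben", "hein"}   # assumed inventory, FR/EN mixed

        def tag_disfluencies(tokens: list[str]) -> list[tuple[str, str]]:
            """Return (token, tag) pairs with tag 'DISFL' or 'FLUENT'."""
            tags = []
            for i, tok in enumerate(tokens):
                low = tok.lower()
                if low in FILLED_PAUSES:
                    tags.append((tok, "DISFL"))             # filled pause
                elif i + 1 < len(tokens) and low == tokens[i + 1].lower():
                    tags.append((tok, "DISFL"))             # first copy of a repetition (reparandum)
                else:
                    tags.append((tok, "FLUENT"))
            return tags

        utterance = "je je voudrais euh changer mon contrat"
        print(tag_disfluencies(utterance.split()))
        # [('je', 'DISFL'), ('je', 'FLUENT'), ('voudrais', 'FLUENT'), ('euh', 'DISFL'), ...]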

    Estrategias para el acceso a contenidos Web mediante habla

    The goal of this thesis is to design and evaluate different strategies for accessing web content through speech. The work has focused on reusing existing web content and on devising a spoken interaction that is fast and user-friendly. In the first phase, the problem of generic conversion of web content for access through a voice browser was analysed, and two alternatives were proposed: automatic conversion and semi-automatic conversion. In the second phase, the use of a spoken dialogue system for accessing web content in restricted domains was proposed. The proposal is based on an information model and an interaction model. In the third phase, several experiments were carried out with a speech-driven information retrieval system, proposing various improvements that increase the system's performance.
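
    As a loose, hypothetical sketch of the automatic-conversion idea (not the strategy evaluated in the thesis), the snippet below turns the link structure of an existing HTML page into a numbered spoken menu that a voice browser could read out. The HTML sample and menu wording are invented.

        # Hypothetical sketch: extract the links of an HTML page with the standard
        # library parser and render them as a numbered spoken menu.

        from html.parser import HTMLParser

        class LinkExtractor(HTMLParser):
            """Collect (anchor text, href) pairs from an HTML document."""
            def __init__(self):
                super().__init__()
                self.links, self._href, self._text = [], None, []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    self._href = dict(attrs).get("href")
                    self._text = []

            def handle_data(self, data):
                if self._href is not None:
                    self._text.append(data.strip())

            def handle_endtag(self, tag):
                if tag == "a" and self._href is not None:
                    self.links.append((" ".join(t for t in self._text if t), self._href))
                    self._href = None

        def spoken_menu(html: str) -> str:
            parser = LinkExtractor()
            parser.feed(html)
            options = [f"Say {i} for {text}." for i, (text, _) in enumerate(parser.links, start=1)]
            return " ".join(options)

        page = '<html><body><a href="/news">Latest news</a> <a href="/weather">Weather</a></body></html>'
        print(spoken_menu(page))
        # Say 1 for Latest news. Say 2 for Weather.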