
    Selecting answers with structured lexical expansion and discourse relations: LIMSI's participation at QA4MRE 2013

    In this paper, we present LIMSI's participation in QA4MRE 2013. We tested two kinds of methods. The first focuses on complex questions, such as causal questions, and exploits discourse relations. Relation recognition shows promising results, but it must be improved before it has an impact on answer selection. The second method is based on semantic variations: we explored the English Wiktionary to find reformulations of words in definitions, and used these reformulations to index the documents and select passages in the Entrance Exams task.

    Ontology-Based Sentence Extraction for Answering Why-Question

    Most studies on why-question answering systems use keyword-based approaches. They rarely involve domain ontologies to capture the semantics of document contents, especially for detecting the presence of causal relations. Consequently, the word-mismatch problem often occurs and the system retrieves irrelevant answers. To solve this problem, we propose an answer extraction method that uses a semantic similarity measure with selective causality detection. Selective causality detection is applied because not all sentences belonging to an answer contain causality. Moreover, the motivation for using a semantic similarity measure in the scoring function is to obtain graded evidence for the presence of semantic annotations in a sentence, instead of a 0/1 decision. The semantic similarity measure is based on the shortest path and the maximum depth of the ontology graph. The evaluation compares the proposed method against comparable ontology-based methods, i.e., sentence extraction with Monge-Elkan using a 0/1 internal similarity function. The proposed method shows improvements in terms of MRR (16%, 0.79 vs. 0.68), P@1 (15%, 0.76 vs. 0.66), P@5 (14%, 0.8 vs. 0.7), and Recall (19%, 0.86 vs. 0.72).
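The shortest-path/maximum-depth similarity described above belongs to the family of path-based measures such as Leacock–Chodorow. A minimal sketch of that family (the toy ontology, its edges, and the exact scoring formula are illustrative assumptions, not the paper's actual domain ontology or measure):

```python
import math
from collections import deque

# Toy is-a ontology (illustrative assumption, not the paper's ontology).
EDGES = {
    "entity": ["event", "object"],
    "event": ["eruption", "flood"],
    "object": ["volcano", "river"],
    "eruption": [], "flood": [], "volcano": [], "river": [],
}

def _undirected(edges):
    """Adjacency with parent links added, for path search."""
    adj = {n: set(ch) for n, ch in edges.items()}
    for n, ch in edges.items():
        for c in ch:
            adj.setdefault(c, set()).add(n)
    return adj

def shortest_path_len(a, b, edges=EDGES):
    """Edge count of the shortest path between two concepts (BFS)."""
    adj = _undirected(edges)
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # disconnected concepts

def max_depth(root="entity", edges=EDGES):
    """Longest root-to-leaf edge count of the ontology."""
    return max([0] + [1 + max_depth(c, edges) for c in edges.get(root, [])])

def path_similarity(a, b):
    """Leacock–Chodorow-style score: higher when concepts are closer."""
    d = shortest_path_len(a, b)
    return -math.log((d + 1) / (2.0 * max_depth()))
```

Sibling concepts ("eruption", "flood") score higher than concepts whose connecting path crosses the whole hierarchy, which is the graded behavior the abstract contrasts with a 0/1 match.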

    Question Answering System : A Review On Question Analysis, Document Processing, And Answer Extraction Techniques

    A Question Answering System automatically provides an answer to a question posed by a human in natural language. Such a system consists of question analysis, document processing, and answer extraction modules. The question analysis module translates the query into a form that the document processing module can process. Document processing identifies candidate documents containing answers relevant to the user query. The answer extraction module then receives the set of passages from the document processing module and determines the best answers for the user. The challenge in optimizing a Question Answering framework is to increase the performance of all modules in the framework; unoptimized modules lead to less accurate answers. Based on these issues, the objective of this study is to review the current state of question analysis, document processing, and answer extraction techniques. The study reveals several potential research issues, namely morphology analysis, question classification, and term-weighting algorithms for question classification.
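The three-module decomposition in this review can be sketched as a minimal pipeline. The heuristics inside each module (wh-word rules, term overlap, picking the top passage) are placeholder assumptions standing in for the real techniques the survey covers:

```python
def analyze_question(question):
    """Question analysis: derive an expected answer type and query terms."""
    q = question.lower()
    qtype = ("PERSON" if q.startswith("who") else
             "REASON" if q.startswith("why") else "OTHER")
    stop = {"who", "why", "what", "is", "the"}
    terms = [w.strip("?") for w in q.split() if w not in stop]
    return qtype, terms

def process_documents(terms, documents):
    """Document processing: rank candidate passages by term overlap."""
    scored = [(sum(t in doc.lower() for t in terms), doc) for doc in documents]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

def extract_answer(qtype, passages):
    """Answer extraction (toy): return the best passage;
    a real module would extract a typed answer span."""
    return passages[0] if passages else None

def answer(question, documents):
    """Chain the three modules exactly as the review describes."""
    qtype, terms = analyze_question(question)
    return extract_answer(qtype, process_documents(terms, documents))
```

The point of the skeleton is the data flow: each module's output is the next module's input, so a weak module caps the accuracy of the whole chain, which is the optimization challenge the abstract raises.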

    Sistem Question Answering Bahasa Indonesia Untuk Pertanyaan Non-factoid (An Indonesian-Language Question Answering System for Non-factoid Questions)

    The focus of this research is to develop QA data and a QA system for Bahasa Indonesia for non-factoid questions; it is the first non-factoid QA research for Bahasa Indonesia. The QA system consists of three components: a question analyzer, a paragraph retriever, and an answer finder. In the question analyzer, assuming that the question posed is a simple one, we use a simple rule-based system that relies on the question word used ("apa"/what, "mengapa"/why, and "bagaimana"/how). The paragraph retriever obtains paragraphs through keyword search, with or without stemming. The answer finder obtains answers using specific word patterns defined in advance for each question type. For this component, we conclude that using non-stemmed keywords together with stemmed keywords gives better answer accuracy than using non-stemmed keywords alone or stemmed keywords alone. Using 90 questions collected from 10 Indonesian speakers and 61 source documents, we obtained MRR values of 0.7689, 0.5925, and 0.5704 for definition, reason, and method questions respectively.
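The rule-based question analyzer described above, which keys on the question word used, can be sketched as a first-match rule table. The mapping of question words to the paper's three question types follows the abstract; the exact matching logic is an illustrative assumption:

```python
# Question-word → question-type rules, as in the paper's analyzer:
# "apa" (what) → definition, "mengapa" (why) → reason,
# "bagaimana" (how) → method. The matching logic is illustrative.
RULES = {
    "apa": "definition",
    "mengapa": "reason",
    "bagaimana": "method",
}

def classify_question(question):
    """Return the question type of a simple Indonesian question,
    based on the first recognized question word."""
    for word in question.lower().rstrip("?").split():
        if word in RULES:
            return RULES[word]
    return "unknown"
```

Scanning every token rather than only the first lets the rule fire for questions that do not open with the question word, at the cost of possible false matches in longer questions.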

    Retrieve-and-Read: Multi-task Learning of Information Retrieval and Reading Comprehension

    This study considers the task of machine reading at scale (MRS) wherein, given a question, a system first performs the information retrieval (IR) task of finding relevant passages in a knowledge source and then carries out the reading comprehension (RC) task of extracting an answer span from the passages. Previous MRS studies, in which the IR component was trained without considering answer spans, struggled to accurately find a small number of relevant passages from a large set of passages. In this paper, we propose a simple and effective approach that integrates the IR and RC tasks via supervised multi-task learning, so that the IR component can be trained with answer spans taken into account. Experimental results on the standard benchmark, answering SQuAD questions using the full Wikipedia as the knowledge source, showed that our model achieved state-of-the-art performance. Moreover, we thoroughly evaluated the individual contributions of our model components with our new Japanese dataset and SQuAD. The results showed significant improvements in the IR task and provided a new perspective on IR for RC: it is effective to teach which part of the passage answers the question rather than to give only a relevance score to the whole passage. Comment: 10 pages, 6 figures. Accepted as a full paper at CIKM 201
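The two-stage retrieve-and-read flow can be sketched end to end. The overlap-based retriever and the sentence-picking "reader" below are toy stand-ins (the paper's components are neural and jointly trained); only the control flow matches the MRS setup described above:

```python
def _tokens(text):
    """Lowercased, punctuation-stripped token set."""
    return set(text.lower().replace(".", " ").replace("?", " ").split())

def retrieve(question, passages, k=2):
    """IR stage: rank passages by question-term overlap, keep the top k."""
    q = _tokens(question)
    return sorted(passages, key=lambda p: len(q & _tokens(p)), reverse=True)[:k]

def read(question, passage):
    """RC stage (toy): pick the sentence that best overlaps the question;
    the real model extracts an answer span instead."""
    q = _tokens(question)
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & _tokens(s)))

def machine_reading_at_scale(question, passages):
    """Retrieve-and-read: find the best passage, then extract from it."""
    return read(question, retrieve(question, passages, k=1)[0])
```

The paper's key point is visible even here: if the retriever is scored only on whole-passage relevance, it cannot tell which passage actually contains an answer span, which is what the multi-task training fixes.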

    Factoid question answering for spoken documents

    In this dissertation, we present a factoid question answering system specifically tailored for Question Answering (QA) on spoken documents. This work explores, for the first time, which techniques can be robustly adapted from the usual QA on written documents to the more difficult spoken-document scenario. More specifically, we study new information retrieval (IR) techniques designed for speech, and utilize several levels of linguistic information for the speech-based QA task. These include named-entity detection with phonetic information, syntactic parsing applied to speech transcripts, and the use of coreference resolution. Our approach is largely based on supervised machine learning techniques, with special focus on the answer extraction step, and makes little use of handcrafted knowledge. Consequently, it should be easily adaptable to other domains and languages. As part of the work behind this thesis, we initiated and coordinated the creation of an evaluation framework for the task of QA on spoken documents. The framework, named QAst (Question Answering on Speech Transcripts), provides multilingual corpora, evaluation questions, and answer keys. These corpora were used in the QAst evaluations held at the CLEF workshop in 2007, 2008, and 2009, thus helping the development of state-of-the-art techniques for this particular topic. The presented QA system and all its modules are extensively evaluated on the European Parliament Plenary Sessions English corpus, composed of manual transcripts and automatic transcripts obtained by three different Automatic Speech Recognition (ASR) systems that exhibit significantly different word error rates. This data belongs to the CLEF 2009 track for QA on speech transcripts. The main results confirm that syntactic information is very useful for learning to rank answer candidates, improving results on both manual and automatic transcripts unless the ASR quality is very low. Overall, the performance of our system is comparable to or better than the state of the art on this corpus, confirming the validity of our approach.

    A comparative approach to Question Answering Systems

    In this paper I will analyze three different algorithms and approaches for implementing Question Answering Systems (QA-Systems). I will analyze the efficiency, strengths, and weaknesses of multiple algorithms by explaining them in detail and comparing them with each other. The overarching aim of this thesis is to explore ideas that can be used to create a truly open-context QA-System; open-context QA-Systems remain an open problem. The algorithms and approaches presented in this work focus on complex questions. Complex questions are usually verbose, and the context of the question is as important for answering the query as the question itself. Such questions represent an interesting problem in the field because they can be written and answered in a number of distinct ways; the answer structure is not always the same, and the QA-System needs to compensate for this. The analysis of complex questions differs between contexts, where a context is the topic to which a complex question belongs, e.g. biology, literature, etc. The analysis of the answer also differs according to the corpus used: a corpus is a set of documents, belonging to a specific context, in which we can find the answer to a specified question. I will start by explaining the various algorithms and approaches, then analyze their different parts, and finally present some ideas on how to implement QA-Systems.

    Answer Re-ranking with bilingual LDA and social QA forum corpus

    One of the most important tasks for AI is finding valuable information on the Web. In this research, we develop a question answering system that retrieves answers based on a topic model, bilingual latent Dirichlet allocation (Bi-LDA), and knowledge from a social question answering (SQA) forum such as Yahoo! Answers. Treating question-and-answer pairs from an SQA forum as a bilingual corpus, a topic shared across question and answer documents is assigned to each term, so that the answer re-ranking system can infer the correlation of terms between questions and answers. A query expansion approach based on the topic model obtains a 9% higher top-150 mean reciprocal rank (MRR@150) and a 16% better geometric mean rank compared to a simple matching system using Okapi BM25. In addition, this thesis compares performance under several experimental settings to clarify which factors drive the results.
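MRR@150, the headline metric above, averages the reciprocal rank of the first correct answer within the top 150 returned candidates. A short sketch of the computation (the ranked lists in the usage example are made up):

```python
def mrr_at_k(ranked_answer_lists, correct_answers, k=150):
    """Mean reciprocal rank: average over questions of 1/rank of the
    first correct answer within the top-k candidates, or 0 if the
    correct answer does not appear at all."""
    total = 0.0
    for ranked, correct in zip(ranked_answer_lists, correct_answers):
        for rank, answer in enumerate(ranked[:k], start=1):
            if answer == correct:
                total += 1.0 / rank
                break  # only the first hit counts
    return total / len(ranked_answer_lists)
```

Because only the first correct hit contributes, MRR rewards systems that push a right answer toward rank 1, which is exactly what the Bi-LDA re-ranking step is meant to do.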