24 research outputs found

    Web Question Answering Systems: Measuring Task Performance


    Open- vs. Restricted-Domain QA Systems in the Biomedical Field

    Question Answering Systems (hereinafter QA systems) stand as a new alternative to Information Retrieval Systems. We conducted a study to evaluate the efficiency of QA systems as terminological sources for physicians, specialized translators, and users in general. To this end we analysed the performance of two open-domain and two restricted-domain QA systems. The research entailed a collection of one hundred and fifty definitional questions from WebMed. We studied the sources that the QA systems used to retrieve the answers, and later applied a range of evaluation measures to assess the quality of the answers. By analysing the results obtained from asking the 150 questions of the QA systems MedQA, START, QuALiM and HONqa, it was possible to evaluate the systems’ operation by applying specific metrics. Despite the limitations these systems exhibit, as they are not accessible to everyone and are not always fully developed, it was confirmed that all four QA systems are valid and useful for obtaining definitional medical information, in that they offer coherent and precise answers. The results are encouraging because they present this type of tool as a new possibility for gathering precise, reliable and specific information in a short period of time.
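The abstract does not name the specific evaluation measures applied to the 150 answers, so the following is only a hedged sketch of two metrics commonly used for ranked QA output, Mean Reciprocal Rank and first-answer accuracy; the judgement data shown is hypothetical.

```python
def mean_reciprocal_rank(rankings):
    """rankings: for each question, the 1-based rank of the first
    correct answer returned by the system, or None if no returned
    answer was judged correct."""
    return sum(1.0 / r for r in rankings if r) / len(rankings)

def first_answer_accuracy(rankings):
    """Fraction of questions whose top-ranked answer was correct."""
    return sum(1 for r in rankings if r == 1) / len(rankings)

# Hypothetical judgements for five definitional questions on one system:
ranks = [1, 2, None, 1, 3]
print(round(mean_reciprocal_rank(ranks), 3))  # 0.567
print(first_answer_accuracy(ranks))           # 0.4
```

Scoring each of the four systems this way over the same question set would make the comparison in the study directly reproducible.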


    Algorithmic Translation of Syntactically Parsed Possessive Expressions into a Given Formal Language

    Numerous international studies [5; 7; 10; 17; 20] have addressed the semantic modelling of possessive constructions and the presentation of their semantic characteristics; however, the models developed so far do not provide for the automated generation of a formal sentence that corresponds exactly to a given possessive construction. In this paper we show how the problem can be solved in a general form, and we point out the limits of algorithm-supported processing and the tasks that remain to be solved.

    An Analyst's Assistant for the Interpretation of Vehicle Track Data

    This report describes the Analyst's Assistant, a software system for language-interactive, collaborative user-system interpretation of events, specifically targeting vehicle events that can be recognized on the basis of vehicle track data. The Analyst's Assistant uses language not only as a means of interaction, but also as a basis for the internal representation of scene information, background knowledge, and results of interpretation. Building on this basis, the system demonstrates emerging intelligent-systems techniques related to event recognition, summarization of events, partitioning of subtasks between user and system, and handling of language and graphical references to scene entities during interactive analysis.

    How the brain represents language and answers questions? Using an AI system to understand the underlying neurobiological mechanisms

    To understand the computations that underlie high-level cognitive processes we propose a framework of mechanisms that could in principle implement START, an AI program that answers questions using natural language. START organizes a sentence into a series of triplets, each containing three elements (subject, verb, object). We propose that the brain similarly defines triplets and then chunks the three elements into a spatial pattern. A complete sentence can be represented using up to 7 triplets in a working memory buffer organized by theta and gamma oscillations. This buffer can transfer information into long-term memory networks where a second chunking operation converts the serial triplets into a single spatial pattern in a network, with each triplet (with corresponding elements) represented in specialized subregions. The triplets that define a sentence become synaptically linked, thereby encoding the sentence in synaptic weights. When a question is posed, there is a search for the closest stored memory (having the greatest number of shared triplets). We have devised a search process that does not require that the question and the stored memory have the same number of triplets or have triplets in the same order. Once the most similar memory is recalled and undergoes 2-level dechunking, the sought-for information can be obtained by element-by-element comparison of the key triplet in the question to the corresponding triplet in the retrieved memory. This search may require a reordering to align corresponding triplets, the use of pointers that link different triplets, or the use of semantic memory. Our framework uses 12 network processes; existing models can implement many of these, but in other cases we can only suggest neural implementations. Overall, our scheme provides the first view of how language-based question answering could be implemented by the brain.
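The triplet-based search described above can be sketched in a few lines: sentences are stored as (subject, verb, object) triplets, the closest memory is the one sharing the most triplets (independent of order and count), and the answer is read off by element-by-element comparison of the key triplet. The parsing into triplets is hand-coded here; START derives it from natural language, and the neural buffer and chunking machinery are not modelled.

```python
def shared_triplets(question, memory):
    # Order- and length-independent overlap, as in the proposed search.
    return len(set(question) & set(memory))

def closest_memory(question_triplets, stored_memories):
    return max(stored_memories,
               key=lambda m: shared_triplets(question_triplets, m))

def answer(key, memory):
    # Element-by-element comparison of the question's key triplet to the
    # best-aligned stored triplet; the mismatched element is the answer.
    for t in memory:
        if sum(a == b for a, b in zip(key, t)) == 2:
            return next(b for a, b in zip(key, t) if a != b)
    return None

memories = [
    [("John", "gave", "book"), ("book", "is", "red")],
    [("Mary", "bought", "car"), ("car", "is", "fast"), ("Mary", "likes", "car")],
]
question = [("Mary", "bought", "what"), ("Mary", "likes", "car")]
best = closest_memory(question, memories)
print(answer(("Mary", "bought", "what"), best))  # car
```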

    Exploiting syntactic relations for question answering

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 61-66). Recently there has been a resurgent interest in syntax-based approaches to information access, as a means of overcoming the limitations of keyword-based approaches. So far attempts to use syntax have been ad hoc, choosing to use some syntactic information but still ignoring most of the tree structure. This thesis describes the design and implementation of SMARTQA, a proof-of-concept question answering system that compares syntactic trees in a principled manner. Specifically, SMARTQA uses a tree edit-distance algorithm to calculate the similarity between unordered, unrooted syntactic trees. The general case of this problem is NP-complete; in practice, SMARTQA demonstrates that an optimized implementation of the algorithm can be feasibly used for question answering applications. By Daniel Loreto. M.Eng.

    Answering definitional questions before they are asked

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 75-77). Most question answering systems narrow down their search space by issuing a boolean IR query on a keyword-indexed corpus. This technique often proves futile for definitional questions, because they only contain one keyword or name. Thus, an IR search for only that term is likely to produce many spurious results; documents that contain mentions of the keyword, but not in a definitional context. An alternative approach is to glean the corpus in pre-processing for syntactic constructs in which entities are defined. In this thesis, I describe a regular expression language for detecting such constructs, with the help of a part-of-speech tagger and a named-entity recognizer. My system, named CoL. ForBIN, extracts entities and their definitions, and stores them in a database. This reduces the task of definitional question answering to a simple database lookup. By Aaron D. Fernandes. M.Eng.
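The pre-processing step described above can be sketched as a pair of regular expressions over plain text. This is only an illustrative simplification: the thesis's actual patterns run over POS-tagged, named-entity-tagged text, and the patterns, sentences, and database below are hypothetical.

```python
import re

PATTERNS = [
    # Copula construct: "X is/was a/an/the Y ..."
    re.compile(r"^(?P<entity>[A-Z][\w .-]*?) (?:is|was) (?:a|an|the) "
               r"(?P<definition>.+)$"),
    # Appositive construct: "X, a/an Y, ..."
    re.compile(r"^(?P<entity>[A-Z][\w .-]*?), (?:a|an) "
               r"(?P<definition>[^,]+),"),
]

def extract_definitions(sentence):
    """Return (entity, definition) if the sentence matches a
    definitional construct, else None."""
    for pat in PATTERNS:
        m = pat.match(sentence)
        if m:
            return m.group("entity"), m.group("definition")
    return None

corpus = [
    "Aspirin is a drug used to reduce pain and fever.",
    "Linnaeus, a Swedish botanist, formalized binomial nomenclature.",
    "The weather was nice yesterday.",   # no definitional construct
]
db = dict(filter(None, map(extract_definitions, corpus)))

# Answering "What is aspirin?" becomes a database lookup:
print(db["Aspirin"])  # drug used to reduce pain and fever.
```

Pre-extracting definitions this way sidesteps the spurious-hit problem of a one-keyword boolean IR query.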