150 research outputs found

    How to Evaluate your Question Answering System Every Day and Still Get Real Work Done

    In this paper, we report on Qaviar, an experimental automated evaluation system for question answering applications. The goal of our research was to find an automatically calculated measure that correlates well with human judges' assessment of answer correctness in the context of question answering tasks. Qaviar judges each response by computing recall against the stemmed content words in the human-generated answer key, and counts the answer correct if that recall exceeds a given threshold. We determined that the answer correctness predicted by Qaviar agreed with the human judges 93% to 95% of the time. Furthermore, 41 question-answering systems were ranked by both Qaviar and human assessors, and these rankings correlated with a Kendall's Tau measure of 0.920, compared to a correlation of 0.956 between human assessors on the same data. Comment: 6 pages, 3 figures, to appear in Proceedings of the Second International Conference on Language Resources and Evaluation (LREC 2000).
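
    The recall-threshold judgement described above can be sketched in a few lines of Python. This is not the authors' code: the tiny suffix stemmer, the stopword list, and the 0.3 threshold below are illustrative assumptions standing in for the stemming, stopword filtering, and tuned threshold the paper describes.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "was"}  # illustrative subset

def stem(word: str) -> str:
    """Crude suffix-stripping stemmer; stands in for a real stemmer (e.g. Porter)."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def content_terms(text: str) -> set[str]:
    """Lowercase, tokenize, drop stopwords, and stem the remaining content words."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return {stem(t) for t in tokens if t not in STOPWORDS}

def judge(response: str, answer_key: str, threshold: float = 0.3) -> bool:
    """Count the response correct if its recall of answer-key terms meets the threshold."""
    key_terms = content_terms(answer_key)
    if not key_terms:
        return False
    recall = len(key_terms & content_terms(response)) / len(key_terms)
    return recall >= threshold

# Example: a response covering all key terms is judged correct.
print(judge("Neil Armstrong walked on the Moon in 1969", "Neil Armstrong"))  # True
```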

    Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution

    Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be proposed: it should be considered as a task of anaphor resolution rather than coreference resolution. It will be further argued that high-precision approaches to anaphor resolution optimally match the specific requirements. Three such approaches will be described and empirically evaluated, and the implications for Text Summarization and Question Answering will be discussed.

    SEMONTOQA: A Semantic Understanding-Based Ontological Framework for Factoid Question Answering

    This paper presents an outline of an Ontological and Semantic understanding-based model (SEMONTOQA) for an open-domain factoid Question Answering (QA) system. The outlined model analyses unstructured English natural language texts to a vast extent and represents the inherent contents in an ontological manner. The model locates and extracts useful information from the text for various question types and builds a semantically rich knowledge-base that is capable of answering different categories of factoid questions. The system model converts the unstructured texts into a minimalistic, labelled, directed graph that we call a Syntactic Sentence Graph (SSG). An Automatic Text Interpreter using a set of pre-learnt Text Interpretation Subgraphs and patterns tries to understand the contents of the SSG in a semantic way. The system proposes a new feature and action based Cognitive Entity-Relationship Network designed to extend the text understanding process to an in-depth level. Application of supervised learning allows the system to gradually grow its capability to understand the text in a more fruitful manner. The system incorporates an effective Text Inference Engine which takes the responsibility of inferring the text contents and isolating entities, their features, actions, objects, associated contexts and other properties required for answering questions. A similar understanding-based question processing module interprets the user's need in a semantic way. An Ontological Mapping Module, with the help of a set of pre-defined strategies designed for different classes of questions, is able to perform a mapping between a question's ontology and the set of ontologies stored in the background knowledge-base. Empirical verification is performed to show the usability of the proposed model. The results achieved show that this model can be used effectively as a semantic understanding-based alternative QA system.
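
    To make the intermediate representation concrete, a labelled, directed sentence graph such as the SSG can be held in a very small data structure. The sketch below is only an assumption about the general shape of that structure (the node labels, relation names, and example sentence are invented for illustration); the paper's actual graph construction and subgraph matching are considerably richer.

```python
from dataclasses import dataclass, field

@dataclass
class SSG:
    """Minimal labelled, directed graph in the spirit of a Syntactic Sentence Graph."""
    nodes: dict[str, str] = field(default_factory=dict)              # node id -> label (e.g. lemma/POS)
    edges: list[tuple[str, str, str]] = field(default_factory=list)  # (head id, relation label, dependent id)

    def add_node(self, node_id: str, label: str) -> None:
        self.nodes[node_id] = label

    def add_edge(self, head: str, relation: str, dependent: str) -> None:
        self.edges.append((head, relation, dependent))

    def neighbours(self, node_id: str) -> list[tuple[str, str]]:
        """Outgoing (relation, dependent) pairs for one node; a subgraph matcher would walk these."""
        return [(rel, dep) for head, rel, dep in self.edges if head == node_id]

# Hypothetical encoding of "Einstein developed relativity":
g = SSG()
g.add_node("n1", "Einstein/PROPN")
g.add_node("n2", "develop/VERB")
g.add_node("n3", "relativity/NOUN")
g.add_edge("n2", "subject", "n1")
g.add_edge("n2", "object", "n3")
print(g.neighbours("n2"))  # [('subject', 'n1'), ('object', 'n3')]
```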

    The Only Constant Is Change: A Narrative on Ten Years of Collaborative Chat Reference Service at San Jose Public Library

    This article documents and highlights the evolution of collaborative, web-based chat reference service at a large metropolitan public library from 2000 to 2010.

    Ten years of scientific support for integrating circular economy requirements in the EU ecodesign directive: Overview and lessons learnt

    The paper presents and analyses the REAPro research programme, led at the JRC, which allowed the Commission to move from the formulation in 2011 of a general policy need to improve the circularity of products through design, to the concrete implementation in 2019 of innovative and ambitious circular economy criteria in European market-entry legislation. This policy innovation entailed the robust development of complementary components along the policy process, including policy agenda setting (better formulation of the policy need), policy formulation (e.g. identification of indicators to measure resource efficiency of products), and policy implementation (initiation of standardization activities). The paper looks back over 10 years of scientific support to policy and draws some conclusions concerning the needs of scientific support for policy making.

    A discourse-based approach for Arabic question answering

    The treatment of complex questions with explanatory answers involves searching for arguments in texts. Because of the prominent role that discourse relations play in reflecting text-producers’ intentions, capturing the underlying structure of a text is a valuable guide for this task. From our extensive review, a system for automatic discourse analysis that creates full rhetorical structures for large-scale Arabic texts is currently unavailable. This is due to the high computational complexity involved in processing the large number of hypothesized relations associated with large texts. Therefore, more practical approaches should be investigated. This paper presents a new Arabic Text Parser oriented towards question answering systems dealing with لماذا “why” and كيف “how to” questions. The Text Parser presented here treats the sentence as the basic unit of text and incorporates a set of heuristics to avoid computational explosion. With this approach, the developed question answering system achieved a significant improvement over the baseline, with a Recall of 68% and an MRR of 0.62.
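
    For readers unfamiliar with the reported metric, the Mean Reciprocal Rank (MRR) averages the reciprocal rank of the first correct answer over all questions, scoring 0 when no correct answer is returned. A minimal sketch, with made-up ranks:

```python
from typing import Optional

def mean_reciprocal_rank(first_correct_ranks: list[Optional[int]]) -> float:
    """MRR over questions: 1/rank of the first correct answer, 0 if none was returned."""
    scores = [0.0 if rank is None else 1.0 / rank for rank in first_correct_ranks]
    return sum(scores) / len(scores)

# Hypothetical results for five 'why'/'how to' questions:
# correct answer found at ranks 1, 2, 1, not found, and 3.
print(mean_reciprocal_rank([1, 2, 1, None, 3]))  # (1 + 0.5 + 1 + 0 + 1/3) / 5 ≈ 0.567
```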

    Computing at Lehigh


    Linguistic survey of south-eastern Queensland


    The Scrivener’s Secrets Seen Through the Spyglass: GCHQ and the International Right to Journalistic Expression

    As part of the U.K.’s electronic surveillance program, the Government Communications Headquarters (GCHQ), which traces its origins to efforts begun in 1909 to combat German spies, now collects metadata from both foreigners and its own citizens. Through the express statutory authority of the Regulation of Investigatory Powers Act of 2000 (RIPA) and a loophole in section 94 of the Telecommunications Act of 1984, the GCHQ collects metadata, which is all of the information that is extrinsic to the actual contents of a communication. The GCHQ can request an authorization from a public authority—a member of its own staff—to collect traffic data, service use information, or subscriber information, either from the relevant communications service provider or through its own interception of traffic. This metadata collection interferes with journalists’ ability to function as the pillar of democratic society that the international community expressly values. Specifically, the U.K. government’s statutory scheme contravenes the freedom of journalistic expression contained within Article 10 of the European Convention on Human Rights. This Note argues that the U.K. government must change its legislative, philosophical, and administrative approaches to electronic surveillance in order to comport with European democratic principles.
