
    Question Answering: CNLP at the TREC-10 Question Answering Track

    This paper describes the retrieval experiments for the main task and list task of the TREC-10 question answering track. The question answering system described automatically finds answers to questions in a large document collection. The system uses a two-stage retrieval approach to answer finding based on matching of named entities, linguistic patterns, and keywords. In answering a question, the system carries out a detailed query analysis that produces a logical query representation, an indication of the question focus, and answer clue words.

    Question Answering: CNLP at the TREC-2002 Question Answering Track

    This paper describes the retrieval experiments for the main task and list task of the TREC-2002 question answering track. The question answering system described automatically finds answers to questions in a large document collection. The system uses a two-stage retrieval approach to answer finding based on matching of named entities, linguistic patterns, and keywords, together with a new inference module. In answering a question, the system carries out a detailed query analysis that produces a logical query representation, an indication of the question focus, and answer clue words.

    Cross-lingual Question Answering with QED

    We present improvements and modifications of the QED open-domain question answering system developed for TREC-2003 to make it cross-lingual for participation in the Cross-Language Evaluation Forum (CLEF) Question Answering Track 2004, with French and German as source languages and English as the target language. We use rule-based question translation, extended with surface pattern-oriented pre- and post-processing rules for question reformulation, to create an English query from its French or German original. Our system uses deep processing for the question and answers, which requires efficient and radical prior search space pruning. For answering factoid questions, we report an accuracy of 16% (German to English) and 20% (French to English), respectively.
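The surface-pattern question reformulation the abstract mentions can be pictured as a small set of regex rewrite rules. The two rules below are invented examples for illustration only, not QED's actual rule set:

```python
# Illustrative sketch of surface-pattern question reformulation: regex rules
# rewrite a French question into an English query before deeper processing.
# Both rules are hypothetical examples, not taken from the QED system.

import re

RULES = [
    (re.compile(r"^Qui est (.+?)\s*\?$"), r"Who is \1?"),
    (re.compile(r"^Où se trouve (.+?)\s*\?$"), r"Where is \1?"),
]

def translate_question(question):
    """Apply the first matching rewrite rule; None means no rule fired."""
    for pattern, template in RULES:
        if pattern.match(question):
            return pattern.sub(template, question)
    return None  # fall back to the full rule-based translator

print(translate_question("Qui est Marie Curie ?"))
```

A real system would chain many such rules with pre- and post-processing around a general translation step; the sketch shows only the pattern-matching shape of the idea.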

    Question Answering: CNLP at the TREC-9 Question Answering Track

    This paper describes a question answering system that automatically finds answers to questions in a large collection of documents. The prototype CNLP question answering system was developed for participation in the TREC-9 question answering track. The system uses a two-stage retrieval approach to answer finding based on keyword and named entity matching. Results indicate that the system ranks correct answers high (mostly rank 1), provided that an answer to the question was found. Performance figures and further analyses are included.
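The two-stage approach described above, keyword retrieval followed by named-entity filtering, can be sketched roughly as follows. This is a toy illustration of the general technique, not the CNLP implementation; the type map, scoring, and data shapes are all assumptions:

```python
# Illustrative two-stage answer finding (not the CNLP system):
# stage 1 ranks passages by keyword overlap with the question;
# stage 2 keeps entities whose type matches the expected answer type.

EXPECTED_TYPE = {"who": "PERSON", "where": "LOCATION", "when": "DATE"}

def expected_answer_type(question):
    """Toy question-focus analysis: guess the type from the question word."""
    return EXPECTED_TYPE.get(question.lower().split()[0])

def keyword_score(question, passage):
    """Stage 1: overlap between question keywords and passage words."""
    q_words = {w for w in question.lower().split() if len(w) > 3}
    return len(q_words & set(passage["text"].lower().split()))

def answer_candidates(question, passages, top_k=3):
    """Stage 2: from top-ranked passages, keep entities of the wanted type."""
    wanted = expected_answer_type(question)
    ranked = sorted(passages, key=lambda p: keyword_score(question, p),
                    reverse=True)
    return [ent for p in ranked[:top_k]
            for ent, etype in p["entities"] if etype == wanted]

passages = [
    {"text": "Neil Armstrong walked on the Moon in 1969.",
     "entities": [("Neil Armstrong", "PERSON"), ("1969", "DATE")]},
    {"text": "The Moon orbits the Earth.", "entities": []},
]
print(answer_candidates("Who first walked on the Moon?", passages))
```

A production system would use real tokenization, a trained named-entity recognizer, and a far richer question analysis; the sketch shows only how the two stages hand off to each other.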

    Answering List and Other questions

    The importance of Question Answering is growing with the expansion of information and text documents on the web. Techniques in Question Answering have significantly improved during the last decade, especially after the introduction of the TREC Question Answering track. Most work in this field has been done on answering Factoid questions. In this thesis, however, we present and evaluate two approaches to answering List and Other types of questions, which are as important but have not been investigated as much as Factoid questions. Although answering List questions is not a new research area, answering them automatically still remains a challenge. The median F-score of systems that participated in the TREC-2007 Question Answering track is still very low (0.085), while 74% of the questions had a median F-score of 0. In this thesis, we propose a novel approach to answering List questions. This approach is based on the hypothesis that the answer instances to a List question co-occur within sentences of the documents related to the question and the topic. We use a clustering method to group the candidate answers that co-occur more often. To pinpoint the right cluster, we use the target and the question keywords as spies. Using this approach, our system placed fourth among 21 teams in the TREC-2007 QA track with an F-score of 0.145. Other questions have been introduced in the TREC-QA track to retrieve other interesting facts about a topic. In our thesis, Other questions are answered using the notion of interest-marking terms. To answer this type of question, our system extracts, from Wikipedia articles, a list of interest-marking terms related to the topic and uses them to extract and score sentences from the document collection where the answer should be found. Sentences are then re-ranked using universal interest markers that are not specific to the topic. The top sentences are then returned as possible answers.
To evaluate our approach, we participated in the TREC-2006 and TREC-2007 QA tracks. Using this approach, our system placed third in both years, with F-scores of 0.199 and 0.281, respectively.
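The co-occurrence hypothesis for List questions can be made concrete with a small sketch: count how often candidate answers share a sentence, group candidates that co-occur, then score each group against the question keywords (the "spies"). The grouping method, thresholds, and example data below are illustrative assumptions, not the thesis's actual implementation:

```python
# Toy sketch of co-occurrence clustering for List answers (illustrative only):
# candidates sharing sentences are merged into clusters, and the cluster that
# co-occurs most with the spy keywords is selected.

from collections import defaultdict
from itertools import combinations

def cooccurrence_counts(sentences, candidates):
    """Count how often each pair of candidates appears in the same sentence."""
    counts = defaultdict(int)
    for s in sentences:
        present = [c for c in candidates if c in s]
        for a, b in combinations(sorted(present), 2):
            counts[(a, b)] += 1
    return counts

def cluster_candidates(sentences, candidates, min_count=1):
    """Single-link grouping: union candidates that co-occur often enough."""
    parent = {c: c for c in candidates}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for (a, b), n in cooccurrence_counts(sentences, candidates).items():
        if n >= min_count:
            parent[find(a)] = find(b)
    clusters = defaultdict(set)
    for c in candidates:
        clusters[find(c)].add(c)
    return list(clusters.values())

def pick_cluster(clusters, sentences, spies):
    """Choose the cluster whose members co-occur most with the spy words."""
    def score(cluster):
        return sum(1 for s in sentences
                   for c in cluster if c in s and any(k in s for k in spies))
    return max(clusters, key=score)

sentences = [
    "Mercury, Venus and Mars are planets of the solar system.",
    "Venus and Mars are planets too.",
    "Zeus and Hera appear in Greek myths.",
]
candidates = ["Mercury", "Venus", "Mars", "Zeus", "Hera"]
spies = ["planets", "solar"]
print(pick_cluster(cluster_candidates(sentences, candidates), sentences, spies))
```

The planet names cluster together because they share sentences, and the spy words "planets"/"solar" single that cluster out over the unrelated Zeus/Hera pair.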

    How to Evaluate your Question Answering System Every Day and Still Get Real Work Done

    In this paper, we report on Qaviar, an experimental automated evaluation system for question answering applications. The goal of our research was to find an automatically calculated measure that correlates well with human judges' assessment of answer correctness in the context of question answering tasks. Qaviar judges the response by computing recall against the stemmed content words in the human-generated answer key. It counts the answer correct if it exceeds a given recall threshold. We determined that the answer correctness predicted by Qaviar agreed with the human judgment 93% to 95% of the time. Forty-one question-answering systems were ranked by both Qaviar and human assessors, and these rankings correlated with a Kendall's tau measure of 0.920, compared to a correlation of 0.956 between human assessors on the same data.
    Comment: 6 pages, 3 figures, to appear in Proceedings of the Second International Conference on Language Resources and Evaluation (LREC 2000).
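The recall-against-stemmed-content-words judgment described above has a simple shape, sketched below. The suffix-stripping "stemmer", stopword list, and threshold are placeholders standing in for whatever Qaviar actually used:

```python
# Rough sketch of a Qaviar-style automatic judgment: recall of the response
# against stemmed content words of the answer key, thresholded to a verdict.
# Stemmer, stopwords, and threshold here are illustrative placeholders.

STOPWORDS = {"the", "a", "an", "of", "in", "on", "is", "was"}

def stem(word):
    """Placeholder stemmer: strip a few common suffixes."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def content_stems(text):
    """Stemmed content words: lowercase, drop stopwords, stem the rest."""
    return {stem(w) for w in text.lower().split() if w not in STOPWORDS}

def judge(response, answer_key, threshold=0.5):
    """Return (recall, correct?) for a response against the answer key."""
    key = content_stems(answer_key)
    recall = len(key & content_stems(response)) / len(key) if key else 0.0
    return recall, recall >= threshold
```

For example, `judge("Armstrong walked on the moon", "Neil Armstrong")` recalls one of the key's two content stems, giving recall 0.5, which just meets the default threshold. The appeal of the measure, as the paper argues, is that this cheap computation tracks human correctness judgments closely enough for day-to-day system comparison.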

    University of Sheffield TREC-8 Q & A System

    The system entered by the University of Sheffield in the question answering track of TREC-8 is the result of coupling two existing technologies: information retrieval (IR) and information extraction (IE). In essence the approach is this: the IR system treats the question as a query and returns a set of top-ranked documents or passages; the IE system uses NLP techniques to parse the question, analyse the top-ranked documents or passages returned by the IR system, and instantiate a query variable in the semantic representation of the question against the semantic representation of the analysed documents or passages. Thus, while the IE system by no means attempts "full text understanding", this approach is a relatively deep approach which attempts to work with meaning representations. Since the information retrieval systems we used were not our own (AT&T and UMass) and were used more or less "off the shelf", this paper concentrates on describing the modifications made to our existing information extraction system to allow it to participate in the Q & A task.
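The "instantiate a query variable" step can be pictured with a minimal toy: reduce both question and passage to predicate triples, then bind the variable in the question triple by matching against passage triples. Real IE semantic representations are far richer than flat triples; this shows only the shape of the idea:

```python
# Minimal toy of query-variable instantiation over semantic triples
# (illustrative only; real IE systems use much richer representations).

VAR = "?x"

def unify(query, fact):
    """Bind VAR in a (pred, arg1, arg2) query triple against a fact triple."""
    binding = None
    for q, f in zip(query, fact):
        if q == VAR:
            binding = f
        elif q != f:
            return None  # mismatch: this fact cannot answer the query
    return binding

def answer(query, facts):
    """Return the first binding of VAR over the extracted facts, if any."""
    for fact in facts:
        b = unify(query, fact)
        if b is not None:
            return b
    return None

# Facts as an IE system might extract them from top-ranked passages.
facts = [("capital_of", "Paris", "France"), ("capital_of", "Rome", "Italy")]
print(answer(("capital_of", VAR, "France"), facts))
```

The point of the IR stage in the Sheffield design is precisely to shrink the set of passages from which such facts must be extracted, since deep analysis of the whole collection would be infeasible.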