2 research outputs found

    Selecting answers to questions from Web documents by a robust validation process

    Question answering (QA) systems aim at finding answers to questions posed in natural language using a collection of documents. When the collection is extracted from the Web, the structure and style of the texts differ considerably from those of newspaper articles. We developed a QA system based on an answer validation process able to handle these Web-specific characteristics. A large number of candidate answers are extracted from short passages and then validated according to question and passage characteristics. The validation module is based on a machine learning approach. It takes into account criteria characterizing both passage relevance and answer relevance at the surface, lexical, syntactic, and semantic levels in order to deal with different types of texts. We present and compare results obtained for factual questions posed on a Web collection and on a newspaper collection, and show that our system outperforms a baseline by up to 48% in MRR.
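    The reported gains are in MRR (Mean Reciprocal Rank), which scores each question by the reciprocal of the rank of its first correct answer, averaged over all questions. A minimal sketch of the metric in Python; the ranked answer lists below are invented for illustration:

        def mean_reciprocal_rank(ranked_answers, gold):
            """ranked_answers: one ranked list of answers per question, best first.
            gold: one set of correct answers per question."""
            total = 0.0
            for answers, correct in zip(ranked_answers, gold):
                for rank, answer in enumerate(answers, start=1):
                    if answer in correct:
                        total += 1.0 / rank  # credit only the first correct answer
                        break
            return total / len(ranked_answers)

        # Hypothetical run: first question answered at rank 1, second at rank 3.
        print(mean_reciprocal_rank([["Paris", "Lyon"], ["1992", "1990", "1998"]],
                                   [{"Paris"}, {"1998"}]))  # (1/1 + 1/3) / 2 ≈ 0.667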

    The effect of entity recognition on answer validation

    The Answer Validation Exercise (AVE) 2006 aimed at evaluating systems that decide whether the responses of a Question Answering (QA) system are correct or not. Since most questions and answers contain entities, we study the use of a textual entailment relation between entities for the task of answer validation. We present experiments showing that the entity entailment relation is a feature that improves an SVM-based classifier, bringing it close to the best result in AVE 2006.
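    A minimal sketch of how an entity entailment score could feed an SVM classifier, using scikit-learn. This is not the authors' implementation: the feature function, its simplification of entailment to entity overlap, and the toy data are all assumptions for illustration:

        from sklearn.svm import SVC

        def entity_entailment_feature(answer_entities, passage_entities):
            """Hypothetical feature: fraction of answer entities entailed by
            (here simplified to: literally present among) the passage entities."""
            if not answer_entities:
                return 0.0
            return sum(e in passage_entities for e in answer_entities) / len(answer_entities)

        # Toy training data: one entailment feature per candidate answer,
        # label 1 = answer validated as correct, 0 = rejected.
        X = [[entity_entailment_feature({"Paris"}, {"Paris", "France"})],
             [entity_entailment_feature({"London"}, {"Paris", "France"})]]
        y = [1, 0]

        clf = SVC(kernel="linear").fit(X, y)
        print(clf.predict([[1.0], [0.0]]))  # -> [1 0]

    In a real system the feature vector would of course combine this score with the other passage- and answer-level criteria before training the classifier.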