
    Answering Complex Questions Using Open Information Extraction

    While there has been substantial progress in factoid question answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge. (Comment: accepted as a short paper at ACL 2017.)
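    The abstract above treats Open IE tuples as semi-structured knowledge for QA. As a rough illustration of what that representation looks like (not the paper's support-graph optimization model), the sketch below stores hypothetical (subject, relation, object) tuples and retrieves the one with the highest token overlap with a question; the data and the scoring rule are invented for illustration.

```python
# Illustrative only: Open IE facts as (subject, relation, object) tuples,
# matched to a question by simple token overlap. The paper's actual model
# reasons over a support graph; this just shows the tuple representation.

def tokenize(text):
    return set(text.lower().replace("?", "").split())

# Hypothetical semi-structured knowledge, as Open IE systems typically emit it.
tuples = [
    ("the sun", "is the source of", "energy for the water cycle"),
    ("evaporation", "turns", "liquid water into vapor"),
    ("plants", "absorb", "sunlight"),
]

def score(question, t):
    # Overlap between question tokens and the tuple's combined fields.
    return len(tokenize(question) & tokenize(" ".join(t)))

question = "What drives the water cycle?"
best = max(tuples, key=lambda t: score(question, t))
```

    A real system would replace the overlap score with the paper's learned inference model, but the tuple store itself stays this simple.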

    Question Answering on Knowledge Bases and Text using Universal Schema and Memory Networks

    Existing question answering methods infer answers either from a knowledge base or from raw text. While knowledge base (KB) methods are good at answering compositional questions, their performance is often affected by the incompleteness of the KB. In contrast, web text contains millions of facts that are absent from the KB, albeit in an unstructured form. Universal schema can support reasoning over the union of structured KBs and unstructured text by aligning them in a common embedded space. In this paper we extend universal schema to natural language question answering, employing memory networks to attend to the large body of facts in the combination of text and KB. Our models can be trained in an end-to-end fashion on question-answer pairs. Evaluation results on the SPADES fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone. This model also outperforms the current state of the art by 8.5 F1 points. Code and data are available at https://rajarshd.github.io/TextKBQA. (Comment: ACL 2017, short paper.)
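    A toy picture of the universal-schema idea described above: KB triples and textual patterns live in one shared embedding space, and a memory-network-style attention weights them against a question vector. The three-dimensional vectors and facts below are hand-made stand-ins, not learned embeddings from the paper.

```python
# Toy sketch: structured KB triples and free-text patterns sit side by side
# in one memory, and attention over that memory treats both kinds of fact
# uniformly. All vectors here are invented for illustration.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

# Memory mixing KB facts and text facts in the same embedding space.
memory = [
    ("KB:   (Honolulu, capital_of, Hawaii)", [1.0, 0.1, 0.0]),
    ("Text: 'Honolulu is the capital of Hawaii'", [0.9, 0.2, 0.1]),
    ("Text: 'Hawaii joined the US in 1959'", [0.0, 0.1, 1.0]),
]

question_vec = [1.0, 0.0, 0.0]  # stand-in encoding of "capital of Hawaii?"

weights = softmax([dot(question_vec, v) for _, v in memory])
best_fact = memory[max(range(len(memory)), key=lambda i: weights[i])][0]
```

    In the trained model the question encoder and fact embeddings are learned end-to-end; the point here is only that KB and text entries are attended over by one mechanism.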

    Smart assistance for students and people living in a campus

    As part of one of the fastest-growing areas in Artificial Intelligence (AI), virtual assistants are nowadays part of everyone's life, being integrated into almost every smart device. Alexa, Siri, Google Assistant, and Cortana are just a few of the most famous examples. Beyond these off-the-shelf solutions, different technologies are available for creating custom assistants. IBM Watson, for instance, is one of the most widely adopted question-answering frameworks, owing to both its simplicity and its accessibility through public APIs. In this work, we present a virtual assistant that exploits the Watson technology to support students and staff of a smart campus at the University of Palermo. Preliminary results show the effectiveness of the proposed approach.

    Understanding Knowledge Work and the Performance Potential of its Computerization. Case IBM's Watson

    This study focuses on theorizing knowledge work and studying the performance potential of computerizing contemporary knowledge work tasks. In the research, the job content of five knowledge workers is described, a classification of knowledge work tasks is formulated, and a framework for the knowledge capabilities required in performing knowledge work tasks is constructed. Furthermore, the technological properties of IBM's new question-answering computer, Watson, are described in general, and its knowledge capabilities and performance potential in knowledge work tasks are analysed. The literature review of this research concerns the nature of information and knowledge, knowledge work, and knowledge work performance and productivity. Moreover, two models of cognition are presented that help in understanding the mind. Despite the fact that knowledge work has attracted numerous scholarly minds, no clear and concise definition exists. The research was conducted using the methods of qualitative case study and grounded theory. The qualitative case study has been used to describe and explain the multifaceted phenomenon, and the grounded theory method has been applied in constructing the framework. Empirical evidence is based on interviews with five knowledge workers and a technology expert, as well as on relevant published data on Watson. Inductive content analysis has been used to study the interview material by categorizing the jobs of the knowledge workers into roles and tasks. The Spaun model and the COGNET framework presented in the literature review served as intellectual guides in constructing the knowledge agent's knowledge-capabilities framework applied in the analysis. Descriptions of Watson's technology and capabilities are based on published material, and its knowledge capabilities have been analysed using the knowledge capability framework.
Among the key results of this study are the formulation of a knowledge work task typology, the construction of the knowledge capabilities framework, and the analysis of Watson's performance potential in various knowledge work task types. The findings suggest that Watson has the greatest performance potential in the task types of answering questions, using analysing tools to create information and insights, disseminating information, requesting information, and delegating. Its lowest performance potential is in the task types of directing discussion, generating ideas and alternative solutions, persuading and negotiating, and discussing and deciding together.

    Mining Implicit Relevance Feedback from User Behavior for Web Question Answering

    Training and refreshing a web-scale Question Answering (QA) system for a multi-lingual commercial search engine often requires a huge amount of training examples. One principled idea is to mine implicit relevance feedback from user behavior recorded in search engine logs. All previous work on mining implicit relevance feedback targets the relevance of web documents rather than passages. Due to several unique characteristics of QA tasks, existing user behavior models for web documents cannot be applied to infer passage relevance. In this paper, we make the first study to explore the correlation between user behavior and passage relevance, and propose a novel approach for mining training data for web QA. We conduct extensive experiments on four test datasets, and the results show that our approach significantly improves the accuracy of passage ranking without extra human-labeled data. In practice, this work has proved effective in substantially reducing the human labeling cost for the QA service in a global commercial search engine, especially for low-resource languages. Our techniques have been deployed in multi-language services. (Comment: accepted by KDD 2020.)
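    To make the idea of mining implicit relevance feedback concrete, the hypothetical sketch below aggregates logged impressions and clicks per (query, passage) pair and keeps only high-confidence pairs as weak positive or negative labels. The log fields, thresholds, and the click-through-rate heuristic are illustrative assumptions, not the paper's actual behavior model.

```python
# Hypothetical sketch: turn logged user behavior into weak relevance labels
# for passages. Real QA-specific behavior models are richer than this
# click-through-rate cutoff; the data and thresholds are invented.
from collections import defaultdict

log = [
    {"query": "python gil", "passage": "p1", "clicked": True},
    {"query": "python gil", "passage": "p1", "clicked": True},
    {"query": "python gil", "passage": "p2", "clicked": False},
    {"query": "python gil", "passage": "p2", "clicked": False},
]

# Aggregate impressions and clicks per (query, passage) pair.
stats = defaultdict(lambda: {"shows": 0, "clicks": 0})
for event in log:
    s = stats[(event["query"], event["passage"])]
    s["shows"] += 1
    s["clicks"] += int(event["clicked"])

# Keep only confident pairs as weak training labels; drop the ambiguous middle.
labels = {}
for key, s in stats.items():
    ctr = s["clicks"] / s["shows"]
    if ctr >= 0.6:
        labels[key] = 1   # weak positive: the passage was consistently clicked
    elif ctr <= 0.1:
        labels[key] = 0   # weak negative: shown but essentially never clicked
```

    The resulting label set can then train a passage ranker without human annotation, which is the cost saving the abstract describes.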