    Interpretation of Natural Language Rules in Conversational Machine Reading

    Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader's background knowledge. One example is the task of interpreting regulations to answer "Can I...?" or "Do I have to...?" questions such as "I am working in Canada. Do I have to carry on paying UK National Insurance?" after reading a UK government website about this topic. This task requires both the interpretation of rules and the application of background knowledge. It is further complicated by the fact that, in practice, most questions are underspecified, and a human assistant will regularly have to ask clarification questions such as "How long have you been working abroad?" when the answer cannot be directly derived from the question and text. In this paper, we formalise this task and develop a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios. We analyse the challenges of this task and assess its difficulty by evaluating the performance of rule-based and machine-learning baselines. We observe promising results when no background knowledge is necessary, and substantial room for improvement whenever background knowledge is needed. Comment: EMNLP 201

    Explanation in information systems: A design rationale approach.

    This dissertation investigates the relationship between the information systems (IS) development context and the context in which such systems are used. Misunderstandings and ambiguities emerge in the space between these contexts and often result in the construction of systems that fail to meet the requirements and expectations of their intended users. This study explores this problem using an approach derived from three largely separate and distinct fields: explanation facilities in information systems, theories of explanation, and design rationale. Explanation facilities are typically included in knowledge-based information systems, where their purpose is to provide system users with the underlying reasons why the system reaches a particular conclusion or makes a particular recommendation. Prior research suggests that the presence of an explanation facility leads to increased acceptance of these conclusions and recommendations, thereby enhancing system usability. Theory of explanation is a field of study in which philosophers attempt to describe the unique nature of explanation and to identify criteria for evaluating explanations. Design rationale research is concerned with the capture, representation, and use of the deep domain and artefact knowledge that emerges from the design process. The design rationale approach goes beyond specification and suggests that understanding a system requires knowledge of the arguments that led to its realisation. This study proposes a model of IS explanation structure and content derived from formal theories of explanation, together with a method for obtaining this content based on design rationale.
The study has four goals: to derive a theory of explanation specific to the domain of information systems; to examine this definition empirically through a study involving IS development and management professionals; to investigate in a case study whether the information needed to populate the explanation model can be captured using design rationale techniques; and to construct prototype software that delivers explanations per the proposed framework.