    On natural language generation of formal argumentation

    In this paper we provide a first analysis of the research questions that arise when communicating pieces of formal argumentation through natural language interfaces. It is a generally held opinion that formal models of argumentation naturally capture human argument, and some preliminary studies have sought to justify this view. Unfortunately, the results are not only inconclusive, but seem to suggest that explaining formal argumentation to humans is a rather involved task. Graphical models for expressing argumentation-based reasoning are appealing, but humans often require significant training to use these tools effectively. We claim that natural language interfaces to formal argumentation systems offer a real alternative, and may be the way forward for systems that capture human argument.
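
    As a rough illustration of the kind of content such an interface would have to verbalise, the sketch below (Python, entirely illustrative and not taken from the paper; the framework, the grounded-extension routine and the English templates are all assumptions of this note) computes the grounded extension of a tiny Dung-style argumentation framework and renders the result as plain English.

        # Illustrative only: a tiny Dung-style abstract argumentation framework
        # whose grounded extension is rendered as simple English sentences.

        def grounded_extension(arguments, attacks):
            """Iterate the characteristic function from the empty set to its least fixpoint."""
            extension = set()
            while True:
                # An argument is acceptable if every attacker is itself attacked by the extension.
                acceptable = {
                    a for a in arguments
                    if all(any((d, b) in attacks for d in extension)
                           for b in arguments if (b, a) in attacks)
                }
                if acceptable == extension:
                    return extension
                extension = acceptable

        def verbalise(extension, attacks):
            """Very rough template-based rendering of the outcome."""
            lines = [f"Argument {a} is justified." for a in sorted(extension)]
            lines += [f"Argument {b} is rejected because the justified argument {a} attacks it."
                      for (a, b) in sorted(attacks) if a in extension]
            return " ".join(lines)

        arguments = {"A", "B", "C"}
        attacks = {("A", "B"), ("B", "C")}   # A attacks B, B attacks C
        print(verbalise(grounded_extension(arguments, attacks), attacks))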

    Demo: Making Plans Scrutable with Argumentation and Natural Language Generation.

    SAsSy – Scrutable Autonomous Systems

    An autonomous system consists of physical or virtual systems that can perform tasks without continuous human guidance. Autonomous systems are becoming increasingly ubiquitous, ranging from unmanned vehicles to robotic surgery devices to virtual agents that collate and process information on the internet. Existing autonomous systems are opaque, limiting their usefulness in many situations. To realise their promise, techniques for making such systems scrutable are therefore required. We believe that the creation of scrutable autonomous systems rests on four foundations: an appropriate planning representation; a human-understandable reasoning mechanism, such as argumentation theory; natural language generation tools to translate logical statements into natural ones; and information presentation techniques that enable the user to cope with the deluge of information autonomous systems can provide. Each of these foundations has its own challenges, as does their integration into a single system.
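
    Purely as a sketch of how the four foundations listed above might be chained (toy stand-ins only; none of the names below come from the SAsSy system itself), a minimal end-to-end pipeline could look as follows.

        # Illustrative pipeline only; each stage is a toy stand-in for one foundation.
        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class PlanStep:                 # (1) planning representation
            action: str
            justification: str

        def argue(plan: List[PlanStep]) -> List[Tuple[str, str]]:
            # (2) human-understandable reasoning: pair each step with its supporting argument
            return [(s.action, s.justification) for s in plan]

        def generate_text(arguments: List[Tuple[str, str]]) -> List[str]:
            # (3) natural language generation: simple template realisation
            return [f"The system plans to {action} because {reason}." for action, reason in arguments]

        def present(sentences: List[str], limit: int = 3) -> str:
            # (4) information presentation: show only the most salient items
            shown = sentences[:limit]
            hidden = len(sentences) - len(shown)
            return "\n".join(shown) + (f"\n({hidden} further steps suppressed)" if hidden else "")

        plan = [PlanStep("reroute the convoy", "the bridge is reported blocked"),
                PlanStep("notify the operator", "the change delays arrival")]
        print(present(generate_text(argue(plan))))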

    Planning with Incomplete Information

    Planning is a natural domain of application for frameworks of reasoning about actions and change. In this paper we study how one such framework, the Language E, can form the basis for planning under (possibly) incomplete information. We define two types of plans, weak and safe plans, and propose a planner, called the E-Planner, which is often able to extend an initial weak plan into a safe plan even though the (explicit) information available is incomplete, e.g. when the initial state is not completely known. The E-Planner is based upon a reformulation of the Language E in argumentation terms and a natural proof theory resulting from this reformulation. It uses an extension of this proof theory by means of abduction for the generation of plans and adopts argumentation-based techniques for extending weak plans into safe plans. We provide representative examples illustrating the behaviour of the E-Planner, in particular for cases where the status of fluents is incompletely known. Comment: Proceedings of the 8th International Workshop on Non-Monotonic Reasoning, April 9-11, 2000, Breckenridge, Colorado.
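
    The weak/safe distinction the abstract relies on can be made concrete with a small sketch (Python; this is not the E-Planner, which works by abduction over an argumentation reformulation of the Language E, and the toy domain below is invented): a weak plan reaches the goal in at least one completion of the incompletely known initial state, a safe plan in every completion.

        # Illustrative only: classify a plan as weak or safe by enumerating the
        # completions of the unknown fluents in the initial state.
        from itertools import product

        def completions(known, unknown_fluents):
            """All total initial states consistent with the partial description."""
            for values in product([True, False], repeat=len(unknown_fluents)):
                state = dict(known)
                state.update(zip(unknown_fluents, values))
                yield state

        def execute(plan, state, effects):
            """Apply each action's conditional effects in sequence."""
            state = dict(state)
            for action in plan:
                for condition, fluent, value in effects[action]:
                    if all(state.get(f, False) == v for f, v in condition):
                        state[fluent] = value
            return state

        def classify(plan, known, unknown, effects, goal):
            outcomes = [execute(plan, s, effects).get(goal, False)
                        for s in completions(known, unknown)]
            return "safe" if all(outcomes) else "weak" if any(outcomes) else "failing"

        # Toy domain: "toggle" opens the door only if it is unlocked; "unlock" always unlocks.
        effects = {
            "unlock": [([], "locked", False)],
            "toggle": [([("locked", False)], "open", True)],
        }
        known, unknown, goal = {"open": False}, ["locked"], "open"
        print(classify(["toggle"], known, unknown, effects, goal))            # weak
        print(classify(["unlock", "toggle"], known, unknown, effects, goal))  # safe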

    The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants

    Reasoning is a crucial part of natural language argumentation. To comprehend an argument, one must analyze its warrant, which explains why its claim follows from its premises. As arguments are highly contextualized, warrants are usually presupposed and left implicit. Comprehension therefore requires not only language understanding and logical skill, but also common sense. In this paper we develop a methodology for reconstructing warrants systematically. We operationalize it in a scalable crowdsourcing process, resulting in a freely licensed dataset with warrants for 2k authentic arguments from news comments. On this basis, we present a new and challenging task, the argument reasoning comprehension task: given an argument with a claim and a premise, the goal is to choose the correct implicit warrant from two options. Both warrants are plausible and lexically close, but lead to contradicting claims. A solution to this task would be a substantial step towards automatic warrant reconstruction. However, experiments with several neural attention and language models reveal that current approaches do not suffice. Comment: Accepted as NAACL 2018 Long Paper; see details on the front page.
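
    To make the task format concrete, here is a small sketch (the field names and the example instance are illustrative and are not the released dataset's schema) of one instance together with a trivial random baseline, which on a binary choice scores about 0.5.

        # Illustrative only: an invented instance in the spirit of the news-comment
        # domain described above, plus a chance-level baseline.
        import random
        from dataclasses import dataclass

        @dataclass
        class Instance:
            claim: str
            premise: str
            warrant0: str
            warrant1: str
            label: int        # index (0 or 1) of the warrant that actually licenses the claim

        def random_baseline(instances, seed=0):
            """Chance-level reference: pick one of the two warrants uniformly at random."""
            rng = random.Random(seed)
            hits = sum(rng.randint(0, 1) == inst.label for inst in instances)
            return hits / len(instances)

        example = Instance(
            claim="News sites should close their comment sections.",
            premise="Comment sections routinely descend into abuse.",
            warrant0="Abusive comments drive readers away from the site.",
            warrant1="Abusive comments attract more readers to the site.",
            label=0,
        )
        print(random_baseline([example] * 100))   # roughly 0.5 in expectation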