33 research outputs found

    Historical overview of formal argumentation


    An Argumentation-Based Approach to Normative Practical Reasoning


    How to Deal with Unbelievable Assertions

    We tackle the problem that arises when an agent receives unbelievable information. Information is unbelievable if it conflicts with the agent’s convictions, that is, with what the agent considers knowledge. We propose two solutions, both based on modifying the information so that it is no longer unbelievable. In the first, the source and the receiver of the information cooperatively resolve the conflict: we introduce a dialogue protocol in which the receiver explains what is wrong with the information by means of logical interpolation, and the source produces a new assertion accordingly. If such cooperation is not possible, we propose an alternative solution in which the receiver revises the new piece of information against its own convictions to make it acceptable. (Peer reviewed)
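    The second, non-cooperative solution above can be sketched in a few lines of Python. The literal-set representation of assertions and convictions, and the names `negate`, `is_believable`, and `revise`, are illustrative assumptions for this sketch, not the paper's formalism:

```python
def negate(lit):
    # "~p" <-> "p": flip the sign of a propositional literal.
    return lit[1:] if lit.startswith("~") else "~" + lit

def is_believable(assertion, convictions):
    # An assertion is believable iff none of its literals
    # contradicts one of the receiver's convictions.
    return not any(negate(l) in convictions for l in assertion)

def revise(assertion, convictions):
    # Revise the incoming information by the receiver's own
    # convictions: drop every literal that contradicts them.
    return {l for l in assertion if negate(l) not in convictions}

convictions = {"p", "~q"}   # what the agent treats as knowledge
assertion = {"~p", "r"}     # incoming information; ~p conflicts with p

assert not is_believable(assertion, convictions)
revised = revise(assertion, convictions)
assert revised == {"r"}                       # the conflicting part is gone
assert is_believable(revised, convictions)    # the rest is now acceptable
```

    The sketch keeps only the non-conflicting remainder of the assertion; the paper's cooperative alternative would instead ask the source, guided by an interpolant, to restate the information itself.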

    Intentional dialogues in multi-agent systems based on ontologies and argumentation

    Some areas of application, for example healthcare, are known to resist the replacement of human operators by fully autonomous systems. It is typically not transparent to users how artificial intelligence systems make decisions or obtain information, making it difficult for users to trust them. To address this issue, we investigate how argumentation theory and ontology techniques can be used together with reasoning about intentions to build complex natural language dialogues that support human decision-making. Based on this investigation, we propose MAIDS, a framework for developing multi-agent intentional dialogue systems that can be used in different domains. Our framework is modular, so it can be used in its entirety or through only the modules that fulfil the requirements of each system to be developed. Our work also includes the formalisation of a novel dialogue-subdialogue structure with which we can address ontological or theory-of-mind issues and later return to the main subject.

    As a case study, we have developed a multi-agent system using the MAIDS framework to support healthcare professionals in making decisions on hospital bed allocations, and we evaluated this system with domain experts using real data from a hospital. The specialists who evaluated our system strongly agree or agree that the dialogues in which they participated fulfil Cohen’s desiderata for task-oriented dialogue systems. Our agents can explain to the user how they arrived at certain conclusions. Moreover, they have semantic representations as well as representations of the mental states of the dialogue participants, allowing the formulation of coherent justifications expressed in natural language and therefore easy for human participants to understand. This indicates the potential of the framework introduced in this thesis for the practical development of explainable intelligent systems as well as systems supporting hybrid intelligence.
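    The dialogue-subdialogue structure described above can be illustrated as a stack: a clarification subdialogue is opened, resolved, and control then returns to the suspended main subject. The class, method names, and bed-allocation moves below are illustrative assumptions, not the MAIDS API:

```python
class Dialogue:
    """Minimal sketch of a dialogue with nested subdialogues."""

    def __init__(self, topic):
        self.stack = [topic]   # top of the stack is the current subject
        self.log = []          # (topic, speaker, move) triples

    def utter(self, speaker, move):
        # Record a move under whatever subject is currently active.
        self.log.append((self.stack[-1], speaker, move))

    def open_subdialogue(self, topic):
        # E.g. an ontological or theory-of-mind clarification.
        self.stack.append(topic)

    def close_subdialogue(self):
        # Resume the suspended main subject.
        self.stack.pop()

d = Dialogue("bed-allocation")
d.utter("nurse", "why allocate bed 12 to patient A?")
d.open_subdialogue("clarify: 'isolation bed'")
d.utter("agent", "an isolation bed is one in a single-occupancy room")
d.close_subdialogue()
d.utter("agent", "patient A needs isolation; bed 12 is the free isolation bed")
assert d.stack == ["bed-allocation"]   # back on the main subject
```

    The stack discipline is what lets the agents digress into a clarification and still return coherently to the original question.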