4,635 research outputs found

    Intentional dialogues in multi-agent systems based on ontologies and argumentation

    Some areas of application, for example, healthcare, are known to resist the replacement of human operators by fully autonomous systems. It is typically not transparent to users how artificial intelligence systems make decisions or obtain information, making it difficult for users to trust them. To address this issue, we investigate how argumentation theory and ontology techniques can be used together with reasoning about intentions to build complex natural language dialogues to support human decision-making. Based on such an investigation, we propose MAIDS, a framework for developing multi-agent intentional dialogue systems, which can be used in different domains. Our framework is modular, so it can be used in its entirety or only the modules that fulfil the requirements of each system to be developed. Our work also includes the formalisation of a novel dialogue-subdialogue structure with which we can address ontological or theory-of-mind issues and later return to the main subject. As a case study, we have developed a multi-agent system using the MAIDS framework to support healthcare professionals in making decisions on hospital bed allocations. Furthermore, we evaluated this multi-agent system with domain experts using real data from a hospital. The specialists who evaluated our system strongly agree or agree that the dialogues in which they participated fulfil Cohen’s desiderata for task-oriented dialogue systems. Our agents have the ability to explain to the user how they arrived at certain conclusions. Moreover, they have semantic representations as well as representations of the mental state of the dialogue participants, allowing the formulation of coherent justifications expressed in natural language, and therefore easy for human participants to understand. This indicates the potential of the framework introduced in this thesis for the practical development of explainable intelligent systems as well as systems supporting hybrid intelligence.
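    The dialogue-subdialogue structure described above can be pictured as a stack of dialogues: a digression (ontological or theory-of-mind) is pushed on top of the current subject and, once resolved, popped so the conversation returns to the main topic. The sketch below is an illustrative assumption, not the thesis's formalisation; all class and method names (`Dialogue`, `DialogueManager`, `digress`, `resolve`) are hypothetical.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Dialogue:
        """One dialogue (or subdialogue) about a single subject."""
        subject: str
        moves: list = field(default_factory=list)

    class DialogueManager:
        """Minimal stack-based sketch of a dialogue-subdialogue structure:
        digressions are pushed and, once resolved, popped so the
        conversation resumes the enclosing subject."""

        def __init__(self, main_subject: str):
            self.stack = [Dialogue(main_subject)]

        @property
        def current(self) -> Dialogue:
            return self.stack[-1]

        def digress(self, issue: str) -> None:
            # Open a subdialogue, e.g. to clarify an ontological term
            # or a theory-of-mind discrepancy.
            self.stack.append(Dialogue(issue))

        def resolve(self) -> str:
            # Close the innermost subdialogue and return to the
            # enclosing (ultimately the main) subject.
            if len(self.stack) == 1:
                raise RuntimeError("cannot close the main dialogue")
            self.stack.pop()
            return self.current.subject

    dm = DialogueManager("bed allocation for patient P1")
    dm.digress("what does 'isolation bed' mean?")  # ontological digression
    assert dm.resolve() == "bed allocation for patient P1"
    ```

    Nested digressions fall out of the same mechanism: a theory-of-mind subdialogue may itself open an ontological one, and each `resolve` returns exactly one level.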

    HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence for Digital Medicine

    Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. To work well it requires, among other factors: selecting a proper level of generality/specificity of the explanation; considering assumptions about the familiarity of the explanation beneficiary with the AI task under consideration; referring to specific elements that have contributed to the decision; making use of additional knowledge (e.g. expert evidence) which might not be part of the prediction process; and providing evidence supporting negative hypotheses. Finally, the system needs to formulate the explanation in a clearly interpretable, and possibly convincing, way. Given these considerations, ANTIDOTE fosters an integrated vision of explainable AI, where low-level characteristics of the deep learning process are combined with higher-level schemes proper to the human argumentation capacity. ANTIDOTE will exploit cross-disciplinary competences in deep learning and argumentation to support a broader and innovative view of explainable AI, where the need for high-quality explanations for clinical case deliberation is critical. As a first result of the project, we publish the Antidote CasiMedicos dataset to facilitate research on explainable AI in general, and argumentation in the medical domain in particular.
    Comment: To appear in SEPLN 2023: 39th International Conference of the Spanish Society for Natural Language Processing

    MAIDS - a Framework for the Development of Multi-Agent Intentional Dialogue Systems

    This paper introduces a framework for programming highly sophisticated multi-agent dialogue systems. The framework is based on a multi-part agent belief base consisting of three components: (i) the main component is an extension of an agent-oriented programming belief base for representing defeasible knowledge and, in particular, argumentation schemes; (ii) an ontology component where existing OWL ontologies can be instantiated; and (iii) a theory of mind component where agents keep track of mental attitudes they ascribe to other agents. The paper formalises a structured argumentation-based dialogue game where agents can “digress” from the main dialogue into subdialogues to discuss ontological or theory of mind issues. We provide an example of a dialogue with an ontological digression involving humans and agents, including a chatbot that we developed to support bed allocation in a hospital; we also comment on the initial evaluation of that chatbot carried out by domain experts. That example is also used to show that our framework supports all features of recent desiderata for future dialogue systems. This research was partially funded by CNPq, CAPES, FCT CEECIND/01997/2017 and UIDB/00057/2020.
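    The three-part belief base described in (i)–(iii) can be sketched as a single structure holding defeasible rules, instantiated ontology facts, and per-agent theory-of-mind records. This is a minimal illustrative sketch under assumed names (`BeliefBase`, `ascribe`, `believes`); the paper's actual agent-oriented programming interfaces are not reproduced here.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class BeliefBase:
        """Sketch of a multi-part belief base:
        (i)   defeasible knowledge and argumentation schemes,
        (ii)  facts instantiated from an OWL ontology, and
        (iii) theory-of-mind records of attitudes ascribed to other agents."""
        defeasible_rules: list = field(default_factory=list)   # (i)
        ontology_facts: set = field(default_factory=set)       # (ii) RDF-style triples
        theory_of_mind: dict = field(default_factory=dict)     # (iii) agent -> ascribed beliefs

        def ascribe(self, agent: str, belief: str) -> None:
            # Record a mental attitude this agent ascribes to another agent.
            self.theory_of_mind.setdefault(agent, set()).add(belief)

        def believes(self, agent: str, belief: str) -> bool:
            # Query the ascribed mental state of another agent.
            return belief in self.theory_of_mind.get(agent, set())

    bb = BeliefBase()
    bb.ontology_facts.add(("Bed_302", "rdf:type", "IsolationBed"))  # (ii)
    bb.ascribe("nurse", "bed_302_is_free")                          # (iii)
    assert bb.believes("nurse", "bed_302_is_free")
    assert not bb.believes("doctor", "bed_302_is_free")
    ```

    Keeping the three components separate mirrors the modularity claim: an application could instantiate only the ontology component, or only the theory-of-mind component, without the others.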

    Explaining BDI Agent Behaviour Through Dialogue

    This work arose out of conversations at a Lorentz Workshop on the Dynamics of Multi-Agent Systems (2018). Thanks are due to Koen Hindriks and Vincent Koeman for their input. The work was supported by the UKRI/EPSRC RAIN [EP/R026084], SSPEDI [EP/P011829/1] and FAIR-SPACE [EP/R026092] Robotics and AI Hubs and the Trustworthy Autonomous Systems Verifiability Node [EP/V026801/1]. Both authors contributed equally to the work, and author names are listed in alphabetical order.