
    The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation

    An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of how the ethical views of such stakeholders can be integrated into the behavior of the autonomous system. We propose an ethical recommendation component, which we call Jiminy, that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. Jiminy represents the ethical views of each stakeholder using normative systems, and has three ways of resolving moral dilemmas involving the opinions of the stakeholders. First, Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Second, Jiminy combines the normative systems of the stakeholders so that their combined expertise may resolve the dilemma. Third, and only if these two methods have failed, Jiminy uses context-sensitive rules to decide which of the stakeholders takes precedence. At the abstract level, these three methods are characterized by the addition of arguments, the addition of attacks among arguments, and the removal of attacks among arguments. We show how Jiminy can be used not only for ethical reasoning and collaborative decision making, but also for providing explanations about ethical behavior.
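
    The three abstract-level operations can be pictured with a minimal Dung-style argumentation framework. The sketch below is illustrative only: the class and method names are assumptions made for exposition, not taken from the Jiminy paper, and the grounded-extension routine stands in for whichever argumentation semantics Jiminy actually employs.

```python
# Minimal Dung-style abstract argumentation framework (illustrative only).
class ArgumentationFramework:
    def __init__(self, arguments=(), attacks=()):
        self.arguments = set(arguments)
        self.attacks = set(attacks)               # pairs (attacker, attacked)

    def add_argument(self, a):                    # method 1: add an argument
        self.arguments.add(a)

    def add_attack(self, attacker, attacked):     # method 2: add an attack
        self.attacks.add((attacker, attacked))

    def remove_attack(self, attacker, attacked):  # method 3: remove an attack
        self.attacks.discard((attacker, attacked))

    def grounded_extension(self):
        """Iteratively accept arguments whose attackers are all defeated."""
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in self.arguments - accepted - defeated:
                attackers = {x for (x, y) in self.attacks if y == a}
                if attackers <= defeated:         # every attacker is already out
                    accepted.add(a)
                    defeated |= {y for (x, y) in self.attacks if x == a}
                    changed = True
        return accepted

# Toy dilemma: the manufacturer's argument attacks the end user's argument.
af = ArgumentationFramework({"manufacturer", "user"}, {("manufacturer", "user")})
af.add_argument("society")                        # a further stakeholder argument
af.add_attack("society", "manufacturer")          # the added attack reinstates "user"
print(af.grounded_extension())                    # contains 'society' and 'user'
```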

    Goal-Driven Structured Argumentation for Patient Management in a Multimorbidity Setting

    We use computational argumentation both to analyse and to generate solutions for reasoning about consistent recommendations in multimorbidity, according to different patient-centric goals. Reasoning in this setting carries a complexity related to the multiple variables involved. These variables reflect the co-existing health conditions that should be considered when defining a proper therapy. However, current Clinical Decision Support Systems (CDSSs) are not equipped to deal with such a situation: they do not go beyond the straightforward application of the rules that build their knowledge base and simple interpretation of Computer-Interpretable Guidelines (CIGs). We provide a computational argumentation system equipped with goal-seeking mechanisms to combine independently generated recommendations, with the ability to resolve conflicts and generate explanations for its results. We also discuss its advantages over and relation to Multiple-Criteria Decision-Making (MCDM) in this particular setting.
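
    As a rough illustration of goal-seeking combination of independently generated recommendations (not the paper's actual formalism), the sketch below enumerates conflict-free subsets of recommendations and ranks them by prioritised patient-centric goals; all names and the toy data are hypothetical.

```python
# Illustrative sketch: pick a conflict-free set of recommendations that best
# satisfies prioritised patient goals. Names and data are invented examples.
from itertools import combinations

def consistent(recs, conflicts):
    """A set of recommendations is consistent if no pair of them conflicts."""
    return not any((a, b) in conflicts or (b, a) in conflicts
                   for a, b in combinations(recs, 2))

def best_plan(recommendations, conflicts, goals_satisfied, goal_priority):
    """Return the consistent subset with the highest total goal weight."""
    best, best_score = frozenset(), -1
    recs = list(recommendations)
    for size in range(len(recs), 0, -1):
        for subset in combinations(recs, size):
            if not consistent(subset, conflicts):
                continue
            goals = set().union(*(goals_satisfied[r] for r in subset))
            score = sum(goal_priority.get(g, 0) for g in goals)
            if score > best_score:
                best, best_score = frozenset(subset), score
    return best

# Hypothetical multimorbidity example: aspirin conflicts with an anticoagulant.
plan = best_plan(
    {"aspirin", "anticoagulant", "statin"},
    {("aspirin", "anticoagulant")},
    {"aspirin": {"prevent_stroke"},
     "anticoagulant": {"prevent_stroke", "treat_af"},
     "statin": {"lower_cholesterol"}},
    {"treat_af": 3, "prevent_stroke": 2, "lower_cholesterol": 1},
)
print(plan)   # the anticoagulant and the statin, aspirin is dropped
```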

    Ethical challenges in argumentation and dialogue in a healthcare context.

    As the average age of the population increases, so too does the number of people living with chronic illnesses. With limited resources available, the development of dialogue-based e-health systems that provide justified general health advice offers a cost-effective solution to the management of chronic conditions. It is, however, imperative that such systems are responsible in their approach. We present in this paper two main challenges for the deployment of e-health systems that have particular relevance to dialogue and argumentation: collecting and handling health data, and trust. For both challenges, we look at specific issues therein, outlining their importance in general and describing their relevance to dialogue and argumentation. Finally, we propose six recommendations for handling these issues, towards addressing the main challenges themselves, that act both as general advice for dialogue and argumentation research in the e-health domain and as a foundation for future work on this topic.

    Resolving conflicts in clinical guidelines using argumentation

    Automatically reasoning with conflicting generic clinical guidelines is a burning issue in patient-centric medical reasoning, where patient-specific conditions and goals need to be taken into account. It is even more challenging in the presence of preferences such as the patient's wishes and the clinician's priorities over goals. We advance a structured argumentation formalism for reasoning with conflicting clinical guidelines, patient-specific information and preferences. Our formalism integrates assumption-based reasoning and goal-driven selection among reasoning outcomes. Specifically, we assume applicability of guideline recommendations concerning the generic goal of patient well-being, resolve conflicts among recommendations using the patient's conditions and preferences, and then consider prioritised patient-centric goals to yield non-conflicting, goal-maximising and preference-respecting recommendations. We rely on the state-of-the-art Transition-based Medical Recommendation model for representing guideline recommendations and augment it with context given by the patient's conditions and goals, as well as preferences over recommendations and goals. We establish desirable properties of our approach in terms of sensitivity to recommendation conflicts and patient context.
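
    One ingredient described above, resolving conflicts among applicable recommendations by appeal to preferences, can be sketched as follows. This is a deliberately simplified illustration, not the Transition-based Medical Recommendation model or the paper's formalism; the functions, preference encoding and toy example are invented.

```python
# Illustrative preference-based conflict resolution between recommendations.
def defeats(rec_a, rec_b, conflicts, preference):
    """rec_a defeats rec_b if they conflict and rec_a is strictly preferred."""
    in_conflict = (rec_a, rec_b) in conflicts or (rec_b, rec_a) in conflicts
    return in_conflict and preference.get(rec_a, 0) > preference.get(rec_b, 0)

def surviving_recommendations(recs, conflicts, preference):
    """Keep every recommendation that is not defeated by any other one."""
    return {a for a in recs
            if not any(defeats(b, a, conflicts, preference) for b in recs)}

# Hypothetical example: the patient prefers to avoid NSAIDs for pain relief.
recs = {"ibuprofen", "paracetamol", "physiotherapy"}
conflicts = {("ibuprofen", "paracetamol")}        # assume only one analgesic
preference = {"paracetamol": 2, "ibuprofen": 1, "physiotherapy": 1}
print(surviving_recommendations(recs, conflicts, preference))
# paracetamol and physiotherapy remain; ibuprofen is defeated
```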

    Argumentation-based explanations of multimorbidity treatment plans

    We present an argumentation model to explain the optimal treatment plans recommended by a Satisfiability Modulo Theories (SMT) solver for multimorbid patients. The resulting framework can be queried to obtain supporting reasons for nodes on a path, following a model of argumentation schemes. The modelling approach is generic and can be used for justifying similar sequences.
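
    A minimal sketch of the querying idea, under the assumption that each node on the recommended path carries its supporting reasons; the data structures and the example plan are hypothetical and do not reproduce the paper's model.

```python
# Illustrative only: attach scheme-style supporting reasons to the nodes of a
# solver-produced treatment path, so each step can be queried for its justification.
from dataclasses import dataclass, field

@dataclass
class PlanNode:
    action: str
    reasons: list = field(default_factory=list)   # supporting arguments

def explain(path, action):
    """Return the supporting reasons for one action on the recommended path."""
    for node in path:
        if node.action == action:
            return node.reasons
    return []

# Hypothetical two-step plan for a multimorbid patient.
path = [
    PlanNode("start_metformin",
             ["Goal: glycaemic control", "Guideline: first-line therapy for T2DM"]),
    PlanNode("defer_nsaid",
             ["Conflict: NSAID worsens chronic kidney disease",
              "Preferred goal: preserve renal function"]),
]
print(explain(path, "defer_nsaid"))
```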

    Development of an Explainability Scale to Evaluate Explainable Artificial Intelligence (XAI) Methods

    Explainable Artificial Intelligence (XAI) is an area of research that develops methods and techniques to make the results of artificial intelligence understood by humans. In recent years, demand for XAI methods has increased as model architectures have grown more complex and government regulations have begun to require transparency in machine learning models. With this increased demand has come an increased need for instruments to evaluate XAI methods. However, there are few, if any, valid and reliable instruments that take human opinion into account and cover all aspects of explainability. Therefore, this study developed an objective, human-centred questionnaire to evaluate all types of XAI methods. The questionnaire consists of 15 items: 5 items asking about the user's background and 10 items, based on notions of explainability, evaluating the explainability of the XAI method. An experiment was conducted (n = 38) in which participants evaluated one of two XAI methods using the questionnaire. The results were used for exploratory factor analysis, which showed that the 10 explainability items constitute a single factor (Cronbach's α = 0.81), and to gather evidence of the questionnaire's construct validity. It is concluded that this 15-item questionnaire has one factor, has acceptable validity and reliability, and can be used to evaluate and compare XAI methods.
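
    The reported reliability figure (Cronbach's α = 0.81 for the 10 explainability items) comes from the standard internal-consistency formula, sketched below; the response matrix shown is made up purely to demonstrate the computation and is not the study's data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses from four participants on three items.
responses = [[4, 5, 4],
             [3, 4, 3],
             [5, 5, 4],
             [2, 3, 2]]
print(round(cronbach_alpha(responses), 2))
```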