
    The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation

    An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of how the ethical views of such stakeholders can be integrated in the behavior of the autonomous system. We propose an ethical recommendation component, which we call Jiminy, that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. Jiminy represents the ethical views of each stakeholder using normative systems, and has three ways of resolving moral dilemmas involving the opinions of the stakeholders. First, Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Second, Jiminy combines the normative systems of the stakeholders, so that the combined expertise of the stakeholders may resolve the dilemma. Third, and only if these two other methods have failed, Jiminy uses context-sensitive rules to decide which of the stakeholders takes precedence. At the abstract level, these three methods are characterized by the addition of arguments, the addition of attacks among arguments, and the removal of attacks among arguments. We show how Jiminy can be used not only for ethical reasoning and collaborative decision making, but also for providing explanations about ethical behavior.
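
    A minimal sketch (not the authors' implementation) of how the three resolution methods described above can be read as operations on a Dung-style abstract argumentation framework: adding arguments, adding attacks, and removing attacks. The class and method names are illustrative.

```python
# Illustrative sketch: a tiny abstract argumentation framework where the three
# Jiminy-style operations are adding arguments, adding attacks, removing attacks.

class ArgumentationFramework:
    def __init__(self, arguments=None, attacks=None):
        self.arguments = set(arguments or [])
        self.attacks = set(attacks or [])   # set of (attacker, target) pairs

    def add_argument(self, arg):
        self.arguments.add(arg)

    def add_attack(self, attacker, target):
        self.attacks.add((attacker, target))

    def remove_attack(self, attacker, target):
        self.attacks.discard((attacker, target))

    def grounded_extension(self):
        """Iteratively accept arguments all of whose attackers are defeated."""
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for arg in self.arguments - accepted - defeated:
                attackers = {a for (a, t) in self.attacks if t == arg}
                if attackers <= defeated:          # every attacker already defeated
                    accepted.add(arg)
                    changed = True
            newly_defeated = {t for (a, t) in self.attacks if a in accepted} - defeated
            if newly_defeated:
                defeated |= newly_defeated
                changed = True
        return accepted


# A dilemma: two stakeholder arguments attack each other, so neither is accepted.
af = ArgumentationFramework(
    {"user: stop", "manufacturer: proceed"},
    {("user: stop", "manufacturer: proceed"),
     ("manufacturer: proceed", "user: stop")})
print(af.grounded_extension())                     # set() -> unresolved dilemma

# Context-sensitive priority (the third method): remove one attack so the
# preferred stakeholder's argument survives.
af.remove_attack("manufacturer: proceed", "user: stop")
print(af.grounded_extension())                     # {'user: stop'}
```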

    Combining Explanation and Argumentation in Dialogue

    Explanation and argumentation can be used together in such a way that evidence, in the form of arguments, is used to support explanations. In a hybrid system, the interlocking of argument and explanation compounds the problem of how to differentiate between them. The distinction is imperative if we want to avoid the mistake of treating something as fallacious when it is not. Furthermore, the two forms of reasoning may influence dialogue protocol and strategy. In this paper, a basis for solving the problem is proposed using a dialogue model in which the context of the dialogue is used to distinguish argument from explanation.
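
    A minimal sketch (my illustration, not the paper's formal dialogue model) of the core idea that dialogue context determines how a supporting move is classified: support for a claim the hearer has challenged functions as argument, while support for a claim the hearer already accepts functions as explanation. The function and variable names are assumptions.

```python
# Illustrative sketch: classify a supporting move by the hearer's stance on the claim.

def classify_support(claim, hearer_commitments, hearer_challenges):
    if claim in hearer_challenges:
        return "argument"      # evidence offered to persuade a doubting hearer
    if claim in hearer_commitments:
        return "explanation"   # account offered to increase understanding
    return "unclassified"      # a protocol would first elicit the hearer's stance

hearer_commitments = {"the engine failed"}
hearer_challenges = {"the operator was negligent"}

print(classify_support("the operator was negligent",
                       hearer_commitments, hearer_challenges))   # argument
print(classify_support("the engine failed",
                       hearer_commitments, hearer_challenges))   # explanation
```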

    The Standard Problem

    Crafting, adhering to, and maintaining standards is an ongoing challenge. This paper uses a framework based on common models to explore the standard problem: the impossibility of creating, implementing, or maintaining definitive common models in an open system. The problem arises from uncertainty driven by variations in operating context, standard quality, differences in implementation, and drift over time. Fitting work by conformance services repairs these gaps between a standard and what is required for interoperation, using several strategies: (a) universal conformance (all agents access the same standard); (b) mediated conformance (an interoperability layer supports heterogeneous agents); and (c) localized conformance (autonomous, adaptive agents manage their own needs). Conformance methods include incremental design, modular design, adaptors, and creating interactive and adaptive agents. Machine learning should have a major role in adaptive fitting. Choosing a conformance service depends on the stability and homogeneity of shared tasks, and on whether common models are shared ahead of time or are adjusted at task time. This analysis thus decouples interoperability and standardization. While standards facilitate interoperability, interoperability is achievable without standardization.
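
    A minimal sketch (illustrative field names and conversion, not taken from the paper) of how an adaptor performs the fitting work described above: a mediated conformance layer maps a heterogeneous local record onto a common model, while a localized receiver adapts only as far as its own task requires.

```python
# Illustrative sketch: adaptor-based fitting between a local record format and a
# common model, under mediated and localized conformance strategies.

COMMON_SCHEMA = {"patient_id", "weight_kg"}

def to_common(record):
    """Mediated conformance: an interoperability layer adapts a local format."""
    out = {}
    out["patient_id"] = record.get("patient_id") or record.get("pid")
    if "weight_kg" in record:
        out["weight_kg"] = record["weight_kg"]
    elif "weight_lb" in record:                        # repair a unit mismatch
        out["weight_kg"] = round(record["weight_lb"] * 0.45359237, 2)
    return out

def localized_accept(record, required=COMMON_SCHEMA):
    """Localized conformance: the receiver fits incoming data to its own needs."""
    fitted = to_common(record)
    return fitted if required <= fitted.keys() else None   # None -> gap not repairable

legacy = {"pid": "A-17", "weight_lb": 154}
print(to_common(legacy))          # {'patient_id': 'A-17', 'weight_kg': 69.85}
print(localized_accept(legacy))   # accepted, since all required fields were recovered
```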

    One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency

    The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods are available, none of them is a panacea that can satisfy all the diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations -- a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats...
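
    A minimal sketch (my own toy model, not the system described in the paper) of personalising a counterfactual explanation: the explainee restricts which features may change (a follow-up "What if?" question) and the explainer searches, within those constraints, for the smallest change that flips the prediction. The prediction rule, feature names, and grids are all assumptions.

```python
# Illustrative sketch: interactively constrained counterfactual search over a toy model.
from itertools import product

def predict(applicant):
    """Toy loan-approval rule standing in for a black-box classifier."""
    return "approved" if applicant["income"] >= 50 and applicant["debt"] <= 20 else "rejected"

def counterfactual(applicant, mutable_features, grids):
    """Search only the user-selected features; return the cheapest flipping change."""
    best = None
    for values in product(*(grids[f] for f in mutable_features)):
        candidate = dict(applicant, **dict(zip(mutable_features, values)))
        if predict(candidate) != predict(applicant):
            cost = sum(abs(candidate[f] - applicant[f]) for f in mutable_features)
            if best is None or cost < best[0]:
                best = (cost, {f: candidate[f] for f in mutable_features})
    return best                                   # None -> no counterfactual under the constraint

applicant = {"income": 40, "debt": 30}
grids = {"income": range(30, 81, 5), "debt": range(0, 41, 5)}

# Initial contrastive explanation: any feature may change.
print(counterfactual(applicant, ["income", "debt"], grids))   # (20, {'income': 50, 'debt': 20})
# Follow-up "What if?": the explainee rules out changing their debt.
print(counterfactual(applicant, ["income"], grids))           # None -> informative in itself
```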

    Conceptual challenges for interpretable machine learning

    As machine learning has gradually entered ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that are largely overlooked by authors in this area. I argue that the vast majority of IML algorithms are plagued by (1) ambiguity with respect to their true target; (2) a disregard for error rates and severe testing; and (3) an emphasis on product over process. Each point is developed at length, drawing on relevant debates in epistemology and philosophy of science. Examples and counterexamples from IML are considered, demonstrating how failure to acknowledge these problems can result in counterintuitive and potentially misleading explanations. Without greater care for the conceptual foundations of IML, future work in this area is doomed to repeat the same mistakes.