
    The Exploratory Role of Explainable Artificial Intelligence

    Models developed using machine learning (ML) are increasingly prevalent in scientific research. Because many of these models are opaque, techniques from Explainable AI (XAI) have been developed to render them transparent. But XAI is more than just a solution to the problems that opacity poses; it also plays an invaluable exploratory role. In this paper, we demonstrate that current XAI techniques can be used to (1) better understand what an ML model is a model of, (2) engage in causal inference over high-dimensional nonlinear systems, and (3) generate algorithmic-level hypotheses in cognitive science.

    Moral Attitudes Toward Pharmacological Cognitive Enhancement (PCE): Differences and Similarities Among Germans With and Without PCE Experience

    Pharmacological cognitive enhancement (PCE), the use of illicit and/or prescription drugs to increase cognitive performance, has spurred controversial discussion in bioethics. In a semi-structured interview study with 60 German university students and employees, differences and similarities in moral attitudes toward PCE among 30 experienced participants (EPs) vs. 30 inexperienced participants (IPs) were investigated. The substances EPs used most often were methylphenidate, amphetamines, tetrahydrocannabinol, and modafinil. Both EPs and IPs addressed topics such as autonomous decision-making and issues related to fairness, such as equality in test evaluation and distortion of competition. While most EPs and IPs were convinced that the decision of whether or not to use PCE is part of their individual freedom, their views varied considerably with regard to fairness: IPs considered fairness-related issues much more critical than EPs did. Thus, a person's moral attitudes toward PCE may depend not only on moral common sense but also on whether they have used illicit and/or prescription drugs for PCE before. This points to the importance of including the various relevant stakeholder perspectives in debates on the ethical and social implications of PCE.

    Scientific Exploration and Explainable Artificial Intelligence

    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future investigations of (potentially) causal relationships, and to generate possible explanations of target phenomena in cognitive science. In this way, this paper describes how Explainable AI, over and above machine learning itself, contributes to the efficiency and scope of data-driven scientific research.
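    One family of post-hoc analytic techniques of the kind the abstract refers to is permutation feature importance: shuffle one input feature at a time and measure how much the model's predictive error grows. The sketch below is only an illustration of that general idea, not code from the paper; the "opaque" model and the synthetic data are placeholders.

    ```python
    import random

    random.seed(0)

    # Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
    X = [[random.random() for _ in range(3)] for _ in range(200)]
    y = [3.0 * row[0] + 0.5 * row[1] for row in X]

    def opaque_model(row):
        # Stand-in for a trained ML model (here simply the true function).
        return 3.0 * row[0] + 0.5 * row[1]

    def mse(data, targets):
        return sum((opaque_model(r) - t) ** 2 for r, t in zip(data, targets)) / len(targets)

    baseline = mse(X, y)

    def permutation_importance(feature):
        # Shuffle a single column and report the resulting increase in error.
        col = [row[feature] for row in X]
        random.shuffle(col)
        X_perm = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
        return mse(X_perm, y) - baseline

    importances = [permutation_importance(i) for i in range(3)]
    print(importances)  # x0 should dominate; x2, being unused, scores exactly 0
    ```

    Because permuting a feature the model never uses leaves every prediction unchanged, its importance is zero by construction, which is what makes this a useful probe of an otherwise opaque model.
    
    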