11 research outputs found

    Scenarios as Tools of the Scientific Imagination: The Case of Climate Projections

    Machine Learning and the Future of Scientific Explanation

    The Integrative Expert: Moral, Epistemic, and Poietic Virtues in Transformation Research

    Over the past 50 years, policy makers have sought to shape new and emerging technologies in light of societal risks, public values, and ethical concerns. While much of this work has taken place during “upstream” research prioritization and “downstream” technology regulation, the actual “midstream” work of engineers and other technical experts has increasingly been seen as a site for governing technology in society. This trend towards “socio-technical integration” is reflected in various governance frameworks such as Sustainable Development (SD), Technology Assessment (TA), and Responsible Innovation (RI) that are at the center of transformation research. Discussions around SD, TA, and RI often focus on meso- and macro-level processes and dynamics, with less attention paid to the qualities of individuals that are needed to support transformation processes. We seek to highlight the importance of micro-level practices by drawing attention to the virtues of technical experts. Drawing on empirical study results from embedding philosophical-reflective dialogues within science and engineering laboratories, we claim that poietic, as well as moral and epistemic, virtues belong to those required of technical experts who foster integrative practices in transformation research.

    What do algorithms explain? The issue of the goals and capabilities of Explainable Artificial Intelligence (XAI)

    The increasing ubiquity of machine learning (ML) motivates research on algorithms to “explain” models and their predictions—so-called Explainable Artificial Intelligence (XAI). Despite many publications and discussions, the goals and capabilities of such algorithms are far from being well understood. We argue that this is because of a problematic reasoning scheme in the literature: Such algorithms are said to complement machine learning models with desired capabilities, such as interpretability or explainability. These capabilities are in turn assumed to contribute to a goal, such as trust in a system. But most capabilities lack precise definitions and their relationship to such goals is far from obvious. The result is a reasoning scheme that obfuscates research results and leaves an important question unanswered: What can one expect from XAI algorithms? In this paper, we clarify the modest capabilities of these algorithms from a concrete perspective: that of their users. We show that current algorithms can only answer user questions that can be traced back to the question: “How can one represent an ML model as a simple function that uses interpreted attributes?”. Answering this core question can be trivial, difficult or even impossible, depending on the application. The result of the paper is the identification of two key challenges for XAI research: the approximation and the translation of ML models.
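
    One concrete reading of the core question is a global surrogate: fit a simple, interpretable function to the black box's predictions and check how faithfully it reproduces them. The sketch below is an illustration of that idea, not a method from the paper; the synthetic dataset, the scikit-learn models, and the fidelity measure are assumptions made for the example.

    # Illustrative sketch: approximate a black-box model with a simple
    # function of interpreted attributes (here, a shallow decision tree).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    # A black-box model trained on data whose features stand in for
    # interpreted attributes (synthetic here, purely for illustration).
    X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Global surrogate: a depth-3 tree fit to the black box's *predictions*,
    # i.e. a simple function approximating the model, not the raw data.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how often the simple function agrees with the black box.
    fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
    print(f"surrogate fidelity to black box: {fidelity:.2f}")
    print(export_text(surrogate, feature_names=[f"attr_{i}" for i in range(5)]))

    How well such a surrogate can approximate the black box, and how well its attributes translate into terms a user actually understands, correspond to the two challenges the abstract names: approximation and translation.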

    Designing as playing games of make-believe

    Get PDF
    Designing complex products involves working with uncertainties as the product, the requirements and the environment in which it is used co-evolve, and designers and external stakeholders make decisions that affect the evolving design. Rather than being held back by uncertainty, designers work, cooperate and communicate with each other notwithstanding these uncertainties by making assumptions to carry out their own tasks. To explain this, the paper proposes an adaptation of Kendall Walton’s make-believe theory, to conceptualize designing as playing games of make-believe by inferring what is required and imagining what is possible given the current set of assumptions and decisions, while knowing these are subject to change. What one is allowed and encouraged to imagine, conclude or propose is governed by socially agreed rules and constraints. The paper uses jet engine component design as an example to illustrate how different design teams make assumptions at the beginning of design activities and negotiate what can and cannot be done with the design. This often involves iteration – repeating activities under revised sets of assumptions. As assumptions are collectively revised, they become part of a new game of make-believe in the sense that there is social agreement that the decisions constitute part of the constraints that govern what can legitimately be inferred about the design or added to it.

    Representation and similarity: Suarez on necessary and sufficient conditions of scientific representation

    The notion of scientific representation plays a central role in current debates on modeling in the sciences. One major epistemic virtue of successful models, perhaps the major one, is their capacity to adequately represent specific phenomena or target systems. According to similarity views of scientific representation, models should be similar to their corresponding targets in order to represent them. In this paper, Suarez’s arguments against similarity views of representation are scrutinized. The upshot is that the intuition that scientific representation involves similarity is not refuted by these arguments: they do not make the case for the strong claim that similarity between vehicles and targets is neither necessary nor sufficient for scientific representation. In particular, a similarity view can still uphold the following thesis: a vehicle is a scientific representation of a target only if the vehicle is similar to the target in relevant respects and to a specific degree.
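
    Stated compactly, the closing thesis is a necessary-condition claim. Writing Rep(v, t) for "v is a scientific representation of t" and Sim_{R, d}(v, t) for "v is similar to t in the relevant respects R to at least degree d" (notation introduced here for clarity, not taken from the abstract):

    \text{Rep}(v, t) \;\Rightarrow\; \text{Sim}_{R, d}(v, t)

    That is, similarity in relevant respects and to a sufficient degree is held to be necessary for scientific representation, while nothing is claimed about its sufficiency.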