19 research outputs found

    The risks of autonomous machines: from responsibility gaps to control gaps

    Responsibility gaps concern the attribution of blame for harms caused by autonomous machines. The worry has been that, because they are artificial agents, it is impossible to attribute blame, even though doing so would be appropriate given the harms they cause. We argue that there are no responsibility gaps. The harms can be blameless. And if they are not, the blame that is appropriate is indirect and can be attributed to designers, engineers, software developers, manufacturers, or regulators. The real problem lies elsewhere: autonomous machines should be built so as to exhibit a level of risk that is morally acceptable. If they fall short of this standard, they exhibit what we call ‘a control gap’: the causal control that autonomous machines have falls short of the guidance control they should emulate.

    A Critical Look at AI Ethics: From Emerging Concerns and Guiding Principles to an Ethical Unveiling

    In this article I examine the current state of the “second wave” of ethics in artificial intelligence (AI), which focuses on integrating fundamental ethical principles such as justice, privacy, transparency, and explainability into the design, use, and deployment of AI systems. I argue that, although this phase has been criticized for its abstract nature and lack of contextualization, it is imperative that the emerging “third wave” adopt a paradigm shift toward an “ethical unveiling.” I propose that this ethical unveiling should involve a deep and ongoing hermeneutic process that not only interprets how AI technologies reshape our social, political, and personal structures, but also acts as a means of liberation from the Heideggerian technological enframing. This approach suggests that ethics must be regarded not merely as an add-on, but as an essential and foundational component of the AI development life cycle, thereby fostering a deeper integration of the ethical considerations that guide both technological innovation and its practical deployment.

    What Is (or Could Be) Responsible Research and Innovation About? A System-Conforming versus a Transformative Approach

    The principles of responsible research and innovation highlight important issues, but they allow very different interpretations and practices. Our study contrasts the system-conforming and transformative approaches to this topic, drawing on a recent article by Miklós Lukovics and co-authors in Közgazdasági Szemle, which exemplifies the system-conforming approach well. We argue that essential considerations and questions are sidelined in the system-conforming debate on this topic. The system-conforming approach promises solutions to the environmental, social, and ethical challenges arising in the research, development, and innovation system. In reality, however, it narrows the space of debate and possible solutions, and diverts attention from problems concerning the operating logic of the system itself. The transformative approach, by contrast, does not promise incremental solutions within the system's current logic; rather, it seeks to open space for surfacing and debating the ethical and political presuppositions underlying research and innovation processes. Journal of Economic Literature (JEL) codes: D70, D80, O10, O30.

    The Effects of Automation Transparency and Ethical Outcomes on User Trust and Blame Towards Fully Autonomous Vehicles

    The current study examined the effect of automation transparency on user trust and blame during forced moral outcomes. Participants read through moral scenarios in which an autonomous vehicle did or did not convey information about its decision prior to making a utilitarian or non-utilitarian decision. Participants also provided moral acceptance ratings for autonomous vehicles and humans making identical moral decisions. It was expected that trust would be highest for utilitarian outcomes and blame would be highest for non-utilitarian outcomes, and that when the vehicle provided information about its decision, trust and blame would increase. Results showed that moral outcome and transparency did not influence trust independently. Specifically, trust was highest for non-transparent non-utilitarian outcomes and lowest for non-transparent utilitarian outcomes. Blame was not influenced by transparency, moral outcome, or their combined effects. Interestingly, acceptance was higher for autonomous vehicles that made the same utilitarian decision as humans, though no differences were found for non-utilitarian outcomes. This research draws on the distinction between active and passive harm and suggests that the type of automation transparency conveyed to an operator may be inappropriate in the presence of actively harmful moral outcomes. Theoretical insights into how ethical decisions are evaluated when different agents (human or autonomous) are responsible for active or passive moral decisions are discussed.

    Skillful coping with and through technologies: Some challenges and avenues for a Dreyfus-inspired philosophy of technology

    Dreyfus’s work is widely known for its critique of artificial intelligence and still stands as an example of how to do excellent philosophical work that is at the same time relevant to contemporary technological and scientific developments. But for philosophers of technology, especially those sympathetic to using Heidegger, Merleau-Ponty, and Wittgenstein as sources of inspiration, it has much more to offer. This paper outlines Dreyfus’s account of skillful coping and critically evaluates its potential for thinking about technology. First, it is argued that his account of skillful coping can be developed into a general view about handling technology which gives due attention to know-how/implicit knowledge and embodiment. Then a number of outstanding challenges are identified that are difficult to cope with if one remains entirely within the world of Dreyfus’s writings. They concern (1) questions regarding other conceptualizations of technology and human–technology relations, (2) issues concerning how to conceptualize the social and the relation between skill, meaning, and practices, and (3) the question of the ethical and political implications of his view, including how virtue and skill are related. Acknowledging some known discussions of Dreyfus’s work, but also drawing on other material and on the author’s previous writings, the paper suggests that to address these challenges and develop the account of skillful coping into a wider-scoped, Dreyfus-inspired philosophy of technology, it could take more distance from Heidegger’s conceptions of technology and benefit from (more) engagement with work in postphenomenology (Ihde), pragmatism (Dewey), the later Wittgenstein, and virtue ethics.

    Moral zombies: why algorithms are not moral agents

    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent of sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the moral understanding necessary to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.

    Emerging Urban Mobility Technologies through the Lens of Everyday Urban Aesthetics : Case of Self-Driving Vehicle

    The goal of this article is to deepen the concept of emerging urban mobility technology, using the aesthetics of everyday mobility as a lens to bring in important experiential and value-driven dimensions. Drawing on philosophical everyday and urban aesthetics, as well as the postphenomenological strand in the philosophy of technology, we explicate the relation between everyday aesthetic experience and urban mobility. In doing so, we shed light on the central role of aesthetics in providing depth to the multidimensional meaning of contemporary urban mobility. We use the example of the self-driving vehicle (SDV), as a potentially mundane, public, dynamic, and social urban robot, to expand the range of perspectives relevant to understanding urban mobility technology. We present the range of existing SDV conceptualizations and contrast them with an aesthetic and experiential understanding of urban mobility. In conclusion, we reflect on the potential undesired consequences of the depoliticization of technological development, and on new pathways for speculative thinking about urban mobility futures and the development of responsible innovation processes.

    Coordinated Control Design for Ethical Maneuvering of Autonomous Vehicles

    This paper proposes a coordinated control design method with which an autonomous vehicle is able to perform ethical maneuvers. The starting point of the method is a thorough analysis of ethical concepts for autonomous vehicle control design. Building on the results of this analysis, a novel concept is proposed, based on certain principles of Protestant ethics. The concept focuses on improving trust in vehicle control through clear rules and predictable vehicle motion, and it is in line with state-of-the-art ethical vehicle control methods. Moreover, an optimal Model Predictive Control (MPC) design method is formulated, into which the proposed ethical concept is incorporated. The outputs of the optimal control are a steering angle and a velocity profile, with which ethical maneuvering can be achieved. The contribution of the paper is a coordinated control design method that is able to incorporate ethical principles; the application of Protestant ethics in this context is a further novel contribution. The effectiveness of the method is illustrated through different simulation scenarios.
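    The general shape of such an MPC scheme, in which a vehicle model is optimized over a short horizon to produce a steering angle and a velocity (acceleration) command, can be illustrated with a minimal sketch. This is not the paper's controller: the kinematic bicycle model, the cost weights, and the constraint bounds below are illustrative assumptions only, and the "clear rules, predictable motion" idea is approximated here by simply penalizing lane deviation and control effort.

    ```python
    # Minimal receding-horizon (MPC-style) sketch: a kinematic bicycle model
    # is rolled out over a short horizon, and the optimizer searches for the
    # steering/acceleration sequence that keeps the vehicle in its lane and
    # near a reference speed. All parameters are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    DT, L, H = 0.1, 2.7, 10  # time step [s], wheelbase [m], horizon length

    def rollout(state, controls):
        """Simulate H steps of the kinematic bicycle model."""
        x, y, yaw, v = state
        traj = []
        for delta, a in controls.reshape(H, 2):
            x += v * np.cos(yaw) * DT
            y += v * np.sin(yaw) * DT
            yaw += v / L * np.tan(delta) * DT
            v += a * DT
            traj.append((x, y, yaw, v))
        return np.array(traj)

    def cost(controls, state, v_ref):
        traj = rollout(state, controls)
        lane_err = np.sum(traj[:, 1] ** 2)             # stay on the lane y = 0
        speed_err = np.sum((traj[:, 3] - v_ref) ** 2)  # track reference speed
        effort = np.sum(controls ** 2)                 # smooth, predictable inputs
        return lane_err + 0.5 * speed_err + 0.1 * effort

    def mpc_step(state, v_ref=10.0):
        """Return the first (steering, acceleration) pair of the optimal plan."""
        u0 = np.zeros(2 * H)
        bounds = [(-0.5, 0.5), (-3.0, 2.0)] * H        # steering / accel limits
        res = minimize(cost, u0, args=(state, v_ref), bounds=bounds)
        return res.x[:2]

    # Vehicle 1 m left of the lane center, heading straight, at 8 m/s:
    delta, accel = mpc_step(np.array([0.0, 1.0, 0.0, 8.0]))
    ```

    In a closed loop, only this first control pair would be applied before re-solving at the next time step; an ethical concept of the kind the paper describes would enter through additional cost terms or hard constraints.
    
    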

    Mind the Gaps: Assuring the Safety of Autonomous Systems from an Engineering, Ethical, and Legal Perspective

    This paper brings together a multi-disciplinary perspective from systems engineering, ethics, and law to articulate a common language in which to reason about the multi-faceted problem of assuring the safety of autonomous systems. The paper's focus is on the “gaps” that arise across the development process: the semantic gap, where the normal conditions for a complete specification of intended functionality are not present; the responsibility gap, where the normal conditions for holding human actors morally responsible for harm are not present; and the liability gap, where the normal conditions for securing compensation for victims of harm are not present. By categorising these “gaps” we can expose with greater precision key sources of uncertainty and risk in autonomous systems. This can inform the development of more detailed models of safety assurance and contribute to more effective risk control.