15 research outputs found

    Strengthening Human Autonomy in the Era of Autonomous Technology

    ‘Autonomous technologies’ refers to systems that make decisions without explicit human control or interaction. This conceptual paper explores the notion of autonomy by first examining human autonomy, and then using this understanding to analyze how autonomous technology could or should be modelled. First, we discuss what human autonomy means. We conclude that it is the overall space for action—rather than the degree of control—and the actual choices, or number of choices, that constitute human autonomy. Based on this, our second discussion leads us to suggest the term datanomous to denote technology that builds on, and is restricted by, its own data when operating autonomously. Our conceptual exploration yields a more precise definition of both human autonomy and datanomous systems. Finally, we conclude by suggesting that human autonomy can be strengthened by datanomous technologies, but only if they support the human space for action. It is the purpose of human activity that determines whether technology strengthens or weakens human autonomy.

    On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls

    Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process to assess trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs when we consider AI in practice.

    Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier

    This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for other early-phase developments of similar AI tools.

    Online Shaming: Ethical Tools for Human-Computer Interaction Designers

    A set of tools – concepts, guidelines, and engineering solutions – is proposed to help human-computer interaction designers build systems that are ethical with regard to online shaming. The ethics of online shaming remain unsettled in the literature, and the phenomenon can have devastating consequences as well as serve social justice. Kantian ethics, as interpreted by Christine Korsgaard, provides our analytical methodology. Her meta-ethics invokes Wittgenstein’s private language argument, which also models relevant concepts in human-computer interaction theory. Empirical studies and other ethicists’ views on online shaming are presented. Korsgaard’s Kantian methodology is used to evaluate the moral acceptability of the other ethicists’ views, and guidelines are drawn from that analysis. These guidelines permit shaming, with strong constraints. Technical engineering solutions to ethical problems in online shaming are discussed. All these results are situated in the public dialogue on online shaming, and future research from other ethical traditions is suggested.

    Artificially Intelligent Black Boxes in Emergency Medicine: An Ethical Analysis

    Artificially intelligent black boxes are increasingly being proposed for emergency medicine settings; this paper uses ethical analysis to develop seven practical guidelines for emergency medicine black box creation and use. The analysis is built around seven variations of a thought experiment involving a doctor, a black box, and a patient presenting chest pain in an emergency department. Foundational concepts, including artificial intelligence, black boxes, transparency methods, emergency medicine, and ethical analysis, are expanded upon. Three major areas of ethical concern are identified, namely consent; culture, agency, and privacy; and fault. These areas give rise to the seven variations. For each, a key ethical question it illustrates is identified and analyzed. A practical guideline is then stated, and its ethical acceptability tested using consequentialist and deontological approaches. The applicability of the guidelines to medicine more generally, and the urgency of continued ethical analysis of black box artificial intelligence in emergency medicine, are clarified.