
    How companies succeed in developing ethical artificial intelligence (AI)

    The rapid advancement of artificial intelligence (AI) has the potential to bring great benefits to society, but it also raises important ethical and moral questions. To ensure that AI systems are developed and deployed responsibly and ethically, companies must consider a number of factors, including fairness, accountability, transparency, privacy, and consistency with human values. This essay provides an overview of the key considerations for building an ethical AI system and briefly discusses the challenges, including the importance of developing AI systems with a clear understanding of their potential impact on society and of taking steps to mitigate any potential negative consequences. This essay also highlights the need for continuous monitoring and evaluation of AI systems and outlines a strategy, namely an enterprise-wide "Ethics Sheet for AI tasks", to ensure that AI systems are used ethically and responsibly within the company. Ultimately, building an ethical AI system requires a commitment to transparency, accountability, and a clear understanding of the ethical and moral implications of AI technology, and the company must be aware of the long-term consequences of using an unethical or morally questionable AI system. (DIPF/Orig.)

    Cryptocurrency with a Conscience: Using Artificial Intelligence to Develop Money that Advances Human Ethical Values

    Cryptocurrencies like Bitcoin are offering new avenues for economic empowerment to individuals around the world. However, they also provide a powerful tool that facilitates criminal activities such as human trafficking and illegal weapons sales, which cause great harm to individuals and communities. Cryptocurrency advocates have argued that the ethical dimensions of cryptocurrency are not qualitatively new, insofar as money has always been understood as a passive instrument that lacks ethical values and can be used for good or ill purposes. In this paper, we challenge the presumption that money must be ‘value-neutral.’ Building on advances in artificial intelligence, cryptography, and machine ethics, we argue that it is possible to design artificially intelligent cryptocurrencies that are not ethically neutral but which autonomously regulate their own use in a way that reflects the ethical values of particular human beings – or even entire human societies. We propose a technological framework for such cryptocurrencies and then analyse the legal, ethical, and economic implications of their use. Finally, we suggest that the development of cryptocurrencies possessing ethical as well as monetary value can provide human beings with a new economic means of positively influencing the ethos and values of their societies.

    Preserving a combat commander’s moral agency: The Vincennes Incident as a Chinese Room

    We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and those of the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, in which the crew of a warship mistakenly shot down a civilian airliner. To support a combat commander’s moral agency, designers should strive for systems that help commanders and command teams to think and manipulate information at the level of meaning. ‘Down conversions’ of information from meaning to symbols must be adequately recovered by ‘up conversions’, and commanders must be able to check that their sensors are working and are being used correctly. Meanwhile, ethicists should establish a mechanism that tracks the potential moral implications of choices in a system’s design and intended operation. Finally, we highlight a gap in normative ethics: we have ways to deny moral agency, but not to affirm it.

    Ethically Aligned Design: An empirical evaluation of the RESOLVEDD-strategy in Software and Systems development context

    Use of artificial intelligence (AI) in human contexts calls for ethical considerations in the design and development of AI-based systems. However, little knowledge currently exists on how to provide useful and tangible tools that could help software developers and designers put ethical considerations into practice. In this paper, we empirically evaluate a method that enables ethically aligned design in a decision-making process. Though this method, titled the RESOLVEDD-strategy, originates from the field of business ethics, it is being applied in other fields as well. We tested the RESOLVEDD-strategy in a multiple case study of five student projects where the use of ethical tools was given as one of the design requirements. A key finding from the study is that the mere presence of an ethical tool has an effect on ethical consideration, creating more responsibility even in instances where the use of the tool is not intrinsically motivated. Comment: This is the author's version of the work. The copyright holder's version can be found at https://doi.org/10.1109/SEAA.2019.0001

    A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event in which artificial machines attain greater-than-human-level intelligence. One problem related to the singularity centers on whether human values and norms would survive such an event. To help ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, that is, machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks for how to arrive at these agents have been put forward. However, there seems to be no firm consensus as to which framework would likely yield a positive result. Given the body of work they have contributed to the study of moral agency, philosophers may add to the growing literature on artificial moral agency. While doing so, they could also consider how the said concept could affect other important philosophical concepts.

    Empowerment or Engagement? Digital Health Technologies for Mental Healthcare

    We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, the key concepts used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns relating to digital health technologies for mental healthcare. We frame these concerns using five key principles of AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.