    Motivations and Risks of Machine Ethics

    This paper surveys reasons for and against pursuing the field of machine ethics, understood as research aiming to build 'ethical machines.' We clarify the nature of this goal, why it is worth pursuing, and the risks involved in its pursuit. First, we survey and clarify some of the philosophical issues surrounding the concept of an 'ethical machine' and the aims of machine ethics. Second, we argue that while there are good prima facie reasons for pursuing machine ethics, including the potential to improve the ethical alignment of both humans and machines, there are also potential risks that must be considered. Third, we survey these potential risks and point to where research should be devoted to clarifying and managing them. We conclude by making some recommendations about the questions that future work could address.

    X-ray diffraction of hair

    What is a subliminal technique? An ethical perspective for artificial intelligence systems

    Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concerns the use of AI systems for problematically manipulative influence. In response, the European Union’s draft AI Act (AIA) includes a prohibition on AI systems that use subliminal techniques to alter people’s behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target cases of AI-based manipulation. We propose a definition of ‘subliminal techniques’ that (a) is grounded on a plausible interpretation of the legal text; (b) addresses all or most of the underlying ethical concerns motivating the prohibition; (c) is defensible from a scientific and philosophical perspective; and (d) does not over-reach in ways that impose excessive administrative and regulatory burdens. The definition is meant to provide guidance for design teams seeking to pursue responsible and ethically aligned AI innovation.