
    Ethical Principles for Reasoning about Value Preferences

    To ensure alignment with human interests, AI must consider the preferences of stakeholders, which includes reasoning about values and norms. However, stakeholders may have different preferences, and dilemmas can arise concerning conflicting values or norms. My work applies normative ethical principles to resolve dilemma scenarios in satisfactory ways that promote fairness.
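
    As a purely illustrative sketch (not this paper's method), the following compares how two normative principles of the kind the abstract alludes to, an egalitarian (maximin) rule and a utilitarian rule, can resolve the same value conflict differently. The stakeholders, actions, and scores are invented for the example.

```python
# Hypothetical sketch: resolving a value conflict between stakeholders
# by applying a normative principle. All names and numbers are invented.

from typing import Dict

# Each action is scored by each stakeholder on how well it upholds
# their values (higher = better). These scores are illustrative only.
preferences: Dict[str, Dict[str, float]] = {
    "share_data":    {"patient": 0.2, "researcher": 0.9, "hospital": 0.7},
    "withhold_data": {"patient": 0.8, "researcher": 0.3, "hospital": 0.6},
}

def maximin_choice(prefs: Dict[str, Dict[str, float]]) -> str:
    """Egalitarian (Rawlsian) principle: pick the action whose
    worst-off stakeholder fares best."""
    return max(prefs, key=lambda action: min(prefs[action].values()))

def utilitarian_choice(prefs: Dict[str, Dict[str, float]]) -> str:
    """Utilitarian principle: pick the action with the highest
    total stakeholder score."""
    return max(prefs, key=lambda action: sum(prefs[action].values()))

print("maximin:", maximin_choice(preferences))        # withhold_data
print("utilitarian:", utilitarian_choice(preferences))  # share_data
```

    The toy example's point is that the choice of principle, not just the preference data, determines how the dilemma is resolved.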

    Why should we ever automate moral decision making?

    While people generally trust AI to make decisions in various aspects of their lives, concerns arise when AI is involved in decisions with significant moral implications. The absence of a precise mathematical framework for moral reasoning intensifies these concerns, as ethics often defies simplistic mathematical models. Unlike fields such as logical reasoning, reasoning under uncertainty, and strategic decision-making, which have well-defined mathematical frameworks, moral reasoning lacks a broadly accepted framework. This absence raises questions about the confidence we can place in AI's moral decision-making capabilities. The environments in which AI systems are typically trained today seem insufficiently rich for such a system to learn ethics from scratch, and even if we had an appropriate environment, it is unclear how we might bring about such learning. An alternative approach involves AI learning from human moral decisions. This learning process can involve aggregating curated human judgments or demonstrations in specific domains, or leveraging a foundation model fed with a wide range of data. Still, concerns persist, given the imperfections in human moral decision making. Given this, why should we ever automate moral decision making? Is it not better to leave all moral decision making to humans? This paper lays out a number of reasons why we should expect AI systems to engage in decisions with a moral component, with brief discussions of the associated risks.

    Steps Towards Value-Aligned Systems

    Algorithmic (including AI/ML) decision-making artifacts are an established and growing part of our decision-making ecosystem. They are indispensable tools for managing the flood of information needed to make effective decisions in a complex world. The current literature is full of examples of how individual artifacts violate societal norms and expectations (e.g., violations of fairness, privacy, or safety norms). Against this backdrop, this discussion highlights an under-emphasized perspective in the literature on assessing value misalignment in AI-equipped sociotechnical systems. Research on value misalignment has focused strongly on the behavior of individual technical artifacts; this discussion argues for a more structured, systems-level approach to assessing value alignment in sociotechnical systems. We rely primarily on the research on fairness to make our arguments concrete, and we use the opportunity to highlight how adopting a systems perspective improves our ability to explain and address value misalignments. Our discussion ends with an exploration of priority questions that demand attention if we are to assure the value alignment of whole systems, not just individual artifacts.

    Human-Machine Interaction: Causal Dynamical Networks

    The objective of this paper is to introduce a modified version of the Causal Dynamical Networks (CDN) algorithm for application in human-machine interaction. It is demonstrated that an individual does not interact with one robot but with a multitude of personalities stored in the robot. These personalities are independent of each other, so a robot does not have a unique personality. To allow a robot to become a unique individual, a new algorithm, the Causal Form Fluctuation Network (CEFN), is proposed. It is shown that such an algorithm can help machines develop capabilities similar to human general intelligence, such as interpretation, wisdom (acquiring knowledge), and prediction (intuition), as well as the ability to make decisions, form ideas, and imagine.

    A Voting-Based System for Ethical Decision Making

    We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.
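
    To make the aggregation step concrete, here is a minimal sketch assuming rankings sampled from per-voter preference models. It uses a plain Borda count, a simple voting rule chosen purely for illustration rather than the specific swap-dominance efficient rules the paper develops; the dilemma, actions, and rankings are hypothetical.

```python
# Hypothetical sketch of the runtime aggregation step: given rankings
# standing in for predictions from learned voter models, pick a winner
# by Borda count (an illustrative rule, not necessarily the paper's).

from collections import defaultdict
from typing import Dict, List

def borda_winner(rankings: List[List[str]]) -> str:
    """Aggregate strict rankings: an alternative ranked i-th (0-based)
    out of m receives m - 1 - i points; the highest total wins."""
    scores: Dict[str, float] = defaultdict(float)
    for ranking in rankings:
        m = len(ranking)
        for i, alt in enumerate(ranking):
            scores[alt] += m - 1 - i
    return max(scores, key=scores.get)

# Invented dilemma with three candidate actions; each ranking stands in
# for the preference predicted by one per-voter model.
sampled_rankings = [
    ["swerve_left", "brake", "swerve_right"],
    ["brake", "swerve_left", "swerve_right"],
    ["brake", "swerve_right", "swerve_left"],
]
print(borda_winner(sampled_rankings))  # -> brake
```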