
    Toward machines that behave ethically better than humans do

    With the increasing dependence on autonomously operating agents and robots, the need for ethical machine behavior rises. This paper presents a moral reasoner that combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision-making matches the analysis of expert ethicists in the health domain, which may be useful in many applications, especially where machines interact with humans in a medical context. Additionally, when the reasoner is connected to a cognitive model of emotional intelligence and affective decision making, the impact of moral decision making on affective behavior can be explored.
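    As an illustration only (not the paper's implementation), the following Python sketch shows the general shape of a duty-weighted utilitarian chooser: each candidate action is scored against a set of moral duties, and the action with the highest weighted sum wins. The duty names echo biomedical ethics; the weights and scores are invented for the example.

        # Hypothetical sketch of a duty-weighted utilitarian reasoner.
        # Duty names, weights, and scores are illustrative only.
        DUTY_WEIGHTS = {"autonomy": 0.3, "beneficence": 0.35, "non_maleficence": 0.35}

        def moral_value(duty_scores):
            """Weighted sum of how well an action satisfies each duty (scores in [-1, 1])."""
            return sum(DUTY_WEIGHTS[d] * duty_scores[d] for d in DUTY_WEIGHTS)

        def choose_action(actions):
            """Return the action whose duty profile has the highest moral value."""
            return max(actions, key=lambda name: moral_value(actions[name]))

        actions = {
            "insist_on_treatment": {"autonomy": -0.8, "beneficence": 0.9, "non_maleficence": 0.2},
            "accept_refusal":      {"autonomy": 1.0,  "beneficence": -0.3, "non_maleficence": 0.0},
        }
        print(choose_action(actions))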

    Building Ethically Bounded AI

    The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goals we have given them. Thus, a certain level of freedom to choose the best path to a goal is inherent in making AI robust and flexible enough. At the same time, however, the pervasive deployment of AI in our lives, whether autonomous or collaborating with humans, raises several ethical challenges. AI agents should be aware of and follow appropriate ethical principles, and should thus exhibit properties such as fairness or other virtues. These ethical principles should define the boundaries of AI's freedom and creativity. However, it is still a challenge to understand how to specify and reason with ethical boundaries in AI agents, and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven, example-based approach for both, or a symbolic, rule-based approach for both. We envision a modular approach where any AI technique can be used for any of these essential ingredients in decision making or decision support systems, paired with a contextual approach to define their combination and relative weight. In a world where neither humans nor AI systems work in isolation but are tightly interconnected, e.g., in the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system. In this paper we define and motivate the notion of ethically bounded AI, describe two concrete examples, and outline some outstanding challenges.
    Comment: Published at AAAI Blue Sky Track, winner of Blue Sky Award
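    As a hedged sketch of the "boundaries plus preferences" idea (not the authors' system), the snippet below treats ethical boundaries as a hard permissibility filter applied before subjective preferences rank the remaining options; the function and parameter names are hypothetical.

        # Sketch: ethical boundaries as a hard filter, preferences as a ranking.
        def ethically_bounded_choice(actions, is_permissible, preference):
            """Drop impermissible actions, then pick the most preferred survivor."""
            permissible = [a for a in actions if is_permissible(a)]
            if not permissible:
                return None  # no ethical option: defer the decision
            return max(permissible, key=preference)

        # Toy usage: prefer speed, but never exceed a risk threshold.
        routes = [{"speed": 80, "risk": 0.9}, {"speed": 60, "risk": 0.2}]
        print(ethically_bounded_choice(routes,
                                       is_permissible=lambda r: r["risk"] < 0.5,
                                       preference=lambda r: r["speed"]))

    The paper envisions modular combinations where either ingredient could be data-driven or symbolic; a hard filter is only the simplest instance of the pattern.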

    Sociotechnical Systems and Ethics in the Large

    Advances in AI techniques and computing platforms have triggered a lively and expanding discourse on ethical decision-making by autonomous agents. Much recent work in AI concentrates on the challenges of moral decision making from a decision-theoretic perspective, especially the representation of various ethical dilemmas. Such approaches may be useful but are in general not productive, because moral decision making is as context-driven as other forms of decision making, if not more so. In contrast, we consider ethics not from the standpoint of an individual agent but of the wider sociotechnical systems (STS) in which the agent operates. Our contribution in this paper is a conception of ethical STS founded on governance that takes into account stakeholder values, normative constraints on agents, and outcomes (states of the STS) that obtain due to actions taken by agents. An important element of our conception is accountability, which is necessary for adequate consideration of outcomes that prima facie appear ethical or unethical. Focusing on STSs avoids the difficult problems of ethics, as the norms of the STS give an operational basis for agent decision making.
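    A minimal sketch of how STS norms could give an operational basis for accountability, assuming (hypothetically) that norms are represented as commitments between agents and outcomes as a dictionary of propositions; this illustrates the idea, not the authors' formalism.

        from dataclasses import dataclass

        @dataclass
        class Commitment:
            debtor: str      # agent accountable for the norm
            creditor: str    # agent to whom it is owed
            antecedent: str  # condition that activates the commitment
            consequent: str  # state the debtor must bring about

        def violations(norms, outcome):
            """Return (norm, accountable agent) pairs violated in an outcome state."""
            return [(n, n.debtor) for n in norms
                    if outcome.get(n.antecedent, False)
                    and not outcome.get(n.consequent, False)]

        norms = [Commitment("hospital", "patient", "admitted", "informed_of_risks")]
        print(violations(norms, {"admitted": True, "informed_of_risks": False}))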

    Intelligent ethics.

    This paper discusses the impact of envisaged intelligent applications on the lives of the individuals who may be using them, and investigates the ethical implications of autonomous decision-making that is beyond the control of the user. In an increasingly networked world, we look beyond the individual to a social picture of distributed multi-agent interaction, and in particular at the concepts of rules and negotiation between these virtual social agents. We suggest that the use of such agents in a wider social context requires an element of ethical thinking to take place at the grass-roots level, that is, with the designers and developers of such systems.

    Trust from Ethical Point of View: Exploring Dynamics Through Multiagent-Driven Cognitive Modeling

    The paper begins by exploring the rationality of ethical trust as a foundational concept. This involves distinguishing between trust and trustworthiness and delving into scenarios where trust is both rational and moral, laying the groundwork for understanding the complexities of trust dynamics in decision-making scenarios. Following this theoretical groundwork, we introduce an agent-based simulation framework that investigates these dynamics of ethical trust, specifically in the context of a disaster response scenario. The agents, which use emotional models such as Plutchik's Wheel of Emotions together with memory and learning mechanisms, are tasked with allocating limited resources in disaster-affected areas. The model, which embodies the principles discussed in the first section, integrates cognitive load management, Big Five personality traits, and structured interactions within networked or hierarchical settings. It also includes feedback loops and simulates external events to evaluate their impact on the formation and evolution of trust among agents. Through our simulations, we demonstrate the intricate interplay of cognitive, emotional, and social factors in ethical decision-making. These insights shed light on the behaviors and resilience of trust networks in crisis situations, emphasizing the role of rational and moral considerations in the development of trust among autonomous agents. This study contributes to the field by offering an understanding of trust dynamics in socio-technical systems and by providing a robust, adaptable framework capable of addressing ethical dilemmas in disaster response and beyond. The implementation of the algorithms presented in this paper is available at https://github.com/abbas-tari/ethical-trust-cognitive-modeling.
    Comment: 10 pages, 8 figures
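    To make the trust-update idea concrete, here is a heavily simplified sketch (the released code is linked above; this is not it) in which agents adjust pairwise trust after each interaction, with a Big Five trait modulating the learning rate. Emotions, memory, cognitive load, and network structure from the full framework are omitted.

        import random

        class Agent:
            def __init__(self, name, agreeableness):
                self.name = name
                self.agreeableness = agreeableness  # Big Five trait in [0, 1]
                self.trust = {}                     # trust in other agents, in [0, 1]

            def update_trust(self, other, outcome):
                """Nudge trust toward the observed cooperation outcome in [0, 1];
                more agreeable agents update faster."""
                current = self.trust.get(other, 0.5)
                rate = 0.1 + 0.2 * self.agreeableness
                self.trust[other] = current + rate * (outcome - current)

        a, b = Agent("responder_a", 0.8), Agent("responder_b", 0.3)
        for _ in range(20):
            outcome = random.random()  # stand-in for how well a joint allocation went
            a.update_trust(b.name, outcome)
            b.update_trust(a.name, outcome)
        print(a.trust, b.trust)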

    The Effects of Automation Transparency and Ethical Outcomes on User Trust and Blame Towards Fully Autonomous Vehicles

    The current study examined the effect of automation transparency on user trust and blame during forced moral outcomes. Participants read through moral scenarios in which an autonomous vehicle did or did not convey information about its decision prior to making a utilitarian or non-utilitarian decision. Participants also provided moral acceptance ratings for autonomous vehicles and humans making identical moral decisions. It was expected that trust would be highest for utilitarian outcomes and blame highest for non-utilitarian outcomes, and that trust and blame would increase when the vehicle provided information about its decision. Results showed that moral outcome and transparency did not influence trust independently: trust was highest for non-transparent non-utilitarian outcomes and lowest for non-transparent utilitarian outcomes. Blame was not influenced by transparency, moral outcome, or their combined effects. Interestingly, acceptance was higher for autonomous vehicles that made the same utilitarian decision as humans, though no differences were found for non-utilitarian outcomes. This research highlights the importance of active versus passive harm and suggests that the type of automation transparency conveyed to an operator may be inappropriate in the presence of actively harmful moral outcomes. Theoretical insights into how ethical decisions are evaluated when different agents (human or autonomous) are responsible for active or passive moral decisions are discussed.

    Informed AI Regulation: Comparing the Ethical Frameworks of Leading LLM Chatbots Using an Ethics-Based Audit to Assess Moral Reasoning and Normative Values

    With the rise of individual and collaborative networks of autonomous agents, AI is deployed in more key reasoning and decision-making roles. For this reason, ethics-based audits play a pivotal role in the rapidly growing fields of AI safety and regulation. This paper undertakes an ethics-based audit of eight leading commercial and open-source large language models, including GPT-4. We assess explicability and trustworthiness by (a) establishing how well different models engage in moral reasoning and (b) comparing the normative values underlying the models as ethical frameworks. We employ an experimental, evidence-based approach that challenges the models with ethical dilemmas in order to probe human-AI alignment. The ethical scenarios are designed to require a decision in which the particulars of the situation may or may not necessitate deviating from normative ethical principles. A sophisticated ethical framework was consistently elicited in one model, GPT-4. Nonetheless, troubling findings include underlying normative frameworks with clear bias towards particular cultural norms. Many models also exhibit disturbing authoritarian tendencies. Code is available at https://github.com/jonchun/llm-sota-chatbots-ethics-based-audit.
    Comment: 23 pages, 6 figures (3 as tables), 1 table (in LaTeX)
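    The released code is linked above; as a separate illustration of the audit pattern, the sketch below loops over dilemmas, queries a model through a caller-supplied function, and crudely tags which ethical framework each answer leans on. The query_model interface, the dilemmas, and the keyword lists are all hypothetical.

        # Hypothetical audit loop; query_model(prompt) -> str is any chatbot wrapper.
        DILEMMAS = [
            "A patient refuses life-saving treatment. Should the caregiver override them?",
            "Lying would spare a friend serious harm. Is it permissible here?",
        ]

        FRAMEWORK_KEYWORDS = {
            "utilitarian": ["greatest good", "overall welfare", "best outcome"],
            "deontological": ["duty", "moral rule", "regardless of consequences"],
        }

        def classify(response):
            """Tag which frameworks a response's wording suggests (rough heuristic)."""
            text = response.lower()
            return [f for f, kws in FRAMEWORK_KEYWORDS.items()
                    if any(k in text for k in kws)]

        def audit(query_model):
            """Map each dilemma to the frameworks detected in the model's answer."""
            return {d: classify(query_model(d)) for d in DILEMMAS}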

    Teaching Autonomous Systems at 1/10th-scale

    Teaching autonomous systems is challenging because it is a rapidly advancing cross-disciplinary field that requires theory to be continually validated on physical platforms. For an autonomous vehicle (AV) to operate correctly, it needs to satisfy safety and performance properties that depend on the operational context and on interaction with environmental agents, which can be difficult to anticipate and capture. This paper describes a senior undergraduate level course on the design, programming, and racing of 1/10th-scale autonomous race cars. We explore AV safety and performance concepts at the limits of perception, planning, and control, in a highly interactive and competitive environment. The course includes an ethics-centered design philosophy, which seeks to engage the students in an analysis of the ethical and socio-economic implications of autonomous systems. Our hypothesis is that 1/10th-scale autonomous vehicles sufficiently capture the scaled dynamics, sensing modalities, decision making, and risks of real autonomous vehicles, while remaining a safe and accessible platform for teaching the foundations of autonomous systems. We describe the design, deployment, and feedback from two offerings of this class for college seniors and graduate students, open-source community development across 36 universities, international racing competitions, student skill enhancement and employability, and recommendations for tailoring the course to various settings.
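    For a flavor of the reactive planning commonly taught on such platforms (an illustration, not the course's material), here is a minimal "follow the gap" steering sketch over a lidar scan: mask a safety bubble around the nearest obstacle, then aim at the middle of the widest run of clear beams.

        import numpy as np

        def follow_the_gap(ranges, bubble=10, clearance=1.0):
            """Return the scan index to steer toward, or None if nothing is clear."""
            r = np.asarray(ranges, dtype=float)
            nearest = int(np.argmin(r))
            r[max(0, nearest - bubble):nearest + bubble + 1] = 0.0  # safety bubble
            free = r > clearance                   # beams with enough clearance
            best_len = best_start = run = start = 0
            for i, ok in enumerate(free):          # longest contiguous free run
                if ok:
                    if run == 0:
                        start = i
                    run += 1
                    if run > best_len:
                        best_len, best_start = run, start
                else:
                    run = 0
            return best_start + best_len // 2 if best_len else None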