
    Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions

    The rapid adoption of artificial intelligence (AI) necessitates careful analysis of its ethical implications. In addressing ethics and fairness, it is important to examine the whole range of ethically relevant features rather than individual agents alone. This can be accomplished by shifting perspective to the systems in which agents are embedded, a view encapsulated in the macro ethics of sociotechnical systems (STS). Through the lens of macro ethics, the governance of systems, where participants try to promote outcomes and norms that reflect their values, is key. However, multiuser social dilemmas arise in an STS when its stakeholders hold different value preferences or when norms in the STS conflict. To develop equitable governance that meets the needs of different stakeholders, and to resolve these dilemmas satisfactorily with the higher goal of fairness, we need to integrate a variety of normative ethical principles into reasoning. Normative ethical principles are understood as operationalizable rules inferred from philosophical theories. A taxonomy of ethical principles is thus beneficial in enabling practitioners to use them in reasoning. This work develops a taxonomy of normative ethical principles that can be operationalized in the governance of STS. We identify an array of ethical principles, with 25 nodes on the taxonomy tree. We describe how each principle has previously been operationalized, suggest how the operationalization of principles may be applied to the macro ethics of STS, and explain potential difficulties that may arise with each principle. We envision that this taxonomy will facilitate the development of methodologies for incorporating ethical principles into reasoning capacities for governing equitable STS.
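    The paper's 25-node taxonomy is not reproduced in the abstract; as a minimal sketch of how a taxonomy of operationalizable principles might be represented for machine reasoning, consider a simple tree of principle nodes (the node names and operationalizations below are illustrative placeholders, not the paper's actual taxonomy):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Principle:
        """A node in a taxonomy of normative ethical principles."""
        name: str
        operationalization: str = ""          # how the principle becomes a rule
        children: list["Principle"] = field(default_factory=list)

        def count(self) -> int:
            """Total number of nodes in the subtree rooted here."""
            return 1 + sum(c.count() for c in self.children)

    # Illustrative fragment only; the paper's tree has 25 nodes.
    root = Principle("Normative ethics", children=[
        Principle("Consequentialism", children=[
            Principle("Utilitarianism", "maximize aggregate stakeholder welfare"),
        ]),
        Principle("Deontology", children=[
            Principle("Kantian ethics", "act only on universalizable norms"),
        ]),
        Principle("Virtue ethics", "prefer actions a virtuous agent would take"),
    ])

    print(root.count())  # → 6
    ```

    A governance mechanism could then traverse such a tree to select which principle's operationalization to apply when stakeholder norms conflict.
    
    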

    Desen: Specification of Sociotechnical Systems via Patterns of Regulation and Control

    We address the problem of engineering a sociotechnical system (STS) with respect to its stakeholders’ requirements. We motivate a two-tier STS conception comprising a technical tier that provides control mechanisms and describes what actions are allowed by the software components, and a social tier that characterizes the stakeholders’ expectations of each other in terms of norms. We adopt agents as computational entities, each representing a different stakeholder. Unlike previous approaches, our framework, Desen, incorporates the social dimension into the formal verification process. Thus, Desen supports agents potentially violating applicable norms—a consequence of their autonomy. In addition to requirements verification, Desen supports refinement of STS specifications via design patterns to meet stated requirements. We evaluate Desen at three levels. We illustrate how Desen carries out refinement via the application of patterns on a hospital emergency scenario. We show via a human-subject study that a design process based on our patterns is helpful for participants who are inexperienced in conceptual modeling and norms. We provide an agent-based environment to simulate the hospital emergency scenario and compare STS specifications (including participant solutions from the human-subject study) on metrics indicating social welfare, norm compliance, and other domain-dependent properties.
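    Desen's actual specification language is not shown in the abstract; as a rough sketch of the two-tier idea and a norm-compliance metric (all names and the scenario details are hypothetical, loosely inspired by the hospital emergency example):

    ```python
    from dataclasses import dataclass

    # Social tier: a norm as an expectation one stakeholder holds of another.
    @dataclass(frozen=True)
    class Norm:
        subject: str      # who is obligated
        beneficiary: str  # who holds the expectation
        action: str       # what is expected

    # Technical tier: which actions the software components allow.
    ALLOWED_ACTIONS = {"triage", "treat", "discharge", "skip_triage"}

    def compliance(norms, trace):
        """Fraction of norms satisfied by a trace of (agent, action) events.
        Because agents are autonomous, a technically allowed action may
        still leave a social-tier norm unsatisfied."""
        done = set(trace)
        satisfied = sum((n.subject, n.action) in done for n in norms)
        return satisfied / len(norms)

    norms = [Norm("nurse", "patient", "triage"),
             Norm("physician", "patient", "treat")]
    trace = [("nurse", "skip_triage"), ("physician", "treat")]

    print(compliance(norms, trace))  # → 0.5
    ```

    The separation matters: `skip_triage` is permitted by the technical tier, yet the nurse's triage norm goes unsatisfied, which is exactly the kind of gap a compliance metric over simulated traces can surface.
    
    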

    Prosocial Norm Emergence in Multiagent Systems


    Normative Emotional Agents: a viewpoint paper

    Human social relationships imply conforming to the norms, behaviors and cultural values of a society, but also socialization of emotions: learning how to interpret and show them. In multiagent systems, much progress has been made in the analysis and interpretation of both emotions and norms. Nonetheless, the relationship between emotions and norms has hardly been considered, and most normative agents do not consider emotions, or vice versa. In this article, we provide an overview of relevant aspects within the areas of normative agents and emotional agents. First, we focus on the concept of a norm, the different types of norms, the norm life cycle, and a review of multiagent normative systems. Second, we present the most relevant theories of emotions, the life cycle of an agent's emotions, and how emotions have been included through computational models in multiagent systems. Next, we present an analysis of proposals that integrate emotions and norms in multiagent systems. From this analysis, we detect four relationships between norms and emotions, which we analyze in detail, discussing how these relationships have been tackled in the reviewed proposals. Finally, we propose an abstract architecture of a Normative Emotional Agent that covers these four norm-emotion relationships.

    This work was supported by the Spanish Government project TIN2017-89156-R, the Generalitat Valenciana project PROMETEO/2018/002, and Spanish Government PhD Grant PRE2018-084940.

    Argente, E.; Del Val, E.; Pérez-García, D.; Botti Navarro, V. J. (2022). Normative Emotional Agents: a viewpoint paper. IEEE Transactions on Affective Computing, 13(3), 1254-1273. https://doi.org/10.1109/TAFFC.2020.3028512

    Normative Multi-Agent Organizations: Modeling, Support and Control, Draft Version

    http://drops.dagstuhl.de/opus/volltexte/2007/902/pdf/07122.BoissierOlivier.Paper.902.pdf

    In recent years, social and organizational aspects of agency have become a major issue in multi-agent systems (MAS) research. Recent applications of MAS reinforce the need to address these aspects in order to ensure some social order within such systems. Tools to control and regulate the overall functioning of a system are needed in order to enforce global laws on the autonomous agents operating in it. This paper presents a normative organization system composed of MOISEInst, a normative organization modeling language used to define the normative organization of a MAS, accompanied by SYNAI, a normative organization implementation architecture that is itself regulated by an explicit normative organization specification.

    A note on validity in law and regulatory systems (position paper)

    The notion of validity fulfils a crucial role in legal theory. The emerging Web 3.0 opens a new landscape in which Semantic Web languages, legal ontologies, and normative multiagent systems are built to cover new regulatory needs. Conceptual models for complex regulatory systems shape the characteristic features of rules, norms and principles in different ways. This position paper outlines one such multilayered governance model, designed for the CAPER platform.

    The Challenge of Artificial Socio-Cognitive Systems

    This paper is an invitation to carry out science and engineering for a class of socio-technical systems in which individuals, who may be human or artificial entities, engage in purposeful collective interactions within a shared web-mediated social space. We put forward a characterisation of these systems and introduce some conceptual distinctions that may help to plot the work ahead. In particular, we propose a tripartite view that highlights the interplay between the institutional models that prescribe the behaviour of participants, the corresponding implementation of these prescriptions, and the actual performance of the system. Building on this tripartite view, we explore the problem of developing a conceptual framework for modelling this type of system and how that framework can be supported by technological artefacts that implement the resulting models. The last section of this position paper is a list of challenges that we believe are worth facing. This work draws upon the contributions that the MAS community has made to the understanding and realization of the concepts of coordination, norms and institutions from an organisational perspective.

    The authors wish to acknowledge the support of SINTELNET (FET Open Coordinated Action FP7-ICT-2009-C Project No. 286370) in the writing of this paper.

    The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation

    An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of integrating the ethical views of such stakeholders into the behavior of the autonomous system. We propose an ethical recommendation component, which we call Jiminy, that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. Jiminy represents the ethical views of each stakeholder using normative systems, and has three ways of resolving moral dilemmas involving the opinions of the stakeholders. First, Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Second, Jiminy combines the normative systems of the stakeholders, so that their combined expertise may resolve the dilemma. Third, and only if the other two methods have failed, Jiminy uses context-sensitive rules to decide which stakeholder takes precedence. At the abstract level, these three methods are characterized by the addition of arguments, the addition of attacks among arguments, and the removal of attacks among arguments. We show how Jiminy can be used not only for ethical reasoning and collaborative decision making, but also for providing explanations of ethical behavior.
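    The three resolution methods described above are operations on an abstract argumentation framework. As a minimal sketch of the underlying machinery, not Jiminy's actual implementation, the following computes a grounded extension over toy arguments and shows how removing an attack (the third, context-sensitive method) changes which arguments are accepted:

    ```python
    def grounded_extension(arguments, attacks):
        """Grounded extension of an abstract argumentation framework:
        iteratively accept arguments all of whose attackers are defeated,
        then mark the targets of accepted arguments as defeated."""
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if a in accepted or a in defeated:
                    continue
                attackers = {x for (x, y) in attacks if y == a}
                if attackers <= defeated:   # unattacked, or all attackers defeated
                    accepted.add(a)
                    defeated |= {y for (x, y) in attacks if x in accepted}
                    changed = True
        return accepted

    # Two stakeholders disagree: stakeholder B's argument attacks A's.
    args = {"A", "B"}
    atk = {("B", "A")}
    print(sorted(grounded_extension(args, atk)))   # → ['B']

    # Removing the attack (e.g. a context rule gives A's stakeholder precedence)
    # resolves the dilemma differently: both arguments become acceptable.
    print(sorted(grounded_extension(args, set())))  # → ['A', 'B']
    ```

    Adding arguments or attacks before recomputing the extension models the first two methods in the same way: each operation changes the framework, and the semantics then determines which stakeholder opinions survive.
    
    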