116 research outputs found

    Deduction for Travel Expenses When Involved with More Than One Business


    Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions

    The rapid adoption of artificial intelligence (AI) necessitates careful analysis of its ethical implications. In addressing ethics and fairness implications, it is important to examine the whole range of ethically relevant features rather than looking at individual agents alone. This can be accomplished by shifting perspective to the systems in which agents are embedded, which is encapsulated in the macro ethics of sociotechnical systems (STS). Through the lens of macro ethics, the governance of systems, in which participants try to promote outcomes and norms that reflect their values, is key. However, multiple-user social dilemmas arise in an STS when stakeholders of the STS have different value preferences or when norms in the STS conflict. To develop equitable governance that meets the needs of different stakeholders, and to resolve these dilemmas satisfactorily with the higher goal of fairness, we need to integrate a variety of normative ethical principles in reasoning. Normative ethical principles are understood as operationalizable rules inferred from philosophical theories. A taxonomy of ethical principles is thus beneficial to enable practitioners to utilise them in reasoning. This work develops a taxonomy of normative ethical principles which can be operationalized in the governance of STS. We identify an array of ethical principles, with 25 nodes on the taxonomy tree. We describe the ways in which each principle has previously been operationalized, and suggest how the operationalization of principles may be applied to the macro ethics of STS. We further explain potential difficulties that may arise with each principle. We envision that this taxonomy will facilitate the development of methodologies to incorporate ethical principles in reasoning capacities for governing equitable STS.
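    To make the idea of an operationalizable taxonomy concrete, the following Python sketch represents principle nodes as a tree that a governance reasoner could traverse. The node names, fields, and the walk helper are purely illustrative assumptions; they are not the paper's actual 25-node taxonomy or its tooling.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Principle:
        name: str
        operationalization: str = ""          # how the principle can be phrased as a usable rule
        children: List["Principle"] = field(default_factory=list)

        def walk(self):
            """Yield every principle in the subtree, e.g. for use by a governance reasoner."""
            yield self
            for child in self.children:
                yield from child.walk()

    # Hypothetical fragment of such a taxonomy
    root = Principle("Normative ethical principles", children=[
        Principle("Consequentialist", children=[
            Principle("Maximin", "prefer outcomes that improve the worst-off stakeholder"),
        ]),
        Principle("Deontological", children=[
            Principle("Proportionality", "sanctions for norm violations scale with the harm caused"),
        ]),
    ])
    print([p.name for p in root.walk()])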

    Desen: Specification of Sociotechnical Systems via Patterns of Regulation and Control

    We address the problem of engineering a sociotechnical system (STS) with respect to its stakeholders’ requirements. We motivate a two-tier STS conception comprising a technical tier that provides control mechanisms and describes what actions are allowed by the software components, and a social tier that characterizes the stakeholders’ expectations of each other in terms of norms. We adopt agents as computational entities, each representing a different stakeholder. Unlike previous approaches, our framework, Desen, incorporates the social dimension into the formal verification process. Thus, Desen supports agents potentially violating applicable norms, a consequence of their autonomy. In addition to requirements verification, Desen supports refinement of STS specifications via design patterns to meet stated requirements. We evaluate Desen at three levels. We illustrate how Desen carries out refinement via the application of patterns on a hospital emergency scenario. We show via a human-subject study that a design process based on our patterns is helpful for participants who are inexperienced in conceptual modeling and norms. We provide an agent-based environment to simulate the hospital emergency scenario to compare STS specifications (including participant solutions from the human-subject study) with metrics indicating social welfare and norm compliance, and other domain-dependent metrics.
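    As a rough illustration of the two-tier conception (not Desen's actual specification language), the sketch below separates a technical tier, which only determines whether an action is possible at all, from a social tier of norms that autonomous agents may violate. All names (Norm, technical_tier_allows, social_tier_violations) and the hospital-style example are assumptions for illustration.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Norm:
        """A social-tier expectation, e.g. that a nurse reports a triage decision."""
        subject: str                            # role the norm applies to
        description: str
        is_satisfied: Callable[[dict], bool]    # predicate over an observed state

    def technical_tier_allows(action: str, capabilities: dict) -> bool:
        """Technical tier: only actions exposed by the software components are possible."""
        return action in capabilities.get("allowed_actions", [])

    def social_tier_violations(state: dict, norms: List[Norm]) -> List[Norm]:
        """Social tier: norms can be violated by autonomous agents; violations are recorded, not prevented."""
        return [n for n in norms if not n.is_satisfied(state)]

    # Hospital-emergency-style example
    norms = [Norm("nurse", "report triage decision",
                  lambda s: s.get("triage_reported", False))]
    state = {"triage_reported": False}
    print(technical_tier_allows("report_triage", {"allowed_actions": ["report_triage"]}))  # True
    print([n.description for n in social_tier_violations(state, norms)])  # ['report triage decision']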

    Fostering Multi-Agent Cooperation through Implicit Responsibility

    For integration in real-world environments, it is critical that autonomous agents are capable of behaving responsibly while working alongside humans and other agents. Existing frameworks of responsibility for multi-agent systems typically model responsibilities in terms of adherence to explicit standards. Such frameworks do not reflect the often unstated, or implicit, way in which responsibilities can operate in the real world. We introduce the notion of implicit responsibilities: self-imposed standards of responsible behaviour that emerge and guide individual decision-making without any formal or explicit agreement. We propose that incorporating implicit responsibilities into multi-agent learning and decision-making is a novel approach for fostering mutually beneficial cooperative behaviours. As a preliminary investigation, we present a proof-of-concept approach for integrating implicit responsibility into independent reinforcement learning agents through reward shaping. We evaluate our approach through simulation experiments in an environment characterised by conflicting individual and group incentives. Our findings suggest that societies of agents modelling implicit responsibilities can learn to cooperate more quickly, and achieve greater returns compared to a baseline.
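    The abstract names reward shaping for independent reinforcement learning agents; the sketch below shows one generic way such shaping could look, with an agent's reward blended with a group-welfare term standing in for a self-imposed responsibility. The weight w, the shaping form, and the tabular Q-learner are assumptions for illustration, not the authors' implementation.

    import random
    from collections import defaultdict

    def shaped_reward(own_reward: float, group_reward: float, w: float = 0.5) -> float:
        """Blend individual payoff with group welfare to encode an implicit responsibility."""
        return (1 - w) * own_reward + w * group_reward

    class IndependentQLearner:
        """A standard independent tabular Q-learner; shaping happens only through the reward it receives."""
        def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
            self.q = defaultdict(float)
            self.actions, self.alpha, self.gamma, self.epsilon = actions, alpha, gamma, epsilon

        def act(self, state):
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def update(self, state, action, reward, next_state):
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            self.q[(state, action)] += self.alpha * (reward + self.gamma * best_next - self.q[(state, action)])

    # Usage: pass the shaped reward instead of the raw individual reward.
    learner = IndependentQLearner(actions=["cooperate", "defect"])
    learner.update("s0", "cooperate", shaped_reward(own_reward=1.0, group_reward=3.0), "s1")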

    Understanding dynamics of polarization via multiagent social simulation

    It is widely recognized that the Web contributes to user polarization, and such polarization affects not just politics but also people’s stances about public health, such as vaccination. Understanding polarization in social networks is challenging because it depends not only on user attitudes but also on their interactions and exposure to information. We adopt Social Judgment Theory to operationalize attitude shift and model user behavior based on empirical evidence from past studies. We design a social simulation to analyze how content sharing affects user satisfaction and polarization in a social network. We investigate the influence of varying tolerance in users and of selectively exposing users to congenial views. We find that (1) higher user tolerance slows down polarization and leads to lower user satisfaction; (2) higher selective exposure leads to higher polarization and lower user reach; and (3) both higher tolerance and higher selective exposure lead to a more homophilic social network.
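    The attitude-shift mechanism from Social Judgment Theory can be illustrated with a small function (an assumed simplification, not the paper's calibrated model): a message inside the latitude of acceptance pulls the attitude closer, a message beyond the latitude of rejection pushes it away, and anything in between leaves it unchanged. The thresholds and step size mu are hypothetical parameters; a wider acceptance latitude corresponds to the higher tolerance studied above.

    def update_attitude(attitude: float, message: float,
                        accept: float = 0.2, reject: float = 0.6,
                        mu: float = 0.1) -> float:
        """Attitudes and messages are positions in [-1, 1]."""
        distance = abs(message - attitude)
        if distance <= accept:        # assimilation: move toward the message
            shift = mu * (message - attitude)
        elif distance >= reject:      # contrast: move away from the message
            shift = -mu * (message - attitude)
        else:                         # non-commitment: no attitude change
            shift = 0.0
        return max(-1.0, min(1.0, attitude + shift))

    # A nearby view is assimilated; a distant view produces a contrast (backfire) shift.
    print(update_attitude(0.0, 0.15))   # -> 0.015
    print(update_attitude(0.0, 0.9))    # -> -0.09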
    • …