
    OperA/ALIVE/OperettA

    Comprehensive models for organizations must, on the one hand, be able to specify global goals and requirements but, on the other hand, cannot assume that particular actors will always act according to the needs and expectations of the system design. Concepts such as organizational rules (Zambonelli 2002), norms and institutions (Dignum and Dignum 2001; Esteva et al. 2002), and social structures (Parunak and Odell 2002) arise from the idea that the effective engineering of organizations needs high-level, actor-independent concepts and abstractions that explicitly define the organization in which agents live (Zambonelli 2002).

    A Generic Agent Organisation Framework For Autonomic Systems

    Autonomic computing is being advocated as a tool for managing large, complex computing systems. Specifically, self-organisation provides a suitable approach for developing such autonomic systems by incorporating self-management and adaptation properties into large-scale distributed systems. To aid in this development, this paper details a generic problem-solving agent organisation framework that can act as a modelling and simulation platform for autonomic systems. Our framework describes a set of service-providing agents accomplishing tasks through social interactions in dynamically changing organisations. We particularly focus on the organisational structure, since it can serve as the basis for the design, development and evaluation of generic algorithms for self-organisation and other approaches towards autonomic systems.
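    As a rough illustration of the kind of organisational structure such a framework models, the Python sketch below uses hypothetical names (Agent, Organisation, add_link, allocate) that are not taken from the paper: service-providing agents are linked by acquaintance relations, tasks are allocated along those links, and a self-organisation step simply rewires the links.

        # Hypothetical data model, not the paper's framework: agents offer
        # services, an organisation is a graph of acquaintance links, and a
        # task is served by any acquaintance that provides the required service.
        from dataclasses import dataclass, field
        from typing import Dict, Optional, Set


        @dataclass
        class Agent:
            name: str
            services: Set[str]                                     # services this agent provides
            acquaintances: Set[str] = field(default_factory=set)   # names of known agents


        @dataclass
        class Organisation:
            agents: Dict[str, Agent] = field(default_factory=dict)

            def add_link(self, a: str, b: str) -> None:
                # A minimal "self-organisation" step: create a new acquaintance relation.
                self.agents[a].acquaintances.add(b)
                self.agents[b].acquaintances.add(a)

            def allocate(self, requester: str, service: str) -> Optional[str]:
                # Return an acquaintance of `requester` that provides `service`, if any.
                for name in self.agents[requester].acquaintances:
                    if service in self.agents[name].services:
                        return name
                return None

    Under these assumptions, evaluating a self-organisation algorithm amounts to deciding when and where add_link is called and measuring how often allocate succeeds as the organisation changes.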

    Can models of agents be transferred between different areas?

    One of the main reasons for the sustained activity and interest in the field of agent-based systems, apart from the obvious recognition of its value as a natural and intuitive way of understanding the world, is its reach into very many different and distinct fields of investigation. Indeed, the notions of agents and multi-agent systems are relevant to fields ranging from economics to robotics: they contribute to the foundations of those fields, are influenced by their ongoing research, and find many domains of application in them. While these various disciplines constitute a rich and diverse environment for agent research, the way in which they may have been linked by it is a much less considered issue. The purpose of this panel was to examine exactly this concern: the relationships between different areas that have resulted from agent research. Informed by the experience of the participants in the areas of robotics, social simulation, economics, computer science and artificial intelligence, the discussion was lively and sometimes heated.

    Measuring Causal Responsibility in Multi-Agent Spatial Interactions with Feasible Action-Space Reduction

    Modelling causal responsibility in multi-agent spatial interactions is crucial for the safety and efficiency of interactions between humans and autonomous agents. However, current formal metrics and models of responsibility either lack grounding in ethical and philosophical concepts of responsibility, or cannot be applied to spatial interactions. In this work we propose a metric of causal responsibility which is tailored to multi-agent spatial interactions, for instance interactions in traffic. In such interactions, one agent can influence another by reducing the other's feasible action space. We therefore propose feasible action space reduction (FeAR) as a metric for causal responsibility among agents. Specifically, we look at ex-post causal responsibility for simultaneous actions. We propose the use of Moves de Rigueur (a consistent set of prescribed actions for agents) to model the effect of norms on responsibility allocation. We apply the metric in a grid-world simulation of spatial interactions and show how actions, contexts, and norms affect the causal responsibility ascribed to agents. Finally, we demonstrate the application of this metric in complex multi-agent interactions. We argue that the FeAR metric is a step towards an interdisciplinary framework for quantifying responsibility that is needed to ensure safety and meaningful human control in human-AI systems.
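    A minimal sketch of how such a metric could be computed in a discrete setting is given below; it is an assumed simplification, not the paper's implementation, and the names (fear, apply_action, is_feasible, mdr_action_i) are illustrative. The FeAR of agent i on agent j is taken here as the fraction of j's feasible actions, available when i plays its Move de Rigueur, that disappear when i plays its actual action.

        # Illustrative sketch of a FeAR-style metric (assumed simplification,
        # not the paper's reference implementation).
        from typing import Callable, Hashable, Sequence, Set

        Action = Hashable
        State = Hashable


        def feasible_actions(state: State, agent: int, actions: Sequence[Action],
                             is_feasible: Callable[[State, int, Action], bool]) -> Set[Action]:
            # All actions `agent` could still take in `state` (e.g. without collision).
            return {a for a in actions if is_feasible(state, agent, a)}


        def fear(state: State, i: int, j: int,
                 actual_action_i: Action, mdr_action_i: Action,
                 actions: Sequence[Action],
                 apply_action: Callable[[State, int, Action], State],
                 is_feasible: Callable[[State, int, Action], bool]) -> float:
            # Feasible actions of j after i plays its Move de Rigueur (baseline)
            # versus after i plays its actual action.
            baseline = feasible_actions(apply_action(state, i, mdr_action_i), j,
                                        actions, is_feasible)
            actual = feasible_actions(apply_action(state, i, actual_action_i), j,
                                      actions, is_feasible)
            if not baseline:  # j had no options even under the baseline move
                return 0.0
            return (len(baseline) - len(actual)) / len(baseline)

    In this sketch a positive value means i's actual action removed options that j would have kept under i's baseline move, zero means no constraining effect, and a negative value would indicate that i opened up options for j.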

    Socionics: Sociological Concepts for Social Systems of Artificial (and Human) Agents

    Socionics is an interdisciplinary approach with the objective of using sociological knowledge about the structures, mechanisms and processes of social interaction and social communication as a source of inspiration for the development of multi-agent systems, both for the purposes of engineering applications and of social theory construction and social simulation. The approach has been developed since 1998 within the Socionics priority program funded by the German Research Foundation (DFG). This special issue of JASSS presents research results from five interdisciplinary projects of the Socionics program. The introduction gives an overview of the basic ideas of the Socionics approach and summarizes the work of these projects.
    Keywords: Socionics, Sociology, Multi-Agent Systems, Artificial Social Systems, Hybrid Systems, Social Simulation