
    Normative agent reasoning in dynamic societies

    Several innovative software applications, such as those required by ambient intelligence, the semantic grid, e-commerce and e-marketing, can be viewed as open societies of heterogeneous and self-interested agents in which social order is achieved through norms. For agents to participate in these kinds of societies, it is enough that they are able to represent and fulfil norms, and to recognise the authority of certain agents. However, to voluntarily join a society or to voluntarily leave it, other characteristics are needed. To identify these characteristics we observe that, on the one hand, autonomous agents have their own goals and sometimes act on behalf of others whose goals must be satisfied. On the other hand, as members of a society, agents must comply with norms that can be in clear conflict with their goals. Consequently, agents must evaluate the positive or negative effects of norms on their goals before making a decision concerning their social behaviour. Providing a model of autonomous agents that undertake this kind of norm reasoning is the aim of this paper.
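    The norm-evaluation step this abstract describes can be sketched as a small utility calculation. The sketch below is a hypothetical illustration, not the paper's actual model: all names (`Goal`, `Norm`, `should_join`) and the additive scoring scheme are assumptions introduced here for clarity.

    ```python
    # Minimal sketch: an agent weighs each norm's effect on its goals and
    # joins a society only if the net, importance-weighted effect of
    # complying with the society's norms is acceptable.
    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        importance: float  # how much the agent values this goal

    @dataclass
    class Norm:
        name: str
        # Effect of complying on each goal: +1 furthers it, -1 hinders it,
        # 0 (or absent) is neutral.
        effects: dict = field(default_factory=dict)

    def net_benefit(norms, goals):
        """Sum of importance-weighted norm effects over all goals."""
        return sum(
            goal.importance * norm.effects.get(goal.name, 0)
            for norm in norms
            for goal in goals
        )

    def should_join(norms, goals, threshold=0.0):
        """Join only if compliance is, on balance, acceptable."""
        return net_benefit(norms, goals) >= threshold

    goals = [Goal("sell_items", 1.0), Goal("keep_privacy", 0.5)]
    norms = [Norm("disclose_identity", {"sell_items": 1, "keep_privacy": -1})]
    decision = should_join(norms, goals)  # net benefit 0.5, so join
    ```

    A real normative agent would of course reason with richer norm representations (obligations, prohibitions, sanctions) rather than a single scalar per goal; the point here is only the decision structure: evaluate norms against goals, then choose.
    
    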

    A Model of Dynamic Resource Allocation in Workflow Systems

    Current collaborative work environments are characterized by dynamically changing organizational structures. Although there have been several efforts to refine work distribution, especially in workflow management, most of the literature assumes a static database approach that captures organizational roles, groups and hierarchies and implements a dynamic role-based agent-assignment protocol. In practice, however, only partial information may be available for organizational models, and in turn a large number of exceptions may emerge at the time of work assignment. In this paper we present an organizational model based on a policy-based normative system. The model combines an intentional logic of agency with a flexible, but computationally feasible, non-monotonic formalism (Defeasible Logic). Although this paper focuses on the model specification, the proposed approach to modelling agent societies provides a means of reasoning with partial and unpredictable information, as is typical of organizational agents in workflow systems.
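    The appeal of defeasible logic here is that conflicting work-assignment rules can coexist and a superiority relation decides which prevails under partial information. The toy sketch below is a hypothetical illustration of that conflict-resolution idea, not the paper's formalism; rule names, literals, and the simplified semantics are all assumptions.

    ```python
    # Toy defeasible-style inference: rules fire when their antecedents are
    # known facts; a conclusion survives only if every rule concluding its
    # negation is overridden by the superiority relation.
    def conclude(facts, rules, superior):
        """
        facts:    set of known literals, e.g. {"overloaded(bob)"}
        rules:    list of (name, antecedents, conclusion); "~p" negates "p"
        superior: set of (winner_rule, loser_rule) name pairs
        Returns the set of conclusions surviving conflict resolution.
        """
        fired = [(name, concl) for (name, ants, concl) in rules
                 if all(a in facts for a in ants)]
        survivors = set()
        for name, concl in fired:
            neg = concl[1:] if concl.startswith("~") else "~" + concl
            attackers = [n for (n, c) in fired if c == neg]
            # Survive only if this rule beats every attacking rule.
            if all((name, a) in superior for a in attackers):
                survivors.add(concl)
        return survivors

    facts = {"member(bob, team)", "overloaded(bob)"}
    rules = [
        ("r1", ["member(bob, team)"], "assign(bob)"),    # default assignment
        ("r2", ["overloaded(bob)"],   "~assign(bob)"),   # exception rule
    ]
    superior = {("r2", "r1")}  # the exception overrides the default
    result = conclude(facts, rules, superior)  # bob is not assigned
    ```

    Note how the exception rule blocks the default without the default being deleted, which is exactly the behaviour needed when organizational information is partial and exceptions emerge at assignment time.
    
    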

    OperA/ALIVE/OperettA

    Comprehensive models for organizations must, on the one hand, be able to specify global goals and requirements but, on the other hand, cannot assume that particular actors will always act according to the needs and expectations of the system design. Concepts such as organizational rules (Zambonelli 2002), norms and institutions (Dignum and Dignum 2001; Esteva et al. 2002), and social structures (Parunak and Odell 2002) arise from the idea that the effective engineering of organizations needs high-level, actor-independent concepts and abstractions that explicitly define the organization in which agents live (Zambonelli 2002).

    The Current State of Normative Agent-Based Systems

    Recent years have seen an increase in the application of ideas from the social sciences to computational systems. Nowhere has this been more pronounced than in the domain of multiagent systems. Because multiagent systems are composed of multiple individual agents interacting with each other, many parallels can be drawn to human and animal societies. One of the main challenges currently faced in multiagent systems research is that of social control. In particular, how can open multiagent systems be configured and organized given their constantly changing structure? One leading solution is to employ the use of social norms. In human societies, social norms are essential to regulation, coordination, and cooperation. The current trend of thinking is that these same principles can be applied to agent societies, of which multiagent systems are one type. In this article, we provide an introduction to and present a holistic viewpoint of the state of normative computing (computational solutions that employ ideas based on social norms). To accomplish this, we (1) introduce social norms and their application to agent-based systems; (2) identify and describe a normative process abstracted from the existing research; and (3) discuss future directions for research in normative multiagent computing. The intent of this paper is to introduce new researchers to the ideas that underlie normative computing, survey the existing state of the art, and provide direction for future research.
    Keywords: norms, normative agents, agents, agent-based systems, agent-based simulation, agent-based modeling

    Automating decision making to help establish norm-based regulations

    Norms have been extensively proposed as coordination mechanisms for both agent and human societies. Nevertheless, choosing the norms to regulate a society is by no means straightforward, for two reasons. First, the norms to choose from may not be independent (i.e., they can be related to each other). Second, different preference criteria may be applied when choosing the norms to enact. This paper advances the state of the art by modelling a series of decision-making problems that regulation authorities confront when choosing the policies to establish. To do so, we first identify three norm relationships, namely generalisation, exclusivity, and substitutability, and we then consider norm representation power, cost, and associated moral values as alternative preference criteria. Thereafter, we show that the decision-making problems faced by policy makers can be encoded as linear programs, and hence solved with the aid of state-of-the-art solvers.
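    The 0-1 selection problem the abstract describes can be illustrated in a few lines. The sketch below uses invented norms, weights, and simplified readings of the constraints (exclusivity: never enact both; substitutability: enact at least one of the pair), and solves by brute force for clarity; a real instance of this size or larger would be handed to an (I)LP solver as the paper proposes.

    ```python
    # Brute-force 0-1 norm selection: maximize representation power subject
    # to a cost budget, an exclusivity constraint, and a substitutability
    # constraint. All data here is hypothetical.
    from itertools import product

    norms = ["speed_limit", "toll", "congestion_charge"]
    power = {"speed_limit": 5, "toll": 3, "congestion_charge": 4}
    cost  = {"speed_limit": 2, "toll": 1, "congestion_charge": 3}
    exclusive   = [("toll", "congestion_charge")]  # cannot enact both
    substitutes = [("toll", "congestion_charge")]  # must enact at least one

    def best_selection(budget):
        """Return the feasible norm set with maximum total power."""
        best, best_val = None, float("-inf")
        for bits in product([0, 1], repeat=len(norms)):
            chosen = {n for n, b in zip(norms, bits) if b}
            if any(a in chosen and b in chosen for a, b in exclusive):
                continue  # violates exclusivity
            if any(a not in chosen and b not in chosen for a, b in substitutes):
                continue  # violates substitutability
            if sum(cost[n] for n in chosen) > budget:
                continue  # over budget
            val = sum(power[n] for n in chosen)
            if val > best_val:
                best, best_val = chosen, val
        return best, best_val

    chosen, value = best_selection(budget=4)
    # With budget 4, {speed_limit, toll} is optimal: power 8 at cost 3.
    ```

    The same feasible region is what the linear-program encoding captures with binary variables and inequality constraints; the brute force simply makes the constraint semantics explicit.
    
    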

    Trust and corruption: escalating social practices?

    Escalating social practices spread dynamically as they take hold; they are self-fulfilling and contagious. This article examines two central social practices, trust and corruption, which may be characterized as alternative economic lubricants. Corruption can be a considerable instrument of flexibility, while trust may be an alternative to vigilance (or to a collective regime of sanctions). Rational equilibrium explanations and psychological accounts of trust and corruption are rejected in favour of a model open to multiple feedbacks. Although there can be too much trust and too little corruption, and (unsurprisingly) too little trust and too much corruption, a state in which these forces are in balance is unattainable. Practices of trust alone can form stable equilibria, but it is claimed that such states are undesirable for economic and moral reasons. By contrast, practices of corruption are inherently unstable. Implications for strategies of control in organizational relations are drawn.

    Responsible Autonomy

    As intelligent systems increasingly make decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems.
    Comment: IJCAI 2017 (International Joint Conference on Artificial Intelligence)