
    Algebras for Agent Norm-Regulation

    An abstract architecture for idealized multi-agent systems whose behaviour is regulated by normative systems is developed and discussed. Agent choices are determined partly by the preference ordering of possible states and partly by normative considerations: the agent chooses the act that leads to the best outcome among all permissible actions. Whether an action is permissible depends on whether performing it leads to a state satisfying a condition that is forbidden according to the norms regulating the multi-agent system. This idea is formalized by defining set-theoretic predicates characterizing multi-agent systems. The definition of the predicate uses decision theory, the Kanger-Lindahl theory of normative positions, and an algebraic representation of normative systems.
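    The choice rule in this abstract (pick the most preferred outcome among the norm-permissible actions) can be made concrete with a small sketch. This is an illustration only, assuming a finite action set, a deterministic transition function, and forbidden-state predicates; all names are hypothetical, not taken from the paper's formalization.

```python
# Illustrative sketch of the choice rule described above; all names are
# hypothetical. An action is permissible iff the state it leads to violates
# no forbidden condition; among permissible actions, pick the preferred one.

def choose_act(actions, transition, forbidden, preference):
    permissible = [
        a for a in actions
        if not any(cond(transition(a)) for cond in forbidden)
    ]
    if not permissible:
        raise ValueError("the norms forbid the outcome of every available action")
    return max(permissible, key=lambda a: preference(transition(a)))

# Toy example: states are integers, the current state is 1, the norm forbids
# reaching a negative state, and the agent prefers larger states.
effects = {"inc": +1, "dec": -3, "stay": 0}
best = choose_act(
    effects,
    transition=lambda a: 1 + effects[a],
    forbidden=[lambda s: s < 0],
    preference=lambda s: s,
)
assert best == "inc"
```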

    Homo Socionicus: a Case Study of Simulation Models of Norms

    This paper describes a survey of normative agent-based social simulation models. These models are examined from the perspective of the foundations of social theory. Agent-based modelling contributes to the research program of methodological individualism. Norms are a central concept in the role-theoretic concept of action in the tradition of Durkheim and Parsons. This paper investigates to what extent normative agent-based models are able to capture the role-theoretic concept of norms. Three methodological core problems are identified: the question of norm transmission, the normative transformation of agents, and the kind of analysis the models contribute. It can be shown that the early models addressed only some of these problems rather than all of them simultaneously. More recent developments show progress in that direction; however, the degree of resolution of intra-agent processes remains too low for a comprehensive understanding of normative behaviour regulation.
    Keywords: Norms, Normative Agent-Based Social Simulation, Role Theory, Methodological Individualism
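    The first of the three core problems, norm transmission, is the kind of mechanism the surveyed models implement. As a rough illustration only (not a model from the surveyed literature), here is a toy imitation dynamic in which agents on a ring adopt the locally dominant norm; the topology, update rule, and all names are assumptions.

```python
import random

def step(norms):
    """One synchronous update: each agent adopts the majority norm
    among itself and its two ring neighbours."""
    n = len(norms)
    out = []
    for i in range(n):
        local = [norms[(i - 1) % n], norms[i], norms[(i + 1) % n]]
        out.append(max(set(local), key=local.count))
    return out

random.seed(0)
norms = [random.choice("AB") for _ in range(20)]
for _ in range(10):
    norms = step(norms)
print("".join(norms))  # clusters of shared norms emerge and stabilise
```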

    Academic Panel: Can Self-Managed Systems be trusted?

    Trust can be defined as having confidence or faith in; a form of reliance or certainty based on past experience; allowing without fear; believing; hoping, expecting, and wishing; and extending credit to. The issue of trust in computing has always been a hot topic, and it has become especially prominent with the proliferation of services over the Internet, which has brought questions of trust and security right into the ordinary home. Autonomic computing brings its own complexity to this. With systems that self-manage, the internal decision-making process is less transparent, and the ‘intelligence’ may evolve and become less tractable. Such systems may be used for anything from environment monitoring to looking after Granny in the home, so the issue of trust is imperative. To this end, we have organised this panel to examine some of the key aspects of trust. The first section discusses the issues of self-management when applied across organizational boundaries. The second section explores predictability in self-managed systems. The third part examines how trust is manifest in electronic service communities. The final discussion demonstrates how trust can be integrated into an autonomic system as the core intelligence upon which to base adaptivity choices.

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories, and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations, and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the designer of ethical intelligent agents in experimenting flexibly with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers directly raise the reasoning performance in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation.
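    LogiKEy's actual embeddings are written in higher-order logic and run inside off-the-shelf provers such as Isabelle/HOL; the sketch below is only a loose Python analogue of the underlying possible-worlds idea, in which formulas are functions from worlds to truth values and an "obligatory" operator quantifies over norm-ideal worlds. The worlds, the accessibility relation, and the example proposition are all invented for illustration.

```python
# Loose analogue of a shallow semantical embedding of a deontic logic:
# a formula is a function from worlds to bool, and O ("obligatory") says
# the formula holds in every norm-ideal alternative of the current world.
# All worlds, relations, and propositions below are illustrative.

WORLDS = {"w1", "w2", "w3"}
IDEAL = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": {"w3"}}  # serial relation

def O(phi):          # obligatory: phi holds at all ideal alternatives
    return lambda w: all(phi(v) for v in IDEAL[w])

def Neg(phi):
    return lambda w: not phi(w)

def Implies(phi, psi):
    return lambda w: (not phi(w)) or psi(w)

def valid(phi):      # valid: true at every world
    return all(phi(w) for w in WORLDS)

paid = lambda w: w in {"w2", "w3"}   # atomic proposition "taxes are paid"

print(valid(O(paid)))                              # True in this model
print(valid(Implies(O(paid), Neg(O(Neg(paid))))))  # D-style consistency holds
```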

    Multi-task Deep Reinforcement Learning with PopArt

    The reinforcement learning community has made great strides in designing algorithms capable of exceeding human performance on specific tasks. These algorithms are mostly trained one task at a time, with each new task requiring a brand-new agent instance to be trained. This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequential-decision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learning system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve: such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards, causing the algorithm to focus on them at the expense of generality. We propose to automatically adapt the contribution of each task to the agent's updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state-of-the-art performance on learning to play all games in a set of 57 diverse Atari games. Excitingly, our method learned a single trained policy, with a single set of weights, that exceeds median human performance. To our knowledge, this was the first time a single agent surpassed human-level performance on this multi-task domain. The same approach also demonstrated state-of-the-art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab.
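    The adaptive rescaling the abstract refers to is PopArt: value targets are normalised with running statistics, and the final layer is rescaled whenever those statistics change so that the unnormalised predictions are preserved. The sketch below is a minimal single-task version of that update; the class name, step size, and initialisation are illustrative choices, not the paper's exact hyperparameters (the multi-task agent keeps separate statistics and output weights per task).

```python
import numpy as np

class PopArtHead:
    """Minimal sketch of a PopArt-style value head (illustrative names and
    hyperparameters). Targets are normalised by running moments; when the
    moments change, W and b are rescaled so unnormalised outputs are kept."""

    def __init__(self, n_features, beta=3e-4):
        self.W = np.random.randn(n_features) * 0.01
        self.b = 0.0
        self.mu, self.nu = 0.0, 1.0   # running first and second moments
        self.beta = beta

    @property
    def sigma(self):
        return np.sqrt(max(self.nu - self.mu ** 2, 1e-8))

    def update_stats(self, target):
        """Update moments, then rescale W, b to preserve outputs exactly."""
        old_mu, old_sigma = self.mu, self.sigma
        self.mu += self.beta * (target - self.mu)
        self.nu += self.beta * (target ** 2 - self.nu)
        self.W *= old_sigma / self.sigma
        self.b = (old_sigma * self.b + old_mu - self.mu) / self.sigma

    def predict(self, features):
        """Unnormalised value estimate, invariant under update_stats."""
        return self.sigma * (self.W @ features + self.b) + self.mu

    def normalised_target(self, target):
        """What the regression loss is computed against."""
        return (target - self.mu) / self.sigma
```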

    A Cognitive Model for Conversation

    This paper describes a symbolic model of rational action and decision making to support analysing dialogue. The model approximates principles of behaviour from game theory, and its proof theory makes Gricean principles of cooperativity derivable when the agents’ preferences align.