
    Programming Groups of Rational Agents

    Abstract. In this paper, we consider the problem of effectively programming groups of agents. These groups should capture structuring mechanisms common in multi-agent systems, such as teams, cooperative groups, and organisations. Not only should individual agents be dynamic and evolving, but the groups in which the agents occur must be open, flexible and capable of similar evolution and restructuring. We enable the description and implementation of such groups by providing an extension to our previous work on programming languages for agent-based systems based on executable temporal and modal logics. With such a formalism as a basis, we consider the grouping aspects within multi-agent systems. In particular, we describe how this logic-based approach to grouping has been implemented in Java and consider how this language can be used for developing multi-agent systems.
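    To make the grouping idea concrete, here is a minimal sketch (in Python rather than the authors' logic-based language or their Java implementation) of open, nestable agent groups with dynamic membership. All class and method names below are hypothetical illustrations, not the paper's actual API.

```python
# Minimal sketch of open, evolving agent groups. Groups can nest (teams inside
# organisations), members may join and leave at run time, and messages can be
# broadcast through the structure. Illustrative only; names are invented.

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, message):
        self.inbox.append(message)


class Group:
    """An open, restructurable collection of agents and sub-groups."""

    def __init__(self, name):
        self.name = name
        self.members = set()

    def join(self, member):           # groups are open: members may join...
        self.members.add(member)

    def leave(self, member):          # ...and leave at any time
        self.members.discard(member)

    def broadcast(self, message):
        for m in self.members:
            if isinstance(m, Group):  # groups can contain other groups
                m.broadcast(message)
            else:
                m.receive(message)


team = Group("team-alpha")
org = Group("organisation")
a, b = Agent("a1"), Agent("a2")
team.join(a)
team.join(b)
org.join(team)                 # restructuring: a whole team joins an organisation
org.broadcast("task: explore sector 7")
```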

    Trust-Based Mechanisms for Robust and Efficient Task Allocation in the Presence of Execution Uncertainty

    Vickrey-Clarke-Groves (VCG) mechanisms are often used to allocate tasks to selfish and rational agents. VCG mechanisms are incentive-compatible, direct mechanisms that are efficient (i.e. maximise social utility) and individually rational (i.e. agents prefer to join rather than opt out). However, an important assumption of these mechanisms is that the agents will always successfully complete their allocated tasks. Clearly, this assumption is unrealistic in many real-world applications where agents can, and often do, fail in their endeavours. Moreover, whether an agent is deemed to have failed may be perceived differently by different agents. Such subjective perceptions about an agent’s probability of succeeding at a given task are often captured and reasoned about using the notion of trust. Given this background, in this paper, we investigate the design of novel mechanisms that take into account the trust between agents when allocating tasks. Specifically, we develop a new class of mechanisms, called trust-based mechanisms, that can take into account multiple subjective measures of the probability of an agent succeeding at a given task and produce allocations that maximise social utility, whilst ensuring that no agent obtains a negative utility. We then show that such mechanisms pose a challenging new combinatorial optimisation problem (that is NP-complete), devise a novel representation for solving the problem, and develop an effective integer programming solution (that can solve instances with about 2×10^5 possible allocations in 40 seconds).
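    The objective such a mechanism optimises can be illustrated with a small brute-force sketch (not the paper's integer-programming solution): choose the allocation that maximises expected social utility when agents may fail. The values, costs and trust figures below are invented for the example.

```python
# Toy illustration only: exhaustive search for the allocation maximising
# expected social utility, where "trust" is a probability-of-success estimate.
# Not the paper's mechanism, payment rule, or IP formulation.

from itertools import product

tasks = {"t1": 10.0, "t2": 8.0}          # task -> value to the centre if completed
agents = ["a1", "a2"]

cost = {("a1", "t1"): 3.0, ("a1", "t2"): 4.0,   # agent's cost of attempting a task
        ("a2", "t1"): 5.0, ("a2", "t2"): 2.0}

# Subjective probability-of-success ("trust") estimates; here one aggregated
# figure per (agent, task) pair for simplicity.
trust = {("a1", "t1"): 0.9, ("a1", "t2"): 0.6,
         ("a2", "t1"): 0.7, ("a2", "t2"): 0.95}


def expected_social_utility(allocation):
    """allocation: dict task -> agent, or None if the task is left unallocated."""
    total = 0.0
    for task, agent in allocation.items():
        if agent is None:
            continue
        total += trust[(agent, task)] * tasks[task] - cost[(agent, task)]
    return total


best = None
for choice in product([None] + agents, repeat=len(tasks)):
    allocation = dict(zip(tasks, choice))
    u = expected_social_utility(allocation)
    if best is None or u > best[0]:
        best = (u, allocation)

print(best)   # here: (11.6, {'t1': 'a1', 't2': 'a2'})
```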

    An argumentative formalism for implementing rational agents

    The design of intelligent agents is a key issue for many applications. Although there is no universally accepted definition of intelligence, a notion of rational agency has been proposed as an alternative for the characterization of intelligent agency. Modeling the epistemic state of a rational agent is one of the most difficult tasks to be addressed in the design process, and its complexity is directly related to the formalism used for representing the knowledge of the agent. This paper presents the main features of Observation-based Defeasible Logic Programming (ODeLP), a formalism tailored for agents that perform defeasible reasoning in dynamic domains. Most agents must have a timely interaction with their environment. Since the cognitive process of rational agents is complex and computationally expensive, this interaction is particularly hard to achieve. To solve this issue, we propose an optimization of the inference process in ODeLP based on the use of precompiled knowledge. This optimization can be efficiently implemented using concepts from pattern matching algorithms.
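    A heavily simplified sketch of the precompilation idea: index defeasible rules by the literal they conclude once, so that answering a query becomes a lookup rather than a scan of the whole program. This is only an illustration under assumed rule syntax; it does not reproduce ODeLP's dialectical argumentation procedure.

```python
# Simplified sketch of precompiled knowledge for defeasible rules.
# Rules are (conclusion, [premises]) pairs; "~" marks negation.

from collections import defaultdict

rules = [
    ("flies", ["bird"]),
    ("~flies", ["bird", "wounded"]),
    ("bird", ["penguin"]),
    ("~flies", ["penguin"]),
]

observations = {"bird", "wounded"}

# Precompilation step: map each literal to the bodies of rules concluding it.
rules_for = defaultdict(list)
for conclusion, premises in rules:
    rules_for[conclusion].append(premises)


def derivable(literal, facts):
    """Naive backward chaining using the precompiled index (no cycle handling)."""
    if literal in facts:
        return True
    return any(all(derivable(p, facts) for p in premises)
               for premises in rules_for[literal])


def warranted(literal):
    """Toy acceptance test: derivable while its complement is not."""
    complement = literal[1:] if literal.startswith("~") else "~" + literal
    return derivable(literal, observations) and not derivable(complement, observations)


print(warranted("flies"))    # False: "~flies" is also derivable (wounded bird)
print(warranted("bird"))     # True
```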

    Iterated mutual observation with genetic programming

    "This paper introduces a simple model of interacting agents that learn to predict each other. For learning to predict the other's intended action we apply genetic programming. The strategy of an agent is rational and fixed. It does not change like in classical iterated prisoners dilemma models. Furthermore the number of actions an agent can choose from is infinite. Preliminary simulation results are presented. They show that by varying the population size of genetic programming, different learning characteristics can easily be achieved, which lead to quite different communication patterns." (author's abstract

    Kleine Gaben für große Götter

    We are working on the development and design of an approach to agents that can reason, react to the environment and are able to update their own knowledge as a result of new incoming information. In the resulting framework, rational, reactive agents can dynamically change their own knowledge bases as well as their own goals. An agent can make observations, learn new facts and new rules from the environment, and then update its knowledge accordingly. The knowledge base of an agent and its updating mechanism have been implemented in Logic Programming. The agent framework is implemented in Java. The aim of this thesis is to design and implement an architecture of a reactive, rational agent in both Java and Prolog and to test the interaction between the rational part and the reactive part of the agent. The agent architecture is called RR-agent and consists of six components: four implemented in Java and two in XSB Prolog. The result of this thesis is the basis for the paper “An architecture of a rational, reactive agent” by P. Dell'Acqua, M. Engberg and L.M. Pereira, which has been submitted.
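    A conceptual, single-language sketch (Python, not the RR-agent's actual Java and XSB Prolog components) of how a reactive layer and a rational, knowledge-updating layer might interact in one agent cycle. All names and rules are illustrative assumptions.

```python
# Conceptual only: reflexes handled by a reactive layer, everything else passed
# to a rational layer that updates its knowledge base and then deliberates.

class RationalLayer:
    """Holds the agent's knowledge base and updates it with new information."""

    def __init__(self):
        self.facts = set()
        self.goals = set()

    def update(self, observation):
        self.facts.add(observation)          # learn a new fact from the environment

    def deliberate(self):
        # Toy rule: an observed obstacle leads to a goal and a planned action.
        if "obstacle_ahead" in self.facts:
            self.goals.add("avoid_obstacle")
            return "plan_detour"
        return "continue"


class ReactiveLayer:
    """Maps urgent percepts directly to actions, bypassing deliberation."""

    REFLEXES = {"collision_imminent": "emergency_stop"}

    def react(self, observation):
        return self.REFLEXES.get(observation)


def agent_cycle(observations):
    rational, reactive = RationalLayer(), ReactiveLayer()
    for obs in observations:
        action = reactive.react(obs)         # fast path: reflexes first
        if action is None:
            rational.update(obs)             # slow path: update knowledge, then deliberate
            action = rational.deliberate()
        print(obs, "->", action)


agent_cycle(["obstacle_ahead", "collision_imminent"])
```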

    Rational physical agent reasoning beyond logic

    The paper addresses the problem of defining a theoretical physical agent framework that satisfies practical requirements of programmability by non-programmer engineers while permitting fast real-time operation of agents on digital computer networks. The objective of the new framework is to enable the satisfaction of performance requirements on autonomous vehicles and robots in space exploration, deep underwater exploration, defense reconnaissance, automated manufacturing and household automation.

    To boldly go: an occam-π mission to engineer emergence

    Future systems will be too complex to design and implement explicitly. Instead, we will have to learn to engineer complex behaviours indirectly: through the discovery and application of local rules of behaviour, applied to simple process components, from which desired behaviours predictably emerge through dynamic interactions between massive numbers of instances. This paper describes a process-oriented architecture for fine-grained concurrent systems that enables experiments with such indirect engineering. Examples are presented showing the differing complex behaviours that can arise from minor (non-linear) adjustments to low-level parameters, the difficulties in suppressing the emergence of unwanted (bad) behaviour, the unexpected relationships between apparently unrelated physical phenomena (shown up by their separate emergence from the same primordial process swamp) and the ability to explore and engineer completely new physics (such as force fields) by their emergence from low-level process interactions whose mechanisms can only be imagined, but not built, at the current time.
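    As a rough illustration of local rules giving rise to unplanned global behaviour (in Python rather than occam-π, and without the fine-grained channel-based concurrency the paper relies on): each cell applies the same purely local rule, yet structured global patterns emerge that are nowhere specified explicitly.

```python
# Conceptual illustration only: an elementary cellular automaton (rule 110) on
# a ring. Every cell sees only itself and its two neighbours; the global
# pattern that emerges is not programmed anywhere.

RULE = 110
WIDTH, STEPS = 64, 30


def local_rule(left, centre, right):
    """Each cell's next state depends only on its immediate neighbourhood."""
    index = (left << 2) | (centre << 1) | right
    return (RULE >> index) & 1


cells = [0] * WIDTH
cells[WIDTH // 2] = 1                     # a single seed cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [local_rule(cells[i - 1], cells[i], cells[(i + 1) % WIDTH])
             for i in range(WIDTH)]
```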