
    From Agent Game Protocols to Implementable Roles

    Abstract. We present a formal framework for decomposing agent interaction protocols into the roles their participants should play. The framework allows an Authority Agent that knows a protocol to compute the protocol's roles so that it can allocate them to interested parties. We show how the Authority Agent can use the role descriptions to identify problems with the protocol and repair it on the fly, ensuring that participants can implement their role requirements without compromising the protocol's interactions. Our representation of agent interaction protocols is game-based, and the decomposition of a game protocol into its constituent roles rests on the branching bisimulation equivalence reduction of the game. The work extends our previous work on using games to admit agents into an artificial society by checking their competence against the society's rules. The applicability of the overall approach is illustrated by decomposing the NetBill protocol into its roles. We also show how to automatically repair the interactions of a protocol that cannot be implemented in its original form.
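    As a rough illustration of the decomposition idea, the sketch below projects a toy game protocol onto one role's transition system, treating the other roles' moves as silent steps. The Protocol class, the project_role method, and the NetBill-flavoured transitions are illustrative assumptions, and the tau-closure used here only approximates the branching bisimulation reduction the paper describes.

```python
from collections import defaultdict

class Protocol:
    """A game protocol as a labelled transition system (hypothetical API)."""

    def __init__(self, transitions):
        # transitions: list of (state, role, message, next_state) tuples
        self.transitions = transitions

    def project_role(self, role):
        """Keep `role`'s moves; treat the other roles' moves as silent (tau)."""
        moves = defaultdict(set)
        for s, r, msg, t in self.transitions:
            moves[s].add((msg if r == role else "tau", t))

        def tau_closure(state):
            # Collapse states reachable through silent moves only.
            seen, stack = {state}, [state]
            while stack:
                for label, t in moves.get(stack.pop(), ()):
                    if label == "tau" and t not in seen:
                        seen.add(t)
                        stack.append(t)
            return frozenset(seen)

        # The reduced role description, defined over tau-closed state sets.
        role_lts = defaultdict(set)
        for s in list(moves):
            src = tau_closure(s)
            for state in src:
                for label, t in moves.get(state, ()):
                    if label != "tau":
                        role_lts[src].add((label, tau_closure(t)))
        return dict(role_lts)

# Toy exchange loosely inspired by NetBill's quote/delivery phase.
netbill = Protocol([
    (0, "customer", "request_quote", 1),
    (1, "merchant", "send_quote",    2),
    (2, "customer", "accept_quote",  3),
    (3, "merchant", "deliver_goods", 4),
])
print(netbill.project_role("customer"))
```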

    Organized Anonymous Agents

    Anonymity can be of great importance in distributed agent applications such as e-commerce and auctions. This paper proposes and analyzes a new approach to organized anonymity of agents based on the use of pseudonyms. A novel naming scheme is presented that agent platforms can use to provide automatic anonymity for all agents on the platform or, alternatively, to provide anonymity on demand. The paper also introduces a new technique, based on the use of handles, that can be fully integrated into an agent platform. Performance measures for an anonymity service implemented for the AgentScape platform provide some insight into the overhead involved.
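    The sketch below shows a minimal platform-level pseudonym service in the spirit of the handles technique, assuming a hypothetical PseudonymService API with register and deliver operations; the actual AgentScape integration is not shown.

```python
import secrets

class PseudonymService:
    """Platform-side handle table (hypothetical API, not AgentScape's)."""

    def __init__(self):
        self._handle_to_agent = {}   # private mapping, never exposed

    def register(self, agent_id, anonymous=True):
        """Issue a fresh, unlinkable handle; anonymity can be on demand."""
        handle = secrets.token_hex(16) if anonymous else agent_id
        self._handle_to_agent[handle] = agent_id
        return handle

    def deliver(self, handle, message, transport):
        """Route a message to the real agent without revealing its identity."""
        agent_id = self._handle_to_agent.get(handle)
        if agent_id is None:
            raise KeyError("unknown handle")
        transport(agent_id, message)

service = PseudonymService()
h = service.register("buyer-42")   # anonymity on demand
service.deliver(h, "bid 10", lambda a, m: print(a, "<-", m))
```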

    TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Such agents may be self-interested and, when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, owing to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need for a model of trust and reputation that ensures good interactions among software agents in large-scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents; when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
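    The published TRAVOS model works with the beta family of probability distributions; as a minimal sketch, the code below computes the direct-experience trust value as the expected value of a Beta(m+1, n+1) distribution over m successful and n unsuccessful past interactions, and falls back on a naive pooling of third-party reports. The function names are illustrative, and TRAVOS's filtering of inaccurate reporters is omitted here.

```python
def beta_trust(successes: int, failures: int) -> float:
    """Expected probability of a good outcome under a Beta(m+1, n+1) posterior."""
    return (successes + 1) / (successes + failures + 2)

def pooled_trust(own: tuple[int, int], reports: list[tuple[int, int]]) -> float:
    """With little direct experience, pool third-party observation counts."""
    m = own[0] + sum(r[0] for r in reports)
    n = own[1] + sum(r[1] for r in reports)
    return beta_trust(m, n)

print(beta_trust(8, 2))                        # 0.75 after 8 good, 2 bad outcomes
print(pooled_trust((0, 0), [(5, 1), (3, 0)]))  # ~0.82, driven by reputation
```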