
    The ART of IAM: The Winning Strategy for the 2006 Competition

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for others, may betray that trust by not performing the actions as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. This situation has led to the development of a number of trust and reputation models, which aim to facilitate an agent's decision making in the face of uncertainty regarding the behaviour of its peers. However, these multifarious models employ a variety of different representations of trust between agents, and measure performance in many different ways. This has made it hard to adequately evaluate the relative properties of different models, raising the need for a common platform on which to compare competing mechanisms. To this end, the ART Testbed Competition has been proposed, in which agents using different trust models compete against each other to provide services in an open marketplace. In this paper, we present the winning strategy for this competition in 2006, provide an analysis of the factors that led to this success, and discuss lessons learnt from the competition about issues of trust in multiagent systems in general. Our strategy, IAM, is Intelligent (using statistical models for opponent modelling), Abstemious (spending its money parsimoniously based on its trust model) and Moral (providing fair and honest feedback to those that request it).
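    The abstract does not specify IAM's statistical model, so the sketch below is only an illustration of the kind of opponent modelling it describes: a beta-Bernoulli estimate of how often a peer honours its commitments, with a trust threshold gating spending. All names are hypothetical.

```python
# Hypothetical sketch of statistical opponent modelling in the spirit of IAM;
# a beta-Bernoulli estimator is a common choice for learning how often a peer
# honours its commitments.
from dataclasses import dataclass

@dataclass
class PeerModel:
    successes: int = 0  # interactions where the peer performed as agreed
    failures: int = 0   # interactions where the peer betrayed the trust

    def update(self, kept_promise: bool) -> None:
        if kept_promise:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def trust(self) -> float:
        # Posterior mean of a Beta(1, 1) prior updated with observed outcomes.
        return (self.successes + 1) / (self.successes + self.failures + 2)

# An "abstemious" agent only pays for a service when estimated trust
# clears a threshold.
peer = PeerModel()
for outcome in [True, True, False, True]:
    peer.update(outcome)
print(f"Estimated trustworthiness: {peer.trust:.2f}")  # 0.67
```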

    Sequential Decision Making with Untrustworthy Service Providers

    In this paper, we deal with the sequential decision making problem of agents operating in computational economies, where there is uncertainty regarding the trustworthiness of service providers populating the environment. Specifically, we propose a generic Bayesian trust model, and formulate the optimal Bayesian solution to the exploration-exploitation problem facing the agents when repeatedly interacting with others in such environments. We then present a computationally tractable Bayesian reinforcement learning algorithm to approximate that solution by taking into account the expected value of perfect information of an agent's actions. Our algorithm is shown to dramatically outperform all previous finalists of the international Agent Reputation and Trust (ART) competition, including the winners from both years in which the competition has been run.
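    A minimal sketch of the selection rule the abstract describes, assuming beta-Bernoulli trust posteriors and a myopic Monte Carlo approximation of the expected value of perfect information (VPI); this illustrates the general technique, not the authors' algorithm.

```python
# Myopic VPI provider selection over beta-modelled trustworthiness.
# Exact VPI integrals are approximated by sampling from each posterior.
import random

def sample_beta(a, b, n=2000):
    return [random.betavariate(a, b) for _ in range(n)]

def choose_provider(posteriors):
    """posteriors: list of (alpha, beta) pairs, one per service provider."""
    means = [a / (a + b) for a, b in posteriors]
    best = max(range(len(posteriors)), key=lambda i: means[i])
    runner_up = max((m for i, m in enumerate(means) if i != best), default=0.0)

    scores = []
    for i, (a, b) in enumerate(posteriors):
        draws = sample_beta(a, b)
        if i == best:
            # Perfect information helps if the best arm is worse than believed...
            vpi = sum(max(runner_up - q, 0.0) for q in draws) / len(draws)
        else:
            # ...or if a non-best arm is better than the current best.
            vpi = sum(max(q - means[best], 0.0) for q in draws) / len(draws)
        scores.append(means[i] + vpi)  # exploitation value + value of exploring
    return max(range(len(scores)), key=lambda i: scores[i])

# Three providers: well-trusted, unknown, and distrusted.
print(choose_provider([(8, 2), (1, 1), (2, 8)]))
```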

    Network-aware Evaluation Environment for Reputation Systems

    Parties in reputation systems rate each other and use ratings to compute reputation scores that drive their interactions. When deciding which reputation model to deploy in a network environment, it is important to find the most suitable model and to determine its right initial configuration. This calls for an engineering approach to describing, implementing and evaluating reputation systems while taking into account specific aspects of both the reputation systems and the networked environment where they will run. We present a software tool (NEVER) for network-aware evaluation of reputation systems and their rapid prototyping through experiments performed according to user-specified parameters. To demonstrate the effectiveness of NEVER, we analyse reputation models based on the beta distribution and on maximum likelihood estimation.
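    For illustration, the two families of reputation score mentioned above can be computed as follows; the function names are hypothetical and not NEVER's API.

```python
def beta_reputation(positive: int, negative: int) -> float:
    """Expected value of Beta(positive + 1, negative + 1), as in
    beta-distribution reputation models."""
    return (positive + 1) / (positive + negative + 2)

def ml_reputation(positive: int, negative: int) -> float:
    """Maximum likelihood estimate of the probability of good behaviour."""
    total = positive + negative
    return positive / total if total > 0 else 0.5  # uninformed default

print(beta_reputation(9, 1))  # 0.833... (prior pulls the score toward 0.5)
print(ml_reputation(9, 1))    # 0.9      (raw observed frequency)
```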

    On the Simulation of Global Reputation Systems

    Reputation systems have evolved as a mechanism for building trust in virtual communities. In this paper we evaluate different metrics for computing reputation in multi-agent systems. We present a formal model for describing metrics in reputation systems and show how different well-known global reputation metrics can be expressed in it. Based on this model, we implemented a generic simulation framework for reputation metrics. We used the framework to compare different global reputation systems and identify their strengths and weaknesses. The strength of a metric is measured by its resistance to different threat models, i.e. different types of hostile agents. Based on our results, we propose a new metric for reputation systems.
    Keywords: Reputation System, Trust, Formalization, Simulation
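    A rough sketch of the evaluation idea described above, assuming a simple mean-based metric and a "badmouthing" threat model in which hostile agents always rate honest providers down; the behaviours and the metric are simplified assumptions, not the paper's framework.

```python
import random

def mean_metric(history):
    return sum(history) / len(history)

def simulate(metric, n_honest=20, n_malicious=10, rounds=200):
    # Ratings received by each honest service provider.
    ratings = {i: [] for i in range(n_honest)}
    for _ in range(rounds):
        rater = random.randrange(n_honest + n_malicious)
        target = random.randrange(n_honest)
        # Honest raters report truthfully (the provider behaved well);
        # hostile raters follow the badmouthing threat model and rate 0.
        ratings[target].append(1 if rater < n_honest else 0)
    # The metric resists the threat model if honest providers still score highly.
    scored = [metric(r) for r in ratings.values() if r]
    return sum(scored) / len(scored)

print(f"Mean honest-provider score under badmouthing: {simulate(mean_metric):.2f}")
```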

    Fuzzy argumentation for trust

    In an open multi-agent system, the goals of agents acting on behalf of their owners often conflict with each other. Therefore, a personal agent protecting the interests of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind trusting decisions. Our solution features a separation of opponent modeling and decision making. It uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework of Amgoud and Prade that uses the fuzzy rules within these models to make well-supported decisions.
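    A loose sketch of the decision style described above, assuming min-based possibilistic rule evaluation over two competing arguments; this simplifies the Amgoud-and-Prade framework considerably, and all values and rules are illustrative.

```python
def fuzzy_rule(certainty: float, conclusion_strength: float) -> float:
    """Possibilistic rules cap a conclusion's strength at the rule's certainty."""
    return min(certainty, conclusion_strength)

# Opponent model: two competing arguments about trusting agent B.
arg_for = fuzzy_rule(certainty=0.9, conclusion_strength=0.8)      # "B delivered on time before"
arg_against = fuzzy_rule(certainty=0.4, conclusion_strength=0.7)  # "B may be overloaded"

decision = "trust" if arg_for > arg_against else "distrust"
# Exposing the competing arguments is what makes the decision explainable.
print(f"{decision}: for={arg_for:.2f}, against={arg_against:.2f}")
```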

    DipGame: A challenging negotiation testbed

    There is a chronic lack of shared application domains for testing advanced research models and agent negotiation architectures in multiagent systems. In this paper we introduce a friendly testbed for that purpose. The testbed is based on The Diplomacy Game, in which negotiation and the relationships between players play an essential role. The testbed benefits from the existence of a large community of human players who know the game and can easily provide data for experiments. We describe the infrastructure in the paper and make it freely available to the AI community. © 2011 Elsevier Ltd. All rights reserved.
    Research supported by the Agreement Technologies CONSOLIDER project under contract CSD2007-0022 and INGENIO 2010, by the Agreement Technologies COST Action IC0801, and by the Generalitat de Catalunya under grant 2009-SGR-1434.
    Peer reviewed