
    Finding an Evolutionarily Stable Strategy in Agent Reputation and Trust (ART) 2007 Competition

    Proceedings of: 23rd International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2010, Cordoba, Spain, June 1-4, 2010. We propose applying a game-theoretic approach to the games played in the Agent Reputation and Trust (ART) final competitions. Using this testbed, three international competitions were successfully carried out jointly with recent AAMAS international conferences. The winner of each competition was decided by running a game with all 16 participants. Our point is that such a game is not a complete way to determine the best trust/reputation strategy, since it does not prove that the winning strategy is evolutionarily stable. Specifically, we prove that when the strategy of the winner of the first two international competitions (2006 and 2007) becomes dominant, it is defeated by the trust strategies of other participants. We then find, through a repeated game definition, the equilibrium of trust strategies that is evolutionarily stable. This kind of repeated game should be taken into account when evaluating trust strategies, and this conclusion would improve the way trust strategies are compared. This work was supported in part by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
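    The evolutionary-stability argument can be illustrated with standard replicator dynamics: a strategy is only stable if, once dominant, it cannot be invaded. Below is a minimal Python sketch using a hypothetical two-strategy payoff matrix (Hawk-Dove values, not actual ART competition payoffs) to show a dominant strategy being pulled back to a mixed equilibrium.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of replicator dynamics: strategies whose payoff
    exceeds the population average grow in population share."""
    fitness = A @ x          # expected payoff of each strategy
    avg = x @ fitness        # population-average payoff
    return x + dt * x * (fitness - avg)

# Hypothetical payoff matrix A[i, j] = payoff of strategy i against j
# (Hawk-Dove numbers; in the paper this would be estimated from ART games).
A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])

x = np.array([0.9, 0.1])     # strategy 0 starts dominant, like a past winner
for _ in range(50000):
    x = replicator_step(x, A)
print("evolutionarily stable mix:", np.round(x, 3))   # converges to ~[0.5, 0.5]
```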

    An Extension of a Fuzzy Reputation Agent Trust Model (AFRAS) in the ART Testbed

    With the introduction of web services, users require an automated way of determining their reliability and even their match to personal, subjective preferences. Trust modelling of web services, managed autonomously by intelligent agents, is therefore a challenging and relevant issue. Due to the dynamic and distributed nature of web services, recommendations of web services from third parties may also play an important role in building and updating automated trust models. In this context, the Agent Reputation and Trust (ART) testbed has been used to compare trust models in three international competitions. The testbed runs locally and defines an appraisal domain with a simulation engine, although the trust models may be applied to any kind of automated and remote service, such as web services. Our previous work proposed a trust model called AFRAS that uses fuzzy sets to represent the reputation of service providers and of recommenders of such services. In this paper we describe the extension required for this trust model to participate in these competitions. The extension consists of a trust strategy that applies the AFRAS trust model to the ART testbed concepts and protocols. An implementation of this extension of the AFRAS trust model participated in the (Spanish and international) 2006 ART competitions. Using the ART platform and some of the participating agents, we executed a set of ART games to evaluate the relevance of trust strategy over trust model, and the advantage of using a fuzzy representation of trust and reputation. This work was supported in part by projects CICYT TIN2008-06742-C02-02-TSI, CICYT-TEC2008-06732-C02-02-TEC, SINPROB, CAM MADRINET S-505-TIC-0255 and DPS2008-07029-C02-02.
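    As a rough illustration of the fuzzy representation (a sketch only; AFRAS's actual membership functions and update rules are defined in the cited work), reputation can be held as a fuzzy number whose width captures how reliable the estimate is:

```python
from dataclasses import dataclass

@dataclass
class FuzzyReputation:
    """Reputation as a symmetric triangular fuzzy number: `center` is the
    expected trustworthiness in [0, 1], `width` the remaining uncertainty.
    (Illustrative only; AFRAS's actual fuzzy sets differ.)"""
    center: float = 0.5
    width: float = 0.5   # a wide fuzzy set means little is known yet

    def update(self, observed: float, rate: float = 0.2):
        # Move the fuzzy set toward the observed outcome and adjust width:
        # confirming evidence narrows it, surprising evidence widens it.
        surprise = abs(observed - self.center)
        self.center += rate * (observed - self.center)
        self.width = min(1.0, max(0.05, self.width + rate * (surprise - self.width)))

p = FuzzyReputation()
for outcome in (0.9, 0.8, 0.85, 0.2):   # the last interaction betrays trust
    p.update(outcome)
    print(f"center={p.center:.2f} width={p.width:.2f}")
```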

    The Design and Implementation of a Testbed for Comparative Game AI Studies

    An essential component of realism in video games is the behavior exhibited by the non-player character (NPC) agents in the game. Most development efforts employ a single artificial intelligence (AI) method to determine NPC agent behavior during gameplay. This paper describes an NPC AI testbed under development which will allow a variety of AI methods to be compared under simulated gameplay conditions. Two squads of NPC agents are pitted against each other in a game scenario. Multiple games using the same starting AI assignments form an epoch. The testbed allows AI methods to be varied in three dimensions: individual agents can be assigned different AI methods; individual agents can use different AI methods at different times during the game; and the AI used by one type of agent can be made to differ from the AI used by another agent type. Extensive data is collected for all agent actions in all games played in an epoch; this data forms the basis of the comparative analysis.
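    A minimal sketch of how such a three-dimensional assignment could be configured is shown below; the method names and the epoch driver are hypothetical stand-ins for the testbed's actual modules:

```python
import random
from dataclasses import dataclass, field

# Hypothetical method names; the abstract does not name the testbed's AI modules.
METHODS = ["fsm", "behavior_tree", "planner"]

@dataclass
class AgentConfig:
    agent_type: str                                # type-level dimension
    method: str                                    # per-agent dimension
    switch_at: dict = field(default_factory=dict)  # tick -> method (temporal dimension)

    def method_at(self, tick: int) -> str:
        current = self.method
        for t in sorted(self.switch_at):
            if tick >= t:
                current = self.switch_at[t]
        return current

def run_epoch(squad_a, squad_b, games=10):
    """An epoch = several games replayed from the same starting assignment,
    so per-method statistics can be aggregated afterwards."""
    log = []
    for g in range(games):
        winner = random.choice(["A", "B"])   # placeholder for a real game
        log.append({"game": g, "winner": winner})
    return log

squad_a = [AgentConfig("rifleman", "fsm", switch_at={50: "planner"})]
squad_b = [AgentConfig("medic", "behavior_tree")]
print(run_epoch(squad_a, squad_b)[:3])
```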

    Fuzzy argumentation for trust

    In an open multi-agent system, the goals of agents acting on behalf of their owners often conflict with each other. Therefore, a personal agent protecting the interests of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind trusting decisions. Our solution features a separation of opponent modelling and decision making: it uses possibilistic logic to model the behaviour of opponents, and it extends the argumentation framework of Amgoud and Prade to use the fuzzy rules within these models for well-supported decisions.
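    A toy sketch of the idea, with an invented membership function and a single pair of competing rules (the actual framework extends Amgoud and Prade's argumentation machinery with possibilistic opponent models):

```python
# Fuzzy rules act as arguments for or against trusting an agent; the
# decision comes with the degree that supports it, which can be shown
# to the user as an explanation. All numbers here are illustrative.

def truthful(x):
    """Membership degree of 'past reports were truthful' for accuracy x."""
    return max(0.0, min(1.0, (x - 0.3) / 0.4))

def argue_trust(report_accuracy: float):
    # Each rule yields (conclusion, support degree); the best-supported
    # argument wins and its degree explains the decision.
    pro = ("trust", truthful(report_accuracy))
    con = ("distrust", 1.0 - truthful(report_accuracy))
    return max([pro, con], key=lambda a: a[1])

print(argue_trust(0.8))    # ('trust', 1.0)
print(argue_trust(0.35))   # ('distrust', 0.875)
```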

    The ART of IAM: The Winning Strategy for the 2006 Competition

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for others, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. This situation has led to the development of a number of trust and reputation models, which aim to facilitate an agent's decision making in the face of uncertainty regarding the behaviour of its peers. However, these multifarious models employ a variety of different representations of trust between agents and measure performance in many different ways. This has made it hard to adequately evaluate their relative properties, raising the need for a common platform on which to compare competing mechanisms. To this end, the ART Testbed Competition has been proposed, in which agents using different trust models compete against each other to provide services in an open marketplace. In this paper, we present the winning strategy for this competition in 2006, provide an analysis of the factors that led to its success, and discuss lessons learnt from the competition about issues of trust in multi-agent systems in general. Our strategy, IAM, is Intelligent (using statistical models for opponent modelling), Abstemious (spending its money parsimoniously, guided by its trust model) and Moral (providing fair and honest feedback to those that request it).
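    The "Intelligent" and "Abstemious" traits can be caricatured in a few lines: keep per-opponent error statistics and only spend money on opinions from peers whose estimates have proved accurate. This sketch uses invented thresholds, not IAM's actual parameters:

```python
import statistics

class OpponentModel:
    """Track each opponent's appraisal errors and buy opinions only from
    statistically accurate peers (illustrative thresholds, not IAM's)."""
    def __init__(self):
        self.errors = {}

    def record(self, opponent: str, estimate: float, true_value: float):
        self.errors.setdefault(opponent, []).append(abs(estimate - true_value))

    def worth_buying_from(self, opponent: str, max_mean_error=0.15) -> bool:
        errs = self.errors.get(opponent, [])
        if len(errs) < 3:        # too little evidence: explore cautiously
            return True
        return statistics.mean(errs) <= max_mean_error

m = OpponentModel()
for e in (0.05, 0.1, 0.4):
    m.record("agent42", 0.5 + e, 0.5)
print(m.worth_buying_from("agent42"))   # mean error ~0.18 -> False
```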

    Network-aware Evaluation Environment for Reputation Systems

    Parties in reputation systems rate each other and use the ratings to compute reputation scores that drive their interactions. When deciding which reputation model to deploy in a network environment, it is important to find the most suitable model and to determine its right initial configuration. This calls for an engineering approach to describing, implementing and evaluating reputation systems that takes into account specific aspects of both the reputation systems and the networked environment in which they will run. We present NEVER, a software tool for network-aware evaluation of reputation systems and their rapid prototyping through experiments performed according to user-specified parameters. To demonstrate the effectiveness of NEVER, we analyse reputation models based on the beta distribution and on maximum likelihood estimation.
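    For reference, the beta-distribution family of models typically scores an agent as the mean of a Beta posterior over its rating history, while a maximum-likelihood variant would instead use r / (r + s). A minimal sketch of the former (a standard construction; NEVER's exact parameterisation may differ):

```python
class BetaReputation:
    """Standard beta reputation score: with r positive and s negative
    outcomes, the expected trustworthiness is the mean of a
    Beta(r + 1, s + 1) posterior."""
    def __init__(self):
        self.r = 0   # positive ratings
        self.s = 0   # negative ratings

    def rate(self, positive: bool):
        if positive:
            self.r += 1
        else:
            self.s += 1

    def score(self) -> float:
        return (self.r + 1) / (self.r + self.s + 2)

rep = BetaReputation()
for outcome in (True, True, True, False):
    rep.rate(outcome)
print(f"reputation = {rep.score():.2f}")   # (3+1)/(4+2) = 0.67
```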

    Safeguarding E-Commerce against Advisor Cheating Behaviors: Towards More Robust Trust Models for Handling Unfair Ratings

    In electronic marketplaces, after each transaction buyers rate the products provided by sellers. To decide which sellers are most trustworthy to transact with, buyers rely on trust models that leverage these ratings to evaluate sellers' reputations. Although the designers of various trust models have claimed high effectiveness in handling unfair ratings, it has recently been argued that these models are vulnerable to more intelligent attacks, and there is an urgent need to evaluate the robustness of existing trust models more comprehensively. In this work, we classify the existing trust models into two broad categories and propose an extendable e-marketplace testbed to comprehensively evaluate their robustness against different unfair-rating attacks. Besides showing that the robustness of existing trust models in handling unfair ratings falls far short of what was claimed, we propose and validate a novel combination mechanism for existing trust models, Discount-then-Filter, which notably enhances their robustness against the investigated attacks.
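    The abstract does not spell the mechanism out, but the name suggests the following two-stage shape, sketched here with invented weighting and filtering rules:

```python
import statistics

def discount_then_filter(ratings, advisor_trust, max_dev=0.25):
    """A sketch of the Discount-then-Filter idea: first weight each
    advisor's rating by how much the buyer trusts that advisor, then
    drop ratings that still deviate strongly from the consensus.
    (The paper's concrete discounting and filtering rules may differ.)"""
    discounted = {a: advisor_trust[a] * r for a, r in ratings.items()}
    consensus = statistics.median(discounted.values())
    kept = {a: r for a, r in discounted.items()
            if abs(r - consensus) <= max_dev}
    return sum(kept.values()) / len(kept)

ratings = {"honest1": 0.9, "honest2": 0.85, "badmouther": 0.05}
trust   = {"honest1": 1.0, "honest2": 1.0, "badmouther": 0.6}
print(f"seller reputation = {discount_then_filter(ratings, trust):.2f}")
```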

    Sequential Decision Making with Untrustworthy Service Providers

    In this paper, we deal with the sequential decision-making problem of agents operating in computational economies, where there is uncertainty regarding the trustworthiness of the service providers populating the environment. Specifically, we propose a generic Bayesian trust model and formulate the optimal Bayesian solution to the exploration-exploitation problem facing agents that repeatedly interact with others in such environments. We then present a computationally tractable Bayesian reinforcement learning algorithm that approximates this solution by taking into account the expected value of perfect information of an agent's actions. Our algorithm is shown to dramatically outperform all previous finalists of the international Agent Reputation and Trust (ART) competition, including the winner from both years the competition has been run.
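    The key quantity is the value of perfect information. A Monte-Carlo sketch of its myopic form under a Beta posterior follows (the paper embeds this in a full Bayesian reinforcement-learning formulation; this shows only the core idea):

```python
import random

def vpi_estimate(alpha, beta, best_known, samples=10000):
    """Monte-Carlo sketch of myopic value of perfect information: how much
    would knowing this provider's true quality improve on the best
    provider found so far?"""
    gain = 0.0
    for _ in range(samples):
        q = random.betavariate(alpha, beta)   # draw from the posterior
        gain += max(q - best_known, 0.0)      # only an improvement has value
    return gain / samples

# A barely-explored provider (1 success, 1 failure) versus a known 0.7:
print(f"VPI = {vpi_estimate(2, 2, 0.7):.3f}")   # small but non-zero: worth exploring
```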

    Trust and Reputation in Multiagent Systems

    After an introduction to the basic concepts, we briefly present several trust and reputation models that have been developed by research groups in Spanish-speaking countries (in many cases in collaboration with groups from other countries) and that are themselves international references. Although far from exhaustive, we believe this selection of models adequately illustrates both the problems and the solutions currently being applied in this field.

    Evolutionary-inspired approach to compare trust models in agent simulations

    In many dynamic open systems, agents have to interact with one another to achieve their goals. These interactions pose challenges for trust modelling, which aims to facilitate an agent's decision making under uncertainty about the behaviour of its peers. Much of the literature has focused on describing trust models, but less on evaluating and comparing them. The most common way to evaluate trust models is to execute simulations under different conditions with a given combination of agent types (honest, altruistic, etc.), and to compare the models according to efficiency, speed of convergence, adaptability to sudden changes, and so on. In our opinion, such evaluation measures do not fully determine the best trust model, since they do not test which one is evolutionarily stable. Our contribution is the definition of a new way to compare trust models by observing their ability to become dominant. It consists of finding the equilibrium of trust models in a multiagent system that is evolutionarily stable, and then observing which agent becomes dominant. We propose a sequence of simulations in which evolution is implemented by assuming that the worst agent in a simulation replaces its trust model with the best one in that simulation (see the sketch below). The ability to become dominant is therefore an interesting feature for any trust model, and testing it through this evolutionary-inspired approach is useful for comparing and evaluating trust models in agent systems. Specifically, we have applied our evaluation method to the Agent Reputation and Trust competitions held at the 2006, 2007 and 2008 AAMAS conferences. We observe that the ranking obtained by comparing the agents' ability to become dominant differs from the official one, where the winner was decided by running a game with a representative of all participants several times. This work was supported in part by projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02 and CAM CONTEXTS (S2009/TIC-1485).
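    A compact sketch of the replacement loop described above; the scores are randomized placeholders for full ART games, and two of the model names are invented:

```python
import random

def run_game(models):
    """Placeholder for one ART-style game: one score per agent slot.
    (Scores here are random; the real testbed plays the full game.)"""
    return [random.random() for _ in models]

def evolve(models, rounds=100):
    """Evolutionary-inspired comparison: after each game the worst agent
    abandons its trust model and adopts the best agent's model, until one
    model is dominant or the rounds run out."""
    for _ in range(rounds):
        scores = run_game(models)
        best = scores.index(max(scores))
        worst = scores.index(min(scores))
        models[worst] = models[best]   # trust-model replacement step
        if len(set(models)) == 1:
            break                      # a dominant model has emerged
    return models

# Hypothetical participant models standing in for the ART finalists:
print(evolve(["IAM", "AFRAS", "ModelC", "ModelD"]))
```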