9 research outputs found

    An Extension of a Fuzzy Reputation Agent Trust Model (AFRAS) in the ART Testbed

    With the introduction of web services, users require an automated way to determine their reliability and even their fit with personal, subjective preferences. Trust modelling of web services, managed autonomously by intelligent agents, is therefore a challenging and relevant issue. Owing to the dynamic and distributed nature of web services, third-party recommendations may also play an important role in building and updating automated trust models. In this context, the Agent Reputation and Trust (ART) testbed has been used to compare trust models in three international competitions. The testbed runs locally and defines an art-appraisal domain with a simulation engine, although the trust models may be applied to any kind of automated, remote service, such as web services. Our previous work proposed an already-published trust model called AFRAS, which uses fuzzy sets to represent the reputation of service providers and of recommenders of such services. In this paper we describe the extension required for the trust model to participate in these competitions: a trust strategy that applies the AFRAS trust model to the ART testbed concepts and protocols. An implementation of this extension of the AFRAS trust model participated in the Spanish and International 2006 ART competitions. Using the ART platform and some of the participating agents, we executed a set of ART games to evaluate the relevance of the trust strategy over the trust model, and the advantage of using a fuzzy representation of trust and reputation.
    This work was supported in part by projects CICYT TIN2008-06742-C02-02-TSI, CICYT-TEC2008-06732-C02-02-TEC, SINPROB, CAM MADRINET S-505-TIC-0255 and DPS2008-07029-C02-02.
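The abstract does not give AFRAS's actual formulas. Purely as an illustration of the general idea of a fuzzy reputation representation, a reputation value can be kept as a triangular fuzzy number over [0, 1] and nudged toward each new appraisal outcome; the update rule, learning rate, and function names below are assumptions, not the published model.

```python
# Illustrative sketch (not the published AFRAS model): reputation as a
# triangular fuzzy number (low, peak, high) over [0, 1], updated toward
# each observed service outcome with a learning rate.

def update_reputation(rep, outcome, rate=0.3):
    """Move the triangular fuzzy reputation toward an outcome in [0, 1]."""
    low, peak, high = rep
    peak = peak + rate * (outcome - peak)        # shift the most-plausible value
    width = (high - low) * (1.0 - rate * 0.5)    # shrink uncertainty slightly
    low = max(0.0, peak - width / 2.0)
    high = min(1.0, peak + width / 2.0)
    return (low, peak, high)

def defuzzify(rep):
    """Collapse the fuzzy reputation to a single trust score (centroid)."""
    low, peak, high = rep
    return (low + peak + high) / 3.0

rep = (0.0, 0.5, 1.0)                # total ignorance: widest triangle
for outcome in [0.9, 0.8, 0.95]:     # three good interactions
    rep = update_reputation(rep, outcome)
```

After a few positive interactions the peak drifts upward and the triangle narrows, mirroring how a fuzzy reputation both improves and becomes more certain.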

    Evolutionary-inspired approach to compare trust models in agent simulations

    In many dynamic open systems, agents have to interact with one another to achieve their goals. These interactions pose challenges for the trust modelling of agents, which aims to support an agent's decision making under uncertainty about the behaviour of its peers. A lot of the literature has focused on describing trust models, but less on evaluating and comparing them. The most extensive way to evaluate trust models is to run simulations under different conditions with a given combination of agent types (honest, altruist, etc.) and compare the models on efficiency, speed of convergence, adaptability to sudden changes, and so on. In our view, such measures do not fully determine the best trust model, since they do not test which one is evolutionarily stable. Our contribution is a new way to compare trust models by observing their ability to become dominant: finding the equilibrium of trust models in a multiagent system that is evolutionarily stable, and then observing which agent becomes dominant. We propose a sequence of simulations in which evolution is implemented by assuming that the worst agent in a simulation replaces its trust model with the best one in that simulation. The ability to become dominant is therefore an interesting feature for any trust model, and testing it through this evolutionary-inspired approach is useful for comparing and evaluating trust models in agent systems. Specifically, we have applied our evaluation method to the Agent Reputation and Trust competitions held at the 2006, 2007 and 2008 AAMAS conferences.
    The ranking obtained by comparing the agents' ability to become dominant differs from the official one, in which the winner was decided by repeatedly running a game with a representative of each participant.
    This work was supported in part by projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02 and CAM CONTEXTS (S2009/TIC-1485).
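The replacement rule described above (the worst performer adopts the best performer's trust model, iterated until one model dominates) can be sketched as a small simulation loop. The payoffs and model names below are invented placeholders, not data from the actual ART competitions.

```python
import random

def evolve(population, run_game, generations=50, seed=1):
    """population: list of trust-model names; run_game scores one agent.
    Each generation the worst-scoring agent adopts the best agent's model."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(generations):
        scores = [run_game(model, rng) for model in pop]
        best = pop[scores.index(max(scores))]
        worst_i = scores.index(min(scores))
        pop[worst_i] = best              # worst agent switches trust model
        if len(set(pop)) == 1:           # one model has become dominant
            break
    return pop

# Toy payoff: model "A" outperforms "B" on average (hypothetical values).
payoff = {"A": 0.7, "B": 0.5}
def run_game(model, rng):
    return payoff[model] + rng.uniform(-0.1, 0.1)

final = evolve(["A", "B", "A", "B", "B"], run_game)
```

With these payoffs the "B" agents are replaced one by one, so the stronger model ends up dominant, which is exactly the dominance ranking the paper proposes to observe.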

    Merging plans with incomplete knowledge about actions and goals through an agent-based reputation system

    In this paper, we propose and compare alternative ways to merge plans formed of sequences of actions with unknown similarities between the goals and actions. Plans are formed of actions and are executed by several operator agents, which cooperate through recommendations. The operator agents apply the plan actions to passive elements (which we call node agents) that will require additional future executions of other plans after some time. The ignorance of the similarities between the plan actions and the goals justifies the use of a distributed recommendation system to produce a useful plan for a given operator agent to apply towards a certain goal. This plan is generated from the known results of previous executions of various plans by other operator agents. Here, we present the general framework of execution (the agent system) and the results of applying various merging algorithms to this problem.
    This work was supported in part by Project MINECO TEC2017-88048-C2-2-
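The abstract does not specify its merging algorithms; as one hypothetical sketch of merging recommended plans by their observed results, a merged plan can pick, at each step, the action with the best average success among the plans that used it there. The action names and success scores are invented for illustration.

```python
from collections import defaultdict

def merge_plans(plans):
    """plans: list of (action_sequence, observed_success in [0, 1]) pairs
    reported by recommender agents. At each position, keep the action with
    the highest average success among the plans that used it there."""
    length = max(len(seq) for seq, _ in plans)
    merged = []
    for i in range(length):
        totals = defaultdict(lambda: [0.0, 0])  # action -> [sum, count]
        for seq, success in plans:
            if i < len(seq):
                totals[seq[i]][0] += success
                totals[seq[i]][1] += 1
        merged.append(max(totals, key=lambda a: totals[a][0] / totals[a][1]))
    return merged

recommended = [
    (["scan", "clean", "verify"], 0.9),
    (["scan", "patch", "verify"], 0.6),
    (["probe", "clean", "verify"], 0.4),
]
plan = merge_plans(recommended)
```

Here the merged plan combines the best-performing action at each step across the recommended executions, rather than copying any single recommender's plan wholesale.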

    Multiple criteria decision making in application layer networks

    This work is concerned with the conduct of multiple criteria decision making (MCDM) by intelligent agents trading commodities in application layer networks (ALNs). These agents take trustworthiness into account during negotiation and select offers with respect to product price and seller reputation.
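As a minimal sketch of the kind of two-criteria selection described (not the paper's actual mechanism), an agent can normalize price, treat reputation as already in [0, 1], and rank offers by a weighted sum; the weights and offer data below are invented.

```python
def select_offer(offers, w_price=0.5, w_rep=0.5):
    """offers: list of (seller, price, reputation in [0, 1]).
    Lower price is better, higher reputation is better."""
    prices = [p for _, p, _ in offers]
    lo, hi = min(prices), max(prices)
    span = (hi - lo) or 1.0
    def score(offer):
        _, price, rep = offer
        cheapness = 1.0 - (price - lo) / span   # 1.0 for the cheapest offer
        return w_price * cheapness + w_rep * rep
    return max(offers, key=score)

offers = [("s1", 10.0, 0.4), ("s2", 12.0, 0.9), ("s3", 9.0, 0.2)]
best = select_offer(offers)
```

With equal weights the cheap low-reputation seller s3 wins; weighting reputation more heavily (e.g. w_price=0.2, w_rep=0.8) makes the trusted but pricier seller s2 win instead, which is the trade-off such agents negotiate.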

    Modelo de privacidad digital en inteligencia ambiental basado en sistemas multiagente (A digital privacy model for ambient intelligence based on multi-agent systems)

    The great development of the Information and Communication Technologies used in the application domains of Ambient Intelligence (AmI), which has taken place in the last decade, places us in so-called intelligent environments, surrounded by a wide variety of devices and technologies with the ability to acquire, store and transmit our personal information.
    The complexity and volume of the systems involved in AmI applications mean that we are unable to know and control all the information these systems can acquire and transmit, whether that information was provided by us directly or acquired indirectly by other systems without our knowledge, which puts the protection of our right to privacy at risk. Considering that the main objective of Ambient Intelligence is to offer different types of personalized services anywhere and at any time, thereby helping us carry out our daily activities, a study of the applications developed in AmI has been carried out, revealing the need to take social and ethical issues into account in the design of AmI, highlighting among them privacy as one of the fundamental rights of people, as reflected in the Universal Declaration of Human Rights (Article 12). For that reason, the true development and acceptance of Ambient Intelligence requires considering not only the technological aspects but also the social and ethical implications. This is the idea behind the concept "Design by Privacy" used in this research. Based on this concept, user privacy policies have been established according to the AmI application domains.
    Starting from the idea that the very techniques used in AmI should help protect our personal information, agents with a trust model have been used as a tool to determine the privacy rights that agents must comply with in their communications; this has served to decide with whom we share our private opinions, thus minimizing the privacy risks to our information when interacting with the services offered by AmI applications. Therefore, the aim of this thesis is to present a Digital Privacy Model based on Multi-Agent Systems, which helps us decide whom to trust when sharing our private opinions. This model has been implemented and validated in the experimental environment of the ART (Agent Reputation and Trust) testbed, in which the AmI application domain is the appraisal of art paintings. Once the way to decide with whom we share our private information had been implemented, and in order to control compliance with the privacy rights established for communications between agents, possible violations of privacy rights were formalized using the Electronic Institution "Islander" as a tool for specifying the norms and corresponding sanctions that agents must comply with in their communications.
    Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Presidente: Jesús García Herrero. Secretario: Clara Benac Earle. Vocal: Ana María Bernardos Barboll
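The thesis abstract does not detail its decision rule; a minimal sketch of the idea of sharing private opinions only with sufficiently trusted peers, plus recording norm violations for later sanction, might look like the following. The threshold, agent names, and violation log are assumptions for illustration only.

```python
def share_opinion(trust, peer, threshold=0.7):
    """Share a private opinion with peer only if its trust score in [0, 1]
    meets a threshold; unknown peers default to zero trust."""
    return trust.get(peer, 0.0) >= threshold

violations = {}
def report_violation(agent, right):
    """Record a breach of a privacy right by an agent, as an electronic
    institution's norms might; returns that agent's violation count."""
    violations.setdefault(agent, []).append(right)
    return len(violations[agent])

trust = {"agentA": 0.85, "agentB": 0.4}
```

A trusted peer passes the threshold, a poorly rated or unknown one does not, and recorded violations could in turn feed back into lowering an offender's trust score.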