    Fuzzy argumentation for trust

    In an open Multi-Agent System, the goals of agents acting on behalf of their owners often conflict with each other. Therefore, a personal agent protecting the interests of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind trusting decisions. Our solution features a separation of opponent modeling and decision making. It uses possibilistic logic to model the behavior of opponents, and we propose an extension of the argumentation framework of Amgoud and Prade that uses the fuzzy rules within these models to reach well-supported decisions.
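
    To make the decision style concrete, here is a minimal sketch, assuming a toy rule representation rather than the authors' actual one, of how a trusting decision could be backed by weighted rules in the spirit of an Amgoud-and-Prade-style framework; the rule names, certainty values and threshold below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Rule:
    premise: frozenset    # facts that must all hold for the rule to fire (hypothetical encoding)
    conclusion: str       # "trust" or "distrust"
    certainty: float      # necessity degree in [0, 1]

def decide_to_trust(facts, rules, threshold=0.5):
    # Strength of the best argument for a conclusion: the certainty of the
    # strongest rule whose premises are all satisfied by the known facts.
    def best(conclusion):
        fired = [r.certainty for r in rules
                 if r.conclusion == conclusion and r.premise <= facts]
        return max(fired, default=0.0)
    pro, con = best("trust"), best("distrust")
    return pro >= threshold and pro > con

rules = [
    Rule(frozenset({"kept_last_contract"}), "trust", 0.8),      # hypothetical rule
    Rule(frozenset({"unknown_provenance"}), "distrust", 0.6),   # hypothetical rule
]
print(decide_to_trust({"kept_last_contract"}, rules))   # True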

    Social techniques for effective interactions in open cooperative systems

    Distributed systems are becoming increasingly popular, both in academic and commercial communities, because of the functionality they offer for sharing resources among the participants of these communities. As individual systems with different purposes and functionalities are developed, and as data of many different kinds are generated, the value to be gained from sharing services with others, rather than keeping them for personal use alone, increases dramatically. This, however, is only achievable if the participants of open systems cooperate with each other, to ensure the longevity of the system and the richness of available services, and make decisions about the services they use, to ensure that these are of sufficient quality. Moreover, the properties of distributed systems, such as openness, dynamism, heterogeneity and resource-bounded providers, bring a number of challenges to designing computational entities that cooperate effectively and efficiently. In particular, computational entities must deal with the diversity of available services, with possible resource limitations for service provision, and with finding providers willing to cooperate even in the absence of economic gains. This requires a means not only to provide non-monetary incentives for service providers, but also to account for the quality of cooperations, in terms of the quality of provided and received services. In support of this, entities must be capable of selecting among alternative interaction partners, since each will offer distinct properties, which may change due to the dynamism of the environment. With this in mind, our goal is to develop mechanisms that allow effective cooperation between agents operating in systems that are open, dynamic, heterogeneous, and cooperative. Such mechanisms are needed in the context of cooperative applications with services that are free of charge, such as those in bioinformatics. To achieve this, we propose a framework for non-monetary cooperative interactions, which provides non-monetary incentives for service provision and a means to analyse cooperations; an evaluation method for evaluating dynamic services; a provider selection mechanism for decision-making over service requests; and a requester selection mechanism for decision-making over service provision.
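
    As a purely illustrative sketch of the provider-selection idea, and not the thesis's actual mechanism, a requester could rank candidate providers by a recency-weighted average of the quality of past cooperations; the function names, decay factor and neutral prior below are assumptions.

def score(history, decay=0.9):
    """history: list of quality ratings in [0, 1] for past cooperations, oldest first."""
    weight, total, norm = 1.0, 0.0, 0.0
    for q in reversed(history):           # most recent rating weighs most
        total += weight * q
        norm += weight
        weight *= decay
    return total / norm if norm else 0.5  # neutral prior for unknown providers

def select_provider(candidates):
    """candidates: dict mapping provider name -> list of past quality ratings."""
    return max(candidates, key=lambda p: score(candidates[p]))

# Hypothetical candidates and ratings, purely for illustration
print(select_provider({"lab_a": [0.9, 0.7, 0.8], "lab_b": [0.4, 0.95]}))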

    Credibilidade e reputação em agentes inteligentes. Aplicação ao comércio electrónico [Credibility and reputation in intelligent agents: an application to electronic commerce]

    Since its emergence, the Internet has grown at an almost exponential rate. Electronic commerce markets have followed this growth trend closely, becoming increasingly common and popular among traders and occasional buyers and sellers. Alongside this growth, the complexity and sophistication of the systems that support these markets have also increased. Intelligent Agents emerged from this evolution because of their ability to find and choose, relatively efficiently, the best deal, given the existing proposals and restrictions. Since the first application of Intelligent Agents to e-commerce markets, researchers in this area have continually tried to improve on previous work by designing better and more efficient Intelligent Agent models. One technique used in pursuit of this goal is to transfer human negotiation and decision-making behaviour to these Intelligent Agents. The objective of this dissertation is to assess whether credibility and reputation evaluation models are a useful addition to the negotiation process of Intelligent Agents. The general purpose of such models is to minimise fraud and the systematic breach of agreements reached during negotiation. In this context, a credibility and reputation evaluation model was proposed that is applicable to current e-commerce markets and capable of responding adequately to their high level of demands. In addition to the proposed model, a multi-agent simulator capable of simulating several scenarios was developed, making it possible to confirm the applicability of the proposed model. Lastly, several experiments were carried out on the developed simulator in order to draw conclusions for this work, the most important being the verification that credibility and reputation mechanisms are an asset to e-commerce markets.
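
    The following is a minimal sketch of the kind of credibility-and-reputation bookkeeping such a model might involve; it is not the dissertation's model, and the smoothing and the blend weight between direct experience and market reputation are invented parameters.

class CredibilityRecord:
    """Tracks how often a trading partner honoured its agreements."""
    def __init__(self):
        self.fulfilled = 0
        self.total = 0

    def update(self, agreement_honoured: bool):
        self.total += 1
        self.fulfilled += 1 if agreement_honoured else 0

    def credibility(self):
        # Laplace-smoothed fulfilment rate; neutral (0.5) with no history
        return (self.fulfilled + 1) / (self.total + 2)

def overall_trust(direct: CredibilityRecord, reputation: float, w_direct=0.7):
    """Blend own experience with the reputation reported by the market."""
    return w_direct * direct.credibility() + (1 - w_direct) * reputation

record = CredibilityRecord()
for honoured in (True, True, False):     # hypothetical transaction outcomes
    record.update(honoured)
print(round(overall_trust(record, reputation=0.6), 3))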

    Trust-based social mechanism to counter deceptive behaviour

    The actions of an autonomous agent are driven by its individual goals and its knowledge and beliefs about its environment. As agents can be assumed to be self-interested, they strive to achieve their own interests, and their behaviour can therefore sometimes be difficult to predict. However, some behaviour trends can be observed and used to predict the future behaviour of agents based on their past behaviour. This is useful for agents to minimise the uncertainty of interactions and ensure more successful transactions. Furthermore, uncertainty can originate from malicious behaviour, in the form of collusion, for example. Agents need to be able to cope with this to maximise their benefits and reduce poor interactions with collusive agents. This thesis provides a mechanism to support countering deceptive behaviour by enabling agents to model their agent environment, as well as their trust in the agents they interact with, while using the data they already gather during routine agent interactions. As agents interact with one another to achieve the goals they cannot achieve alone, they gather information for modelling the trust and reputation of interaction partners. The main aim of our trust and reputation model is to enable agents to select the most trustworthy partners to ensure successful transactions, while gathering a rich set of interaction and recommendation information. This rich set of information can be used for modelling the agents' social networks. Decentralised systems allow agents to control and manage their own actions, but this limits each agent's view to its local interactions. A representation of the social network helps extend an agent's view and thus extract valuable information from its environment. This thesis presents how agents can build such a model of their agent networks and use it to extract information for analysis on the issue of collusion detection.
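
    As a rough sketch of this kind of computation, and not the thesis's model, an agent could weight recommendations by the credibility of each recommender and adjust that credibility as advice is compared with observed outcomes, giving one simple signal that collusion analysis over the social network could build on; all names and parameters below are assumptions.

def reputation(recommendations, credibility):
    # Weight each recommender's rating by how credible we currently consider them.
    num = sum(credibility.get(r, 0.5) * rating for r, rating in recommendations)
    den = sum(credibility.get(r, 0.5) for r, _ in recommendations)
    return num / den if den else 0.5

def update_credibility(credibility, recommender, predicted, observed, rate=0.1):
    # Move a recommender's credibility up when its advice matched the observed
    # outcome and down when it diverged; persistent divergence by a group of
    # recommenders is one crude hint of collusion.
    error = abs(predicted - observed)            # both ratings in [0, 1]
    new = credibility.get(recommender, 0.5) + rate * (1.0 - 2.0 * error)
    credibility[recommender] = max(0.0, min(1.0, new))

cred = {"a1": 0.9, "a2": 0.2}                                   # hypothetical recommenders
print(round(reputation([("a1", 0.8), ("a2", 0.1)], cred), 2))   # 0.67
update_credibility(cred, "a2", predicted=0.1, observed=0.9)
print(round(cred["a2"], 2))                                     # 0.14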

    The Evolution and Stability of Cooperative Traits

    Recent work in multi-agent systems has identified agent behaviors that can develop and sustain mutually beneficial cooperative relationships with like-minded agents and can resist exploitation by selfish agents. Researchers have proposed the use of a probabilistic reciprocity scheme that uses summary information from past interactions to decide whether or not to honor a request for help from another agent. This behavior has been found to be close to optimal in homogeneous groups and to outperform exploiters in mixed groups. A major shortcoming of these experiments, however, is that the composition of the group in terms of agent behaviors is fixed. We believe that real-life rational agents, on the contrary, will change their behaviors based on the observed performance of different behavioral traits, with the goal of maximizing performance. In this paper, we present results from experiments on two distinct domains with population groups whose behavioral composition changes based on the performance of the agents. Based on the experimental results, we identify ecological niches for variants of exploitative selfish agents and robust reciprocative agents.
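
    The following is an illustrative sketch of a probabilistic reciprocity rule of the kind described here, in which the probability of honoring a request grows with the net help received from the requester relative to the cost of helping now; the sigmoid form and the constants are assumptions rather than the paper's exact scheme.

import math
import random

def help_probability(balance, cost, beta=1.0, tau=1.0):
    """balance: net help received from the requester so far (may be negative);
    cost: cost of honoring the current request.
    beta and tau are invented scaling constants."""
    return 1.0 / (1.0 + math.exp((cost - beta * balance) / tau))

def decide_to_help(balance, cost):
    # Probabilistic decision: help more often the more generous the requester has been.
    return random.random() < help_probability(balance, cost)

print(round(help_probability(balance=3.0, cost=1.0), 2))    # generous past partner -> ~0.88
print(round(help_probability(balance=-2.0, cost=1.0), 2))   # past exploiter -> ~0.05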