
    A Trust-based Multiagent System

    Get PDF
    Cooperative agent systems often fail to account for sneaky agents that are willing to cooperate when the stakes are low but take selfish, greedy actions when the rewards rise. Trust modeling typically focuses on identifying the appropriate trust level for the other agents in the environment and then using these levels to decide how to interact with each agent. Adding trust to an interactive partially observable Markov decision process (I-POMDP) allows trust levels to be continuously monitored and corrected, enabling agents to make better decisions. The addition of trust modeling increases the cost of the decision-process calculations, but it solves more complex trust problems that are representative of the human world. The modified I-POMDP reward function and belief models can be used to accurately track the trust levels of agents with hidden agendas. Testing demonstrates that agents quickly identify the hidden trust levels and mitigate the impact of a deceitful agent.
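
    As a rough, hypothetical illustration of the idea in this abstract (not the paper's actual I-POMDP formulation), the sketch below keeps a Bayesian belief over a partner's hidden trust type, updates it from observed actions, and uses it to weight expected rewards; the action likelihoods and reward values are assumptions made for the example.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): a Bayesian belief over a
# partner's hidden "trust type" (cooperative vs. deceitful), updated after each
# observed action and used to weight expected rewards, in the spirit of adding
# trust to an I-POMDP belief state. All likelihoods below are illustrative.

TRUST_TYPES = ["cooperative", "deceitful"]

# P(action | trust type, stakes): deceitful agents defect mainly when stakes are high.
ACTION_LIKELIHOOD = {
    ("cooperate", "cooperative", "low"):  0.95, ("defect", "cooperative", "low"):  0.05,
    ("cooperate", "cooperative", "high"): 0.90, ("defect", "cooperative", "high"): 0.10,
    ("cooperate", "deceitful",   "low"):  0.90, ("defect", "deceitful",   "low"):  0.10,
    ("cooperate", "deceitful",   "high"): 0.20, ("defect", "deceitful",   "high"): 0.80,
}

def update_belief(belief, observed_action, stakes):
    """Bayes update of P(trust type) given the partner's observed action."""
    posterior = np.array([
        belief[i] * ACTION_LIKELIHOOD[(observed_action, t, stakes)]
        for i, t in enumerate(TRUST_TYPES)
    ])
    return posterior / posterior.sum()

def expected_reward(belief, reward_if_trusted, reward_if_betrayed):
    """Trust-weighted expected reward of relying on the partner."""
    p_cooperative = belief[TRUST_TYPES.index("cooperative")]
    return p_cooperative * reward_if_trusted + (1 - p_cooperative) * reward_if_betrayed

belief = np.array([0.5, 0.5])            # uninformed prior over trust types
for action, stakes in [("cooperate", "low"), ("cooperate", "low"), ("defect", "high")]:
    belief = update_belief(belief, action, stakes)
print(belief)                            # shifts toward "deceitful" after the high-stakes defection
print(expected_reward(belief, reward_if_trusted=10.0, reward_if_betrayed=-20.0))
```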

    Towards a Model of Open and Reliable Cognitive Multiagent Systems: Dealing with Trust and Emotions

    Get PDF
     Open multiagent systems are those in which agents can enter or leave the system freely, so any entity with unknown intentions can occupy the environment. In this scenario, trust and reputation mechanisms should be used to choose partners when requesting services or delegating tasks. Trust and reputation models have been proposed in the multiagent systems area as a way to help agents select good partners and thereby improve the interactions between them. Most of the trust and reputation models proposed in the literature address their functional aspects, but not how they affect the reasoning cycle of the agent. That is, from the agent's perspective, a trust model is usually just a “black box”, and agents usually do not take their emotional state into account when making decisions, as humans often do. Like trust, agents' emotions have also been studied with the aim of making the actions and reactions of agents more like those of human beings, in order to imitate their reasoning and decision-making mechanisms. In this paper we analyse some models proposed in the literature and propose a BDI, multi-context-based agent model that includes emotional reasoning to guide trust and reputation in open multiagent systems.
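
    As a purely hypothetical sketch of the kind of coupling the abstract argues for (it is not the authors' BDI/multi-context model), the snippet below lets an agent's emotional state modulate how much trust and reputation evidence it requires before accepting a partner; the candidate fields, emotion features, weights and threshold rule are all assumptions.

```python
from dataclasses import dataclass

# Minimal sketch (hypothetical, not the paper's model): a BDI-style agent that
# folds both a trust/reputation score and its current emotional state into
# partner selection, instead of treating the trust model as a black box.

@dataclass
class Candidate:
    name: str
    trust: float        # direct-experience trust score in [0, 1]
    reputation: float   # third-party reputation in [0, 1]

@dataclass
class EmotionalState:
    fear: float         # in [0, 1]; higher fear -> more risk averse
    confidence: float   # in [0, 1]; higher confidence -> more willing to explore

def acceptance_threshold(emotion: EmotionalState, base: float = 0.5) -> float:
    """Emotion modulates how much evidence of trustworthiness is required."""
    return min(1.0, max(0.0, base + 0.3 * emotion.fear - 0.2 * emotion.confidence))

def select_partner(candidates, emotion, w_trust=0.7, w_rep=0.3):
    """Pick the best candidate whose combined score clears the emotional threshold."""
    threshold = acceptance_threshold(emotion)
    scored = [(w_trust * c.trust + w_rep * c.reputation, c) for c in candidates]
    eligible = [(s, c) for s, c in scored if s >= threshold]
    return max(eligible, key=lambda sc: sc[0])[1] if eligible else None

agents = [Candidate("a1", 0.8, 0.6), Candidate("a2", 0.4, 0.9)]
print(select_partner(agents, EmotionalState(fear=0.7, confidence=0.2)))
```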

    Multiagent cooperation for solving global optimization problems: an extendible framework with example cooperation strategies

    Get PDF
    This paper proposes the use of multiagent cooperation for solving global optimization problems through the introduction of a new multiagent environment, MANGO. The strength of the environment lies in its flexible structure, based on communicating software agents that attempt to solve a problem cooperatively. This structure allows the execution of a wide range of global optimization algorithms described as a set of interacting operations. At one extreme, MANGO accommodates an individual non-cooperating agent, which is essentially the traditional way of solving a global optimization problem. At the other extreme, autonomous agents existing in the environment cooperate as they see fit at run time. We explain the development and communication tools provided by the environment as well as examples of agent realizations and cooperation scenarios. We also show how the multiagent structure is more effective than a single nonlinear optimization algorithm with randomly selected initial points.
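
    To make the cooperation idea concrete, here is a minimal sketch, not the actual MANGO framework or its API: several "agents" run local searches on the same objective and share their best solutions through a common pool, which biases where the others restart. The objective, sharing rule and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch (not MANGO itself): agents attack the same global optimization
# problem from different start points and share best-so-far solutions, so that
# cooperative restarts concentrate around promising regions.

def rastrigin(x):
    return 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)

rng = np.random.default_rng(0)
dim, n_agents, rounds = 2, 4, 5
blackboard = []                                   # shared pool of (value, point)

for _ in range(rounds):
    for agent in range(n_agents):
        if blackboard and rng.random() < 0.5:
            # Cooperate: restart near the best solution found by any agent so far.
            best_val, best_x = min(blackboard, key=lambda vp: vp[0])
            start = best_x + rng.normal(scale=0.5, size=dim)
        else:
            # Explore: independent random restart, i.e. the non-cooperative baseline.
            start = rng.uniform(-5.12, 5.12, size=dim)
        result = minimize(rastrigin, start, method="Nelder-Mead")
        blackboard.append((result.fun, result.x))

print(min(blackboard, key=lambda vp: vp[0]))      # best (value, point) found cooperatively
```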

    Context-dependent Trust Decisions with Subjective Logic

    Full text link
    A decision procedure implemented over a computational trust mechanism aims to allow decisions to be made about whether some entity or information should be trusted. As recognised in the literature, trust is contextual, and we describe how such a context often translates into a confidence level that should be used to modify an underlying trust value. Jøsang's Subjective Logic has long been used in the trust domain, and we show that its operators are insufficient to address this problem. We therefore provide a decision-making approach about trust which also considers the notion of confidence (based on context) through the introduction of a new operator. In particular, we introduce general requirements that must be respected when combining trustworthiness and confidence degree, and demonstrate the soundness of our new operator with respect to these properties. Comment: 19 pages, 4 figures, technical report of the University of Aberdeen (preprint version).
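
    For readers unfamiliar with Subjective Logic, the sketch below shows the standard opinion representation and an illustrative confidence-discounting step of the kind the abstract motivates. It is explicitly not the paper's proposed operator; only the opinion tuple and the projected probability E = b + a·u are standard, while the discounting rule is an assumption for the example.

```python
from dataclasses import dataclass

# Minimal sketch of a Subjective Logic opinion plus an illustrative
# confidence-based adjustment. This is NOT the paper's new operator; it only
# shows the kind of effect the abstract calls for: low contextual confidence
# should push probability mass toward uncertainty.

@dataclass
class Opinion:
    belief: float       # b
    disbelief: float    # d
    uncertainty: float  # u, with b + d + u = 1
    base_rate: float    # a, the prior probability

    def expected(self) -> float:
        """Projected probability E = b + a * u (standard in Subjective Logic)."""
        return self.belief + self.base_rate * self.uncertainty

def discount_by_confidence(op: Opinion, confidence: float) -> Opinion:
    """Illustrative adjustment: scale belief/disbelief by the contextual
    confidence in [0, 1] and move the remainder into uncertainty."""
    b = op.belief * confidence
    d = op.disbelief * confidence
    return Opinion(b, d, 1.0 - b - d, op.base_rate)

trust = Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2, base_rate=0.5)
print(trust.expected())                                          # 0.8
print(discount_by_confidence(trust, confidence=0.4).expected())  # pulled toward the base rate
```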

    An End-to-End Conversational Style Matching Agent

    Full text link
    We present an end-to-end voice-based conversational agent that can engage in naturalistic multi-turn dialogue and align with the interlocutor's conversational style. The system uses a series of deep neural network components for speech recognition, dialogue generation, prosodic analysis and speech synthesis to generate language and prosodic expression whose qualities match those of the user. We conducted a user study (N=30) in which participants talked with the agent for 15 to 20 minutes, resulting in over 8 hours of natural interaction data. Users with high-consideration conversational styles reported the agent to be more trustworthy when it matched their conversational style, whereas users with high-involvement conversational styles were indifferent. Finally, we provide design guidelines for multi-turn dialogue interactions using conversational style adaptation.
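
    As a small hypothetical sketch of style matching (not the authors' neural pipeline), the snippet below tracks a running estimate of the user's style features and exposes them as targets for generation and synthesis; the feature names and the exponential-smoothing rule are assumptions for illustration only.

```python
# Minimal sketch (hypothetical, not the authors' system): a running estimate of
# the user's conversational style that generation/synthesis stages could match.

class StyleMatcher:
    """Tracks user style features (e.g., speech rate, pitch range, verbosity)
    with an exponential moving average and exposes them as generation targets."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.style = {}          # feature name -> smoothed value

    def observe(self, features: dict) -> None:
        """Blend the latest per-turn measurements into the running estimate."""
        for name, value in features.items():
            prev = self.style.get(name, value)
            self.style[name] = (1 - self.alpha) * prev + self.alpha * value

    def targets(self) -> dict:
        """Style targets handed to dialogue generation and speech synthesis."""
        return dict(self.style)

matcher = StyleMatcher()
matcher.observe({"speech_rate": 4.2, "pitch_range": 55.0, "words_per_turn": 18})
matcher.observe({"speech_rate": 5.0, "pitch_range": 70.0, "words_per_turn": 25})
print(matcher.targets())
```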

    Trust Strategies for the Semantic Web

    Get PDF
    Everyone agrees on the importance of enabling trust on the Semantic Web to ensure more efficient agent interaction. Current research on trust focuses largely on developing computational models, semantic representations, inference techniques, etc. However, little attention has been given to the plausible trust strategies or tactics that an agent can follow when interacting with other agents on the Semantic Web. In this paper we identify the five most common trust strategies and discuss their envisaged costs and benefits. The aim is to provide some guidelines to help system developers appreciate the risks and gains involved with each trust strategy.

    Trust-Based Fusion of Untrustworthy Information in Crowdsourcing Applications

    No full text
    In this paper, we address the problem of fusing untrustworthy reports provided by a crowd of observers, while simultaneously learning the trustworthiness of individuals. To achieve this, we construct a likelihood model of the users' trustworthiness by scaling the uncertainty of their multiple estimates with trustworthiness parameters. We incorporate our trust model into a fusion method that merges estimates based on the trust parameters, and we provide an inference algorithm that jointly computes the fused output and the individual trustworthiness of the users within the maximum likelihood framework. We apply our algorithm to cell tower localisation using real-world data from the OpenSignal project and show that it outperforms state-of-the-art methods in both accuracy, by up to 21%, and the consistency of its predictions, by up to 50%. Copyright © 2013, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
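
    The following sketch conveys the flavour of such joint estimation without claiming to be the paper's exact model or inference algorithm: reports are treated as noisy observations of an unknown 2-D location, and the code alternates between a trust-weighted fusion step and re-estimating each user's trust from their residuals. The noise model, update rule and data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's algorithm): alternate between
# (1) a trust-weighted fusion of all reports and (2) re-estimating each user's
# trust from how far their reports fall from the fused estimate.

def fuse_with_trust(reports, users, n_iters=20, eps=1e-6):
    """reports: (N, 2) array of observed positions; users: length-N array of user ids."""
    user_ids = np.unique(users)
    trust = {u: 1.0 for u in user_ids}                  # precision-like trust weights
    estimate = reports.mean(axis=0)
    for _ in range(n_iters):
        # (1) Fusion: trust-weighted mean of all reports.
        weights = np.array([trust[u] for u in users])
        estimate = (weights[:, None] * reports).sum(axis=0) / weights.sum()
        # (2) Trust update: users whose reports sit far from the estimate get low trust.
        for u in user_ids:
            residuals = reports[users == u] - estimate
            mse = np.mean(np.sum(residuals**2, axis=1))
            trust[u] = 1.0 / (mse + eps)
    return estimate, trust

rng = np.random.default_rng(1)
truth = np.array([51.5, -0.1])
honest = truth + rng.normal(scale=0.01, size=(8, 2))    # reliable reports
spam = truth + rng.normal(scale=0.5, size=(4, 2))       # untrustworthy reports
reports = np.vstack([honest, spam])
users = np.array(["honest"] * 8 + ["spam"] * 4)
print(fuse_with_trust(reports, users))                  # estimate near truth; "spam" gets low trust
```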

    Rational Trust Modeling

    Get PDF
    Trust models are widely used in various computer science disciplines. The main purpose of a trust model is to continuously measure the trustworthiness of a set of entities based on their behaviors. In this article, the novel notion of "rational trust modeling" is introduced by bridging trust management and game theory. Note that trust models and reputation systems have long been used in game theory (e.g., in repeated games); however, game theory has not been utilized in the process of trust model construction, and this is where the novelty of our approach lies. In our proposed setting, the designer of a trust model assumes that the players who intend to utilize the model are rational/selfish, i.e., they decide to become trustworthy or untrustworthy based on the utility that they can gain. In other words, the players are incentivized (or penalized) by the model itself to act properly. The problem of trust management can then be approached through game-theoretic analyses and solution concepts such as the Nash equilibrium. Although rationality might be built into some existing trust models, we intend to formalize the notion of rational trust modeling from the designer's perspective. This approach results in two fascinating outcomes. First, the designer of a trust model can incentivize trustworthiness in the first place by incorporating proper parameters into the trust function, which can later be utilized among selfish players in strategic trust-based interactions (e.g., e-commerce scenarios). Second, using a rational trust model, we can prevent many well-known attacks on trust models. These two properties also help us predict the behavior of the players in subsequent steps through game-theoretic analyses.
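
    As a toy, worked illustration of the incentive argument (not the article's actual trust function), the sketch below compares the discounted utility of staying trustworthy against cheating once, where a hypothetical penalty parameter controls how much future trade a defection costs. Designing that parameter so that "trustworthy" dominates is the kind of lever the abstract attributes to rational trust modeling; all numbers are assumptions.

```python
# Minimal sketch (illustrative only): a seller compares the payoff of staying
# trustworthy with cheating once, given a trust model whose penalty parameter
# determines how much future profit is lost after a defection.

def utility_trustworthy(per_round_profit, discount, horizon):
    """Discounted utility of cooperating in every round."""
    return sum(per_round_profit * discount**t for t in range(horizon))

def utility_cheat_once(cheat_gain, per_round_profit, trust_penalty, discount, horizon):
    """Cheat in round 0 for a one-off gain; trust (and future profit) drops by trust_penalty."""
    future = sum(per_round_profit * (1 - trust_penalty) * discount**t for t in range(1, horizon))
    return cheat_gain + future

honest = utility_trustworthy(per_round_profit=10, discount=0.95, horizon=20)
for penalty in (0.1, 0.5, 0.9):
    cheat = utility_cheat_once(cheat_gain=40, per_round_profit=10,
                               trust_penalty=penalty, discount=0.95, horizon=20)
    print(f"penalty={penalty}: trustworthy={honest:.1f}, cheat once={cheat:.1f}")
# With a small penalty, cheating once pays off; a sufficiently large penalty
# makes trustworthiness the rational (utility-maximizing) choice.
```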