
    Trust and reputation in open multi-agent systems

    Trust and reputation are central to effective interactions in open multi-agent systems (MAS), in which agents owned by a variety of stakeholders continuously enter and leave the system. This openness means existing trust and reputation models cannot readily be used, since their performance suffers when there are various (unforeseen) changes in the environment. To this end, this thesis develops and evaluates FIRE, a trust and reputation model that enables autonomous agents in open MAS to evaluate the trustworthiness of their peers and to select good partners for interactions. FIRE integrates four sources of trust information under the same framework in order to provide a comprehensive assessment of an agent’s likely performance in open systems. Specifically, FIRE incorporates interaction trust, role-based trust, witness reputation, and certified reputation, which model trust resulting from direct experiences, role-based relationships, witness reports, and third-party references, respectively, to provide trust metrics in most circumstances. A novel model of reporter credibility has also been integrated to enable FIRE to deal effectively with inaccurate reports (from witnesses and referees). Finally, adaptive techniques have been introduced which use the information gained from monitoring the environment to dynamically adjust a number of FIRE’s parameters according to the actual situation an agent finds itself in. In all cases, a systematic empirical analysis is undertaken to evaluate the effectiveness of FIRE in terms of the agent’s performance.
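    As a rough illustration of the component-combination idea described above, the sketch below computes a reliability-weighted mean of several trust components. The component values, weights, and the [-1, 1] value range are assumptions made for the example and are not taken from FIRE itself.

        # Illustrative sketch (not FIRE's implementation): combine several trust
        # components into one overall score via a reliability-weighted mean.

        def combined_trust(components):
            """components: list of (value, weight) pairs, where value is a trust
            rating in [-1, 1] and weight reflects how much evidence backs it."""
            total_weight = sum(w for _, w in components)
            if total_weight == 0:
                return None  # no information about this agent at all
            return sum(v * w for v, w in components) / total_weight

        # Hypothetical scores for interaction trust, role-based trust, witness
        # reputation and certified reputation, with made-up weights.
        score = combined_trust([(0.8, 2.0), (0.5, 1.0), (0.3, 1.5), (0.9, 0.5)])
        print(round(score, 3))  # -> 0.6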

    TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested and, when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large-scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents; when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
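    As a rough illustration of the probability-theoretic style of trust assessment described above, the sketch below estimates trust from counts of successful and unsuccessful interactions and folds in discounted witness reports when direct experience is scarce. The counts and the discount factor are illustrative assumptions, not values from TRAVOS.

        # Minimal sketch of a probability-based trust estimate over binary
        # interaction outcomes (illustrative only, not the published TRAVOS code).

        def expected_trust(successes, failures):
            """Mean of a Beta(successes + 1, failures + 1) distribution."""
            return (successes + 1) / (successes + failures + 2)

        # Plenty of direct experience with a partner.
        print(expected_trust(successes=8, failures=2))  # -> 0.75

        # Little direct experience: fold in witness reports, down-weighted
        # because their accuracy is uncertain (discount chosen arbitrarily here).
        witness_s, witness_f = 5, 5
        discount = 0.5
        print(expected_trust(1 + discount * witness_s, 0 + discount * witness_f))  # -> 0.5625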

    A Graph-Based Approach to Address Trust and Reputation in Ubiquitous Networks

    The increasing popularity of virtual computing environments such as Cloud and Grid computing is helping to drive the realization of ubiquitous and pervasive computing. However, as computing becomes more entrenched in everyday life, the concepts of trust and risk become increasingly important. In this paper, we propose a new graph-based theoretical approach to address trust and reputation in complex ubiquitous networks. We formulate trust as a function of the quality of a task and the time required to authenticate an agent-to-agent relationship, based on the Zero-Common Knowledge (ZCK) authentication scheme. This initial representation applies a graph theory concept, accompanied by a mathematical formulation of trust metrics. The approach we propose increases agents' awareness and trustworthiness based on the values estimated for each requested task. We conclude by stating our plans for future work in this area.
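    The sketch below illustrates the general idea of encoding agent-to-agent trust as edge weights in a graph, with each weight derived from observed task quality and the time needed to authenticate the relationship. The combination formula and the use of networkx are assumptions made for illustration; the paper defines its own metrics.

        # Illustrative sketch only: a directed trust graph between agents.
        import networkx as nx

        def trust_value(quality, auth_time):
            # Hypothetical combination: higher task quality raises trust,
            # slower authentication lowers it.
            return quality / (1.0 + auth_time)

        g = nx.DiGraph()
        g.add_edge("agent_a", "agent_b", trust=trust_value(quality=0.9, auth_time=0.2))
        g.add_edge("agent_b", "agent_c", trust=trust_value(quality=0.6, auth_time=1.5))

        # Indirect trust along a path could then be aggregated, e.g. multiplicatively.
        path_trust = 1.0
        for u, v in [("agent_a", "agent_b"), ("agent_b", "agent_c")]:
            path_trust *= g[u][v]["trust"]
        print(round(path_trust, 3))  # -> 0.18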

    Bootstrapping trust evaluations through stereotypes

    Trust Strategies for the Semantic Web

    Everyone agrees on the importance of enabling trust on the Semantic Web to ensure more efficient agent interaction. Current research on trust seems to focus on developing computational models, semantic representations, inference techniques, etc. However, little attention has been given to the plausible trust strategies or tactics that an agent can follow when interacting with other agents on the Semantic Web. In this paper we identify the five most common trust strategies and discuss their envisaged costs and benefits. The aim is to provide some guidelines to help system developers appreciate the risks and gains involved with each trust strategy.

    Trust on the Web: Some Web Science Research Challenges

    Web Science is the interdisciplinary study of the World Wide Web as a first-order object, undertaken in order to understand its relationship with the wider societies in which it is embedded and to facilitate its future engineering as a beneficial object. In this paper, research issues and challenges relating to the vital topic of trust are reviewed, showing how the Web Science agenda requires trust to be addressed, and how addressing the challenges requires a range of disciplinary skills applied in an integrated manner.

    Fuzzy argumentation for trust

    In an open multi-agent system, the goals of agents acting on behalf of their owners often conflict with each other. Therefore, a personal agent protecting the interests of a single user cannot always rely on other agents. Consequently, such a personal agent needs to be able to reason about trusting (information or services provided by) other agents. Existing algorithms that perform such reasoning mainly focus on the immediate utility of a trusting decision, but do not provide an explanation of their actions to the user. This may hinder the acceptance of agent-based technologies in sensitive applications where users need to rely on their personal agents. Against this background, we propose a new approach to trust based on argumentation that aims to expose the rationale behind such trusting decisions. Our solution features a separation of opponent modelling and decision making. It uses possibilistic logic to model the behaviour of opponents, and we propose an extension of the argumentation framework of Amgoud and Prade that uses the fuzzy rules within these models to make well-supported decisions.
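    As a loose illustration of argumentation-based trust with graded certainty, the sketch below tags arguments for and against trusting an agent with possibilistic certainty degrees and reports the better-supported side together with its reasons. The decision rule and the example arguments are simplifying assumptions; they do not reproduce the extended Amgoud and Prade framework the paper proposes.

        # Purely illustrative sketch: certainty-weighted arguments for a trust decision.
        arguments = [
            ("past deliveries were on time", "trust", 0.8),
            ("recommended by a known peer", "trust", 0.6),
            ("recent report of a failed task", "distrust", 0.7),
        ]

        def decide(args):
            # Keep the strongest argument on each side (a simplification).
            best = {"trust": 0.0, "distrust": 0.0}
            for _reason, side, certainty in args:
                best[side] = max(best[side], certainty)
            decision = "trust" if best["trust"] >= best["distrust"] else "distrust"
            return decision, best

        decision, support = decide(arguments)
        print(decision, support)  # the reasons can be shown to the user as an explanation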

    Academic Panel: Can Self-Managed Systems be trusted?

    Trust can be defined as: to have confidence or faith in; a form of reliance or certainty based on past experience; to allow without fear; to believe; to hope, expect and wish; and to extend credit to. The issue of trust in computing has always been a hot topic, especially notable with the proliferation of services over the Internet, which has brought the issue of trust and security right into the ordinary home. Autonomic computing brings its own complexity to this. With systems that self-manage, the internal decision-making process is less transparent and the ‘intelligence’ is possibly evolving and becoming less tractable. Such systems may be used for anything from environment monitoring to looking after Granny in the home, and thus the issue of trust is imperative. To this end, we have organised this panel to examine some of the key aspects of trust. The first section discusses the issues of self-management when applied across organizational boundaries. The second section explores predictability in self-managed systems. The third part examines how trust is manifest in electronic service communities. The final discussion demonstrates how trust can be integrated into an autonomic system as the core intelligence on which to base adaptivity choices.

    An efficient and versatile approach to trust and reputation using hierarchical Bayesian modelling

    In many dynamic open systems, autonomous agents must interact with one another to achieve their goals. Such agents may be self-interested and, when trusted to perform an action, may betray that trust by not performing the action as required. Due to the scale and dynamism of these systems, agents will often need to interact with other agents with which they have little or no past experience. Each agent must therefore be capable of assessing and identifying reliable interaction partners, even if it has no personal experience with them. To this end, we present HABIT, a Hierarchical And Bayesian Inferred Trust model for assessing how much an agent should trust its peers based on direct and third-party information. This model is robust in environments in which third-party information is malicious, noisy, or otherwise inaccurate. Although existing approaches claim to achieve this, most rely on heuristics with little theoretical foundation. In contrast, HABIT is based exclusively on principled statistical techniques: it can cope with multiple discrete or continuous aspects of trustee behaviour; it does not restrict agents to using a single shared representation of behaviour; it can improve assessment by using any observed correlation between the behaviour of similar trustees or information sources; and it provides a pragmatic solution to the whitewasher problem (in which unreliable agents assume a new identity to avoid a bad reputation). In this paper, we describe the theoretical aspects of HABIT and present experimental results that demonstrate its ability to predict agent behaviour in both a simulated environment and one based on data from a real-world webserver domain. In particular, these experiments show that HABIT can predict trustee performance based on multiple representations of behaviour, and is up to twice as accurate as BLADE, an existing state-of-the-art trust model that is both statistically principled and has been previously shown to outperform a number of other probabilistic trust models.
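    As a rough illustration of the hierarchical idea described above, the sketch below derives a shared prior from the outcomes of previously observed, similar trustees and combines it with a new trustee's sparse direct history. The prior strength and the example counts are assumptions made for illustration and are not part of HABIT.

        # Illustrative sketch of hierarchical (empirical-Bayes-style) trust estimation,
        # not the HABIT model itself.

        def beta_posterior_mean(successes, failures, prior_a=1.0, prior_b=1.0):
            return (successes + prior_a) / (successes + failures + prior_a + prior_b)

        # Group-level evidence from similar trustees: (successes, failures) pairs.
        group_outcomes = [(9, 1), (8, 2), (7, 3)]
        group_rate = sum(s for s, _ in group_outcomes) / sum(s + f for s, f in group_outcomes)

        # Turn the group estimate into a moderately strong shared prior (strength assumed).
        prior_strength = 4.0
        prior_a = group_rate * prior_strength
        prior_b = (1.0 - group_rate) * prior_strength

        # A new trustee with almost no direct history still gets a sensible estimate.
        print(beta_posterior_mean(successes=1, failures=0, prior_a=prior_a, prior_b=prior_b))  # -> 0.84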