
    An efficient and versatile approach to trust and reputation using hierarchical Bayesian modelling

    In many dynamic open systems, autonomous agents must interact with one another to achieve their goals. Such agents may be self-interested and, when trusted to perform an action, may betray that trust by not performing the action as required. Due to the scale and dynamism of these systems, agents will often need to interact with other agents with which they have little or no past experience. Each agent must therefore be capable of assessing and identifying reliable interaction partners, even if it has no personal experience with them. To this end, we present HABIT, a Hierarchical And Bayesian Inferred Trust model for assessing how much an agent should trust its peers based on direct and third-party information. This model is robust in environments in which third-party information is malicious, noisy, or otherwise inaccurate. Although existing approaches claim to achieve this, most rely on heuristics with little theoretical foundation. In contrast, HABIT is based exclusively on principled statistical techniques: it can cope with multiple discrete or continuous aspects of trustee behaviour; it does not restrict agents to using a single shared representation of behaviour; it can improve assessment by using any observed correlation between the behaviour of similar trustees or information sources; and it provides a pragmatic solution to the whitewasher problem (in which unreliable agents assume a new identity to avoid a bad reputation). In this paper, we describe the theoretical aspects of HABIT and present experimental results that demonstrate its ability to predict agent behaviour in both a simulated environment and one based on data from a real-world webserver domain. In particular, these experiments show that HABIT can predict trustee performance based on multiple representations of behaviour, and is up to twice as accurate as BLADE, an existing state-of-the-art trust model that is statistically principled and has previously been shown to outperform a number of other probabilistic trust models.
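
    Although the abstract does not give HABIT's equations, the following Python sketch illustrates the general hierarchical Bayesian idea it builds on: a shared prior fitted across a population of trustees lets the truster exploit correlation between similar trustees, so a sparsely observed trustee is assessed by shrinkage toward the population rather than by its raw success rate alone. All names, numbers, and the method-of-moments fit are illustrative assumptions, not HABIT itself.

        import numpy as np

        def fit_population_prior(rates, strength_cap=100.0):
            # Method-of-moments fit of a Beta(alpha0, beta0) prior to the
            # success rates observed across a population of trustees.
            m, v = float(np.mean(rates)), float(np.var(rates))
            v = max(v, 1e-6)  # guard against zero variance
            strength = float(np.clip(m * (1.0 - m) / v - 1.0, 1e-3, strength_cap))
            return m * strength, (1.0 - m) * strength

        def posterior_trust(successes, failures, alpha0, beta0):
            # Posterior mean of the trustee's cooperation probability under
            # the shared population prior (Beta-Bernoulli conjugacy).
            return (alpha0 + successes) / (alpha0 + beta0 + successes + failures)

        # Hypothetical direct experience: trustee -> (successes, failures).
        history = {"a": (40, 10), "b": (35, 15), "c": (2, 0)}
        rates = np.array([s / (s + f) for s, f in history.values()])
        alpha0, beta0 = fit_population_prior(rates)

        # Trustee "c" is shrunk toward the population mean instead of
        # receiving the naive estimate 2/2 = 1.0 from two observations.
        for name, (s, f) in history.items():
            print(name, round(posterior_trust(s, f, alpha0, beta0), 3))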

    Agent-Based Trust and Reputation in the Context of Inaccurate Information Sources

    Trust is a prevalent concept in human society that, in essence, concerns our reliance on the actions of other entities within our environment. For example, we may rely on our car starting so that we get to work on time, and on our fellow drivers, so that we may get there safely. For similar reasons, trust is becoming increasingly important in computing, as systems, such as the Grid, require the integration of computing resources across organisational boundaries. In this context, the reliability of resources in one organisation cannot be assumed from the point of view of another, as certain resources may fail more often than others. For this reason, we argue that software systems must be able to assess the reliability of different resources, so that they may choose which of them to rely on. With this in mind, our goal is to develop mechanisms, or models, to aid decision making by an autonomous agent (the truster) when the consequences of its decisions depend on the actions of other agents (the trustees). To achieve this, we have developed a probabilistic framework for assessing trust based on a trustee's past behaviour, which we have instantiated through the creation of two novel trust models (TRAVOS and TRAVOS-C). These facilitate decision making in two different contexts with regard to trustee behaviour. First, using TRAVOS, a truster can make decisions in contexts where a trustee can act in only one of two ways: either it can cooperate, acting to the truster's advantage, or it can defect, thereby acting against the truster's interests. Second, using TRAVOS-C, a truster can make decisions about trustees that can act in a continuous range of ways, for example, taking into account the delivery time of a service. These models share an ability to account for observations of a trustee's behaviour, made either directly by the truster or by a third party (a reputation source). In the latter case, both models can cope with third-party information that is unreliable, either because the sender is lying or because it has a different world view. In addition, TRAVOS-C can assess a trustee for which there is little or no direct or reported experience, using information about other agents that share characteristics with the trustee. This is achieved using a probabilistic mechanism that automatically accounts for the amount of correlation observed between agents' behaviour in a truster's environment.
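
    TRAVOS is built on the Beta family: cooperate/defect outcomes are counted, and trust is the expected value of the resulting Beta posterior. The Python sketch below shows that core idea; the per-report confidence weight is an illustrative stand-in for TRAVOS's actual mechanism for discounting unreliable reputation sources, and all figures are hypothetical.

        def beta_trust(successes, failures):
            # Expected value of Beta(successes + 1, failures + 1): a standard
            # point estimate of the probability that the trustee cooperates.
            return (successes + 1.0) / (successes + failures + 2.0)

        def combined_trust(direct, reports):
            # Pool direct observations with third-party reports; each report's
            # counts are discounted by a confidence weight in [0, 1], a
            # placeholder for TRAVOS's accuracy-based filtering of reporters.
            s, f = direct
            for (rs, rf), weight in reports:
                s += weight * rs
                f += weight * rf
            return beta_trust(s, f)

        # 8 cooperations and 2 defections observed directly, plus reports from
        # a trusted reporter (weight 0.9) and a dubious one (weight 0.2).
        print(combined_trust((8, 2), [((10, 0), 0.9), ((0, 10), 0.2)]))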

    A Risk And Trust Security Framework For The Pervasive Mobile Environment

    A pervasive mobile computing environment is typically composed of multiple fixed and mobile entities that interact autonomously with each other with very little central control. Many of these interactions may occur between entities that have not interacted with each other previously. Conventional security models are inadequate for regulating access to data and services, especially when the identities of a dynamic and growing community of entities are not known in advance. To cope with this drawback, entities may rely on context data to make security and trust decisions. However, risk is introduced in this process due to the variability and uncertainty of context information. Moreover, by the time the decisions are made, the context data may have already changed, in which case the security decisions could become invalid. With this in mind, our goal is to develop mechanisms, or models, to aid trust decision making by an entity or agent (the truster) when the consequences of its decisions depend on context information from other agents (the trustees). To achieve this, in this dissertation we have developed ContextTrust, a framework not only to compute the risk associated with a context variable, but also to derive a trust measure for context-data-producing agents. To compute the context data risk, ContextTrust uses a Monte Carlo based method to model the behavior of a context variable. Moreover, ContextTrust makes use of time series classifiers and other simple statistical measures to derive an entity trust value. We conducted empirical analyses to evaluate the performance of ContextTrust using two real-life data sets. The evaluation results show that ContextTrust can be effective in helping entities render security decisions.
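
    The abstract does not specify the Monte Carlo model, so the Python sketch below is only one plausible shape for it: historical readings of a context variable are resampled with hypothetical drift noise, and risk is estimated as the probability that the variable exceeds a tolerance by decision time. The drift parameter and data are assumptions for illustration.

        import random

        def context_risk(samples, tolerance, trials=10_000, drift=0.05):
            # Monte Carlo estimate of the probability that a context variable
            # exceeds a tolerance by decision time. Each trial resamples a
            # historical reading and perturbs it with Gaussian noise, a crude
            # stand-in for how context may change before a decision is applied.
            exceed = sum(
                1 for _ in range(trials)
                if random.choice(samples) + random.gauss(0.0, drift) > tolerance
            )
            return exceed / trials

        # Hypothetical history of a context variable, e.g. sensor latency (s).
        history = [0.21, 0.25, 0.19, 0.30, 0.27, 0.22, 0.35, 0.24]
        print(f"risk of exceeding 0.30 s: {context_risk(history, 0.30):.3f}")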

    High Quality P2P Service Provisioning via Decentralized Trust Management

    Trust management is essential to fostering cooperation and high quality service provisioning in several peer-to-peer (P2P) applications. Among those applications are customer-to-customer (C2C) trading sites and markets of services implemented on top of centralized infrastructures, P2P systems, or online social networks. In these application contexts, existing work does not adequately address the heterogeneity of the problem settings in practice. This heterogeneity includes the different approaches employed by participants to evaluate the trustworthiness of their partners, the diversity in contextual factors that influence service provisioning quality, and the variety of possible behavioral patterns of the participants. This thesis presents the design and usage of appropriate computational trust models to enforce cooperation and ensure high quality P2P service provisioning, considering the above heterogeneity issues. In this thesis, I first propose a graphical probabilistic framework for peers to model and evaluate the trustworthiness of others in a highly heterogeneous setting; a sketch of one of its ingredients follows this abstract. The framework targets many important issues in the trust research literature: the multi-dimensionality of trust, the reliability of different rating sources, and the personalized modeling and computation of trust in a participant based on the quality of the services it provides. Next, I present an analysis of the effective use of computational trust models in environments where participants exhibit various behaviors, e.g., honest, rational, and malicious. I provide theoretical results showing the conditions under which cooperation emerges when using trust learning models with a given detection accuracy, and how cooperation can still be sustained while reducing the cost and accuracy of those models. As another contribution, I design and implement a general prototyping and simulation framework for reputation-based trust systems. The developed simulator can be used for many purposes, such as discovering new trust-related phenomena or evaluating the performance of a trust learning algorithm in complex settings. Two potential applications of computational trust models are then discussed: (1) the selection and ranking of (Web) services based on quality ratings from reputable users, and (2) the use of a trust model to choose reliable delegates in a key recovery scenario in a distributed online social network. Finally, I identify a number of issues in building next-generation, open reputation-based trust management systems and propose several future research directions building on the work in this thesis.
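
    As a rough illustration of one ingredient of such a framework, the Python sketch below aggregates multi-dimensional quality ratings of a provider, weighting each rater by the evaluator's estimate of its reliability. The names, weights, and flat data structure are hypothetical, not the thesis's graphical model.

        from collections import defaultdict

        def aggregate_ratings(ratings, reliability):
            # Combine per-dimension quality ratings in [0, 1] for one provider,
            # weighting each rater by how reliable the evaluator believes it is.
            totals, weights = defaultdict(float), defaultdict(float)
            for rater, dimension, score in ratings:
                w = reliability.get(rater, 0.0)
                totals[dimension] += w * score
                weights[dimension] += w
            return {d: totals[d] / weights[d] for d in totals if weights[d] > 0}

        ratings = [
            ("alice", "timeliness", 0.9), ("bob", "timeliness", 0.3),
            ("alice", "accuracy", 0.8),   ("carol", "accuracy", 0.7),
        ]
        reliability = {"alice": 0.9, "bob": 0.2, "carol": 0.7}
        print(aggregate_ratings(ratings, reliability))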