
    A Formal Framework for Modeling Trust and Reputation in Collective Adaptive Systems

    Trust and reputation models for distributed, collaborative systems have been studied and applied in several domains in order to stimulate cooperation while preventing selfish and malicious behavior. Nonetheless, such models have received less attention in the formal specification and analysis of the functionality of these systems. The objective of this paper is to define a process algebraic framework for modeling systems that use (i) trust and reputation to govern the interactions among nodes, and (ii) communication models characterized by a high degree of adaptiveness and flexibility. We then propose a formalism for verifying, through model-checking techniques, the robustness of these systems against the typical attacks conducted on webs of trust. (In Proceedings FORECAST 2016, arXiv:1607.0200)

    Towards an Evaluation Model of Trust and Reputation Management Systems

    The paper presents a set of concepts that can form the basis for a new evaluation model of trust and reputation management (TRM) systems. The presented approach takes into account the essential characteristics of such systems in order to assess their robustness. The model also specifies measures of the effectiveness of trust and reputation systems. There is still a need for a comprehensive evaluation model of attacks on TRM systems, and of TRM systems themselves, which could provide a framework for evaluating the security of existing TRM systems in depth. We believe this paper can be seen as a small step towards that goal.
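
    As a concrete illustration of what such an evaluation could measure, the minimal Python sketch below estimates one very simple robustness figure: the probability that a target's reputation stays above a neutral threshold when a given number of raters report dishonestly. The averaging rule, thresholds, and parameters are illustrative assumptions, not the evaluation model proposed in the paper.

import random

def mean_reputation(ratings):
    # Simplest possible TRM metric: the plain average of all ratings received.
    return sum(ratings) / len(ratings) if ratings else 0.5

def robustness(n_honest=90, n_malicious=10, target_quality=0.9, trials=1000):
    # Fraction of trials in which the target's reputation stays above the neutral
    # value 0.5, even though the malicious raters always report 0.
    hits = 0
    for _ in range(trials):
        honest = [1 if random.random() < target_quality else 0 for _ in range(n_honest)]
        malicious = [0] * n_malicious
        if mean_reputation(honest + malicious) > 0.5:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print("robustness with 10 attackers:", robustness(n_malicious=10))
    print("robustness with 80 attackers:", robustness(n_malicious=80))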

    Robust Trust Establishment in Decentralized Networks

    The advancement of networking technologies creates new opportunities for computer users to communicate and interact with one another. Very often, these interacting parties are strangers. A relevant concern for a user is whether to trust the other party in an interaction, especially if there are risks associated with it. Reputation systems have been proposed as a method to establish trust among strangers. In a reputation system, a user who continuously exhibits good behavior can build a good reputation, while a user who exhibits malicious behavior will have a poor reputation. Trust can then be established based on a user's reputation ratings. While many research efforts have demonstrated the effectiveness of reputation systems in various situations, the security of reputation systems is not well understood within the research community.

    In the context of trust establishment, the goal of an adversary is to gain trust. An adversary can appear trustworthy within a reputation system if the adversary has a good reputation. Unfortunately, there are plenty of methods an adversary can use to achieve a good reputation, and worse, there may be ways for an attacker to gain an advantage that are not yet known. As a result, understanding the adversary is a challenging problem. The difficulty of this problem is evident in how researchers attempt to prove the security of their reputation systems: most do so by using simulations to demonstrate that their solutions are resilient to specific attacks. Unfortunately, they do not justify their choices of attack scenarios, and more importantly, they do not demonstrate that those choices are sufficient to claim that their solutions are secure.

    In this dissertation, I focus on the security of reputation systems in a decentralized peer-to-peer (P2P) network. To understand the problem, I define an abstract model for trust establishment. The model consists of several layers, each corresponding to a component of trust establishment. This model serves as a common point of reference for defining security and can also be used as a framework for designing and implementing trust establishment methods; its modular design allows existing methods to interoperate.

    To address the security issues, I first define security for trust establishment as a measure of robustness. Using this definition, I provide analytical techniques for examining the robustness of trust establishment methods and show that, in general, most reputation systems are not robust. The analytical results lead to a better understanding of the adversaries' capabilities. Based on this understanding, I design a solution that improves the robustness of reputation systems by using accountability, whose purpose is to encourage peers to behave responsibly and to provide a disincentive for malicious behavior. The effectiveness of the solution is validated using simulations. While simulations are commonly used by other research efforts to validate trust establishment methods, the simulation scenarios often seem to be chosen in an ad hoc manner; many of these works do not justify their choices of scenarios, nor do they show that those choices are adequate. In this dissertation, the simulation scenarios are chosen based on the capabilities of the adversaries. The simulation results show that under certain conditions, accountability can improve the robustness of reputation systems.
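
    The accountability idea can be illustrated with a minimal sketch: raters whose feedback turns out to be far from the observed interaction outcome lose rating weight, providing a disincentive for dishonest reports. The weighting scheme and parameters below are illustrative assumptions, not the dissertation's mechanism.

from collections import defaultdict

class AccountableReputation:
    def __init__(self, penalty=0.2, min_weight=0.05):
        self.weights = defaultdict(lambda: 1.0)   # credibility weight per rater
        self.penalty = penalty
        self.min_weight = min_weight

    def aggregate(self, feedback):
        # feedback: {rater_id: rating in [0, 1]} -> weight-averaged reputation score.
        total = sum(self.weights[r] for r in feedback)
        return sum(self.weights[r] * v for r, v in feedback.items()) / total

    def hold_accountable(self, feedback, outcome):
        # Once the true outcome of the interaction is observed, raters whose report
        # was far from it lose weight, i.e. they are held accountable.
        for rater, value in feedback.items():
            if abs(value - outcome) > 0.5:
                self.weights[rater] = max(self.min_weight,
                                          self.weights[rater] - self.penalty)

rep = AccountableReputation()
feedback = {"honest_1": 0.9, "honest_2": 0.8, "badmouther": 0.0}
print("score before accountability:", round(rep.aggregate(feedback), 3))
rep.hold_accountable(feedback, outcome=0.85)   # the interaction was in fact good
print("score after penalizing:", round(rep.aggregate(feedback), 3))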

    Interactive Reputation Systems - How to Cope with Malicious Behavior in Feedback Mechanisms

    Early reputation systems use simple computation metrics that can easily be manipulated by malicious actors. Advanced computation models that mitigate these weaknesses, however, are non-transparent to end users, which lowers their understandability and the users' trust in the reputation system. This paper proposes the concept of interactive reputation systems, which combine the cognitive capabilities of the user with the advantages of robust metrics while preserving the system's transparency. Results of the evaluation show that interactive reputation systems increase both the users' ability to detect malicious behavior (robustness) and their understanding of it, while avoiding trade-offs in usability.

    Evaluating online trust using machine learning methods

    Trust plays an important role in e-commerce, P2P networks, and information filtering. Current challenges in trust evaluation include: (1) finding trustworthy recommenders, (2) aggregating heterogeneous trust recommendations that follow different trust standards, are based on correlated observations, and result from different evaluation processes, and (3) efficiently managing large trust systems in which users may be sparsely connected and have multiple local reputations. The purpose of this dissertation is to address these three challenges by applying ordered depth-first search, neural network, and hidden Markov model techniques. It designs an opinion-filtered recommendation trust model to derive personal trust from heterogeneous recommendations; develops a reputation model to evaluate recommenders' trustworthiness and expertise; and constructs a distributed trust system and a global reputation model to achieve efficient trust computation and management. The experimental results show that the three proposed trust models are reliable. The contributions lie in: (1) the novel application of neural networks to recommendation trust evaluation and distributed trust management; (2) the adaptivity of the proposed neural network-based trust models to the dynamic and multifaceted nature of trust; (3) the robustness of the neural network-based trust models to noise in the training data, such as deceptive recommendations; (4) the efficiency and parallelism of computation and load balancing in distributed trust evaluation; and (5) the novel application of hidden Markov models to evaluating recommenders' reputation.
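
    A minimal sketch of the underlying idea of learning how much weight to give each recommender: a single sigmoid unit (logistic regression, i.e. a one-neuron network) trained on synthetic data learns to down-weight a deceptive recommender. The data, architecture, and training loop are illustrative assumptions, not the dissertation's models.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: three recommenders rate a target in [0, 1]. Recommenders 0 and 1
# track the target's true trustworthiness; recommender 2 is deceptive/uninformative.
n = 500
truth = rng.random(n)
recs = np.column_stack([
    np.clip(truth + rng.normal(0.0, 0.05, n), 0, 1),
    np.clip(truth + rng.normal(0.0, 0.10, n), 0, 1),
    rng.random(n),
])
labels = (truth > 0.5).astype(float)    # ground-truth "trustworthy" label

# One sigmoid unit trained by gradient descent (logistic regression).
w, b, lr = np.zeros(3), 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(recs @ w + b)))
    grad = p - labels
    w -= lr * (recs.T @ grad) / n
    b -= lr * grad.mean()

# The deceptive recommender should end up with a much smaller weight than the others.
print("learned recommender weights:", np.round(w, 2))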

    Evaluating the Role of Trust in Consumer Adoption of Mobile Payment Systems: An Empirical Analysis

    Consumer adoption of mobile payment (m-payment) solutions is low compared to the acceptance of traditional forms of payment. Motivated by this fact, we propose and test a “trust-theoretic model for consumer adoption of m-payment systems.” The model, grounded in the literature on “technology adoption” and “trust,” not only theorizes the role of consumer trust in m-payment adoption but also identifies the facilitators of consumer trust in m-payment systems. It proposes two broad dimensions of trust facilitators: “mobile service provider characteristics” and “mobile technology environment characteristics.” The model is empirically validated with a sample of potential adopters in Singapore. In contrast to other contexts, the results suggest the overarching importance of “consumer trust in m-payment systems” relative to other technology adoption factors. Further, the differential importance of the theorized trust facilitators of “perceived reputation” and “perceived opportunism” of the mobile service provider, and “perceived environmental risk” and “perceived structural assurance” of the mobile technology, is also highlighted. A series of post-hoc analyses establishes the robustness of the theorized configuration of constructs, and subsequent sub-group analyses highlight the differential significance of trust facilitators for different user sub-groups. Implications for research and practice emerging from this study are also discussed.

    TRIVIA: visualizing reputation profiles to detect malicious sellers in electronic marketplaces

    Reputation systems are an essential part of electronic marketplaces, providing a valuable method to identify honest sellers and punish malicious actors. Due to the continuous improvement of the computation models applied, advanced reputation systems have become non-transparent and incomprehensible to the end user. As a consequence, users become skeptical and lose their trust in the reputation system. In this work, we take a step toward increasing the transparency of reputation systems by providing interactive visual representations of seller reputation profiles. To this end, we propose TRIVIA, a visual analytics tool for evaluating seller reputation. Besides enhancing transparency, our results show that by combining the visual-cognitive capabilities of a human analyst with the computing power of a machine, TRIVIA can reliably identify malicious sellers. In this way we provide a new perspective on how the problem of robustness could be addressed.

    Voting Systems with Trust Mechanisms in Cyberspace: Vulnerabilities and Defenses

    With the popularity of voting systems in cyberspace, there is growing evidence that current voting systems can be manipulated by fake votes. This problem has attracted many researchers, who work on guarding voting systems in two areas: mitigating the effect of dishonest votes by evaluating the trust of voters, and limiting the resources that can be used by attackers, such as the number of voters and the number of votes. In this paper, we argue that powering voting systems with trust and limiting attack resources are not enough. We present a novel attack named Reputation Trap (RepTrap). Our case study and experiments show that this new attack needs far fewer resources to manipulate voting systems and has a much higher success rate than existing attacks. We further identify the reasons behind this attack and propose two defense schemes accordingly. In the first scheme, we hide correlation knowledge from attackers to reduce their ability to affect honest voters. In the second scheme, we introduce a new metric, robustness-of-evidence, into the trust calculation to reduce the attackers' effect on honest voters. We conduct extensive experiments to validate our approach. The results show that our defense schemes not only reduce the success rate of attacks but also significantly increase the amount of resources an adversary needs to launch a successful attack.
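
    The intuition behind the second defense can be sketched as follows: an object's majority verdict only feeds back into voter-trust updates when the evidence behind that verdict is strong, so weakly supported objects (the natural RepTrap targets) cannot be used to erode honest voters' trust. The concrete strength formula (a trust-weighted vote margin) and the thresholds below are illustrative assumptions, not the robustness-of-evidence metric defined in the paper.

def evidence_strength(votes, trust):
    # votes: {voter: +1 or -1}; trust: {voter: weight in [0, 1]}.
    # Illustrative strength measure: absolute trust-weighted vote margin.
    return abs(sum(trust[v] * s for v, s in votes.items()))

def update_voter_trust(votes, trust, threshold=1.5, delta=0.1):
    # Reward voters who agree with the majority verdict, but only when that verdict
    # is strongly supported; weakly supported objects trigger no trust update at all.
    if evidence_strength(votes, trust) < threshold:
        return trust                      # evidence too weak: leave trust untouched
    margin = sum(trust[v] * s for v, s in votes.items())
    verdict = 1 if margin > 0 else -1
    return {v: min(1.0, max(0.0, t + (delta if votes[v] == verdict else -delta)))
            for v, t in trust.items()}

trust = {"alice": 0.9, "bob": 0.8, "mallory1": 0.3, "mallory2": 0.3}
trap_votes = {"alice": +1, "mallory1": -1, "mallory2": -1}   # attack on an unpopular object
print(update_voter_trust(trap_votes, trust))                 # unchanged: margin is only 0.3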

    Addressing the Issues of Coalitions and Collusion in Multiagent Systems

    In the field of multiagent systems, trust and reputation systems are intended to assist agents in finding trustworthy partners with whom to interact. Earlier work of ours identified, in theory, a number of security vulnerabilities in trust and reputation systems, weaknesses that might be exploited by malicious agents to bypass the protections offered by such systems. In this work, we begin by developing the TREET testbed, a simulation platform that allows extensive evaluation of and flexible experimentation with trust and reputation technologies. We use this testbed to experimentally validate the practicality and gravity of attacks against these vulnerabilities. Of particular interest are attacks that are collusive in nature: groups of agents (coalitions) working together to improve their expected rewards. But the issue of coalitions is not unique to trust and reputation; rather, it cuts across a range of fields in multiagent systems and beyond. In some scenarios, coalitions may be unwanted or forbidden; in others they may be benign or even desirable. In this document, we propose a method for detecting coalitions and identifying coalition members, a capability that is likely to be valuable in many of the diverse fields where coalitions are of interest. Our method uses clustering in benefit space (a high-dimensional space reflecting how agents benefit others in the system) to identify groups of agents who benefit similar sets of agents; a statistical technique is then used to identify which clusters contain coalitions. Experimentation using the TREET platform verifies the effectiveness of this approach. A series of enhancements to our method are also introduced, which improve the accuracy and robustness of the algorithm. To demonstrate how this broadly applicable tool can be used to address domain-specific problems, we focus again on trust and reputation systems. We show how, by incorporating our work into one such system (the existing Beta Reputation System), we can provide resistance to collusion. We conclude with a detailed discussion of the value of our work for a wide range of environments, including a variety of multiagent systems and real-world settings.
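
    A minimal sketch of the clustering-in-benefit-space idea, using synthetic data and a plain k-means in place of whatever clustering algorithm and statistical test the thesis actually uses. Each row of the benefit matrix records how much one agent has benefited every other agent, so colluders who repeatedly boost the same accomplices end up close together in this space.

import numpy as np

rng = np.random.default_rng(1)

# Benefit matrix: B[i, j] is how much agent i has benefited agent j. Agents 0-3 form
# a coalition that heavily benefits its own members; everyone else interacts lightly.
n_agents = 12
B = rng.random((n_agents, n_agents)) * 0.2
B[:4, :4] += 0.8
np.fill_diagonal(B, 0.0)

def kmeans(X, k=2, iters=50):
    # Plain k-means, standing in for the thesis's clustering step.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(0) if np.any(labels == c) else centers[c]
                            for c in range(k)])
    return labels

labels = kmeans(B, k=2)

# A cluster whose members benefit one another much more than they benefit the rest of
# the population is flagged as a suspected coalition (stand-in for the statistical test).
for c in sorted(set(labels)):
    members = np.where(labels == c)[0]
    within = B[np.ix_(members, members)].mean()
    overall = B[members].mean()
    print(f"cluster {c}: members={members.tolist()}, "
          f"within-benefit={within:.2f}, overall-benefit={overall:.2f}")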