
    Robust Trust Establishment in Decentralized Networks

    The advancement of networking technologies creates new opportunities for computer users to communicate and interact with one another. Very often, these interacting parties are strangers. A relevant concern for a user is whether to trust the other party in an interaction, especially if the interaction carries risk. Reputation systems have been proposed as a method to establish trust among strangers: a user who consistently exhibits good behavior builds a good reputation, while a user who exhibits malicious behavior earns a poor one. Trust can then be established based on a user's reputation ratings.

    While many research efforts have demonstrated the effectiveness of reputation systems in various situations, their security is not well understood within the research community. In the context of trust establishment, the goal of an adversary is to gain trust: an adversary appears trustworthy within a reputation system if it has a good reputation, and there are many methods an adversary can use to achieve one. Worse, there may be ways for an attacker to gain an advantage that are not yet known. Understanding the adversary is therefore a challenging problem. The difficulty is visible in how researchers attempt to prove the security of their reputation systems: most rely on simulations demonstrating that their solutions are resilient to specific attacks, yet they neither justify their choice of attack scenarios nor show that those choices are sufficient to claim that their solutions are secure.

    In this dissertation, I focus on the security of reputation systems in decentralized Peer-to-Peer (P2P) networks. To understand the problem, I define an abstract, layered model for trust establishment, in which each layer corresponds to a component of trust establishment. This model serves as a common point of reference for defining security, and it can also be used as a framework for designing and implementing trust establishment methods; its modular design allows existing methods to inter-operate. To address the security issues, I first define security for trust establishment as a measure of robustness. Using this definition, I provide analytical techniques for examining the robustness of trust establishment methods, and I show that, in general, most reputation systems are not robust. The analytical results lead to a better understanding of the adversaries' capabilities. Based on this understanding, I design a solution that improves the robustness of reputation systems through accountability, whose purpose is to encourage peers to behave responsibly and to provide a disincentive for malicious behavior.

    The effectiveness of the solution is validated using simulations. Whereas other research efforts tend to pick their simulation scenarios in an ad hoc manner, without justifying them or showing that they are adequate, the simulation scenarios here are chosen based on the capabilities of the adversaries. The simulation results show that, under certain conditions, accountability can improve the robustness of reputation systems.
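    To illustrate the kind of fragility at issue, here is a minimal sketch (not the dissertation's model) of a naive average-based reputation scheme and a Sybil-style self-promotion attack against it; all names and numbers are hypothetical:

```python
# Hypothetical sketch: a naive average-based reputation system and a
# self-promotion attack showing why such aggregation is not robust.
from collections import defaultdict

ratings = defaultdict(list)  # peer id -> list of ratings in [0, 1]

def rate(peer, score):
    ratings[peer].append(max(0.0, min(1.0, score)))

def reputation(peer):
    r = ratings[peer]
    return sum(r) / len(r) if r else 0.5  # neutral prior for strangers

# Honest raters observe mostly bad behavior from the adversary...
for _ in range(10):
    rate("adversary", 0.0)
# ...but 40 colluding Sybil identities outvote them with perfect scores.
for _ in range(40):
    rate("adversary", 1.0)

print(reputation("adversary"))  # 0.8: the adversary now looks trustworthy
```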

    Trust models in ubiquitous computing

    We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need for more formal and foundational trust models.

    An Evaluation Framework for Reputation Management Systems

    Reputation management (RM) is employed in distributed and peer-to-peer networks to help users compute a measure of trust in other users based on initial belief, observed behavior, and run-time feedback. These trust values influence how, or with whom, a user will interact. Existing literature on RM focuses primarily on algorithm development rather than comparative analysis. To remedy this, we propose an evaluation framework based on the trace-simulator paradigm. Trace-file generation emulates a variety of network configurations, with particular attention to modeling malicious user behavior. Simulation is trace-based, and incremental trust-calculation techniques are developed to allow experimentation with networks of substantial size. The framework is available as open source so that researchers can evaluate the effectiveness of other reputation management techniques and/or extend its functionality. This chapter reports on our framework's design decisions. Because our goal is a general-purpose simulator, we also characterize the breadth of existing RM systems. Further, we demonstrate our tool using two reputation algorithms (EigenTrust and a modified TNA-SL) under varied network conditions, which permits us to make claims about the algorithms' comparative merits. We conclude that such systems, assuming their distribution is secure, are highly effective at managing trust, even against adversarial collectives.
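    EigenTrust, one of the two algorithms evaluated, computes a global trust vector as the stationary distribution of the normalized local-trust matrix. Below is a minimal sketch of its standard power iteration; the matrix values and the pre-trust weight are illustrative, not taken from the chapter:

```python
# Minimal sketch of EigenTrust's global trust computation.
import numpy as np

# C[i][j]: peer i's normalized local trust in peer j (rows sum to 1).
C = np.array([
    [0.0, 0.7, 0.3],
    [0.5, 0.0, 0.5],
    [0.9, 0.1, 0.0],
])
p = np.ones(3) / 3          # pre-trusted peer distribution
a = 0.15                    # weight given to pre-trusted peers

t = p.copy()
for _ in range(50):         # power iteration until (near) convergence
    t = (1 - a) * C.T @ t + a * p

print(t, t.sum())           # global trust vector; sums to 1
```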

    SMART: A Subspace based Malicious Peers Detection algorithm for P2P Systems

    In recent years, reputation management schemes have been proposed as promising solutions to alleviate blindness during peer selection in distributed P2P environments where malicious peers coexist with honest ones. They provide incentives for peers to contribute more resources to the system and thus promote overall system performance. But few of them have been deployed in practice, since they still suffer from various security threats, such as collusion, the Sybil attack, and so on. How to detect malicious peers therefore plays a critical role in making these mechanisms work, and it is the focus of this paper. Firstly, we define malicious peers and show their influence on system performance. Secondly, based on Multiscale Principal Component Analysis (MSPCA) and control charts, a Subspace-based MAlicious peeRs deTecting algorithm (SMART) is put forward. SMART first reconstructs the original reputation matrix using a subspace method, and then identifies malicious peers using a Shewhart control chart. Finally, simulation results indicate that SMART can detect malicious peers efficiently and accurately.
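    As a rough illustration of the subspace idea, the sketch below substitutes plain PCA for the paper's multiscale MSPCA and applies a 3-sigma Shewhart rule to per-peer residuals; the reputation matrix and the collusion pattern are synthetic, not the paper's data or exact algorithm:

```python
# Hedged sketch in the spirit of SMART: honest ratings form a low-rank
# structure; a peer inflated by colluders has a large residual outside
# the Shewhart control limit.
import numpy as np

rng = np.random.default_rng(1)
n_raters, n_peers = 100, 30
quality = rng.uniform(0.5, 0.9, n_peers)      # true peer quality
bias = rng.uniform(0.9, 1.1, n_raters)        # per-rater generosity
R = np.outer(bias, quality) + rng.normal(0, 0.02, (n_raters, n_peers))

# Peer 7 is malicious: 30 colluders inflate it, honest raters rate it low.
R[:30, 7], R[30:, 7] = 0.9, 0.55

# Rank-1 reconstruction captures the dominant honest rating structure.
X = R - R.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_hat = s[0] * np.outer(U[:, 0], Vt[0])

# Residual energy per peer, monitored with a Shewhart upper control limit.
resid = np.linalg.norm(X - X_hat, axis=0)
ucl = resid.mean() + 3 * resid.std()
print("flagged peers:", np.where(resid > ucl)[0])   # expect [7]
```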

    Trust beyond reputation: A computational trust model based on stereotypes

    Models of computational trust support users in making decisions. They are commonly used to guide users' judgements on online auction sites, or to determine the quality of contributions on Web 2.0 sites. However, most existing systems require historical information about the past behavior of the specific agent being judged. In contrast, in real life, to anticipate and predict a stranger's actions in the absence of such a behavioral history, we often use our "instinct": essentially, stereotypes developed from our past interactions with other "similar" persons. In this paper, we propose StereoTrust, a computational trust model inspired by real-life stereotypes. A stereotype contains certain features of agents and an expected outcome of the transaction. When facing a stranger, an agent derives its trust by aggregating stereotypes matching the stranger's profile. Since stereotypes are formed locally, recommendations stem from the trustor's own personal experiences and perspective. Historical behavioral information, when available, can be used to refine the analysis. In our experiments on the Epinions.com dataset, StereoTrust compares favorably with existing trust models that use different kinds of information and more complete historical information.
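    A hypothetical sketch of the general stereotype idea (not StereoTrust's exact formulation): past transactions are grouped by partner features, and a stranger's trust is aggregated from the groups matching its profile. All feature names and outcomes below are made up:

```python
# Stereotype-style trust: group past outcomes by partner features and
# score a stranger by the success rate of matching feature groups.
from collections import defaultdict

# (features of past partner, transaction succeeded?)
history = [
    (frozenset({"seller", "high_volume"}), True),
    (frozenset({"seller", "high_volume"}), True),
    (frozenset({"seller", "new_account"}), False),
    (frozenset({"buyer", "new_account"}), False),
    (frozenset({"seller", "high_volume"}), True),
]

# Build per-feature stereotypes: (successes, trials) for each feature.
stereotypes = defaultdict(lambda: [0, 0])
for features, ok in history:
    for f in features:
        stereotypes[f][0] += ok
        stereotypes[f][1] += 1

def stereotype_trust(stranger_features):
    """Aggregate matching stereotypes, weighted by how often each was seen."""
    hits = [stereotypes[f] for f in stranger_features if f in stereotypes]
    if not hits:
        return 0.5  # neutral prior when no stereotype matches
    succ = sum(s for s, _ in hits)
    total = sum(n for _, n in hits)
    return succ / total

# ~0.86: blends the "seller" and "high_volume" stereotypes.
print(stereotype_trust({"seller", "high_volume"}))
```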

    A Novel Reputation Management Mechanism with Forgiveness in P2P File Sharing Networks

    In peer-to-peer (P2P) file-sharing networks, it is common practice to manage peers using reputation systems. A reputation system systematically tracks the reputation of each peer and punishes peers for malicious behaviors (such as uploading bad files or viruses). However, current reputation systems can hurt normal peers, who might occasionally make mistakes. Therefore, in this paper, we introduce a forgiveness mechanism into the EigenTrust reputation system to reduce such unfair punishment and give peers opportunities to regain reputation. In particular, we measure forgiveness using four motivations: the severity of the current offence, the frequency of offences, the compensation offered, and the reciprocity of the offender. Our simulations show that the forgiveness model can repair the direct-trust breakdown caused by unintentional mistakes and leads to fewer invalid downloads, which improves the performance of P2P file-sharing systems.
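    A hedged sketch of how such a forgiveness score might soften a trust penalty, loosely following the four motivations named above; the weights and the linear combination are illustrative assumptions, not the paper's actual model:

```python
# Illustrative forgiveness adjustment on local trust (weights are
# assumptions, not the paper's formula).
def forgiveness(severity, frequency, compensation, reciprocity,
                weights=(0.3, 0.3, 0.2, 0.2)):
    """Each motivation is normalized to [0, 1]; higher result = more forgiving."""
    w_sev, w_freq, w_comp, w_rec = weights
    return (w_sev * (1 - severity)      # mild offences are easier to forgive
            + w_freq * (1 - frequency)  # rare offenders get more slack
            + w_comp * compensation     # offender tried to make amends
            + w_rec * reciprocity)      # offender forgave us in the past

def update_local_trust(trust, offence_penalty, **motivations):
    """Apply a penalty softened by the forgiveness score."""
    f = forgiveness(**motivations)
    return max(0.0, trust - offence_penalty * (1 - f))

# A peer uploads one corrupted file after a long clean record and
# re-uploads a good copy: the penalty is largely forgiven.
t = update_local_trust(0.8, 0.4, severity=0.3, frequency=0.1,
                       compensation=0.9, reciprocity=0.5)
print(t)  # ~0.70 instead of 0.40 without forgiveness
```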

    Collusion in Peer-to-Peer Systems

    Peer-to-peer systems have reached widespread use, ranging from academic and industrial applications to home entertainment. The key advantage of this paradigm lies in its scalability and flexibility, consequences of the participants sharing their resources for the common welfare. Security in such systems is a desirable goal: when mission-critical operations or bank transactions are involved, for example, their effectiveness strongly depends on users' perception of the system's dependability and trustworthiness. A major threat to the security of these systems is collusion. Colluding peers can be selfish, trying to fool the system to gain unfair advantages over other peers, or malicious, aiming to subvert the system or disturb other users. The problem, however, has so far received only marginal attention from the research community. While several solutions exist to counter attacks on peer-to-peer systems, very few directly target colluders and their attacks. Reputation, micro-payments, and concepts from game theory are currently the main means of obtaining fairness in resource usage. Our goal is to provide an overview of the topic by examining the key issues involved. We assess the relevance of the problem in the current literature and the effectiveness of existing approaches against it, in order to suggest fruitful directions for the further development of the field.