
    Effective Usage of Computational Trust Models in Rational Environments

    Computational reputation-based trust models using statistical learning have been intensively studied for distributed systems in which peers behave maliciously. However, the practical application of such models in environments with both malicious and rational behavior remains poorly understood. In this paper, we study the relation between the accuracy measures of such models and their ability to enforce cooperation among participants and discourage selfish behavior. We provide theoretical results showing the conditions under which cooperation emerges when computational trust models of a given accuracy are used, and how cooperation can still be sustained while the cost and accuracy of those models are reduced. Specifically, we propose a peer selection protocol that uses a computational trust model as a dishonesty detector to filter out unfair ratings. We prove that such a model, with a reasonable misclassification error bound in identifying malicious ratings, can effectively build trust and cooperation in the system when participants are rational. These results reveal two interesting observations. First, the key to the success of a reputation system in a rational environment is not a sophisticated trust learning mechanism, but an effective identity management scheme that prevents whitewashing. Second, given an appropriate identity management mechanism, a reputation-based trust model with a moderate accuracy bound can be used to enforce cooperation effectively in systems with both rational and malicious participants. As a result, cooperation may still emerge in heterogeneous environments where peers use different algorithms to detect misbehavior of potential partners. We verify and extend these theoretical results across a variety of settings involving honest, malicious, and strategic players through extensive simulation. These results enable a much more targeted, cost-effective, and realistic design for decentralized trust management systems, such as those needed for peer-to-peer, electronic-commerce, or community systems.
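
    To make the protocol concrete, the sketch below (Python; `Rating`, `make_detector`, and the other names are illustrative assumptions, not the paper's API) treats the trust model as a black-box dishonesty detector that misclassifies a rating with probability `error_bound`, filters out the ratings it flags, and selects the candidate with the highest filtered reputation:

```python
import random
from dataclasses import dataclass

@dataclass
class Rating:
    value: float   # reported rating in [0, 1]
    honest: bool   # ground truth, used here only to simulate detector errors

def make_detector(error_bound: float):
    """Treat the trust model as a dishonesty detector that misclassifies
    a rating with probability `error_bound` (an assumed interface)."""
    def is_unfair(rating: Rating) -> bool:
        truly_unfair = not rating.honest
        if random.random() < error_bound:   # the detector errs on this rating
            return not truly_unfair
        return truly_unfair
    return is_unfair

def filtered_reputation(ratings, is_unfair) -> float:
    """Average only the ratings the detector does not flag as unfair."""
    kept = [r.value for r in ratings if not is_unfair(r)]
    return sum(kept) / len(kept) if kept else 0.0

def select_partner(candidates, is_unfair):
    """Peer selection: pick the candidate with the highest filtered
    reputation. `candidates` maps a peer id to its received ratings."""
    return max(candidates, key=lambda p: filtered_reputation(candidates[p], is_unfair))

detector = make_detector(error_bound=0.1)
candidates = {
    "peer_a": [Rating(0.9, True), Rating(1.0, False), Rating(0.8, True)],
    "peer_b": [Rating(0.4, True), Rating(0.5, True), Rating(0.3, True)],
}
print(select_partner(candidates, detector))   # usually "peer_a"
```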

    An authorization policy management framework for dynamic medical data sharing

    In this paper, we propose a novel feature reduction approach that groups words hierarchically into clusters, which can then be used as new features for document classification. Initially, each word constitutes its own cluster. We calculate the mutual confidence between every pair of distinct words. The pair of clusters containing the two words with the highest mutual confidence is merged into a new cluster. This merging process is iterated until all mutual confidences between unprocessed pairs of words fall below a predefined threshold, or only one cluster remains. In this way, a hierarchy of word clusters is obtained. The user can then choose the clusters at a given level of the hierarchy as new features for document classification. Experimental results show that our method outperforms other methods.
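
    As a rough sketch of the merging loop described above (Python; `mutual_confidence` is assumed to be a user-supplied pairwise scoring function, since its definition is not reproduced here):

```python
def cluster_words(words, mutual_confidence, threshold):
    """Hierarchical word clustering driven by pairwise mutual confidence,
    following the procedure the abstract outlines (all names assumed)."""
    clusters = [{w} for w in words]                       # one cluster per word
    # Score every unordered word pair once, highest confidence first.
    pairs = sorted(
        ((mutual_confidence(a, b), a, b)
         for i, a in enumerate(words) for b in words[i + 1:]),
        reverse=True,
    )
    hierarchy = [[set(c) for c in clusters]]              # level 0: singletons
    for score, a, b in pairs:
        if score < threshold or len(clusters) == 1:
            break                                         # the two stopping rules
        ca = next(c for c in clusters if a in c)
        cb = next(c for c in clusters if b in c)
        if ca is not cb:
            clusters.remove(cb)                           # merge cb into ca
            ca |= cb
            hierarchy.append([set(c) for c in clusters])  # record the new level
    return hierarchy
```

    Each entry of the returned list is one level of the hierarchy, so selecting features "from a certain level" amounts to indexing into it.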

    Collusion in Peer-to-Peer Systems

    Peer-to-peer systems have reached widespread use, ranging from academic and industrial applications to home entertainment. The key advantage of this paradigm lies in its scalability and flexibility, consequences of the participants sharing their resources for the common welfare. Security in such systems is a desirable goal. For example, when mission-critical operations or bank transactions are involved, their effectiveness strongly depends on users' perception of the system's dependability and trustworthiness. A major threat to the security of these systems is the phenomenon of collusion. Peers can be selfish colluders, when they try to fool the system to gain unfair advantages over other peers, or malicious, when their purpose is to subvert the system or disturb other users. The problem, however, has so far received only marginal attention from the research community. While several solutions exist to counter attacks in peer-to-peer systems, very few are designed to counter colluders and their attacks directly. Reputation, micro-payments, and concepts from game theory are currently the main means used to obtain fairness in the usage of resources. Our goal is to provide an overview of the topic by examining the key issues involved. We assess the relevance of the problem in the current literature and the effectiveness of existing approaches against it, and suggest fruitful directions for the further development of the field.
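
    To illustrate why collusion undermines naive reputation aggregation, the toy sketch below (Python; purely illustrative, not taken from this survey) shows a small colluding clique inflating a misbehaving peer's plain-average score, and a simple deviation-from-median filter that blunts the attack:

```python
from statistics import mean, median

def naive_reputation(ratings):
    """Plain average: every rating counts equally, so a colluding clique
    can inflate a partner's score by submitting extreme mutual ratings."""
    return mean(ratings)

def deviation_filtered_reputation(ratings, tolerance=0.3):
    """One common style of countermeasure (illustrative, not from the
    survey): drop ratings far from the median before averaging."""
    m = median(ratings)
    kept = [r for r in ratings if abs(r - m) <= tolerance]
    return mean(kept) if kept else m

# A misbehaving peer rated 0.2 by five honest raters and 1.0 by three
# colluders: the naive average is inflated, the filtered one is not.
ratings = [0.2] * 5 + [1.0] * 3
print(naive_reputation(ratings))               # 0.5
print(deviation_filtered_reputation(ratings))  # 0.2
```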