
    Anonymity and Rewards in Peer Rating Systems

    When peers rate each other, they may choose to rate inaccurately in order to boost their own reputation or unfairly lower another’s. This can be mitigated by having a reputation server incentivise accurate ratings with a reward. However, assigning rewards becomes a challenge when ratings are anonymous, since the reputation server cannot tell which peers to reward for rating accurately. To address this, we propose an anonymous peer rating system in which users can be rewarded for accurate ratings, and we formally define its model and security requirements. In our system, ratings are rewarded in batches, so that users claiming their rewards reveal only that they authored one of the ratings in that batch. To ensure the anonymity set of rewarded users is not reduced, we also split the reputation server into two entities: the Rewarder, who knows which ratings are rewarded, and the Reputation Holder, who knows which users were rewarded. We give a provably secure construction satisfying all the required security properties. Our construction uses a modification of a Direct Anonymous Attestation scheme to ensure that peers can prove their own reputation when rating others, and that multiple pieces of feedback on the same subject can be detected. We then use Linkable Ring Signatures to enable peers to be rewarded for their accurate ratings, while still ensuring that ratings are anonymous. The result is a system in which accurate ratings can be rewarded, whilst ratings remain anonymous with respect to the central entities managing the system.
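    As a concrete illustration of the flow described in this abstract, the following is a minimal Python sketch under loud assumptions: hash-based linkability tags stand in for the Direct Anonymous Attestation and Linkable Ring Signature machinery, the zero-knowledge proof that a claimant actually authored a rewarded rating is omitted, and all names (Rewarder, ReputationHolder, tag) are invented for this sketch rather than taken from the paper's construction.

        import hashlib
        import secrets

        def tag(secret_key: bytes, batch_id: bytes) -> str:
            # Linkability tag: deterministic per (key, batch), so two claims by
            # the same user against one batch collide, while the tag itself does
            # not reveal which rating in the batch the user authored. A toy
            # stand-in for a linkable ring signature's linkability tag.
            return hashlib.sha256(secret_key + batch_id).hexdigest()

        class Rewarder:
            """Learns which ratings are rewarded, never which users claim them."""
            def __init__(self):
                self.rewarded = {}  # batch id -> set of rewarded rating ids

            def publish_batch(self, batch_id, rating_ids):
                self.rewarded[batch_id] = set(rating_ids)

        class ReputationHolder:
            """Learns which users are rewarded, never which ratings earned it."""
            def __init__(self):
                self.seen_tags = set()  # blocks double-claiming within a batch
                self.reputation = {}

            def claim(self, user_id, batch_tag):
                # The real construction would also verify a proof that the
                # claimant authored one of the rewarded ratings in this batch.
                if batch_tag in self.seen_tags:
                    return False  # a second claim in the same batch is linkable
                self.seen_tags.add(batch_tag)
                self.reputation[user_id] = self.reputation.get(user_id, 0) + 1
                return True

        sk = secrets.token_bytes(32)
        holder = ReputationHolder()
        print(holder.claim("alice", tag(sk, b"batch-1")))  # True
        print(holder.claim("alice", tag(sk, b"batch-1")))  # False

    Because a claim reveals only membership in the batch, the batch, rather than the individual rating, is the unit that bounds the anonymity set of rewarded users.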

    A New Approach to Modelling Centralised Reputation Systems

    A reputation system assigns a user or item a reputation value, which can be used to evaluate trustworthiness. Blömer, Juhnke and Kolb in 2015, and El Kaafarani, Katsumata and Solomon in 2018, gave formal models for centralised reputation systems, which rely on a central server and are widely used by service providers such as AirBnB, Uber and Amazon. In these models, reputation values are given to items rather than users. We advocate a shift in how reputation systems are modelled, whereby reputation values are given to users instead of items, and each user has unlinkable items on which other users can give feedback, contributing to that user's reputation value. This setting is not captured by the previous models, and we argue that it more realistically captures the functionality and security requirements of a reputation system. We provide definitions for this new model and give a construction from standard primitives, proving that it satisfies these security requirements. We show that the new functionality comes at a low efficiency cost.
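    To make the shift concrete, here is a small hedged Python sketch: reputation attaches to users while their items are published under fresh, unlinkable pseudonyms. The pseudonym-to-user link is held by a trusted server purely for illustration; the paper's construction achieves this functionality cryptographically, and the names below (CentralServer, publish_item) are invented for this sketch.

        import secrets

        class CentralServer:
            """Toy model where reputation attaches to users, not items.
            Items are published under fresh pseudonyms, so other users cannot
            link two items by the same author, yet feedback on any item still
            accrues to its author's single reputation value. The pseudonym
            map lives in the server here only for illustration."""
            def __init__(self):
                self.owner = {}       # pseudonymous item id -> user id
                self.reputation = {}  # user id -> aggregated feedback score

            def publish_item(self, user_id: str) -> str:
                item_id = secrets.token_hex(8)  # fresh, unlinkable per item
                self.owner[item_id] = user_id
                return item_id

            def give_feedback(self, item_id: str, score: int) -> None:
                user = self.owner[item_id]
                self.reputation[user] = self.reputation.get(user, 0) + score

        server = CentralServer()
        a = server.publish_item("alice")
        b = server.publish_item("alice")
        server.give_feedback(a, +1)
        server.give_feedback(b, +1)
        print(server.reputation["alice"])  # 2: unlinkable items, one score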

    Logics to Reason Formally About Trust Computation and Manipulation

    Trust is a fundamental, complementary ingredient for the success of security mechanisms in computer science: it goes beyond the intrinsic, technical aspects of cybersecurity by involving users' subjective perceptions, their willingness to collaborate and expose their own resources and capabilities, and their judgement about the expected behaviour of other parties. Computational notions of trust are formalised to automatically support the process of building and maintaining trust infrastructures, and mathematical logics provide the formal means to reason about the efficacy of that process. In this work we advocate the use of two logical approaches to the modelling and verification of the two main tasks at the base of any trust infrastructure: the initial computation of trust values and the dynamic manipulation of such values.
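    A minimal numeric sketch of the two tasks named above, assuming one conventional instantiation (weighted aggregation of recommendations for initial computation, exponential smoothing for dynamic manipulation); the paper reasons about such computations in logic rather than fixing these particular formulas.

        def initial_trust(recommendations):
            # Initial computation: aggregate third-party recommendations,
            # each a (trust_in_recommender, recommended_value) pair, as a
            # weighted mean. One standard instantiation among many.
            total_weight = sum(w for w, _ in recommendations)
            return sum(w * v for w, v in recommendations) / total_weight

        def update_trust(current, outcome, rate=0.2):
            # Dynamic manipulation: move trust toward the observed outcome
            # (1.0 = fully honest interaction, 0.0 = betrayal).
            return current + rate * (outcome - current)

        t = initial_trust([(0.9, 0.8), (0.5, 0.4)])  # weighted mean ~ 0.657
        t = update_trust(t, 1.0)                     # good outcome raises trust
        print(round(t, 3))                           # 0.726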