7 research outputs found

    A Mechanism that Provides Incentives for Truthful Feedback in Peer-to-Peer Systems

    We propose a mechanism that provides incentives for reporting truthful feedback in a peer-to-peer system for exchanging services (or content). This mechanism complements reputation mechanisms that employ rating feedback on transactions in order to give peers incentives to offer better services to others. Under our approach, each of the transacting peers (rather than just the client) submits a rating on the performance of their mutual transaction. If the two ratings disagree, both transacting peers are punished, since such a disagreement is a sign that one of them is lying. The severity of each peer's punishment is determined by his non-credibility metric; this is maintained by the mechanism and evolves according to the peer's record. While under punishment, a peer does not transact with others. We model the punishment effect of the mechanism in a peer-to-peer system as a Markov chain that is shown experimentally to be very accurate. According to this model, the credibility mechanism leads the peer-to-peer system to a desirable steady state that isolates liars. We then define a procedure for optimizing the punishment parameters of the mechanism for peer-to-peer systems with various characteristics, and show experimentally that this optimization procedure is both effective and necessary for the successful deployment of the mechanism in real peer-to-peer systems. The optimized credibility mechanism is then combined with reputation-based policies to provide a complete solution for high performance and truthful rating in peer-to-peer systems. The combined mechanism is shown experimentally to deal very effectively with large fractions of colluding liar peers that follow static or dynamic rational lying strategies in peer-to-peer systems with dynamically renewed populations, while the efficiency loss that the presence of liars imposes on sincere peers is diminished. Finally, we describe a potential implementation of the mechanism in real peer-to-peer systems.
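The core disagreement-punishment rule from this abstract can be sketched as follows. All names and parameters here (`base_punishment`, `alpha`, the decay on agreement) are illustrative assumptions for exposition, not values from the paper:

```python
class Peer:
    def __init__(self, name):
        self.name = name
        self.non_credibility = 0   # grows when involved in rating disagreements
        self.punished_until = 0    # round until which the peer is isolated

def rate_transaction(a, b, rating_a, rating_b, current_round,
                     base_punishment=2, alpha=1):
    """Mutual-rating rule: both transacting peers rate the transaction;
    on disagreement, both are punished in proportion to their
    non-credibility records (one of them must be lying)."""
    if rating_a == rating_b:
        # Agreement: decay non-credibility (assumed behaviour).
        a.non_credibility = max(0, a.non_credibility - 1)
        b.non_credibility = max(0, b.non_credibility - 1)
        return
    for p in (a, b):
        p.non_credibility += 1
        duration = base_punishment + alpha * p.non_credibility
        p.punished_until = current_round + duration

def can_transact(p, current_round):
    # A peer under punishment does not transact with others.
    return current_round >= p.punished_until
```

In a longer simulation, repeated disagreements accumulate non-credibility, so persistent liars receive ever longer isolation periods, which is the steady-state behaviour the abstract's Markov-chain model captures.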

    Addressing Wealth Inequality Problem in Blockchain-Enabled Knowledge Community with Reputation-Based Incentive Mechanism

    An increasing number of online knowledge communities have started incorporating cutting-edge FinTech, such as token-based incentive mechanisms running on blockchain, into their ecosystems. However, the improper design of incentive mechanisms may result in reward monopoly, which has been observed to harm the ecosystems of existing communities. This study aims to ensure that the key factors involved in users' reward distribution truly reflect their contributions to the community, so as to increase the equity of wealth distribution. It is one of the first to comprehensively balance a user's historical and current contributions in reward distribution, an aspect that has not received sufficient attention in extant research. The simulation analysis demonstrates that the proposed solution of amending the existing incentive mechanism by incorporating a refined reputation indicator significantly increases the equity of reward distribution and effectively raises the cost of achieving reward monopoly.
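The idea of balancing historical and current contributions when distributing rewards can be illustrated with a minimal sketch. The blending weight `w` and the normalisation scheme are assumptions for exposition, not the paper's exact formula:

```python
def distribute_rewards(users, pool, w=0.5):
    """users: dict name -> (reputation, current_contribution).
    Each user's share of the token pool mixes normalised historical
    reputation and normalised current-period contribution with weight w,
    so a user cannot monopolise rewards through current activity alone."""
    total_rep = sum(r for r, _ in users.values()) or 1
    total_cur = sum(c for _, c in users.values()) or 1
    shares = {}
    for name, (rep, cur) in users.items():
        score = w * (rep / total_rep) + (1 - w) * (cur / total_cur)
        shares[name] = pool * score
    return shares

# Example: a long-standing contributor with no activity this period
# still receives a share, dampening reward monopoly by newcomers.
shares = distribute_rewards({"alice": (10, 0), "bob": (0, 10)}, pool=100)
```

Raising `w` shifts weight toward historical reputation; the paper's simulation analysis studies how such a blend affects wealth-distribution equity.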

    A Reputation-based Framework for Honest Provenance Reporting

    Given the distributed, heterogeneous, and dynamic nature of service-based IoT systems, capturing the circumstances data underlying service provisions becomes increasingly important for understanding process flow and tracing how outputs came about, thus enabling clients to make more informed decisions regarding future interaction partners. Whilst service providers are the main source of such circumstances data, they may often be reluctant to release it, e.g. due to the cost and effort required, or to protect their interests. In response, this paper introduces a reputation-based framework, guided by intelligent software agents, to support the sharing of truthful circumstances information by providers. In this framework, assessor agents, acting on behalf of clients, rank and select service providers according to reputation, while provider agents, acting on behalf of service providers, learn from the environment and adjust providers' circumstances-provision policies in the direction that increases provider profit with respect to perceived reputation. The novelty of the reputation assessment model adopted by assessor agents lies in making provider reputation scores depend on whether or not they reveal truthful circumstances data underlying their service provisions, in addition to other factors commonly adopted by existing reputation schemes. The effectiveness of the proposed framework is demonstrated through an agent-based simulation, including robustness against a number of attacks, with a comparative performance analysis against FIRE as a baseline reputation model.
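A hypothetical sketch of the kind of reputation update the assessment model describes, in which a provider's score depends on service quality and on whether it disclosed truthful circumstances data. The weights and the exponential-decay form are illustrative assumptions, not the paper's actual model:

```python
def update_reputation(reputation, quality, disclosed, truthful,
                      w_quality=0.7, w_provenance=0.3, decay=0.9):
    """quality in [0, 1]; disclosed/truthful are booleans.
    Providers that withhold or falsify circumstances data receive a
    provenance penalty alongside the usual quality component, so honest
    provenance reporting is rewarded over time."""
    provenance = 1.0 if (disclosed and truthful) else 0.0
    observation = w_quality * quality + w_provenance * provenance
    # Exponentially weighted blend of past reputation and new observation.
    return decay * reputation + (1 - decay) * observation
```

Under this sketch, two providers delivering identical service quality diverge in reputation if only one reveals truthful circumstances data, which is the incentive effect the framework relies on.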
