4 research outputs found

    COBRA: Context-aware Bernoulli Neural Networks for Reputation Assessment

    Trust and reputation management (TRM) plays an increasingly important role in large-scale online environments such as multi-agent systems (MAS) and the Internet of Things (IoT). One main objective of TRM is to achieve accurate trust assessment of entities such as agents or IoT service providers. However, this encounters an accuracy-privacy dilemma as we identify in this paper, and we propose a framework called Context-aware Bernoulli Neural Network based Reputation Assessment (COBRA) to address this challenge. COBRA encapsulates agent interactions or transactions, which are prone to privacy leakage, in machine learning models, and aggregates multiple such models using a Bernoulli neural network to predict a trust score for an agent. COBRA preserves agent privacy and retains interaction contexts via the machine learning models, and achieves more accurate trust prediction than a fully-connected neural network alternative. COBRA is also robust to security attacks by agents who inject fake machine learning models; notably, it is resistant to the 51-percent attack. The performance of COBRA is validated by our experiments using a real dataset, and by our simulations, where we also show that COBRA outperforms other state-of-the-art TRM systems.
    Comment: To be published in the Proceedings of AAAI, Feb 202
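The abstract's core idea can be illustrated with a minimal sketch: witnesses share predictive models rather than raw (privacy-sensitive) transaction records, and a requester aggregates the models' context-conditioned predictions into one trust score. Everything below is a hypothetical simplification: the logistic stand-in models, the median aggregation (used here only to illustrate robustness to injected fake models), and all numbers are assumptions, not the paper's actual Bernoulli neural network.

```python
import math

def make_witness_model(weights, bias):
    """Stand-in for a witness's private model: maps a context vector to
    the probability that an interaction in that context succeeds."""
    def predict(context):
        z = sum(w * x for w, x in zip(weights, context)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # logistic squashing
    return predict

def trust_score(models, context):
    """Aggregate witness predictions for a given context.

    A median is robust to a minority of fake models; COBRA's learned
    aggregator is claimed to resist even a 51-percent attack."""
    preds = sorted(m(context) for m in models)
    mid = len(preds) // 2
    if len(preds) % 2:
        return preds[mid]
    return 0.5 * (preds[mid - 1] + preds[mid])

# Three honest witnesses and one attacker badmouthing the target agent.
honest = [make_witness_model([1.0, 0.5], 0.2) for _ in range(3)]
fake = [make_witness_model([-5.0, -5.0], -5.0)]
score = trust_score(honest + fake, context=[1.0, 1.0])
```

Here the fake model's near-zero prediction is discarded by the median, so the aggregated score stays close to the honest witnesses' estimate.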

    Dynamic Credibility Threshold Assignment in Trust and Reputation Mechanisms Using PID Controller

    In online shopping, buyers do not have enough information about sellers and cannot inspect the products before purchasing them. To help buyers find reliable sellers, online marketplaces deploy Trust and Reputation Management (TRM) systems. These systems aggregate buyers' feedback about the sellers they have interacted with and the products they have purchased, to inform users within the marketplace about sellers and products before they make purchases. Thus, positive customer feedback has become a valuable asset for each seller in attracting more business. This naturally creates incentives for cheating, in the form of fake positive feedback. Therefore, an important responsibility of TRM systems is to help buyers find genuine feedback (reviews) about different sellers. Recent TRM systems achieve this goal by selecting and assigning credible advisers to any new customer/buyer. These advisers are selected from among buyers who have had experience with a number of sellers and have provided feedback on their services and goods. As people differ in their tastes, the most useful feedback should come from advisers with similar tastes and values. In addition, the advisers should be honest, i.e. provide truthful reviews and ratings, and not malicious, i.e. not collude with sellers to favour them or with other buyers to badmouth some sellers. Defining the boundary between dishonest and honest advisers is therefore very important. However, there is currently no systematic approach for setting the honesty threshold that divides benevolent advisers from malicious ones. The thesis addresses this problem and proposes a market-adaptive honesty threshold management mechanism. In this mechanism, the TRM system forms a feedback loop that monitors the current status of the e-marketplace. According to that status, the feedback system improves performance using a PID controller from the field of control systems.
The responsibility of this controller is to set a suitable value for the honesty threshold. The results of experiments, using simulation and a real-world dataset, show that the market-adaptive honesty threshold makes it possible to optimize the performance of the marketplace with respect to throughput and buyer satisfaction.
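The control-loop idea described above can be sketched in a few lines: treat a marketplace performance metric as the process variable and the honesty threshold as the controlled output. The gains, setpoint, metric, and class name below are illustrative assumptions, not values from the thesis.

```python
class PIDThresholdController:
    """Textbook PID loop that nudges an adviser honesty threshold toward
    whatever value keeps a marketplace performance metric at its setpoint.
    All gains and the setpoint here are hypothetical."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # target performance (e.g. buyer satisfaction)
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, threshold, dt=1.0):
        """Return a new honesty threshold, clamped to [0, 1]."""
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        adjustment = (self.kp * error
                      + self.ki * self.integral
                      + self.kd * derivative)
        return min(1.0, max(0.0, threshold + adjustment))

controller = PIDThresholdController(kp=0.5, ki=0.1, kd=0.05, setpoint=0.9)
threshold = 0.7
# Satisfaction below target -> the controller raises the threshold,
# admitting only stricter (more credible) advisers.
threshold = controller.update(measured=0.8, threshold=threshold)
```

The appeal of a PID loop in this setting is that it needs no model of the marketplace: it reacts to the current error (P), accumulated drift (I), and the trend (D) of the performance signal alone.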