
    Detection and Filtering of Collaborative Malicious Users in Reputation System using Quality Repository Approach

    Online reputation systems are gaining popularity because they help users judge the quality of a product or service before buying. Nonetheless, online reputation systems are not immune to attack. Dealing with malicious ratings in reputation systems has been recognized as an important but difficult task. The problem is especially challenging when the number of honest ratings is relatively small and unfair ratings make up the majority of the rated values. In this paper, we propose a new method to find malicious users in online reputation systems, the Quality Repository Approach (QRA). We concentrate on anomaly detection both in rating values and among malicious users. QRA efficiently detects malicious user ratings and aggregates true ratings. The proposed reputation system has been evaluated through simulations, which show that the QRA-based system significantly reduces the impact of unfair ratings and improves trust in the reputation score with a lower false-positive rate than other methods used for this purpose. Comment: 14 pages, 5 figures, 5 tables, submitted to ICACCI 2013, Mysore, India
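
    The QRA method itself is not specified in this abstract. As a generic illustration of the underlying idea of flagging anomalous ratings, a minimal sketch using a robust centre (median and median absolute deviation, so that a handful of outliers cannot shift the reference point) might look like this; the function name and the cutoff `k` are illustrative assumptions, not the paper's algorithm:

```python
import statistics

def flag_suspect_raters(ratings, k=2.0):
    """Flag raters whose rating deviates strongly from the robust centre.

    ratings: dict mapping rater id -> numeric rating for one product.
    Uses the median and the MAD (median absolute deviation) so a few
    extreme ratings cannot drag the reference point toward themselves.
    """
    values = list(ratings.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1.0
    return {rater for rater, v in ratings.items() if abs(v - med) > k * mad}

# Raters d and e deviate by 3 rating points while the MAD is 1:
suspects = flag_suspect_raters({"a": 5, "b": 5, "c": 4, "d": 1, "e": 1})
```

    Note that a simple median-based filter like this breaks down exactly in the regime the paper targets, where unfair ratings form the majority; that failure mode is what motivates more elaborate schemes such as QRA.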

    Defending online reputation systems against collaborative unfair raters through signal modeling and trust

    Online feedback-based rating systems are gaining popularity. Dealing with collaborative unfair ratings in such systems has been recognized as an important but difficult problem. This problem is challenging especially when the number of honest ratings is relatively small and unfair ratings can contribute to a significant portion of the overall ratings. In addition, the lack of unfair rating data from real human users is another obstacle toward realistic evaluation of defense mechanisms. In this paper, we propose a set of methods that jointly detect smart and collaborative unfair ratings based on signal modeling. Based on the detection, a framework of trust-assisted rating aggregation system is developed. Furthermore, we design and launch a Rating Challenge to collect unfair rating data from real human users. The proposed system is evaluated through simulations as well as experiments using real attack data. Compared with existing schemes, the proposed system can significantly reduce the impact from collaborative unfair ratings. Copyright 2009 ACM
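
    The paper's signal-modeling detector is not detailed in the abstract. One minimal sketch of the general idea, treating time-ordered ratings as a signal and flagging windows whose mean shifts sharply against the running history, could look as follows; the window size and threshold are illustrative assumptions:

```python
from collections import deque

def detect_rating_burst(ratings, window=10, threshold=1.0):
    """Flag positions where the windowed mean rating deviates from
    the running mean of all earlier ratings by more than `threshold`.

    ratings: time-ordered list of numeric ratings.
    Returns the list of indices at which an alert fires.
    """
    alerts = []
    hist_sum, hist_n = 0.0, 0
    buf = deque(maxlen=window)
    for i, r in enumerate(ratings):
        buf.append(r)
        if hist_n >= window and len(buf) == window:
            win_mean = sum(buf) / window
            hist_mean = hist_sum / hist_n
            if abs(win_mean - hist_mean) > threshold:
                alerts.append(i)
        hist_sum += r
        hist_n += 1
    return alerts
```

    A coordinated burst of low ratings against an otherwise well-rated item shows up as a sudden mean shift, which is the kind of collaborative pattern such detectors aim to catch.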

    Binary Hypothesis Testing Game with Training Data

    We introduce a game-theoretic framework to study the hypothesis testing problem in the presence of an adversary aiming at preventing a correct decision. Specifically, the paper considers a scenario in which an analyst has to decide whether a test sequence has been drawn according to a probability mass function (pmf) P_X or not. In turn, the goal of the adversary is to take a sequence generated according to a different pmf and modify it in such a way as to induce a decision error. P_X is known only through one or more training sequences. We derive the asymptotic equilibrium of the game under the assumption that the analyst relies only on first-order statistics of the test sequence, and compute the asymptotic payoff of the game when the length of the test sequence tends to infinity. We introduce the concept of the indistinguishability region as the set of pmf's that cannot be distinguished reliably from P_X in the presence of attacks. Two different scenarios are considered: in the first, the analyst and the adversary share the same training sequence; in the second, they rely on independent sequences. The obtained results are compared to a version of the game in which the pmf P_X is perfectly known to the analyst and the adversary.
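
    Since the analyst relies on first-order statistics, the basic (attack-free) decision amounts to comparing empirical pmfs. A simplified sketch of such a test, using a KL-divergence threshold between the empirical distributions of the test and training sequences, is below; the threshold value is an illustrative assumption, and none of the adversarial equilibrium analysis from the paper is reproduced here:

```python
import math
from collections import Counter

def empirical_pmf(seq, alphabet):
    """Empirical probability of each symbol of `alphabet` in `seq`."""
    counts = Counter(seq)
    n = len(seq)
    return {a: counts[a] / n for a in alphabet}

def kl_divergence(p, q, eps=1e-12):
    """D(p || q); `eps` guards against zero probabilities in q."""
    return sum(p[a] * math.log((p[a] + eps) / (q[a] + eps))
               for a in p if p[a] > 0)

def accept_h0(test_seq, train_seq, alphabet, threshold=0.1):
    """Accept 'test_seq was drawn from the same source as train_seq'
    when the divergence between empirical pmfs falls below `threshold`."""
    p_test = empirical_pmf(test_seq, alphabet)
    p_train = empirical_pmf(train_seq, alphabet)
    return kl_divergence(p_test, p_train) < threshold
```

    The adversary in the paper would modify the test sequence so that its empirical pmf lands inside the acceptance region, which is precisely why the indistinguishability region matters.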

    Review Manipulation: Literature Review, and Future Research Agenda

    Background: The phenomenon of review manipulation and fake reviews has gained Information Systems (IS) scholars’ attention during recent years. Scholarly research in this domain has delved into the causes and consequences of review manipulation. However, we find that the findings are diverse, and the studies do not portray a systematic approach. This study synthesizes the findings from a multidisciplinary perspective and presents an integrated framework for understanding the mechanism of review manipulation. Method: The study reviews 88 relevant articles on review manipulation spanning a decade and a half. We adopted an iterative coding approach to synthesizing the literature on concepts and categorized them independently into potential themes. Results: We present an integrated framework that shows the linkages between the different themes, namely, the prevalence of manipulation, the impact of manipulation, conditions and choices behind the manipulation decision, characteristics of fake reviews, models for detecting spam reviews, and strategies for dealing with manipulation. We also present the characteristics of review manipulation and cover both operational and conceptual issues associated with research on this topic. Conclusions: Insights from the study will guide future research on review manipulation and fake reviews. The study presents a holistic view of the phenomenon of review manipulation and can inform how online platforms address fake reviews, helping to build a healthy and sustainable environment.

    Architecture Supporting Computational Trust Formation

    Trust is a concept that has been used in computing to support better decision making. For example, trust can be used in access control. Trust can also be used to support service selection. Although certain elements of trust, such as reputation, have gained widespread acceptance, a general model of trust has so far not seen widespread usage. This is due to the challenges of implementing a general trust model. In this thesis, a middleware-based approach is proposed to address the implementation challenges. The thesis proposes a general trust model known as computational trust. Computational trust is based on research in social psychology. An individual’s computational trust is formed with the support of the proposed computational trust architecture. The architecture consists of a middleware and middleware clients. The middleware can be viewed as a representation of the individual that shares its knowledge with all the middleware clients. Each application uses its own middleware client to form computational trust for its decision-making needs. Computational trust formation can be adapted to changing circumstances. The thesis also proposes algorithms for computational trust formation. Experiments, evaluations and scenarios are presented to demonstrate the feasibility of the middleware-based approach to computational trust formation.
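
    The thesis's own trust-formation algorithms are not given in the abstract. As a loose illustration of what a computational trust score can look like, the common pattern of blending first-hand experience with third-party reputation is sketched below; the function, the [0, 1] scale, and the weighting are illustrative assumptions rather than the thesis's model:

```python
def computational_trust(direct_outcomes, reputation, w_direct=0.7):
    """Blend first-hand experience with third-party reputation.

    direct_outcomes: list of past interaction outcomes in [0, 1]
                     (1 = fully satisfactory).
    reputation: aggregate third-party score in [0, 1].
    With no direct history, the score falls back to reputation alone.
    """
    if not direct_outcomes:
        return reputation
    direct = sum(direct_outcomes) / len(direct_outcomes)
    return w_direct * direct + (1 - w_direct) * reputation
```

    In a middleware setting, the shared history (`direct_outcomes` here) would live in the middleware, while each client could apply its own weighting to suit its application's decision-making needs.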

    Evaluating collaborative filtering over time

    Recommender systems have become essential tools for users to navigate the plethora of content in the online world. Collaborative filtering—a broad term referring to the use of a variety, or combination, of machine learning algorithms operating on user ratings—lies at the heart of recommender systems’ success. These algorithms have traditionally been studied from the point of view of how well they can predict users’ ratings and how precisely they rank content; state-of-the-art approaches are continuously improved in these respects. However, a rift has grown between how filtering algorithms are investigated and how they will operate when deployed in real systems. Deployed systems will continuously be queried for personalised recommendations; in practice, this implies that system administrators will iteratively retrain their algorithms in order to include the latest ratings. Collaborative filtering research does not take this into account: algorithms are improved and compared to each other from a static viewpoint, while they will ultimately be deployed in a dynamic setting. Given this scenario, two new problems emerge: current filtering algorithms are neither (a) designed nor (b) evaluated as algorithms that must account for time. This thesis addresses the divergence between research and practice by examining how collaborative filtering algorithms behave over time. Our contributions include: 1. A fine-grained analysis of temporal changes in rating data and user/item similarity graphs that clearly demonstrates how recommender system data is dynamic and constantly changing. 2. A novel methodology and time-based metrics for evaluating collaborative filtering over time, both in terms of accuracy and the diversity of top-N recommendations. 3. A set of hybrid algorithms that improve collaborative filtering in a range of different scenarios. These include temporal-switching algorithms that aim to promote either accuracy or diversity; parameter update methods to improve temporal accuracy; and re-ranking a subset of users’ recommendations in order to increase diversity. 4. A set of temporal monitors that secure collaborative filtering from a wide range of different temporal attacks by flagging anomalous rating patterns. We have implemented and extensively evaluated the above using large-scale sets of user ratings; we further discuss how this novel methodology provides insight into dimensions of recommender systems that were previously unexplored. We conclude that investigating collaborative filtering from a temporal perspective is not only more suitable to the context in which recommender systems are deployed, but also opens a number of future research opportunities.
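
    The retrain-then-predict loop described above can be sketched in a few lines: order the ratings by timestamp, train on everything seen so far, and evaluate on the next window, exactly as a deployed system would operate. This is a generic sketch of temporal evaluation, not the thesis's specific methodology or metrics:

```python
def temporal_rmse(ratings, predict_fn, n_windows=5):
    """Evaluate a rating predictor the way a deployed system runs:
    train on all ratings seen so far, predict the next time window.

    ratings: list of (timestamp, user, item, rating) tuples, any order.
    predict_fn: callable(train_ratings, user, item) -> predicted rating.
    Returns one RMSE value per evaluation window.
    """
    data = sorted(ratings)                     # chronological order
    size = len(data) // (n_windows + 1)
    scores = []
    for w in range(1, n_windows + 1):
        train = data[:w * size]                # everything seen so far
        test = data[w * size:(w + 1) * size]   # the next window
        if not test:
            break
        sq = [(predict_fn(train, u, i) - r) ** 2 for _, u, i, r in test]
        scores.append((sum(sq) / len(sq)) ** 0.5)
    return scores
```

    Unlike a single random train/test split, this yields a sequence of error values, making visible whether an algorithm's accuracy drifts as the rating data evolves.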