
    Search Rank Fraud De-Anonymization in Online Systems

    We introduce the fraud de-anonymization problem, which goes beyond fraud detection to unmask the human masterminds responsible for posting search rank fraud in online systems. We collect and study search rank fraud data from Upwork, and survey the capabilities and behaviors of 58 search rank fraudsters recruited from 6 crowdsourcing sites. We propose Dolos, a fraud de-anonymization system that leverages traits and behaviors extracted from these studies to attribute detected fraud to crowdsourcing site fraudsters, and thus to real identities and bank accounts. We introduce MCDense, a min-cut dense component detection algorithm that uncovers groups of user accounts controlled by different fraudsters, and we leverage stylometry and deep learning to attribute these groups to crowdsourcing site profiles. Dolos correctly identified the owners of 95% of fraudster-controlled communities, and uncovered fraudsters who promoted as many as 97.5% of the fraud apps we collected from Google Play. When evaluated on 13,087 apps (820,760 reviews), which we monitored for more than 6 months, Dolos identified 1,056 apps with suspicious reviewer groups. We report orthogonal evidence of their fraud, including fraud duplicates and fraud re-posts.
    Comment: The 29th ACM Conference on Hypertext and Social Media, July 201
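    The abstract names MCDense but not its internals, so the following is only a rough sketch of what min-cut based dense-component detection over a co-review graph could look like: nodes are reviewer accounts, edge weights count co-reviewed apps, and components are recursively split by a global minimum cut until each surviving component is dense. The density definition, thresholds, use of the Stoer-Wagner cut, and the toy graph are all illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch only: recursive min-cut splitting of a co-review graph.
# Density definition, thresholds, and Stoer-Wagner are assumptions, not MCDense.
import networkx as nx

def dense_components(G, min_density=4.0, min_size=3):
    """Recursively split G by a global minimum cut until every remaining
    component has an average co-review weight per account pair >= min_density."""
    n = G.number_of_nodes()
    if n < min_size:
        return []
    density = 2.0 * G.size(weight="weight") / (n * (n - 1))
    if density >= min_density:
        return [set(G.nodes())]
    if nx.is_connected(G):
        _, (part_a, part_b) = nx.stoer_wagner(G)   # global minimum cut
        parts = [part_a, part_b]
    else:
        parts = list(nx.connected_components(G))
    groups = []
    for part in parts:
        groups.extend(dense_components(G.subgraph(part).copy(), min_density, min_size))
    return groups

# Toy co-review graph: edge weight = number of apps both accounts reviewed.
G = nx.Graph()
G.add_weighted_edges_from([
    ("u1", "u2", 5), ("u1", "u3", 4), ("u2", "u3", 6),   # tight group 1
    ("u4", "u5", 5), ("u5", "u6", 5), ("u4", "u6", 4),   # tight group 2
    ("u3", "u4", 1),                                      # weak bridge
])
print(dense_components(G))   # the weak bridge is cut, leaving the two tight groups
```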

    Detecting Singleton Review Spammers Using Semantic Similarity

    Online reviews have increasingly become a very important resource for consumers making purchases, yet it is becoming more and more difficult to make well-informed buying decisions without being deceived by fake reviews. Prior work on the opinion spam problem mostly considered classifying fake reviews using behavioral user patterns, focusing on prolific users who write more than a couple of reviews and discarding one-time reviewers. The number of singleton reviewers, however, is expected to be high on many review websites. While behavioral patterns are effective for elite users, for one-time reviewers the review text itself needs to be exploited. In this paper we tackle the problem of detecting fake reviews written by the same person under multiple names, with each review posted under a different name. We propose two methods to detect similar reviews and show that they generally outperform the vectorial similarity measures used in prior work. The first method extends semantic similarity between words to the review level. The second method is based on topic modeling and exploits the similarity of the reviews' topic distributions, using two models: bag-of-words and bag-of-opinion-phrases. The experiments were conducted on reviews from three datasets: Yelp (57K reviews), Trustpilot (9K reviews), and the Ott dataset (800 reviews).
    Comment: 6 pages, WWW 201
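    For the second method, the abstract only says that reviews are compared through their topic distributions under bag-of-words and bag-of-opinion-phrases models. The snippet below is a minimal sketch of that idea under the bag-of-words model, using scikit-learn's LDA and the Jensen-Shannon distance as the comparison measure; both specific choices are assumptions, not details taken from the paper.

```python
# Minimal sketch: score review pairs by the similarity of their LDA topic
# distributions (bag-of-words model). LDA from scikit-learn and Jensen-Shannon
# distance are assumed choices for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

reviews = [
    "great phone, amazing battery life and a bright screen",
    "the battery life is amazing and the screen is great and bright",
    "terrible hotel, rude staff and dirty rooms",
]

counts = CountVectorizer(stop_words="english").fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)            # per-review topic distributions

def review_similarity(i, j):
    # 1 - Jensen-Shannon distance; identical topic mixtures score 1.0.
    return 1.0 - jensenshannon(theta[i], theta[j])

print(review_similarity(0, 1))   # pair of near-duplicate reviews
print(review_similarity(0, 2))   # pair of unrelated reviews
```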

    Survey of review spam detection using machine learning techniques


    Identifying and Profiling Radical Reviewer Collectives in Digital Product Reviews

    E-commerce sites are flooded with spam reviews and opinions. People are often hired to promote or undermine particular brands by writing extremely positive or negative reviews, and this is usually done in groups. Various studies have been conducted to identify and screen such spammer groups. However, a knowledge gap remains when it comes to detecting groups that target a brand as a whole rather than individual products. In this study, we conducted a systematic review of recent work on detecting extremist reviewer groups. Most researchers extract these groups with a data mining approach that clusters users based on brand-level similarities. This study reviews models for detecting such spammers and presents the conceptual models and algorithms proposed in previous work to compute the spamming level of extremist reviewers on e-commerce sites and online marketplaces.
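    As a concrete illustration of the brand-level clustering idea that recurs across the surveyed papers, the sketch below represents each reviewer by their average rating per brand and groups reviewers with similar brand-level behavior. The feature construction and the agglomerative clustering with cosine distance are assumptions chosen for illustration; they do not reproduce any single study's method.

```python
# Illustrative sketch: cluster reviewers by brand-level rating behavior.
# Features and clustering choices are assumptions, not a specific paper's method.
# Requires scikit-learn >= 1.2 for the `metric` parameter.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Rows = reviewers, columns = average rating given to BrandA, BrandB, BrandC.
ratings = np.array([
    [1.0, 5.0, 1.0],   # r1: trashes BrandA/C, praises BrandB
    [1.0, 4.5, 1.5],   # r2: same extreme pattern
    [1.5, 5.0, 1.0],   # r3: same extreme pattern
    [4.0, 2.5, 3.5],   # r4: ordinary mixed reviewer
    [3.5, 3.0, 4.0],   # r5: ordinary mixed reviewer
])

clustering = AgglomerativeClustering(n_clusters=2, metric="cosine", linkage="average")
labels = clustering.fit_predict(ratings)
print(dict(zip(["r1", "r2", "r3", "r4", "r5"], labels)))
# Reviewers sharing the same extreme brand-level pattern end up in one cluster.
```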

    Using Data Analytics to Filter Insincere Posts from Online Social Networks. A case study: Quora Insincere Questions

    The internet in general, and Online Social Networks (OSNs) in particular, continue to play a significant role in our lives, with information being massively uploaded and exchanged. Given this importance and attention, abuse of such communication media for different purposes is common. Driven by goals such as marketing and financial gain, some users use OSNs to post misleading or insincere content. In this context, we utilized a real-world dataset posted by Quora on Kaggle.com to evaluate different mechanisms and algorithms for filtering insincere and spam content. We evaluated different preprocessing and analysis models. Moreover, we analyzed the cognitive effort users put into writing their posts and whether it can improve prediction accuracy. We report the best models in terms of insincerity prediction accuracy.
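    As a rough idea of what one baseline in such an evaluation could look like, the sketch below combines TF-IDF text features with a crude writing-effort feature (token count) in a logistic-regression pipeline. The toy questions, the specific features, and the classifier are illustrative assumptions; they are not the models the study evaluated.

```python
# Illustrative baseline only: TF-IDF + a token-count "effort" feature + logistic
# regression. The toy data stands in for the Kaggle Quora dataset; this is not
# the study's evaluated pipeline.
import numpy as np
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import FunctionTransformer
from sklearn.linear_model import LogisticRegression

questions = [
    "How do I learn Python effectively?",
    "Why are people from group X so stupid?",
    "What is the best way to prepare for interviews?",
    "Why does everyone who disagrees with me lie?",
]
labels = [0, 1, 0, 1]   # 1 = insincere

def length_feature(texts):
    # One column per question: number of tokens, a rough proxy for writing effort.
    return np.array([[len(t.split())] for t in texts])

model = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
        ("effort", FunctionTransformer(length_feature)),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

model.fit(questions, labels)
print(model.predict(["Why is that group always dishonest?"]))
```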