
    The Dark Side of Micro-Task Marketplaces: Characterizing Fiverr and Automatically Detecting Crowdturfing

    As human computation on crowdsourcing systems has become popular and powerful for performing tasks, malicious users have started misusing these systems by posting malicious tasks, propagating manipulated content, and targeting popular web services such as online social networks and search engines. Recently, these malicious users moved to Fiverr, a fast-growing micro-task marketplace where workers can post crowdturfing tasks (i.e., astroturfing campaigns run by crowd workers) and malicious customers can purchase those tasks for only $5. In this paper, we present a comprehensive analysis of Fiverr. First, we identify the most popular types of crowdturfing tasks found in this marketplace and conduct case studies of them. Then, we build crowdturfing task detection classifiers to filter out these tasks and prevent them from becoming active in the marketplace. Our experimental results show that the proposed classification approach effectively detects crowdturfing tasks, achieving 97.35% accuracy. Finally, we analyze the real-world impact of crowdturfing tasks by purchasing active Fiverr tasks and quantifying their impact on a target site. As part of this analysis, we show that current security systems inadequately detect crowdsourced manipulation, which confirms the necessity of our proposed crowdturfing task detection approach.
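The task-filtering idea above can be illustrated with a toy sketch. The keyword lists and the threshold below are illustrative assumptions, not the paper's trained classifier or feature set:

```python
# Toy crowdturfing-task detector: scores a task description by the
# presence of manipulation-related terms. A real classifier (as in the
# paper) would be trained on labeled tasks; this is only a sketch.

SUSPICIOUS_TERMS = {"likes", "followers", "reviews", "upvotes", "backlinks", "votes"}
ACTION_TERMS = {"buy", "get", "deliver", "provide", "send", "boost"}

def is_crowdturfing(description: str, threshold: int = 2) -> bool:
    words = set(description.lower().split())
    score = len(words & SUSPICIOUS_TERMS) + len(words & ACTION_TERMS)
    return score >= threshold

print(is_crowdturfing("I will send 5000 real facebook likes and followers"))  # True
print(is_crowdturfing("I will design a minimalist logo for your business"))   # False
```

A trained model replaces the hand-picked terms with learned feature weights, but the decision structure (score a task, compare to a threshold) is the same.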

    Characterizing Key Stakeholders in an Online Black-Hat Marketplace

    Over the past few years, many black-hat marketplaces have emerged that facilitate access to reputation manipulation services such as fake Facebook likes, fraudulent search engine optimization (SEO), or bogus Amazon reviews. In order to deploy effective technical and legal countermeasures, it is important to understand how these black-hat marketplaces operate, shedding light on the services they offer, who is selling, who is buying, what they are buying, who is more successful and why, etc. Toward this goal, in this paper, we present a detailed micro-economic analysis of a popular online black-hat marketplace, namely, SEOClerks.com. As the site provides non-anonymized transaction information, we set out to analyze the selling and buying behavior of individual users, propose a strategy to identify key users, and study their tactics as compared to other (non-key) users. We find that key users: (1) are mostly located in Asian countries, (2) are focused more on selling black-hat SEO services, (3) tend to list more lower-priced services, and (4) sometimes buy services from other sellers and then resell them at higher prices. Finally, we discuss the implications of our analysis with respect to devising effective economic and legal intervention strategies against marketplace operators and key users. (Published at the 12th IEEE/APWG Symposium on Electronic Crime Research, eCrime 2017.)
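One simple way to operationalize "key user" identification from non-anonymized transactions is to rank sellers by total revenue. This is only a hedged sketch; the record layout and the top-N cutoff are assumptions, not SEOClerks' schema or the paper's exact strategy:

```python
# Rank sellers by total revenue from (seller, sale_price) records and
# return the top earners as candidate "key users".
from collections import defaultdict

def key_users(transactions, top_n=2):
    """transactions: iterable of (seller, sale_price) pairs."""
    revenue = defaultdict(float)
    for seller, price in transactions:
        revenue[seller] += price
    ranked = sorted(revenue, key=revenue.get, reverse=True)
    return ranked[:top_n]

sales = [("alice", 5), ("bob", 50), ("alice", 5), ("carol", 20), ("bob", 10)]
print(key_users(sales))  # ['bob', 'carol']
```

A fuller analysis would combine revenue with listing counts, ratings, and buying behavior, but a revenue ranking is a natural first cut.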

    Automated Crowdturfing Attacks and Defenses in Online Review Systems

    Malicious crowdsourcing forums are gaining traction as vehicles for spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks, or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control the rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect. Using Yelp reviews as an example platform, we show how a two-phase review generation and customization attack can produce reviews that state-of-the-art statistical detectors cannot distinguish from genuine ones. We conduct a survey-based user study to show these reviews not only evade human detection, but also score high on "usefulness" metrics by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.
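The attack above uses RNN language models; as a much weaker stand-in that still shows the core idea of generating text from learned statistics, here is a toy word-level Markov chain. The training snippets are invented, and a Markov chain is explicitly a substitute for the paper's RNN, not its method:

```python
# Toy word-level Markov-chain generator: learns word-to-word transition
# lists from a tiny corpus, then samples a chain of words. A crude
# stand-in for the RNN language model used in the actual attack.
import random
from collections import defaultdict

def train(corpus):
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, max_words=8, seed=0):
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and model[out[-1]]:
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)

reviews = ["the food was great and the service was great",
           "the service was fast and the food was tasty"]
model = train(reviews)
print(generate(model, "the"))
```

An RNN replaces the transition lists with a learned neural distribution over the next token, which is what makes the generated reviews fluent enough to fool detectors.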

    Are We All in a Truman Show? Spotting Instagram Crowdturfing through Self-Training

    Influencer Marketing generated $16 billion in 2022. Usually, the more popular influencers are paid more for their collaborations. Thus, many services were created to boost profiles' popularity metrics through bots or fake accounts. However, real people recently started participating in such boosting activities using their real accounts for monetary rewards, generating inauthentic content that is extremely difficult to detect. To date, no works have attempted to detect this new phenomenon, known as crowdturfing (CT), on Instagram. In this work, we propose the first Instagram CT engagement detector. Our algorithm leverages profiles' characteristics through semi-supervised learning to spot accounts involved in CT activities. Compared to the supervised approaches used so far to identify fake accounts, semi-supervised models can exploit huge quantities of unlabeled data to increase performance. We purchased and studied 1293 CT profiles from 11 providers to build our self-training classifier, which reached 95% F1-score. We tested our model in the wild by detecting and analyzing CT engagement from 20 mega-influencers (i.e., with more than one million followers), and discovered that more than 20% of their engagement was artificial. We analyzed the CT profiles and comments, showing that it is difficult to detect these activities based solely on their generated content.
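Self-training, the semi-supervised scheme this detector builds on, can be sketched generically: fit on the labeled data, pseudo-label the unlabeled points the model is confident about, and refit. The 1-D threshold "classifier" and the confidence margin below are deliberately simple placeholders, not the paper's features or model:

```python
# Generic self-training loop on 1-D data (label 1 = CT-like, 0 = genuine).
# The base learner just places a threshold between class means; any
# classifier with a confidence score can be swapped in.

def fit_threshold(xs, ys):
    """Pick the midpoint between class means as a decision threshold."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def self_train(labeled_x, labeled_y, unlabeled_x, margin=1.0, rounds=3):
    xs, ys = list(labeled_x), list(labeled_y)
    pool = list(unlabeled_x)
    for _ in range(rounds):
        t = fit_threshold(xs, ys)
        # Pseudo-label only points far enough from the boundary.
        confident = [x for x in pool if abs(x - t) >= margin]
        if not confident:
            break
        for x in confident:
            xs.append(x)
            ys.append(1 if x > t else 0)
        pool = [x for x in pool if abs(x - t) < margin]
    return fit_threshold(xs, ys)

t = self_train([1.0, 9.0], [0, 1], [2.0, 8.5, 5.1])
print(t)  # 5.125
```

The payoff is exactly the one the abstract names: the final threshold is fit on labeled plus confidently pseudo-labeled data, so large pools of unlabeled profiles improve the model.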

    Building a Task Blacklist for Online Social Systems

    Hiding inside the mutually-beneficial model of online crowdsourcing are malicious campaigns, which aim to manipulate search results or plant fake reviews on the web. Crowdsourced manipulation reduces the quality and trustworthiness of online social media, threatening the security of cyberspace as a whole. To mitigate this problem, we developed a classification model which filters out malicious campaigns from nearly 450,000 campaigns on popular crowdsourcing platforms. We then presented this blacklist on a website, where parties adversely affected by malicious campaigns, such as targeted website owners, legitimate workers, and crowdsourcing-platform operators, can use it as a tool to identify and moderate potentially malicious campaigns on the web.

    Online Misinformation: Challenges and Future Directions

    Misinformation has become a common part of our digital media environments, and it is compromising the ability of our societies to form informed opinions. It generates misperceptions, which have affected decision-making processes in many domains, including the economy, health, the environment, and elections, among others. Misinformation and its generation, propagation, impact, and management is being studied through a variety of lenses (computer science, social science, journalism, psychology, etc.), since it widely affects multiple aspects of society. In this paper we analyse the phenomenon of misinformation from a technological point of view. We study the current socio-technical advancements towards addressing the problem, identify some of the key limitations of current technologies, and propose some ideas to target such limitations. The goal of this position paper is to reflect on the current state of the art and to stimulate discussions on the future design and development of algorithms, methodologies, and applications.

    Fake Likers Detection on Facebook

    In online social networking sites, gaining popularity has become important. The more popular a company is, the more profits it can make. A way to measure a company's popularity is to check how many likes it has (e.g., the company's number of likes on Facebook). To instantly and artificially increase the number of likes, some companies and business people began hiring crowd workers (aka fake likers) who send likes to a targeted page and earn money. Unfortunately, little is known about the characteristics of fake likers and how to identify them. To uncover fake likers in online social networks, in this work we (i) collect profiles of fake likers and legitimate likers by using linkage and honeypot approaches, (ii) analyze characteristics of fake likers and legitimate likers, (iii) propose and develop a fake liker detection approach, and (iv) thoroughly evaluate its performance against three baseline methods and under two attack models. Our experimental results show that our classification model significantly outperformed the baseline methods, achieving 87.1% accuracy, a 0.10 false positive rate, and a 0.14 false negative rate.
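The reported metrics (accuracy, false positive rate, false negative rate) relate to a confusion matrix in the standard way; a small helper makes the definitions concrete. The label vectors below are invented examples, not the paper's data:

```python
# Accuracy, FPR, and FNR from true vs. predicted labels (1 = fake liker).

def metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    fpr = fp / (fp + tn)   # legitimate likers wrongly flagged
    fnr = fn / (fn + tp)   # fake likers missed
    return accuracy, fpr, fnr

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(metrics(y_true, y_pred))  # (0.75, 0.25, 0.25)
```

Reporting FPR and FNR alongside accuracy, as the abstract does, matters here because flagging a legitimate liker and missing a fake one carry different costs.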