    Identifying Spam Activity on Public Facebook Pages

    Since their emergence, online social networks (OSNs) have kept gaining popularity. However, many related problems have also arisen, such as the use of fake accounts for malicious activities. In this paper, we focus on identifying spammers among users that are active on public Facebook pages. We are specifically interested in identifying groups of spammers sharing similar URLs. For this purpose, we built an initial dataset based on all the content that has been posted on feed posts on a set of public Facebook pages with high numbers of subscribers. We assumed that such public pages, with hundreds of thousands of subscribers and revolving around a common attractive topic, make an ideal ground for spamming activity. Our first contribution in this paper is a reliable methodology for identifying potential spammer and non-spammer accounts that are likely to be confirmed as such upon manual verification. For that aim, we used a set of features characterizing spam activity together with a scoring method. This methodology, combined with manual human validation, successfully allowed us to build a dataset of spammers and non-spammers. Our second contribution is the analysis of the identified spammer accounts. We found that these accounts do not display any community-like behavior, as they rarely interact with each other, and are slightly more active than non-spammers during late-night hours, while slightly less active during daytime hours. Finally, our third contribution is the proposal of a clustering approach that successfully detected 16 groups of spammers in the form of clusters of spam accounts sharing similar URLs.
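    The clustering idea in this abstract (grouping spam accounts by the URLs they share) can be sketched as follows. This is a minimal illustration, not the paper's actual method: the account names, URL sets, similarity measure (Jaccard), and threshold are all assumptions chosen for the example.

    ```python
    from itertools import combinations

    # Hypothetical toy data: the set of URLs each account has posted.
    account_urls = {
        "acct_a": {"bit.ly/x1", "bit.ly/x2", "bit.ly/x3"},
        "acct_b": {"bit.ly/x1", "bit.ly/x2"},
        "acct_c": {"example.com/p", "example.com/q"},
        "acct_d": {"example.com/p"},
        "acct_e": {"news.site/a"},
    }

    def jaccard(s, t):
        """Similarity of two URL sets: |intersection| / |union|."""
        return len(s & t) / len(s | t)

    # Link accounts whose URL sets are sufficiently similar.
    THRESHOLD = 0.3  # illustrative value, not from the paper
    adj = {a: set() for a in account_urls}
    for a, b in combinations(account_urls, 2):
        if jaccard(account_urls[a], account_urls[b]) >= THRESHOLD:
            adj[a].add(b)
            adj[b].add(a)

    # Connected components of the similarity graph are candidate
    # spam clusters (accounts sharing similar URLs end up together).
    def clusters(adj):
        seen, out = set(), []
        for start in adj:
            if start in seen:
                continue
            comp, stack = set(), [start]
            while stack:
                node = stack.pop()
                if node in comp:
                    continue
                comp.add(node)
                stack.extend(adj[node] - comp)
            seen |= comp
            out.append(comp)
        return out

    print(clusters(adj))
    # acct_a/acct_b and acct_c/acct_d pair up; acct_e stays alone
    ```

    Any accounts with no sufficiently similar peer form singleton components, so a real pipeline would likely keep only components above a minimum size as spam groups.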

    POISED: Spotting Twitter Spam Off the Beaten Paths

    Cybercriminals have found in online social networks a propitious medium to spread spam and malicious content. Existing techniques for detecting spam include predicting the trustworthiness of accounts and analyzing the content of these messages. However, advanced attackers can still successfully evade these defenses. Online social networks bring together people who have personal connections or share common interests to form communities. In this paper, we first show that users within a networked community share some topics of interest. Moreover, content shared on these social networks tends to propagate according to the interests of people. Dissemination paths may emerge where some communities post similar messages, based on the interests of those communities. Spam and other malicious content, on the other hand, follow different spreading patterns. In this paper, we follow this insight and present POISED, a system that leverages the differences in propagation between benign and malicious messages on social networks to identify spam and other unwanted content. We test our system on a dataset of 1.3M tweets collected from 64K users, and we show that our approach is effective in detecting malicious messages, reaching 91% precision and 93% recall. We also show that POISED's detection is more comprehensive than previous systems, by comparing it to three state-of-the-art spam detection systems that have been proposed by the research community in the past. POISED significantly outperforms each of these systems. Moreover, through simulations, we show how POISED is effective in the early detection of spam messages and how it is resilient against two well-known adversarial machine learning attacks.