
    An Army of Me: Sockpuppets in Online Discussion Communities

    In online discussion communities, users can interact and share information and opinions on a wide variety of topics. However, some users may create multiple identities, or sockpuppets, and engage in undesired behavior by deceiving others or manipulating discussions. In this work, we study sockpuppetry across nine discussion communities, and show that sockpuppets differ from ordinary users in terms of their posting behavior, linguistic traits, and social network structure. Sockpuppets tend to start fewer discussions, write shorter posts, use more personal pronouns such as "I", and have more clustered ego-networks. Further, pairs of sockpuppets controlled by the same individual are more likely than pairs of ordinary users to interact in the same discussion at the same time. Our analysis suggests a taxonomy of deceptive behavior in discussion communities. Pairs of sockpuppets can vary in their deceptiveness, i.e., whether they pretend to be different users, and in their supportiveness, i.e., whether they support arguments of other sockpuppets controlled by the same user. We apply these findings to a series of prediction tasks, notably, to identify whether or not a pair of accounts belongs to the same underlying user. Altogether, this work presents a data-driven view of deception in online discussion communities and paves the way towards the automatic detection of sockpuppets. (26th International World Wide Web Conference, WWW 2017.)
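    The pair-level prediction task described above can be illustrated with a toy sketch. The features below (thread co-occurrence, first-person pronoun rate, post length) are simplified stand-ins inspired by the abstract's findings, and the threshold rule is a hypothetical substitute for the paper's trained classifier, not its actual model.

    ```python
    # Toy sketch: decide whether two accounts are controlled by the same user.
    # Feature names and the decision rule are illustrative, not the paper's.
    import re

    PRONOUNS = {"i", "me", "my", "mine", "myself"}

    def account_features(posts):
        """posts: list of dicts with 'text' and 'thread' keys."""
        tokens = [t for p in posts for t in re.findall(r"[a-z']+", p["text"].lower())]
        return {
            "mean_len": sum(len(p["text"]) for p in posts) / len(posts),
            "pronoun_rate": sum(t in PRONOUNS for t in tokens) / max(len(tokens), 1),
        }

    def pair_features(a, b):
        fa, fb = account_features(a), account_features(b)
        threads_a = {p["thread"] for p in a}
        threads_b = {p["thread"] for p in b}
        # Jaccard overlap of the threads the two accounts post in.
        co = len(threads_a & threads_b) / len(threads_a | threads_b)
        return {
            "len_diff": abs(fa["mean_len"] - fb["mean_len"]),
            "pronoun_both": min(fa["pronoun_rate"], fb["pronoun_rate"]),
            "co_occurrence": co,
        }

    def same_user(a, b, co_thresh=0.5, pron_thresh=0.05):
        """Hand-set rule standing in for a learned classifier: sockpuppet
        pairs post in the same threads and both over-use first-person pronouns."""
        f = pair_features(a, b)
        return f["co_occurrence"] >= co_thresh and f["pronoun_both"] >= pron_thresh
    ```

    A real system would feed `pair_features` into a trained model rather than fixed thresholds; the sketch only shows how account-level signals combine into pair-level ones.
    
    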

    ‘Questions about Dawlah. DM me, plz.’ The Sock Puppet Problem in Online Terrorism Research

    This paper explores the problem of deception in online terrorism research. While conducting research into the growing phenomenon of Western females migrating to Islamic State-held territory, we began following a Twitter account exhibiting suspicious activity. The account owner, believed to be a Canadian teenage female, indicated interest in learning more about joining IS. We tracked this account for three weeks in order to discover more about its activities and thus to develop a set of key indicators that might help predict future migration risk. We subsequently learned it was a fake account (a 'sock puppet') established to fool IS recruiters. The operation of such ruses and the problems they create are discussed here.

    The Future of Cyber-Enabled Influence Operations: Emergent Technologies, Disinformation, and the Destruction of Democracy

    Nation-states have been embracing online influence campaigns through disinformation at breakneck speed. Countries such as China and Russia have completely revamped their military doctrine around information-first platforms [1, 2] (Mattis, P. (2018). China's Three Warfares in Perspective. War on the Rocks. https://warontherocks.com/2018/01/chinas-three-warfares-perspective/; Cunningham, C. (2020). A Russian Federation Information Warfare Primer. The Henry M. Jackson School of International Studies, University of Washington. https://jsis.washington.edu/news/a-russian-federation-information-warfare-primer/) to compete with the United States and the West. The Chinese principle of "Three Warfares" and Russian hybrid warfare have been used and tested across the spectrum of operations, ranging from competition to active conflict. With the COVID-19 pandemic limiting most means of face-to-face interpersonal communication, many other nations have transitioned to online tools to influence audiences both domestically and abroad [3] (Strick, B. (2020). COVID-19 Disinformation: Attempted Influence in Disguise. Australian Strategic Policy Institute, International Cyber Policy Centre. https://www.aspi.org.au/report/covid-19-disinformation) to create favorable environments for their geopolitical goals and national objectives. This chapter focuses on the landscape that allows nations like China and Russia to attack democratic institutions and discourse within the United States, the strategies and tactics employed in these campaigns, and the emergent technologies that will enable these nations to gain an advantage with key populations within their spheres of influence, or to create a disadvantage for their competitors. Advancements in machine learning through generative adversarial networks [4] (Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A. (2017). Generative Adversarial Networks: An Overview. IEEE Signal Processing Magazine. https://arxiv.org/pdf/1710.07035.pdf) that create deepfakes [5] (Whittaker, L.; Letheren, K.; Mulcahy, R. (2021). The Rise of Deepfakes: A Conceptual Framework and Research Agenda for Marketing. https://journals.sagepub.com/doi/abs/10.1177/1839334921999479) and attention-based transformers [6] (Devlin et al., 2018. https://arxiv.org/abs/1810.04805) that create realistic speech patterns and interaction will continue to plague online discussion and information spread, attempting to cause further partisan division and a decline of U.S. stature on the world stage and of democracy as a whole. (J. Littell, Army Cyber Institute, United States Military Academy, West Point. Published in A. Farhadi et al. (eds.), The Great Power Competition Volume 3, Springer, 2022. https://doi.org/10.1007/978-3-031-04586-8_10)

    Automated Detection of Sockpuppet Accounts in Wikipedia

    Wikipedia is a free Internet-based encyclopedia that is built and maintained via the open-source collaboration of a community of volunteers. Wikipedia's purpose is to benefit readers by acting as a widely accessible and free encyclopedia, a comprehensive written synopsis that contains information on all discovered branches of knowledge. The website has millions of pages that are maintained by thousands of volunteer editors. Unfortunately, given its open-editing format, Wikipedia is highly vulnerable to malicious activity, including vandalism, spam, undisclosed paid editing, etc. Malicious users often use sockpuppet accounts to circumvent a block or a ban imposed by Wikipedia administrators on the person's original account. A sockpuppet is an "online identity used for the purpose of deception." Usually, several sockpuppet accounts are controlled by a single individual (or entity) called a puppetmaster. Currently, suspected sockpuppet accounts are manually verified by Wikipedia administrators, which makes the process slow and inefficient. The primary objective of this research is to develop an automated ML and neural-network-based system to recognize the patterns of sockpuppet accounts as early as possible and recommend suspension. We address the problem as a binary classification task and propose a set of new features to capture suspicious behavior, considering user activity and analyzing the contributed content. To that end, we focus on account-based and content-based features. Our solution is twofold: automatically detecting, and then categorizing, suspicious edits made by the same author from multiple accounts. We hypothesize that "you can hide behind the screen, but your personality can't hide." In addition, we account for the sequential nature of users' contributions, and therefore extend our analysis with a Long Short-Term Memory (LSTM) model to track the sequential patterns of users' writing styles. Throughout the research, we strive to automate the sockpuppet account detection system and develop tools to help the Wikipedia administration maintain the quality of articles. We tested our system on a dataset we built containing 17K accounts validated as sockpuppets. Experimental results show that our approach achieves an F1 score of 0.82 and outperforms other systems proposed in the literature. We plan to deliver our research to the Wikipedia authorities to integrate it into their existing system.
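    The stylometric intuition here ("you can hide behind the screen, but your personality can't hide") can be sketched with a simple character-trigram profile comparison. The trigram features and cosine similarity below are illustrative choices, not the dissertation's actual feature set or its LSTM model.

    ```python
    # Illustrative stylometric fingerprint: two edit histories by the same
    # author should have similar character-trigram frequency profiles.
    from collections import Counter
    from math import sqrt

    def trigram_profile(texts, n=3):
        """Count overlapping character n-grams across a list of edit texts."""
        counts = Counter()
        for text in texts:
            t = text.lower()
            counts.update(t[i:i + n] for i in range(len(t) - n + 1))
        return counts

    def cosine(p, q):
        """Cosine similarity between two sparse count vectors."""
        dot = sum(p[k] * q[k] for k in p.keys() & q.keys())
        norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
        return dot / norm if norm else 0.0

    def same_author_score(edits_a, edits_b):
        """Higher score -> more similar writing style."""
        return cosine(trigram_profile(edits_a), trigram_profile(edits_b))
    ```

    A production detector would combine such content signals with account-level activity features and a sequence model, as the abstract describes; this sketch only shows why writing style leaks identity across accounts.
    
    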

    Digital astroturfing in politics: Definition, typology, and countermeasures

    In recent years, several instances of political actors who created fake grassroots activity on the Internet have been uncovered. We propose to call such fake online grassroots activity digital astroturfing, and we define it as a form of manufactured, deceptive and strategic top-down activity on the Internet initiated by political actors that mimics bottom-up activity by autonomous individuals. The goal of this paper is to lay out a conceptual map of the phenomenon of digital astroturfing in politics. To that end, we introduce, first, a typology of digital astroturfing according to three dimensions (target, actor type, goals), and, second, the concept of digital astroturfing repertoires, the possible combinations of tools, venues and actions used in digital astroturfing efforts. Furthermore, we explore possible restrictive and incentivizing countermeasures against digital astroturfing. Finally, we discuss prospects for future research: even though empirical research on digital astroturfing is difficult, it is neither impossible nor futile.

    Structural Bot Detection in Social Networks

    Social network platforms are a major part of today's life. They are typically used for entertainment, news, advertising, and branding by businesses and individuals alike. However, automated accounts, also known as bots, pollute this environment and undermine the goal of a reliable, clean online world. In this work, I address the problem of detecting bots in online social networks.

    Seminar Users in the Arabic Twitter Sphere

    We introduce the notion of "seminar users", who are social media users engaged in propaganda in support of a political entity. We develop a framework that can identify such users with 84.4% precision and 76.1% recall. While our dataset is from the Arab region, omitting language-specific features has only a minor impact on classification performance, and thus our approach could work for detecting seminar users in other parts of the world and in other languages. We further explore a controversial political topic to observe the prevalence and potential potency of such users. In our case study, we find that 25% of the users engaged in the topic are in fact seminar users and that their tweets make up nearly a third of the on-topic tweets. Moreover, they are often successful in affecting mainstream discourse with coordinated hashtag campaigns. (To appear in SocInfo 2017.)
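    The reported 84.4% precision and 76.1% recall follow the standard definitions over true positives, false positives, and false negatives; a small helper makes the arithmetic explicit. The counts in the usage example are hypothetical, chosen only to approximate those figures, and are not from the paper's dataset.

    ```python
    # Standard precision/recall arithmetic for a binary classifier.
    def precision_recall(tp, fp, fn):
        """tp: true positives, fp: false positives, fn: false negatives."""
        precision = tp / (tp + fp)  # of flagged users, how many are seminar users
        recall = tp / (tp + fn)     # of seminar users, how many were flagged
        return precision, recall

    # Hypothetical counts approximating the paper's reported figures.
    p, r = precision_recall(tp=844, fp=156, fn=265)
    ```
    
    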

    ‘talk about a cunt with too much idle time’: Trolling feminist research

    Given the growing popularity of online methods for researchers and the increasing awareness of the levels of harassment and abuse directed at women online—especially women expressing feminist views—it is critical that we address the implications of online abuse for feminist researchers. Focussing on an often hidden yet significant part of our methodological decisions and recruitment, this paper details the online abuse levelled by men's rights activists against a research project on women's experiences of men's stranger intrusions in public space. It argues for the need to locate such experiences within a violence-against-women frame, extending the concept of a continuum of sexual violence. Such an extension renders visible the added labour of 'safety work', which forms an invisible backdrop to the methodological decisions of many feminist researchers.