Paying for Likes? Understanding Facebook like fraud using honeypots
Facebook pages offer an easy way to reach a very large audience, as they can easily be promoted using Facebook's advertising platform. Recently, the number of likes of a Facebook page has become a measure of its popularity and profitability, and an underground market of services boosting page likes, aka like farms, has emerged. Some reports have suggested that like farms use a network of profiles that also like other pages to elude fraud-protection algorithms; however, to the best of our knowledge, there has been no systematic analysis of Facebook pages' promotion methods. This paper presents a comparative measurement study of page likes garnered via Facebook ads and via a few like farms. We deploy a set of honeypot pages, promote them using both methods, and analyze the garnered likes based on likers' demographic, temporal, and social characteristics. We highlight a few interesting findings, including that some farms seem to be operated by bots and do not really try to hide the nature of their operations, while others follow a stealthier approach, mimicking regular users' behavior.
Soros, Child Sacrifices, and 5G: Understanding the Spread of Conspiracy Theories on Web Communities
This paper presents a multi-platform computational pipeline geared to identify social media posts discussing (known) conspiracy theories. We use 189 conspiracy claims collected by Snopes, and find 66k posts and 277k comments on Reddit, and 379k tweets discussing them. Then, we study how conspiracies are discussed across different Web communities and which ones are particularly influential in driving the discussion. Our analysis sheds light on how conspiracy theories are discussed and spread online, while highlighting multiple challenges in mitigating them.
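As a rough illustration of the claim-matching step such a pipeline needs (not the authors' actual method), the sketch below flags posts that overlap lexically with a known claim description; the claim texts, the post, and the `min_overlap` threshold are all made-up examples.

```python
# Illustrative sketch: flag posts that mention a known conspiracy claim
# via simple token overlap with claim descriptions. All data is invented.

def tokenize(text):
    """Lowercase bag of words with trailing punctuation stripped."""
    return {w.strip('.,!?"').lower() for w in text.split()}

def matches_claim(post, claims, min_overlap=3):
    """Return indices of claims sharing >= min_overlap tokens with the post."""
    post_tokens = tokenize(post)
    return [i for i, c in enumerate(claims)
            if len(post_tokens & tokenize(c)) >= min_overlap]

claims = ["5G towers spread the virus through radiation",
          "George Soros funds staged protests worldwide"]
post = "apparently 5G radiation towers spread disease, wake up"
# Claim 0 shares the tokens "5g", "towers", "spread", and "radiation".
print(matches_claim(post, claims))
```

A production pipeline would of course use stronger matching (embeddings, entity linking) than raw token overlap, but the shape of the problem is the same: posts on one side, a catalog of claims on the other.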
Electrophysiological and Behavioral Responses of Oriental Fruit Moth to the Monoterpenoid Citral Alone and in Combination With Sex Pheromone
The monoterpenoid citral synergized the electroantennogram (EAG) response of male Grapholita molesta (Busck) antennae to its main pheromone compound, Z8–12:OAc. The response to a 10-µg pheromone stimulus increased by 32, 45, 54, 71, and 94% with the addition of 0.1, 1, 10, 100, and 1,000 µg of citral, respectively. There was no detectable response to 0.1, 1, or 10 µg of citral; the response to 100 and 1,000 µg of citral was 31 and 79% of the response to 10 µg of Z8–12:OAc. In a flight tunnel, citral affected the mate-seeking behavior of males. There was a 66% reduction in the number of males orientating by flight to a virgin calling female when citral was emitted at 1,000 ng/min 1 cm downwind from the female. Pheromone and citral induced sensory adaptation in male antennae, but citral did not synergize the effect of the pheromone. The exposure of antennae to 1 ng Z8–12:OAc/m³ air, 1 ng citral/m³ air, 1 ng Z8–12:OAc + 1 ng citral/m³ air, or 1 ng Z8–12:OAc + 100 ng citral/m³ air for 15 min resulted in a similar reduction in EAG response of 47–63%. The exposure of males to these same treatments for 15 min had no effect on their ability to orientate to a virgin calling female in a flight tunnel. The potential for using citral to control G. molesta by mating disruption is discussed.
"Is it a {Qoincidence}?": {A} First Step Towards Understanding and Characterizing the {QAnon} Movement on {Voat.co}
Online fringe communities offer fertile grounds for users to seek and share paranoid ideas fueling suspicion of mainstream news, and outright conspiracy theories. Among these, the QAnon conspiracy theory emerged in 2017 on 4chan, broadly supporting the idea that powerful politicians, aristocrats, and celebrities are closely engaged in a global pedophile ring. At the same time, governments are thought to be controlled by "puppet masters," as democratically elected officials serve as a fake showroom of democracy. In this paper, we provide an empirical exploratory analysis of the QAnon community on Voat.co, a Reddit-esque news aggregator, which has recently captured the interest of the press for its toxicity and for providing a platform to QAnon followers. More precisely, we analyze a large dataset from /v/GreatAwakening, the most popular QAnon-related subverse (the Voat equivalent of a subreddit), to characterize activity and user engagement. To further understand the discourse around QAnon, we study the most popular named entities mentioned in the posts, along with the most prominent topics of discussion, which focus on US politics, Donald Trump, and world events. We also use word2vec models to identify narratives around QAnon-specific keywords, and our graph visualization shows that some QAnon-specific keywords are closely related to those from the Pizzagate conspiracy theory and to "drops" by "Q." Finally, we analyze content toxicity, finding that discussions on /v/GreatAwakening are less toxic than those in the broader Voat community.
Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board
This paper presents a dataset with over 3.3M threads and 134.5M posts from the Politically Incorrect board (/pol/) of the imageboard forum 4chan, posted over a period of almost 3.5 years (June 2016-November 2019). To the best of our knowledge, this represents the largest publicly available 4chan dataset, providing the community with an archive of posts that have been permanently deleted from 4chan and are otherwise inaccessible. We augment the data with a set of additional labels, including toxicity scores and the named entities mentioned in each post. We also present a statistical analysis of the dataset, providing an overview of what researchers interested in using it can expect, as well as a simple content analysis, shedding light on the most prominent discussion topics, the most popular entities mentioned, and the toxicity level of each post. Overall, we are confident that our work will motivate and assist researchers in studying and understanding 4chan, as well as its role on the greater Web. For instance, we hope this dataset may be used for cross-platform studies of social media, as well as being useful for other types of research, such as natural language processing. Finally, our dataset can assist qualitative work focusing on in-depth case studies of specific narratives, events, or social theories.
Mean birds: Detecting aggression and bullying on Twitter
In recent years, bullying and aggression against social media users have grown significantly, causing serious consequences to victims of all demographics. Nowadays, cyberbullying affects more than half of young social media users worldwide, who suffer from prolonged and/or coordinated digital harassment. Moreover, tools and technologies geared to understand and mitigate it are scarce and mostly ineffective. In this paper, we present a principled and scalable approach to detect bullying and aggressive behavior on Twitter. We propose a robust methodology for extracting text, user, and network-based attributes, studying the properties of bullies and aggressors, and identifying what features distinguish them from regular users. We find that bullies post less, participate in fewer online communities, and are less popular than normal users. Aggressors are relatively popular and tend to include more negativity in their posts. We evaluate our methodology using a corpus of 1.6M tweets posted over 3 months, and show that machine learning classification algorithms can accurately detect users exhibiting bullying and aggressive behavior, with over 90% AUC.
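To illustrate the general shape of such a detection setup (not the paper's actual features or corpus), the sketch below trains an off-the-shelf classifier on synthetic per-user features and reports AUC; the feature names and data are invented for demonstration.

```python
# Minimal sketch: classify users from behavioral features and report AUC.
# The four features and the synthetic labels are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-user features: posting rate, community count,
# follower count, negativity score.
X = rng.normal(size=(n, 4))
# Synthetic labels loosely tied to negativity and (low) posting rate,
# so the classifier has signal to learn.
y = (X[:, 3] - X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")
```

The real study combines text, user, and network attributes; the point here is only the pipeline shape: feature matrix in, held-out AUC out.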
Measuring #GamerGate: A Tale of Hate, Sexism, and Bullying
Over the past few years, online aggression and abusive behaviors have occurred in many different forms and on a variety of platforms. In extreme cases, these incidents have evolved into hate, discrimination, and bullying, and even materialized into real-world threats and attacks against individuals or groups. In this paper, we study the Gamergate controversy. Started in August 2014 in the online gaming world, it quickly spread across various social networking platforms, ultimately leading to many incidents of cyberbullying and cyberaggression. We focus on Twitter, presenting a measurement study of a dataset of 340k unique users and 1.6M tweets to study the properties of these users, the content they post, and how they differ from random Twitter users. We find that users involved in this "Twitter war" tend to have more friends and followers, are generally more engaged, and post tweets with more negative sentiment, less joy, and more hate than random users. We also perform preliminary measurements of how the Twitter suspension mechanism deals with such abusive behaviors. While we focus on Gamergate, our methodology for collecting and analyzing tweets related to aggressive and bullying activities is of independent interest.
"I'm a Professor, which isn't usually a dangerous job": Internet-facilitated Harassment and Its Impact on Researchers
While the Internet has dramatically increased the exposure that research can receive, it has also facilitated harassment against scholars. To understand the impact that these attacks can have on the work of researchers, we perform a series of systematic interviews with researchers, including academics, journalists, and activists, who have experienced targeted, Internet-facilitated harassment. We provide a framework for understanding the types of harassers that target researchers, the harassment that ensues, and the personal and professional impact on individuals and academic freedom. We then study the preventative and remedial strategies available, and the institutions that prevent some of these strategies from being more effective. Finally, we discuss the ethical structures that could facilitate more equitable access to participating in research without serious personal suffering.
"how over is it?" Understanding the Incel Community on YouTube
YouTube is by far the largest host of user-generated video content worldwide. Alas, the platform has also come under fire for hosting inappropriate, toxic, and hateful content. One community that has often been linked to sharing and publishing hateful and misogynistic content is the Involuntary Celibates (Incels), a loosely defined movement ostensibly focusing on men's issues. In this paper, we set out to analyze the Incel community on YouTube by focusing on this community's evolution over the last decade and understanding whether YouTube's recommendation algorithm steers users towards Incel-related videos. We collect videos shared on Incel communities within Reddit and perform a data-driven characterization of the content posted on YouTube.
Among other things, we find that the Incel community on YouTube is gaining traction and that, over the last decade, the number of Incel-related videos and comments rose substantially. We also find that users have a 6.3% chance of being suggested an Incel-related video by YouTube's recommendation algorithm within five hops when starting from a non-Incel-related video. Overall, our findings paint an alarming picture of online radicalization: not only is Incel activity increasing over time, but platforms may also play an active role in steering users towards such extreme content.
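The hop-based measurement can be illustrated with a toy random-walk estimate; the recommendation graph, the flagged set, and the walk counts below are synthetic stand-ins, not the paper's crawl.

```python
# Toy sketch: estimate the probability that a random walk over a
# recommendation graph reaches a flagged node within k hops.
import random

def hit_probability(graph, flagged, start_nodes, k=5, walks=2000, seed=0):
    """Monte Carlo estimate of P(reach a flagged node within k hops)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(walks):
        node = rng.choice(start_nodes)
        for _ in range(k):
            node = rng.choice(graph[node])  # follow one recommendation
            if node in flagged:
                hits += 1
                break
    return hits / walks

# Tiny synthetic recommendation graph: video -> list of recommended videos.
graph = {0: [1, 2], 1: [2, 3], 2: [0, 3], 3: [1, 4], 4: [0, 2]}
flagged = {4}                      # hypothetical "Incel-related" videos
prob = hit_probability(graph, flagged, start_nodes=[0, 1, 2], k=5)
print(f"P(hit within 5 hops): {prob:.2f}")
```

The actual study measures this on crawled YouTube recommendation data; the sketch only conveys what a "6.3% chance within five hops" statement operationalizes.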
Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter
State-sponsored organizations are increasingly linked to efforts to exploit social media for information warfare and for manipulating public opinion. Typically, their activities rely on a number of social network accounts they control, aka trolls, which post and interact with other users disguised as "regular" users. These accounts often use images and memes, along with textual content, to increase the engagement and credibility of their posts. In this paper, we present the first study of images shared by state-sponsored accounts, analyzing a ground-truth dataset of 1.8M images posted to Twitter by accounts controlled by the Russian Internet Research Agency. First, we analyze the content of the images as well as their posting activity. Then, using Hawkes processes, we quantify their influence on popular Web communities like Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab with respect to the dissemination of images. We find that the extensive image-posting activity of Russian trolls coincides with real-world events (e.g., the Unite the Right rally in Charlottesville), and we shed light on their targets as well as the content disseminated via images. Finally, we show that the trolls were more effective in disseminating politics-related imagery than other images.
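For readers unfamiliar with the modeling tool, the sketch below simulates a toy univariate Hawkes process via Ogata thinning; the parameters are illustrative and unrelated to the paper's fitted multivariate models.

```python
# Toy univariate Hawkes process (Ogata thinning). Each event raises the
# intensity, capturing the self/mutual excitation idea used to model how
# a post on one platform can spur further posts elsewhere.
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate event times on [0, T) with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < T:
        # Intensity at current time dominates the (decaying) future intensity.
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= T:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:  # thinning: accept with prob lam_t/lam_bar
            events.append(t)
    return events

# alpha/beta < 1 keeps the process stationary (each event spawns <1 child on average).
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100.0)
print(len(events), "events")
```

The paper fits multivariate versions of this model, where an event on one community raises the intensity on others, and uses the fitted kernels to attribute influence across platforms.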