Quantifying How Hateful Communities Radicalize Online Users
While online social media offers a way for ignored or stifled voices to be
heard, it also allows users a platform to spread hateful speech. Such speech
usually originates in fringe communities, yet it can spill over into mainstream
channels. In this paper, we measure the impact of joining fringe hateful
communities in terms of hate speech propagated to the rest of the social
network. We leverage data from Reddit to assess the effect of joining one type
of echo chamber: a digital community of like-minded users exhibiting hateful
behavior. We measure members' usage of hate speech outside the studied
community before and after they become active participants. Using Interrupted
Time Series (ITS) analysis as a causal inference method, we gauge the spillover
effect, whereby hateful language from within a certain community spreads
outside it, using the level of out-of-community hate word usage as a proxy for
learned hate. We investigate four different Reddit
sub-communities (subreddits) covering three areas of hate speech: racism,
misogyny and fat-shaming. In all three cases we find an increase in hate speech
outside the originating community, implying that joining such a community leads
to a spread of hate speech throughout the platform. Moreover, users are found
to pick up this new hateful speech for months after initially joining the
community. We show that the harmful speech does not remain contained within the
community. Our results provide new evidence of the harmful effects of echo
chambers and the potential benefit of moderating them to reduce adoption of
hateful speech.
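The interrupted time series design described above can be sketched as a segmented regression: fit a baseline level and trend, plus a level shift and trend change at the moment the user joins the community. The series, column layout, and intervention point below are hypothetical stand-ins, not the paper's actual data or model.

```python
import numpy as np

def its_effect(y, t0):
    """Segmented-regression sketch of an interrupted time series.

    y  : per-period out-of-community hate-word rates (hypothetical data)
    t0 : index of the intervention (the user joins the community)
    Returns (level change at t0, change in slope after t0).
    """
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    # Design matrix: intercept, baseline trend, level shift, post-trend change
    X = np.column_stack([np.ones_like(t), t, post, post * (t - t0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2], beta[3]

# Synthetic series: flat baseline of 0.1, jumping to 0.4 after "joining"
rng = np.random.default_rng(0)
y = np.concatenate([np.full(20, 0.1), np.full(20, 0.4)]) + rng.normal(0, 0.01, 40)
level, slope = its_effect(y, 20)  # level should recover the 0.3 jump
```

A positive estimated level or slope change after joining would correspond to the spillover effect the abstract reports.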
No Love Among Haters: Negative Interactions Reduce Hate Community Engagement
While online hate groups pose significant risks to the health of online
platforms and safety of marginalized groups, little is known about what causes
users to become active in hate groups and the effect of social interactions on
furthering their engagement. We address this gap by first developing tools to
find hate communities within Reddit, then augmenting the 11 extracted
subreddits with 14 known hateful subreddits (25 in total). Using causal inference methods,
we evaluate the effect of replies on engagement in hateful subreddits by
comparing users who receive replies to their first comment (the treatment) to
equivalent control users who do not. We find users who receive replies are less
likely to become engaged in hateful subreddits than users who do not, while the
opposite effect is observed for a matched sample of similar-sized non-hateful
subreddits. Using the Google Perspective API and VADER, we discover that
hateful community first-repliers are more toxic, negative, and attack the
posters more often than non-hateful first-repliers. In addition, we uncover a
negative correlation between engagement and attacks or toxicity of
first-repliers. We simulate the cumulative engagement of hateful and
non-hateful subreddits under the counterfactual scenario of friendly
first-replies, finding that attacks dramatically reduce engagement in hateful
subreddits. These results counter-intuitively imply that, although
under-moderated communities allow hate to fester, the resulting environment is
such that direct social interaction does not encourage further participation,
thus endogenously constraining the harmful role that these communities could
play as recruitment venues for antisocial beliefs.
Comment: 13 pages, 5 figures, 2 tables
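The treated-versus-control comparison described above can be sketched as matching each treated user (one who received a reply to their first comment) to the most similar control user and averaging the outcome differences. The covariates, records, and one-nearest-neighbour rule below are hypothetical simplifications of the paper's matching procedure.

```python
import numpy as np

def matched_effect(treated, control):
    """Nearest-neighbour matching on covariates, then outcome comparison.

    treated, control: lists of (covariates, outcome) pairs, where
    covariates might be (account age, prior activity) and outcome is
    later engagement in the subreddit (hypothetical schema).
    Returns the average treatment effect on the treated.
    """
    diffs = []
    for cov_t, y_t in treated:
        # Match each treated user to the closest control user
        dists = [np.linalg.norm(np.array(cov_t) - np.array(cov_c))
                 for cov_c, _ in control]
        _, y_c = control[int(np.argmin(dists))]
        diffs.append(y_t - y_c)
    return float(np.mean(diffs))

# Toy data: users who got replies end up LESS engaged than their matches
treated = [((1.0, 5.0), 2.0), ((2.0, 3.0), 1.0)]
control = [((1.1, 5.2), 4.0), ((2.1, 2.9), 3.0)]
att = matched_effect(treated, control)
```

A negative average difference, as in this toy example, corresponds to the abstract's finding that replies in hateful subreddits reduce subsequent engagement.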
Auditing Elon Musk's Impact on Hate Speech and Bots
On October 27th, 2022, Elon Musk purchased Twitter, becoming its new CEO and
firing many top executives in the process. Musk listed fewer restrictions on
content moderation and removal of spam bots among his goals for the platform.
Given findings of prior research on moderation and hate speech in online
communities, the promise of less strict content moderation poses the concern
that hate will rise on Twitter. We examine the levels of hate speech and
prevalence of bots before and after Musk's acquisition of the platform. We find
that hate speech rose dramatically upon Musk purchasing Twitter and the
prevalence of most types of bots increased, while the prevalence of astroturf
bots decreased.
Comment: 3 figures, 1 table
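The before/after comparison of hate-speech prevalence can be sketched as a two-proportion z-test on tweet samples drawn from each period; the counts below are invented for illustration, not the study's measurements.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for the difference between two proportions, e.g.
    hateful tweets out of sampled tweets before (x1/n1) versus after
    (x2/n2) the acquisition. Counts here are hypothetical."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical: 120 hateful tweets in 10k sampled before, 210 in 10k after
z = two_proportion_z(120, 10_000, 210, 10_000)
```

A large positive z, as here, would indicate a statistically significant rise in prevalence between the two periods.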
Massive Multi-Agent Data-Driven Simulations of the GitHub Ecosystem
Simulating and predicting planetary-scale techno-social systems poses heavy
computational and modeling challenges. The DARPA SocialSim program set the
challenge to model the evolution of GitHub, a large collaborative
software-development ecosystem, using massive multi-agent simulations. We
describe our best performing models and our agent-based simulation framework,
which we are currently extending to allow simulating other planetary-scale
techno-social systems. The challenge problem measured participants' ability,
given 30 months of meta-data on user activity on GitHub, to predict the next
months' activity as measured by a broad range of metrics applied to ground
truth, using agent-based simulation. The challenge required scaling to a
simulation of roughly 3 million agents producing a combined 30 million actions,
acting on 6 million repositories with commodity hardware. It was also important
to use the data optimally to predict each agent's next moves. We describe the
agent framework and the data analysis employed by one of the winning teams in
the challenge. Six different agent models were tested based on a variety of
machine learning and statistical methods. While no single method proved the
most accurate on every metric, the broadly most successful sampled from a
stationary probability distribution of actions and repositories for each agent.
Two reasons for the success of these agents were their use of a distinct
characterization of each agent, and that GitHub users change their behavior
relatively slowly.
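The most successful agent model described above, sampling each agent's next action from a stationary empirical distribution built from that agent's own history, can be sketched as follows; the event schema and field names are hypothetical, not the SocialSim framework's actual interface.

```python
import random
from collections import Counter

class HistoryAgent:
    """Samples next events from an agent's own stationary empirical
    distribution of (action, repository) pairs, giving each agent the
    distinct characterization the abstract credits for accuracy."""

    def __init__(self, history, seed=0):
        # history: past events, e.g. [("push", "repoA"), ("fork", "repoB")]
        counts = Counter(history)
        total = sum(counts.values())
        self.pairs = list(counts)
        self.weights = [c / total for c in counts.values()]
        self.rng = random.Random(seed)

    def step(self):
        # Stationarity assumption: the next action is drawn from the
        # same distribution as the observed history (users change slowly)
        return self.rng.choices(self.pairs, weights=self.weights)[0]

# An agent whose history is 90% pushes to repoA, 10% forks of repoB
agent = HistoryAgent([("push", "repoA")] * 9 + [("fork", "repoB")])
sample = [agent.step() for _ in range(1000)]
```

Because GitHub users change behavior slowly, this per-agent stationary distribution is a reasonable predictor of near-future activity, which is the design choice the abstract highlights.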
COVID-19 Vaccine Hesitancy on Social Media: Building a Public Twitter Data Set of Antivaccine Content, Vaccine Misinformation, and Conspiracies
Background: False claims about COVID-19 vaccines can undermine public trust in ongoing vaccination campaigns, posing a threat to global public health. Misinformation originating from various sources has been spreading on the web since the beginning of the COVID-19 pandemic. Antivaccine activists have also begun to use platforms such as Twitter to promote their views. To properly understand the phenomenon of vaccine hesitancy through the lens of social media, it is of great importance to gather the relevant data.
Objective: In this paper, we describe a data set of Twitter posts and Twitter accounts that publicly exhibit a strong antivaccine stance. The data set is made available to the research community via our AvaxTweets data set GitHub repository. We characterize the collected accounts in terms of prominent hashtags, shared news sources, and most likely political leaning.
Methods: We started the ongoing data collection on October 18, 2020, leveraging the Twitter streaming application programming interface (API) to follow a set of specific antivaccine-related keywords. Then, we collected the historical tweets of the set of accounts that engaged in spreading antivaccination narratives between October 2020 and December 2020, leveraging the Academic Track Twitter API. The political leaning of the accounts was estimated by measuring the political bias of the media outlets they shared.
Results: We gathered two curated Twitter data collections and made them publicly available: (1) a streaming keyword–centered data collection with more than 1.8 million tweets, and (2) a historical account–level data collection with more than 135 million tweets. The accounts engaged in the antivaccination narratives lean toward the right (conservative) side of the political spectrum. Vaccine hesitancy is fueled by misinformation originating from websites with questionable credibility.
Conclusions: Vaccine-related misinformation on social media may exacerbate vaccine hesitancy, hampering progress toward vaccine-induced herd immunity, and could potentially increase the number of infections related to new COVID-19 variants. For these reasons, understanding vaccine hesitancy through the lens of social media is of paramount importance. Because data access is the first obstacle to attaining this goal, we published a data set that can be used to study antivaccine misinformation on social media and enable a better understanding of vaccine hesitancy.
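The leaning-estimation step in the Methods, scoring an account by the political bias of the media outlets it shares, can be sketched as a simple average over a bias lookup table. The domains and scores below are hypothetical; in practice they would come from a media-bias dataset.

```python
# Hypothetical media-bias scores: -1 = left, 0 = center, +1 = right
BIAS = {
    "leftnews.example": -0.8,
    "centerwire.example": 0.0,
    "rightpost.example": 0.9,
}

def account_leaning(shared_domains):
    """Average the bias of rated outlets shared by one account;
    returns None if the account shared no rated outlet."""
    scores = [BIAS[d] for d in shared_domains if d in BIAS]
    return sum(scores) / len(scores) if scores else None

# An account that mostly shares a right-leaning outlet scores positive
leaning = account_leaning(
    ["rightpost.example", "rightpost.example", "centerwire.example"])
```

Averaging over many shared links smooths out individual outlets, so the sign of the score gives the account's most likely leaning, which is how the data set characterizes accounts as leaning right.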
Characterizing social media manipulation in the 2020 U.S. presidential election
Democracies are predicated upon the ability to carry out fair elections, free from any form of interference or manipulation. Social media have reportedly been used to distort public opinion nearing election events in the United States and beyond. With over 240 million election-related tweets recorded between 20 June and 9 September 2020, in this study we chart the landscape of social media manipulation in the context of the upcoming 3 November 2020 U.S. presidential election. We focus on characterizing two salient dimensions of social media manipulation, namely (i) automation (e.g., the prevalence of bots), and (ii) distortion (e.g., manipulation of narratives, injection of conspiracies or rumors). Despite being outnumbered by several orders of magnitude, just a few thousand bots generated spikes of conversation around real-world political events comparable in volume with the activity of humans. We discover that bots also exacerbate the consumption of content produced by users who share their political views, worsening the issue of political echo chambers. Furthermore, we characterize coordinated efforts carried out by Russia, China, and other countries. Finally, we draw a clear connection between bots, hyper-partisan media outlets, and conspiracy groups, suggesting the presence of systematic efforts to distort political narratives and propagate disinformation. Our findings may have impactful implications, shedding light on different forms of social media manipulation that may, altogether, ultimately pose a risk to the integrity of the election.