$1.00 per RT #BostonMarathon #PrayForBoston: analyzing fake content on Twitter
Online social media has emerged as one of the prominent channels for the dissemination of information during real-world events. Malicious content posted online during such events can result in damage, chaos, and monetary losses in the real world. We analyzed one such medium, Twitter, for content generated during the Boston Marathon blasts of April 15th, 2013. A large amount of fake content and many malicious profiles originated on the Twitter network during this event. The aim of this work is to perform an in-depth characterization of the factors that influenced malicious content and profiles becoming viral. Our results showed that 29% of the most viral content on Twitter during the Boston crisis was rumors and fake content, 51% was generic opinions and comments, and the rest was true information. We found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content. Next, we used a regression prediction model to verify that the overall impact of all users who propagate the fake content at a given time can be used to estimate the growth of that content in the future. Many malicious accounts created on Twitter during the Boston event were later suspended by Twitter. We identified over six thousand such user profiles and observed that the creation of such profiles surged considerably right after the blasts occurred. We also identified closed community structure and star formation in the interaction network of these suspended profiles amongst themselves.
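The abstract above mentions a regression prediction model in which the aggregate impact of users who have propagated a rumor so far estimates the rumor's future growth. As a rough illustration only — the paper's actual features, data, and model are not given in this abstract, and the numbers below are invented — a one-variable least-squares fit might look like:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical training data: cumulative follower reach of propagating
# users at time t (in millions) vs. retweets of the rumor in the next hour.
reach = [0.5, 1.2, 2.0, 3.1, 4.0]
growth = [120, 300, 480, 760, 950]
a, b = fit_line(reach, growth)
predicted = a + b * 5.0  # estimated next-hour growth at 5M aggregate reach
```

The single predictor here stands in for whatever "overall impact" measure the authors used; a real replication would fit on their feature set.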
Crowdsourced Rumour Identification During Emergencies
When a significant event occurs, many social media users leverage platforms such as Twitter to track that event. Moreover, emergency response agencies are increasingly looking to social media as a source of real-time information about such events. However, false information and rumours are often spread during such events, which can influence public opinion and limit the usefulness of social media for emergency management. In this paper, we present an initial study into rumour identification during emergencies using crowdsourcing. In particular, through an analysis of three tweet datasets relating to emergency events from 2014, we propose a taxonomy of tweets relating to rumours. We then perform a crowdsourced labeling experiment to determine whether crowd assessors can identify rumour-related tweets and where such labeling can fail. Our results show that, overall, agreement over the produced tweet labels was high (0.7634 Fleiss' kappa), indicating that crowd-based rumour labeling is possible. However, not all tweets are equally difficult to assess. Indeed, we show that tweets containing disputed/controversial information tend to be some of the most difficult to identify.
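The 0.7634 Fleiss' kappa reported above measures chance-corrected agreement among multiple assessors. For readers unfamiliar with the statistic, a minimal sketch of how such a figure is computed (not the paper's own code) from a matrix of category counts per item:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for an items-by-categories count matrix.

    ratings[i][j] = number of raters who assigned item i to category j;
    every item must be rated by the same total number of raters.
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Mean per-item agreement P_bar
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # Chance agreement P_e from the marginal category proportions
    totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement yields kappa = 1, while agreement at chance level yields 0; values around 0.76, as reported here, are conventionally read as substantial agreement.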
Information spreading during emergencies and anomalous events
The most critical time for information to spread is in the aftermath of a serious emergency, crisis, or disaster. Individuals affected by such situations can now turn to an array of communication channels, from mobile phone calls and text messages to social media posts, when alerting social ties. These channels drastically improve the speed of information in a time-sensitive event and provide extant records of human dynamics during and after the event. Retrospective analysis of such anomalous events provides researchers with a class of "found experiments" that may be used to better understand social spreading. In this chapter, we study information spreading due to a number of emergency events, including the Boston Marathon Bombing and a plane crash at a western European airport. We also contrast the different information that may be gleaned from social media data compared with mobile phone data, and we estimate the rate of anomalous events in a mobile phone dataset using a proposed anomaly detection method.
Comment: 19 pages, 11 figures
Characterizing Attention Cascades in WhatsApp Groups
An important political and social phenomenon discussed in several countries, such as India and Brazil, is the use of WhatsApp to spread false or misleading content. However, little is known about the information dissemination process in WhatsApp groups. Attention affects the dissemination of information in WhatsApp groups, determining which topics or subjects are more attractive to participants of a group. In this paper, we characterize and analyze how attention propagates among the participants of a WhatsApp group. An attention cascade begins when a user asserts a topic in a message to the group, which could include written text, photos, or links to articles online. Others then propagate the information by responding to it. We analyzed attention cascades in more than 1.7 million messages posted in 120 groups over one year. Our analysis focused on the structural and temporal evolution of attention cascades as well as on the behavior of users who participate in them. We found specific characteristics in cascades associated with groups that discuss political subjects and false information. For instance, we observe that cascades with false information tend to be deeper, reach more users, and last longer in political groups than in non-political groups.
Comment: Accepted as a full paper at the 11th International ACM Web Science Conference (WebSci 2019). Please cite the WebSci version.
Social media mining under the COVID-19 context: Progress, challenges, and opportunities
Social media platforms allow users worldwide to create and share information, forging vast sensing networks that
allow information on certain topics to be collected, stored, mined, and analyzed in a rapid manner. During the
COVID-19 pandemic, extensive social media mining efforts have been undertaken to tackle COVID-19 challenges
from various perspectives. This review summarizes the progress of social media data mining studies in the COVID-19 context and categorizes them into six major domains: early warning and detection, human mobility monitoring, communication and information conveying, public attitudes and emotions, infodemic and misinformation, and hatred and violence. We further document essential features of publicly available COVID-19-related social media data archives that will benefit research communities in conducting replicable and reproducible studies. In addition, we discuss seven challenges in social media analytics associated with their potential
impacts on derived COVID-19 findings, followed by our visions for the possible paths forward in regard to social
media-based COVID-19 investigations. This review serves as a valuable reference that recaps social media mining
efforts in COVID-19 related studies and provides future directions along which the information harnessed from
social media can be used to address public health emergencies.
The Ethical Risks of Analyzing Crisis Events on Social Media with Machine Learning
Social media platforms provide a continuous stream of real-time news
regarding crisis events on a global scale. Several machine learning methods
utilize the crowd-sourced data for the automated detection of crises and the
characterization of their precursors and aftermaths. Early detection and
localization of crisis-related events can help save lives and economies. Yet,
the applied automation methods introduce ethical risks worthy of investigation
- especially given their high-stakes societal context. This work identifies and critically examines ethical risk factors of social media analyses of crisis events, focusing on machine learning methods. We aim to sensitize researchers and practitioners to the ethical pitfalls and to promote fairer and more reliable designs.
Comment: Accepted to D2R2'22: International Workshop on Data-driven Resilience Research