577 research outputs found

    $1.00 per RT #BostonMarathon #PrayForBoston: analyzing fake content on Twitter

    Online social media has emerged as one of the prominent channels for the dissemination of information during real-world events. Malicious content posted online during such events can cause damage, chaos, and monetary losses in the real world. We analyzed one such medium, Twitter, for content generated during the Boston Marathon blasts that occurred on April 15th, 2013. A large amount of fake content and many malicious profiles originated on the Twitter network during this event. The aim of this work is to perform an in-depth characterization of the factors that influenced malicious content and profiles becoming viral. Our results showed that 29% of the most viral content on Twitter during the Boston crisis was rumors and fake content, while 51% was generic opinions and comments, and the rest was true information. We found that a large number of users with high social reputation and verified accounts were responsible for spreading the fake content. Next, we used a regression prediction model to verify that the overall impact of all users who propagate the fake content at a given time can be used to estimate the growth of that content in the future. Many malicious accounts created on Twitter during the Boston event were later suspended by Twitter. We identified over six thousand such user profiles and observed that the creation of such profiles surged considerably right after the blasts occurred. We also identified closed community structures and star formations in the interaction network of these suspended profiles amongst themselves.
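    The abstract's growth estimate can be read as a simple supervised regression: the aggregate reach of the users who have already propagated a piece of fake content is the predictor, and the content's subsequent spread is the target. The sketch below only illustrates that idea under assumed data (synthetic numbers, illustrative variable names); it is not the authors' model or dataset.

    # Illustrative sketch, not the paper's implementation: regress a rumor's
    # future growth on the aggregate impact of its propagators so far,
    # approximated here by their summed follower counts (synthetic data).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # One row per observation window: summed follower count of all users
    # who had retweeted the rumor by that point in time.
    aggregate_impact = np.array([[1_200], [8_500], [23_000], [51_000], [90_000]])
    # Additional retweets observed in the following window.
    future_growth = np.array([15, 110, 340, 760, 1_400])

    model = LinearRegression().fit(aggregate_impact, future_growth)

    # Estimate growth for a rumor whose current propagators sum to 40,000 followers.
    print(model.predict(np.array([[40_000]])))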

    Crisis informatics: Introduction

    The Structure of Citizen Bystander Offering Behaviors Immediately After the Boston Marathon Bombing

    In April of 2013, two pressure cooker bombs detonated near the finish line of the Boston Marathon. The resulting crowdsourced criminal investigation has been subject to intense scrutiny. What has not been discussed are the offering behaviors of Twitter users immediately following the detonations. The hashtag #BostonHelp offers a case study of what emergent, computer-mediated groups offer victims of a crisis event. Through creative appropriation of at-hand technologies (CAAT), this emergent group organized online offers of, and information about, tangible resources on the ground. In this case, #BostonHelp participants harnessed blogs, social media, Google Forms, and pre-existing services to organize help for those in need. The resulting structure stabilized and became a symbol of the response itself. This case study offers an analysis of the structure created by computer-mediated crowds. We conclude with a discussion of trying to design, or even detect, these behaviors at the start of a crisis response.

    With Help from Afar: Cross-Local Communication in an Online COVID-19 Pandemic Community

    Crisis informatics research has examined geographically bounded crises, such as natural or man-made disasters, identifying the critical role of local and hyper-local information focused on one geographic area in crisis communication. The COVID-19 pandemic represents an understudied kind of crisis that simultaneously hits locales across the globe, engendering an emergent form of crisis communication, which we term cross-local communication. Cross-local communication is the exchange of crisis information between geographically dispersed locales to facilitate local crisis response. To unpack this notion, we present a qualitative study of an online migrant community of overseas Taiwanese who supported fellow Taiwanese from afar. We detail four distinctive types of cross-local communication: situational updates, risk communication, medical consultation, and coordination. We discuss how the current pandemic situation brings new understanding to the crisis informatics and online health community literature, and what role digital technologies could play in supporting cross-local communication.

    Priceless Tweets! A Study on Twitter Messages Posted During Crisis: Black Saturday

    Twitter has been regarded as an outstanding social media application due to its immediacy in communication. Twitter has experienced exponential growth and has been used for various purposes, including crisis communication. However, there have been few empirical studies of Twitter messages (tweets) posted during crises. In this paper, we analyse the tweets that were posted during Australia’s worst fire disaster, Black Saturday. We propose a new coding scheme for tweets during crisis and propose further research into how Twitter can be used as an alternative communication tool during crisis to support official communications, in particular by reflecting ground-level conditions. Further, we find that tweets made during Black Saturday are laden with actionable factual information, which contrasts with earlier claims that tweets are of no value, being mere random personal notes.

    An Empirical Methodology for Detecting and Prioritizing Needs during Crisis Events

    In times of crisis, identifying essential needs is a crucial step toward providing appropriate resources and services to affected entities. Social media platforms such as Twitter contain a vast amount of information about the general public's needs. However, the sparsity of this information, as well as the amount of noisy content, presents a challenge for practitioners trying to effectively identify the information shared on these platforms. In this study, we propose two novel methods for two distinct but related needs-detection tasks: the identification of 1) a list of needed resources ranked by priority, and 2) sentences that specify who-needs-what resources. We evaluated our methods on a set of tweets about the COVID-19 crisis. For task 1 (detecting top needs), we compared our results against two given lists of resources and achieved 64% precision. For task 2 (detecting who-needs-what), we evaluated our results on a set of 1,000 annotated tweets and achieved a 68% F1-score.
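    The reported precision and F1 figures follow the standard definitions (precision = correct extractions / all extractions, recall = correct extractions / all gold annotations, F1 = their harmonic mean). The toy example below shows how such a who-needs-what evaluation can be computed against annotated tweets; the tweets, labels, and exact-match criterion are hypothetical, not the paper's data or code.

    # Illustrative only: score predicted (who, what) pairs against a small
    # hand-annotated sample, reporting precision, recall, and F1.
    annotated = {  # hypothetical gold labels: tweet id -> (who, what)
        1: ("hospital staff", "N95 masks"),
        2: ("shelter residents", "bottled water"),
        3: ("clinic", "ventilators"),
    }
    predicted = {  # hypothetical system output
        1: ("hospital staff", "N95 masks"),
        2: ("shelter residents", "food"),
        4: ("volunteers", "gloves"),
    }

    true_pos = sum(1 for k, v in predicted.items() if annotated.get(k) == v)
    precision = true_pos / len(predicted)
    recall = true_pos / len(annotated)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")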