60 research outputs found

    Crowdsourced Rumour Identification During Emergencies

    When a significant event occurs, many social media users leverage platforms such as Twitter to track that event. Moreover, emergency response agencies are increasingly looking to social media as a source of real-time information about such events. However, false information and rumours are often spread during such events, which can influence public opinion and limit the usefulness of social media for emergency management. In this paper, we present an initial study into rumour identification during emergencies using crowdsourcing. In particular, through an analysis of three tweet datasets relating to emergency events from 2014, we propose a taxonomy of tweets relating to rumours. We then perform a crowdsourced labeling experiment to determine whether crowd assessors can identify rumour-related tweets and where such labeling can fail. Our results show that, overall, agreement over the tweet labels produced was high (0.7634 Fleiss' Kappa), indicating that crowd-based rumour labeling is possible. However, not all tweets are of equal difficulty to assess. Indeed, we show that tweets containing disputed/controversial information tend to be some of the most difficult to identify.
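    The 0.7634 agreement figure above is a Fleiss' Kappa score, which measures how much a fixed panel of assessors agrees beyond what chance alone would produce. As a minimal sketch of how such a score is computed (not the paper's code; the label scheme and counts below are invented for illustration):

# Minimal sketch of Fleiss' kappa for crowd labels (hypothetical data, not the
# paper's). ratings[i][j] counts how many assessors gave tweet i label j, with
# the same number of assessors per tweet.
from typing import Sequence

def fleiss_kappa(ratings: Sequence[Sequence[int]]) -> float:
    n_items = len(ratings)
    n_raters = sum(ratings[0])                      # assumed constant per item
    n_cats = len(ratings[0])

    # Observed per-item agreement P_i, then its mean P_bar
    p_items = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]
    p_bar = sum(p_items) / n_items

    # Expected chance agreement P_e from the marginal label proportions
    p_j = [sum(row[j] for row in ratings) / (n_items * n_raters) for j in range(n_cats)]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 4 tweets, 5 assessors, labels = (rumour, non-rumour, unsure)
print(fleiss_kappa([[5, 0, 0], [4, 1, 0], [0, 5, 0], [2, 2, 1]]))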

    Prediction and Analysis of Rumour's Impact on Social Media


    Humanitarianism 2.0

    It is difficult to overstate the importance of trust in a world where global networks facilitate the constant flow of contradictory information. The search for verifiable leads and trusted sources is a central facet of daily communication and is becoming more so as our connections with one another become more decontextualised, geographically distant and, increasingly, entirely virtual. The swell of internet connection rates across the world has meant an explosion of interaction and has allowed new opportunities for global collective action. Whilst countless words have been written exploring the dangers of this global network and the threats that “new media” represents to social structures and moral fabrics, this collection seeks to explore the role that new social technologies are playing in the world of humanitarianism and conflict response.

    Crisis Crowdsourcing in Government: Characterising efforts by North American Agencies to Inform Emergency Management Operations

    Crowdsourcing has proven to be a useful communication platform during and in the direct aftermath of a disastrous event. While previous research in crisis crowdsourcing demonstrates its wide adoption for aiding response efforts, that research is generally limited to adoption by non-government organizations and members of the general public, not government agencies. There is a gap in understanding the state of crowdsourcing by governments for emergency management. Additionally, there is a noticeable focus on the application of crowdsourcing in the response and recovery phases of a given disaster, with less attention paid to mitigation and preparedness. This research aims to classify the use of government crisis crowdsourcing across all phases of the disaster management cycle in Canada and the USA and to identify the barriers and constraints faced by Canadian government agencies when adopting crisis crowdsourcing and social media for emergency management. Semi-structured interviews with 22 government officials at various levels of government in Canada and the USA reveal that crisis crowdsourced information has a place in all phases of the disaster management cycle, though direct crowdsourcing has yet to be applied in the pre-disaster phases. Participating federal agencies appear to be using crowdsourced information for mitigation and preparedness efforts, while lower-tiered agencies are using crowdsourcing for direct response and recovery. A more in-depth analysis of the barriers and constraints faced by participating Canadian agencies looking to adopt crisis crowdsourcing or social media for emergency management reveals three general areas of concern that may be hindering crisis crowdsourcing efforts in Canada: organizational factors, demographic factors, and hazard risk. Based on these three areas of concern, a readiness assessment scheme is presented to allow agencies to pinpoint the most prevalent barriers to their crowdsourcing efforts and to formulate plans to address them.
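    Purely as an illustration of how a readiness assessment along these lines might be operationalised (the checklist items and scoring below are invented; the thesis's actual scheme is not reproduced here), an agency could be scored against the three areas of concern:

# Hypothetical sketch only: scoring an agency's crowdsourcing readiness against
# the three general areas of concern identified in the study. Item names and
# the simple proportion-based score are invented for illustration.
BARRIER_CHECKLIST = {
    "organizational": ["dedicated social media staff", "data-use policy in place"],
    "demographic":    ["broad internet access in service area"],
    "hazard_risk":    ["recurring hazards that justify monitoring"],
}

def readiness_report(answers: dict[str, dict[str, bool]]) -> dict[str, float]:
    """Return the share of checklist items met per area, exposing likely barriers."""
    return {
        area: sum(answers[area].get(item, False) for item in items) / len(items)
        for area, items in BARRIER_CHECKLIST.items()
    }

example = {
    "organizational": {"dedicated social media staff": True, "data-use policy in place": False},
    "demographic":    {"broad internet access in service area": True},
    "hazard_risk":    {"recurring hazards that justify monitoring": False},
}
print(readiness_report(example))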

    Social networking, social media and complex emergencies: issues paper

    Andrew Skuse and Tait Brimacomb

    A review of volunteered geographic information for disaster management

    The immediacy of locational information requirements and the importance of data currency for natural disaster events highlight the value of volunteered geographic information (VGI) in all stages of disaster management, including prevention, preparation, response, and recovery. The practice of private citizens generating online geospatial data presents new opportunities for the creation and dissemination of disaster-related geographic data from a dense network of intelligent observers. VGI technologies enable rapid sharing of diverse geographic information for disaster management at a fraction of the resource costs associated with traditional data collection and dissemination, but they also present new challenges. These include a lack of data quality assurance and issues surrounding data management, liability, security, and the digital divide. There is a growing need for researchers to explore and understand the implications of these data and data practices for disaster management. In this article, we review the current state of knowledge in this emerging field and present recommendations for future research. Significantly, we note that further research is warranted in the pre-event phases of disaster management, where VGI may present an opportunity to connect and engage individuals in disaster preparation and strengthen community resilience to potential disaster events. Our investigation of VGI for disaster management provides broader insight into key challenges and impacts of VGI on geospatial data practices and the wider field of geographical science.

    State of the art 2015: a literature review of social media intelligence capabilities for counter-terrorism

    Overview: This paper is a review of how information and insight can be drawn from open social media sources. It focuses on the specific research techniques that have emerged, the capabilities they provide, the possible insights they offer, and the ethical and legal questions they raise. These techniques are considered relevant and valuable in so far as they can help to maintain public safety by preventing terrorism, preparing for it, protecting the public from it and pursuing its perpetrators. The report also considers how far this can be achieved against the backdrop of radically changing technology and public attitudes towards surveillance. This is an updated version of a 2013 report on the same subject, State of the Art. Since 2013, there have been significant changes in social media, how it is used by terrorist groups, and the methods being developed to make sense of it. The paper is structured as follows: Part 1 is an overview of social media use, focused on how it is used by groups of interest to those involved in counter-terrorism; it includes new sections on trends in social media platforms and on Islamic State (IS). Part 2 provides an introduction to the key approaches of social media intelligence (henceforth ‘SOCMINT’) for counter-terrorism. Part 3 sets out a series of SOCMINT techniques; for each technique, the capabilities and insights it offers are described, the validity and reliability of the method are considered, and its possible application to counter-terrorism work is explored. Part 4 outlines a number of important legal, ethical and practical considerations when undertaking SOCMINT work.

    The Web of False Information: Rumors, Fake News, Hoaxes, Clickbait, and Various Other Shenanigans

    A new era of Information Warfare has arrived. Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false information campaigns with targeted manipulation of public opinion on specific topics. These false information campaigns can have dire consequences for the public: shifting their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one and needs increased public awareness, as well as immediate attention from law enforcement agencies, public institutions, and, in particular, the research community. In this paper, we take a step in this direction by providing a typology of the Web's false information ecosystem, comprising various types of false information, actors, and their motives. We report a comprehensive overview of existing research on the false information ecosystem by identifying several lines of work: 1) how the public perceives false information; 2) understanding the propagation of false information; 3) detecting and containing false information on the Web; and 4) false information on the political stage. In this work, we pay particular attention to political false information as: 1) it can have dire consequences for the community (e.g., when election results are manipulated) and 2) previous work shows that this type of false information propagates faster and further than other types of false information. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false information dissemination on the Web.

    Geospatial crowdsourced data fitness analysis for spatial data infrastructure based disaster management actions

    The reporting of disasters has changed from official media reports to citizen reporters who are at the disaster scene. This kind of crowd-based reporting, related to disasters or any other events, is often identified as 'Crowdsourced Data' (CSD). CSD are freely and widely available thanks to current technological advancements. The quality of CSD is often problematic, as it is created by citizens of varying skills and backgrounds. CSD is generally considered unstructured, and its quality remains poorly defined. Moreover, CSD's location availability and the quality of any available locations may be incomplete. Traditional data quality assessment methods and parameters are also often incompatible with CSD because of its unstructured and undocumented nature and missing metadata. Although other research has identified credibility and relevance as possible CSD quality assessment indicators, the available assessment methods for these indicators are still immature. In the 2011 Australian floods, citizens and disaster management administrators used the Ushahidi Crowdmap platform and the Twitter social media platform to communicate extensive flood-related information, including hazards, evacuations, help services, road closures and property damage. This research designed a CSD quality assessment framework and tested the quality of the 2011 Australian floods' Ushahidi Crowdmap and Twitter data. In particular, it explored location availability and location quality assessment, semantic extraction of hidden location toponyms, and the analysis of the credibility and relevance of reports. The research was conducted using a Design Science (DS) research method, which is often utilised in Information Science (IS) based research. The location availability assessment compared the locations in the Ushahidi Crowdmap and Twitter data against three reference datasets: Google Maps, OpenStreetMap (OSM) and the Queensland Department of Natural Resources and Mines' (QDNRM) road data. Missing locations were semantically extracted using Natural Language Processing (NLP) and gazetteer lookup techniques. The credibility of the Ushahidi Crowdmap dataset was assessed using a naive Bayesian Network (BN) model commonly utilised in spam email detection. CSD relevance was assessed by adapting Geographic Information Retrieval (GIR) relevance assessment techniques, which are also utilised in the IT sector. Thematic and geographic relevance were assessed using a Term Frequency-Inverse Document Frequency Vector Space Model (TF-IDF VSM) and NLP based on semantic gazetteers. Results of the CSD location comparison showed that the combined use of non-authoritative and authoritative data improved location determination. The semantic location analysis indicated some improvement in the location availability of the tweets and Crowdmap data; however, the quality of the new locations was still uncertain. The credibility analysis revealed that spam email detection approaches are feasible for CSD credibility detection; however, it was critical to train the model in a controlled environment using structured training, including modified training samples. The use of GIR techniques for CSD relevance analysis provided promising results. A separate relevance-ranked list of the same CSD data was prepared through manual analysis. The results revealed that the two lists generally agreed, which indicated the system's potential to analyse relevance in a similar way to humans. This research showed that CSD fitness analysis can potentially improve the accuracy, reliability and currency of CSD and may be utilised to fill information gaps in authoritative sources. The integrated and autonomous CSD qualification framework presented provides a guide for flood disaster first responders and could be adapted to support other forms of emergencies.
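    As an illustration of the TF-IDF vector space relevance ranking mentioned above (a minimal sketch with invented reports and query; it uses scikit-learn and cosine similarity rather than the thesis's own implementation, and does not cover the gazetteer-based geographic relevance step):

# Sketch of thematic relevance ranking for crowdsourced reports using a
# TF-IDF vector space model and cosine similarity. Reports and query are
# hypothetical; scikit-learn is assumed to be installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "Bruce Highway closed north of Gympie due to floodwater",
    "Sandbags available at the community hall from 9am",
    "Great deals on used cars this weekend only",            # likely irrelevant
    "Evacuation centre open at the showgrounds, pets welcome",
]
query = "flood road closure evacuation help"

vectoriser = TfidfVectorizer(stop_words="english")
doc_vectors = vectoriser.fit_transform(reports)              # one row per report
query_vector = vectoriser.transform([query])

# Rank reports by cosine similarity to the query, highest first
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, text in sorted(zip(scores, reports), reverse=True):
    print(f"{score:.3f}  {text}")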