    The Use of Social Media by UK Local Resilience Forums

    The potential uses of social media in the field of emergency preparedness, resilience and response (EPRR) are varied and interesting. The UK government has produced guidance documents for its use in the UK EPRR field, but evidence of actual use is poorly documented and appears sporadic. This paper presents the results of a survey of Local Resilience Forums (LRFs) in the UK on their use of and engagement with social media. The findings suggest that the level of application of social media strategies as emergency planning or response tools varied significantly between the LRFs. While over 90 per cent of respondents claimed that their LRF used social media as part of its strategy, most of this use was reactive or passive rather than proactive and systematic. The various strategies employed appear to be linked most strongly to local expertise and the existence of social media ‘champions’ rather than to the directives and guidance emerging from government.

    A flexible framework for assessing the quality of crowdsourced data

    Papers, presentations and posters from the 17th AGILE Conference on Geographic Information Science, "Connecting a Digital Europe through Location and Place", held at the Universitat Jaume I, 3–6 June 2014. Crowdsourcing as a means of data collection has produced previously unavailable data assets and enriched existing ones, but its quality can be highly variable. This presents several challenges to potential end users who are concerned with the validation and quality assurance of the collected data. The focus of this paper is being able to quantify the uncertainty, define and measure the different quality elements associated with crowdsourced data, and introduce means for dynamically assessing and improving it. We argue that the required quality assurance and quality control depend on the studied domain, the style of crowdsourcing and the goals of the study. We describe a framework for qualifying geolocated data collected from non-authoritative sources that enables assessment for specific case studies by creating a workflow supported by an ontological description of a range of choices. The top levels of this ontology describe seven pillars of quality checks and assessments that present a range of techniques to qualify, improve or reject data. Our generic operational framework allows this ontology to be extended to specific applied domains. This facilitates quality assurance in real time, or in post-processing, to validate data and produce quality metadata, and enables a system that dynamically optimises the usability value of the captured data. A case study illustrates this framework.
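The qualify/improve/reject workflow described in this abstract can be sketched as a pipeline of pluggable quality checks attaching metadata to each record. The `Observation` fields and the two example checks below are illustrative assumptions, not the paper's actual seven pillars:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """A geolocated record from a non-authoritative source."""
    lat: float
    lon: float
    value: str
    quality: dict = field(default_factory=dict)  # quality metadata accumulates here

def check_location(obs: Observation) -> bool:
    """Qualify: flag coordinates outside valid geographic ranges."""
    ok = -90.0 <= obs.lat <= 90.0 and -180.0 <= obs.lon <= 180.0
    obs.quality["location_valid"] = ok
    return ok

def check_value(obs: Observation) -> bool:
    """Qualify: reject records with an empty payload."""
    ok = bool(obs.value.strip())
    obs.quality["value_present"] = ok
    return ok

# Extend this list with domain-specific checks, mirroring how the
# ontology of choices would be specialised for an applied domain.
CHECKS = [check_location, check_value]

def assess(observations):
    """Run every check; accept records passing all, keep quality metadata on both."""
    accepted, rejected = [], []
    for obs in observations:
        (accepted if all(check(obs) for check in CHECKS) else rejected).append(obs)
    return accepted, rejected

good = Observation(51.5, -0.1, "flooded road")
bad = Observation(123.0, 200.0, "")
accepted, rejected = assess([good, bad])
```

The per-record `quality` dictionary stands in for the quality metadata the framework would produce for downstream users.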

    Knowledge Creation and Sharing with Web 2.0 Tools for Teaching and Learning Roles in So-called University 2.0

    Universities have long been accepted as major social and cultural institutions. They have carried those responsibilities for centuries through research, teaching, learning and scholarly publishing. These institutions operate in various organisational forms, such as ‘brick and mortar’ (traditional campus-based), ‘click’ (distance/online), and ‘brick and click’ (traditional campus-based combined with distance/online) types. This study aims to explore the new opportunities and developments that Web 2.0 (Social Web) technologies bring to a university's teaching and learning roles. These innovative communication platforms encourage people to share their thoughts and experiences and to collaborate through interactive social media. Knowledge, as a strategic organisational asset, is created and distributed through new forms of group interaction. Universities can therefore use Web 2.0 services in accordance with their organisational missions and strategies.

    Suicide by Cop: A Secondary Analysis using Open-Source News Data

    Suicide by cop (SBC) has recently become a recognised phenomenon in the United States, Canada, Australia and parts of the United Kingdom as another way to complete suicide (Patton & Fremouw, 2015). SBC refers to situations in which an individual creates a scenario where law enforcement agencies are called and must use deadly force to protect themselves and the people around the individual who is attempting to take their own life (Mohandie, Meloy, & Collins, 2009). The term suicide by cop was originally coined by Karl Harris, a police officer and psychologist, in 1983 and is the term most commonly used today (The untold motives behind suicide-by-cop, 2015). Before the term gained attention, however, researchers used labels such as victim-precipitated homicide, law enforcement-assisted suicide, and legal intervention deaths (Patton & Fremouw, 2016). Just as there are varying ways to refer to SBC, there are also varying definitions that researchers have used to classify it. Geberth (1993) suggested that “officers confront an individual who has a death wish and intends to force the police into a situation where the only alternative is for them to kill him. The motivation of people bent on self-destruction ranges from the clinical to bizarre” (Geberth, 1993: p. 105). Another example comes from Hutson et al. (1998), who describe SBC as “a term used by law enforcement officer to describe an incident in which a suicidal individual intentionally engages in life-threatening and criminal behavior with a lethal weapon or what appears to be lethal weapon toward law enforcement officers or civilians to specifically provoke officers to shoot the suicidal individual in self-defense or to protect civilians” (Hutson et al., 1998: p. 665).
    This project's definition combines lethal and non-lethal use of force and the presence or absence of a weapon, in order to capture a wider range of SBC incidents when collecting news stories that meet the above qualifications.

    VGI quality control

    This paper presents a framework for considering quality control of volunteered geographic information (VGI). Different issues need to be considered during the conception, acquisition and post-acquisition phases of VGI creation. These include collecting metadata on the volunteer, providing suitable training, giving corrective feedback during the mapping process and using control data, among others. Two examples of VGI data collection are then considered with respect to this quality control framework: VGI data collection by National Mapping Agencies, and the most recent Geo-Wiki tool, a game called Cropland Capture. Although good practices are beginning to emerge, there is still a need for the development and sharing of best practice, especially if VGI is to be integrated with authoritative map products or used for calibration and/or validation of land cover in the future.

    A Machine Learning Approach for Classifying Textual Data in Crowdsourcing

    Crowdsourcing represents an innovative approach that allows companies to engage a diverse network of people over the internet and use their collective creativity, expertise, or workforce to complete tasks that were previously performed by dedicated employees or contractors. However, the process of reviewing and filtering the large volume of solutions, ideas, or feedback submitted by a crowd remains a challenge. Identifying valuable inputs and separating them from low-quality contributions that cannot be used by the companies is time-consuming and cost-intensive. In this study, we build upon the principles of text mining and machine learning to partially automate this process. Our results show that it is possible to explain and predict the quality of crowdsourced contributions based on a set of textual features. We use these textual features to train and evaluate a classification algorithm capable of automatically filtering textual contributions in crowdsourcing.
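A minimal sketch of this idea: extract simple textual features from each contribution and classify by nearest class centroid. The features, the nearest-centroid classifier and the example contributions are illustrative assumptions, not the paper's actual feature set or model:

```python
import math

def features(text: str):
    """Extract simple textual features: length, lexical diversity, word length."""
    words = text.split()
    n = len(words)
    return [
        math.log1p(len(text)),                              # overall length
        len({w.lower() for w in words}) / n if n else 0.0,  # lexical diversity
        sum(len(w) for w in words) / n if n else 0.0,       # average word length
    ]

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def train(texts, labels):
    """Compute one feature centroid per class label."""
    by_class = {}
    for text, label in zip(texts, labels):
        by_class.setdefault(label, []).append(features(text))
    return {label: centroid(vecs) for label, vecs in by_class.items()}

def predict(model, text):
    """Assign the class whose centroid is nearest in feature space."""
    f = features(text)
    return min(model, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(f, model[lbl])))

# Hypothetical labelled contributions: 1 = usable idea, 0 = low quality.
texts = [
    "Add a dark mode that follows the system theme and remembers the choice",
    "Integrate the app with calendar export so meetings sync automatically",
    "nice app", "good", "asdf", "cool idea i guess",
]
labels = [1, 1, 0, 0, 0, 0]

model = train(texts, labels)
kept = [t for t in ["Sync tasks with the external calendar and export matching events",
                    "ok"] if predict(model, t) == 1]
```

A real system would use richer features (e.g. TF-IDF n-grams) and a trained classifier, but the filter-by-predicted-quality loop at the end mirrors the automated triage the abstract describes.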

    Components and Functions of Crowdsourcing Systems – A Systematic Literature Review

    Many organizations are now starting to introduce crowdsourcing as a new business model for outsourcing tasks, traditionally performed by a small group of people, to an undefined large workforce. While the utilization of crowdsourcing offers many advantages, the development of the required system carries some risks, which can be reduced by establishing a profound theoretical foundation. Thus, this article strives to gain a better understanding of what crowdsourcing systems are and what typical design aspects are considered in their development. The author conducted a systematic literature review in the domain of crowdsourcing systems. As a result, 17 definitions of crowdsourcing systems were found and categorized into four perspectives: the organizational, the technical, the functional, and the human-centric. In the second part of the results, the author derived and presented components and functions that are implemented in a crowdsourcing system.

    Citizen surveillance for environmental monitoring:combining the efforts of citizen science and crowdsourcing in a quantitative data framework

    Citizen science and crowdsourcing have emerged as methods of collecting data for surveillance and/or monitoring activities, and can be gathered under the overarching term citizen surveillance. The discipline, however, still struggles to be widely accepted in the scientific community, mainly because these activities are not embedded in a quantitative framework. This results in an ongoing discussion about how to analyse these data and make useful inferences from them. Considering the data collection process, we illustrate how citizen surveillance can be classified according to the nature of the underlying observation process, measured along two dimensions: the degree of observer reporting intention and the degree of control over observer detection effort. By classifying the observation process along these dimensions we distinguish between crowdsourcing, unstructured citizen science and structured citizen science. This classification helps determine the appropriate data processing and statistical treatment for making inference. Using our framework, it is apparent that published studies are overwhelmingly associated with structured citizen science, for which well-developed statistical methods exist. In contrast, methods for making useful inference from purely crowdsourced data remain under development, and the challenges of accounting for an unknown observation process are considerable. Our quantitative framework for citizen surveillance calls for an integration of citizen science and crowdsourcing and provides a way forward for solving the statistical challenges inherent to citizen-sourced data.
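The two-dimensional classification above can be expressed as a small decision rule. The boolean encoding of the two dimensions is an illustrative simplification of the continuous "degree" axes the abstract describes:

```python
def classify_observation_process(reporting_intention: bool, controlled_effort: bool) -> str:
    """Map the two dimensions of the observation process to a data-collection style.

    reporting_intention: observers deliberately set out to report the phenomenon.
    controlled_effort:   where, when and how long observers search is controlled.
    """
    if not reporting_intention:
        # Data are opportunistic by-products, e.g. mining geotagged social media.
        return "crowdsourcing"
    if not controlled_effort:
        # Volunteers report intentionally but search wherever and whenever they like.
        return "unstructured citizen science"
    # Intentional reporting under a designed protocol (fixed sites, timed visits).
    return "structured citizen science"
```

As the abstract notes, the first category is the hardest to analyse because the observation process behind the data is unknown; the rule makes explicit which statistical regime a given data stream falls into.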