
    Social Media for Cities, Counties and Communities

    Social media (e.g., Twitter, Facebook, Flickr, YouTube) and other tools and services with user-generated content have made a staggering amount of information (and misinformation) available. Some government officials seek to leverage these resources to improve services and communication with citizens, especially during crises and emergencies. Yet the sheer volume of social data streams generates substantial noise that must be filtered. Detecting meaningful patterns or trends in the stream of messages and information flow offers the potential to rapidly identify issues of concern for emergency management. Similarly, monitoring these patterns and themes over time could give officials insights into the perceptions and mood of the community that cannot be collected through traditional methods (e.g., phone or mail surveys) because of their substantial cost, especially in light of shrinking government budgets at all levels. We conducted a pilot study in 2010 with government officials in Arlington, Virginia (and, to a lesser extent, representatives of groups from Alexandria and Fairfax, Virginia) to contribute to a general understanding of the use of social media by government officials as well as community organizations, businesses, and the public. We were especially interested in gaining greater insight into social media use in crisis situations, whether severe or fairly routine (such as traffic or weather disruptions).
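
    The pattern detection this abstract alludes to can be illustrated in a few lines of code. The following is a minimal, hypothetical sketch (not from the study itself): it flags terms whose frequency spikes between two consecutive time windows of messages, a crude signal of an emerging issue. Function and variable names are illustrative.

```python
from collections import Counter

def emerging_terms(window_now, window_prev, min_count=5, ratio=3.0):
    """Flag terms whose frequency in the current window has spiked
    relative to the previous window: a crude emerging-issue signal."""
    now = Counter(t for msg in window_now for t in msg.lower().split())
    prev = Counter(t for msg in window_prev for t in msg.lower().split())
    spiking = [t for t, c in now.items()
               if c >= min_count and c / (prev[t] + 1) >= ratio]
    return sorted(spiking, key=lambda t: now[t], reverse=True)

# Hypothetical usage: each window is a list of raw message strings
# gathered over a fixed interval (e.g., 15 minutes).
# print(emerging_terms(messages_this_window, messages_last_window))
```

    A real deployment would add tokenization, stopword filtering, and statistical baselines, but the windowed-comparison idea is the core of this kind of trend monitoring.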

    Engineering Crowdsourced Stream Processing Systems

    A crowdsourced stream processing (CSP) system is a system that incorporates crowdsourced tasks in the processing of a data stream. This can be seen as enabling crowdsourcing work to be applied to a sample of large-scale data at high speed, or, equivalently, enabling stream processing to employ human intelligence. It also leads to a substantial expansion of the capabilities of data processing systems. Engineering a CSP system requires combining human and machine computation elements; from a general systems theory perspective, this means taking into account inherited as well as emergent properties of both. In this paper, we position CSP systems within a broader taxonomy, outline a series of design principles and evaluation metrics, present an extensible framework for their design, and describe several design patterns. We showcase the capabilities of CSP systems through a case study that applies our proposed framework to the design and analysis of a real system (AIDR) that classifies social media messages during time-critical crisis events. Results show that, compared to a pure stream processing system, AIDR achieves higher data classification accuracy, while compared to a pure crowdsourcing solution, it makes better use of human workers by requiring much less manual effort.
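
    The trade-off the paper measures, machine speed versus human accuracy, rests on a routing pattern that is easy to sketch. The Python below illustrates that general hybrid pattern under assumed interfaces; the `predict`/`update` methods and the queue-based hand-off are illustrative conventions, not AIDR's actual API.

```python
import queue

class ConfidenceRouter:
    """Sketch of a hybrid crowd/machine pattern: the machine classifier
    handles high-confidence items, while low-confidence items are queued
    for human workers."""

    def __init__(self, classifier, threshold=0.8):
        self.classifier = classifier
        self.threshold = threshold
        self.crowd_queue = queue.Queue()   # items awaiting human labels

    def process(self, item):
        label, confidence = self.classifier.predict(item)
        if confidence >= self.threshold:
            return label                   # machine-labeled, no human effort
        self.crowd_queue.put(item)         # defer to crowd workers
        return None                        # label arrives asynchronously

    def ingest_crowd_label(self, item, label):
        # Crowd labels can also feed back to retrain the classifier:
        # the active-learning loop that reduces future manual effort.
        self.classifier.update(item, label)
```

    The confidence threshold is the dial between the two pure designs: at 0 the system is pure crowdsourcing, at 1 it is pure stream processing.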

    Networked volunteering during the 2013 Sardinian floods

    The article describes how ordinary citizens used Twitter as an emergency-management tool during the heavy floods that struck Sardinia, Italy, in November 2013. The case study constitutes an example of digital volunteering in the aftermath of a disaster event. The article applies the connective action framework (Bennett & Segerberg, 2012) for a deeper understanding of the dynamics of self-organized disaster communication activities on social media. Drawing on a dataset of 93,091 tweets that used the hashtag #allertameteoSAR (weather alert in Sardinia), the analysis focuses on: 1) the roles and patterns of influence among the main actors; and 2) the strategies for peer ‘curation’ and sharing of disaster-recovery-oriented communication. The article highlights the role of Twitter celebrities and engaged ordinary users as digital volunteers and explains how they succeeded in activating bottom-up, disaster-relief-oriented communication.
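
    The first research question, roles and patterns of influence, is often approached by counting interactions in the tweet dataset. As a hypothetical illustration only (the article's own analysis is qualitative and framework-driven), a crude influence proxy can be computed by tallying @-mentions:

```python
import re
from collections import Counter

def top_mentioned(tweets, k=10):
    """Rank accounts by how often they are @-mentioned; retweets of the
    form 'RT @user: ...' are counted too. `tweets` is assumed to be an
    iterable of raw tweet text strings."""
    mentions = Counter()
    for text in tweets:
        mentions.update(h.lower() for h in re.findall(r"@(\w+)", text))
    return mentions.most_common(k)
```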

    How can Big Data from Social Media be used in Emergency Management? A case study of Twitter during the Paris attacks

    In recent years, social media have impacted emergency management and disaster response in numerous ways. Access to live, continuous updates from the public brings new opportunities for detecting, coordinating, and aiding in an emergency situation. This thesis presents research on social media use during an emergency. The goal of the study is to discover how data from social media can be used for emergency management and to determine whether existing analysis services can prove useful for the same purpose. To that end, a dataset of tweets posted during the 2015 Paris attacks was collected and analyzed with three different tools: the IBM Watson Discovery service, Microsoft Azure Text Analytics, and a custom-built keyword frequency script. The results indicate that data from social media can be used for emergency management, in the form of detecting and surfacing important information. Additional testing with larger datasets, along with interviews with emergency responders and social media users, is needed to fully demonstrate its usefulness.
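
    The thesis's keyword frequency script is not reproduced in the abstract, so the following is only a plausible reconstruction under common assumptions: tweets stored one JSON object per line with a `text` field (a typical Twitter export format), simple tokenization, and a small stopword list. It is a sketch of the idea, not the thesis's actual code.

```python
import json
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "rt"}

def keyword_frequencies(jsonl_path, top_n=25):
    """Count keyword frequencies over a file of tweets stored one JSON
    object per line with a 'text' field."""
    counts = Counter()
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            text = json.loads(line).get("text", "")
            tokens = re.findall(r"[#\w']+", text.lower())
            counts.update(t for t in tokens
                          if t not in STOPWORDS and len(t) > 2)
    return counts.most_common(top_n)

# Hypothetical usage:
# for word, count in keyword_frequencies("paris_attacks_tweets.jsonl"):
#     print(count, word)
```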

    Crowdsourcing and the folksonomy of emergency response: the construction of a mediated subject

    This article explores the role of digital platforms in the involvement of citizens in disaster response, relying on an analysis of metadata and of the structure of classification. It adopts the analytical apparatus of Cultural-Historical Activity Theory (Vygotsky, Leontiev, Engeström) and the notion of governmentality (Foucault) to conduct a critical comparative analysis of how crowdsourcing platforms construct the relationship between citizens and disasters. The article identifies three regimes of classification (informing, alerting, and engagement) and explores the structures of classification for mobilizing citizens’ resources. The notion of governmentality allows us to identify the struggle over the structure of classification as one between institutional actors interested in controlling citizens’ resources and actors interested in citizen engagement and in synergy between independent and institutional actors as part of the disaster response. The article proposes the notion of a ‘folksonomy of activity’, identifying situations where citizens are able to participate in defining their relationship with a disaster by participating in classification. It also discusses the visibility and generativity of classification as part of the citizen–disaster (subject–object) relationship.

    Finetuning Pre-Trained Language Models for Sentiment Classification of COVID-19 Tweets

    It is common practice today for the public to use micro-blogging and social networking platforms, predominantly Twitter, to share opinions, ideas, news, and information about many aspects of life. Twitter has also become a popular channel for information sharing during pandemic outbreaks and disaster events. The world has suffered economic crises since COVID-19 cases began to rise rapidly in January 2020; according to statistics from the tracking website Worldometer [1], the virus has killed more than 800 thousand people since its discovery. Researchers around the globe are studying this new virus from many perspectives, and one such area is analysing micro-blogging sites like Twitter to understand public sentiment. Traditional sentiment analysis methods require complex feature engineering. Many embedding representations have since emerged, but their context-independent nature limits their representational power in rich contexts, degrading performance on NLP tasks. Transfer learning has gained popularity, and pretrained language models such as BERT (Bidirectional Encoder Representations from Transformers) and XLNet (a generalised autoregressive model) have begun to overtake traditional machine learning and deep learning models such as Random Forests, Naïve Bayes, and Convolutional Neural Networks. Despite the strong performance of pretrained language models, finetuning a large pretrained model on a downstream task with few training instances is prone to degrading the model's performance. This research builds on Mixout (Lee, 2020), a regularization technique that stochastically mixes the parameters of a vanilla network and a dropout network. The aim of this work is to understand the performance variations of finetuning BERT and XLNet base models on COVID-19 tweets using Mixout regularization for sentiment classification.
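
    For readers unfamiliar with Mixout, the core operation is compact enough to sketch. Assuming PyTorch tensors for the current and pretrained weights, a minimal functional version looks like the following; the paper applies the operation inside the model's linear layers, and wrapping it in an nn.Module is omitted here for brevity.

```python
import torch

def mixout(weight, pretrained, p=0.7):
    """Mixout (Lee, 2020), minimal functional sketch: at each finetuning
    step, each parameter is swapped back to its pretrained value with
    probability p, then the result is rescaled so that its expectation
    equals the current finetuned weight."""
    if p == 0.0:
        return weight
    mask = torch.bernoulli(torch.full_like(weight, p))  # 1 -> use pretrained
    mixed = mask * pretrained + (1.0 - mask) * weight
    return (mixed - p * pretrained) / (1.0 - p)         # E[output] = weight
```

    Because the rescaling keeps the expectation equal to the finetuned weight, Mixout acts as a regularizer that keeps the model close to its pretrained parameters, which is what stabilizes finetuning on small labeled datasets such as annotated COVID-19 tweets.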