
    Assessing Relevance of Tweets for Risk Communication

    Although Twitter is used for emergency management activities, the relevance of tweets during a hazard event is still open to debate. In this study, six different computational (i.e. Natural Language Processing) and spatiotemporal analytical approaches were implemented to assess the relevance of risk information extracted from tweets obtained during the 2013 Colorado flood event. Primarily, tweets containing information about the flooding event and its impacts were analysed. Examination of the relationships between tweet volume and content on the one hand, and precipitation amount, damage extent, and official reports on the other, revealed that relevant tweets provided information about the event and its impacts rather than the other risk information that the public expects to receive via alert messages. However, only 14% of the geo-tagged tweets and only 0.06% of the total fire hose tweets were found to be relevant to the event. By providing insight into the quality of social media data and its usefulness to emergency management activities, this study contributes to the literature on the quality of big data. Future research in this area will focus on assessing the reliability of relevant tweets for disaster-related situational awareness.
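    The relevance filtering described above can be illustrated with a minimal sketch: flag a tweet as relevant if it mentions an event or impact term, then compute the relevant share of a sample. The keyword list and example tweets are hypothetical illustrations, not the study's actual lexicon or data.

```python
# Hypothetical keyword lexicon for event/impact relevance (illustrative only;
# the study used six NLP and spatiotemporal approaches, not this simple filter).
FLOOD_KEYWORDS = {"flood", "flooding", "rainfall", "evacuation", "damage"}

def is_relevant(text: str) -> bool:
    """Mark a tweet relevant if any token matches an event/impact keyword."""
    tokens = (tok.strip(".,!?#") for tok in text.lower().split())
    return any(tok in FLOOD_KEYWORDS for tok in tokens)

# Toy sample: two of three tweets mention the event or its impacts.
tweets = [
    "Severe flooding on Boulder Creek, roads closed",
    "Great coffee this morning!",
    "Rainfall totals keep rising, evacuation under way",
]
relevant = [t for t in tweets if is_relevant(t)]
share = len(relevant) / len(tweets)
```

    A production filter would combine such lexical matching with the geo-tagging and temporal checks the study describes, which is how very small relevant fractions (14% of geo-tagged, 0.06% of fire hose tweets) are measured.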

    IMEXT: a method and system to extract geolocated images from Tweets - Analysis of a case study

    Francalanci, Chiara; Guglielmino, Paolo; Montalcini, Matteo; Scalia, Gabriele; Pernici, Barbara

    Construction of a disaster-support dynamic knowledge chatbot

    This dissertation is aimed at devising a disaster-support chatbot system with the capacity to enhance citizens' and first responders' resilience in disaster scenarios, by gathering and processing information from crowd-sensing sources and informing its users with relevant knowledge about detected disasters and how to deal with them. This system is composed of two artifacts that interact via a mediator graph-structured knowledge base. Our first artifact is a crowd-sourced disaster-related knowledge extraction system, which uses social media as a means to exploit humans behaving as sensors. It consists of a pipeline of natural language processing (NLP) tools and a mixture of convolutional neural networks (CNNs) and lexicon-based models for classifying and extracting disaster-related information. It then outputs the extracted information to the knowledge graph (KG) for presenting connected insights. The second artifact, the disaster-support chatbot, uses a state-of-the-art Dual Intent Entity Transformer (DIET) architecture to classify user intents, and makes use of several dialogue policies for managing user conversations, as well as storing relevant information to be used in later dialogue turns. To generate responses, the chatbot uses local and official disaster-related knowledge and queries the knowledge graph for the dynamic knowledge extracted by the first artifact. According to the achieved results, our devised system is on par with the state of the art in disaster extraction systems. Both artifacts have also been validated by field specialists, who have considered them valuable assets in disaster management.
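    The lexicon-based side of the classification pipeline can be sketched as follows. This is a minimal illustration assuming toy keyword lexicons; the dissertation's actual system combines CNNs with lexicon-based models and a DIET transformer for intent classification, none of which is reproduced here.

```python
# Toy lexicon-based disaster classifier (hypothetical lexicons, for
# illustration only; not the dissertation's CNN/DIET pipeline).
LEXICONS = {
    "flood": {"flood", "flooding", "inundation"},
    "wildfire": {"wildfire", "fire", "smoke"},
    "earthquake": {"earthquake", "tremor", "aftershock"},
}

def classify(text):
    """Return the disaster class with the largest lexicon overlap, or None."""
    tokens = {tok.strip(".,!?") for tok in text.lower().split()}
    scores = {label: len(tokens & lex) for label, lex in LEXICONS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

    Extracted labels like these would then be written into the knowledge graph, from which the chatbot retrieves dynamic knowledge when generating responses.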

    Review article: Detection of actionable tweets in crisis events

    Messages on social media can be an important source of information during crisis situations. They can frequently provide details about developments much faster than traditional sources (e.g., official news) and can offer personal perspectives on events, such as opinions or specific needs. In the future, these messages may also serve to assess disaster risks. One challenge for utilizing social media in crisis situations is the reliable detection of relevant messages in a flood of data. Researchers have started to look into this problem in recent years, beginning with crowdsourced methods. Lately, approaches have shifted towards automatic analysis of messages. A major stumbling block here is the question of exactly which messages are considered relevant or informative, as this depends on the specific usage scenario and the role of the user in that scenario. In this review article, we present methods for the automatic detection of crisis-related messages (tweets) on Twitter. We start by showing the varying definitions of importance and relevance relating to disasters, leading into the concept of use-case-dependent actionability that has recently become more popular and is the focal point of this review. This is followed by an overview of existing crisis-related social media data sets for evaluation and training purposes. We then compare approaches for solving the detection problem based (1) on filtering by characteristics like keywords and location, (2) on crowdsourcing, and (3) on machine learning techniques. We analyze the suitability and limitations of these approaches with regard to actionability. We then point out particular challenges, such as the linguistic issues concerning social media data. Finally, we suggest future avenues of research and show connections to related tasks, such as the subsequent semantic classification of tweets.
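    Approach (1), filtering by keywords and location, can be sketched in a few lines. The keyword set and the bounding box below are illustrative assumptions, not values from the review; real deployments would draw both from the specific crisis at hand.

```python
# Sketch of detection approach (1): filter tweets by crisis keywords and a
# geographic bounding box. Keywords and bbox are hypothetical examples.
CRISIS_KEYWORDS = {"earthquake", "flood", "help", "rescue"}
BBOX = (-105.7, 39.9, -105.0, 40.3)  # (min_lon, min_lat, max_lon, max_lat)

def in_bbox(lon, lat, bbox=BBOX):
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

def candidate_actionable(tweet):
    """A tweet passes the filter if it matches a keyword AND lies in the bbox."""
    text_hit = any(k in tweet["text"].lower() for k in CRISIS_KEYWORDS)
    coords = tweet.get("coords")  # (lon, lat) or None for non-geo-tagged tweets
    geo_hit = coords is not None and in_bbox(*coords)
    return text_hit and geo_hit
```

    As the review notes, such filters are only a first pass: whether a matching tweet is truly actionable still depends on the use case and the user's role, which is where the crowdsourcing and machine learning approaches come in.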

    Authenticity of Geo-Location and Place Name in Tweets

    The place name and geo-coordinates of a tweet are supposed to represent the possible location of the user at the time of posting. However, our analysis over a large collection of tweets indicates that these fields may not give the correct location of the user at posting time. Our investigation reveals that tweets posted through third-party applications such as Instagram or Swarmapp contain the geo-coordinates of a user-specified location, not the user's current location. Moreover, any place name can be entered by a user to be displayed on a tweet, and it may not be the same as his or her exact location. Our analysis revealed that around 12% of tweets contain place names that differ from their real location. The findings of this research can serve as a caution when designing location-based services using social media.
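    The mismatch the paper measures can be illustrated with a consistency check between a tweet's declared place name and its geo-coordinates. The bounding boxes below form a toy lookup table invented for this sketch; a real system would use a gazetteer or reverse geocoder.

```python
# Hypothetical check: does the tweet's place name agree with its coordinates?
# Bounding boxes are illustrative only (lon/lat order, as in GeoJSON).
PLACE_BBOXES = {
    "Boulder, CO": (-105.30, 39.96, -105.18, 40.09),
    "Denver, CO": (-105.11, 39.61, -104.60, 39.91),
}

def place_matches_coords(place, lon, lat):
    """True if (lon, lat) falls inside the bbox registered for the place name."""
    bbox = PLACE_BBOXES.get(place)
    if bbox is None:
        return False  # unknown place name: cannot verify
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat
```

    Applied at scale, a check of this shape is how one would quantify the paper's finding that around 12% of tweets carry a place name inconsistent with their actual coordinates.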