
    Towards better social crisis data with HERMES: Hybrid sensing for EmeRgency ManagEment System

    People involved in mass emergencies increasingly publish information-rich content in online social networks (OSNs), thus acting as a distributed and resilient network of human sensors. In this work, we present HERMES, a system designed to enrich the information spontaneously disclosed by OSN users in the aftermath of disasters. HERMES leverages a mixed data collection strategy, called hybrid crowdsensing, and state-of-the-art AI techniques. Evaluated in real-world emergencies, HERMES was shown to increase: (i) the amount of available damage information; (ii) the density (up to 7x) and the variety (up to 18x) of the retrieved geographic information; (iii) the geographic coverage (up to 30%) and granularity…
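
    The abstract does not detail HERMES's interfaces, but the hybrid-crowdsensing idea it names can be sketched: passively collect spontaneous keyword-matched posts, then actively ask authors of non-geo-tagged posts for missing details. Below is a minimal sketch of that loop; the Post type and both stubbed functions are hypothetical placeholders, not HERMES's real API.

```python
# Minimal sketch of hybrid crowdsensing: passive keyword collection plus
# active follow-up questions. All names here are illustrative placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author: str
    text: str
    geo: Optional[tuple] = None  # (lat, lon) if the post is geo-tagged

def fetch_keyword_posts(keywords):
    # Stub: a real system would query the OSN's search/streaming API here.
    return [Post("user1", "Bridge collapsed near the river"),
            Post("user2", "Strong shaking downtown", geo=(45.07, 7.69))]

def ask_followup(author, question):
    # Stub: a real system would post a public reply to the author.
    print(f"@{author} {question}")

def hybrid_crowdsensing(keywords):
    enriched = []
    for post in fetch_keyword_posts(keywords):
        if post.geo is None:  # active phase: solicit what passive sensing missed
            ask_followup(post.author, "Where did this happen exactly?")
        enriched.append(post)
    return enriched

print(len(hybrid_crowdsensing(["earthquake", "collapse"])))
```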

    Comparison of different machine learning techniques on location extraction by utilizing geo-tagged tweets: A case study

    In emergencies, Twitter is an important platform for gaining situational awareness in real time. Information about Twitter users' locations is therefore fundamental to understanding the effects of a disaster, but location extraction is a challenging task: most Twitter users do not share their location in their tweets. Various methods have been proposed for location extraction, drawing on fields such as statistics and machine learning. This study utilizes geo-tagged tweets to demonstrate the importance of location in disaster management across three cases. Tweets are obtained using the "earthquake" keyword to determine the location of Twitter users, and are evaluated with the Latent Dirichlet Allocation (LDA) topic model and sentiment analysis through machine learning classification algorithms: Multinomial and Gaussian Naïve Bayes, Support Vector Machine (SVM), Decision Tree, Random Forest, Extra Trees, Neural Network, k Nearest Neighbor (kNN), Stochastic Gradient Descent (SGD), and Adaptive Boosting (AdaBoost). In total, 10 machine learning algorithms are applied to sentiment analysis of location-specific disaster-related tweets, with the aim of enabling a fast and correct response in a disaster situation. The effectiveness of each algorithm is evaluated in order to identify the most suitable one, and topic extraction via LDA is used to comprehend the situation after a disaster. The results from the three cases indicate that the Multinomial Naïve Bayes and Extra Trees algorithms give the best results, with F-measure values over 80%. The study aims to provide a quick response to earthquakes by applying the aforementioned techniques. © 2020 Elsevier Ltd.
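
    The pipeline shape this study describes (LDA for topics, several classifiers compared by F-measure) maps directly onto scikit-learn. The sketch below is illustrative only: the tweets, labels, and hyperparameters are placeholders, not the paper's data or settings, and only the two best-reported classifiers are shown.

```python
# Minimal sketch of the LDA-plus-classification pipeline (scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

tweets = ["earthquake felt in the city center", "no damage here, all safe",
          "buildings shaking, people on the streets", "minor tremor, nothing serious"]
labels = [1, 0, 1, 0]  # placeholder sentiment labels, e.g. 1 = urgent/negative

# Topic extraction with LDA to summarize what is being discussed.
counts = CountVectorizer().fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Sentiment classification; the paper compares 10 classifiers and reports
# Multinomial Naive Bayes and Extra Trees as best (F-measure over 80%).
X = TfidfVectorizer().fit_transform(tweets)
for clf in (MultinomialNB(), ExtraTreesClassifier(random_state=0)):
    scores = cross_val_score(clf, X, labels, cv=2, scoring="f1")
    print(type(clf).__name__, scores.mean())
```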

    Using Physical and Social Sensors in Real-Time Data Streaming for Natural Hazard Monitoring and Response

    Technological breakthroughs in computing over the last few decades have resulted in important advances in natural hazards analysis. In particular, integration of a wide variety of information sources, including observations from spatially-referenced physical sensors and new social media sources, enables better estimates of real-time hazard. The main goal of this work is to utilize innovative streaming algorithms for improved real-time seismic hazard analysis by integrating different data sources and processing tools into cloud applications. In streaming algorithms, a sequence of items from physical and social sensors can be processed in as little as one pass, with no need to store the data locally. Massive data volumes can be analyzed in near-real time with reasonable limits on storage space, an important advantage for natural hazard analysis. Seismic hazard maps are used by policymakers to set earthquake-resistant construction standards, by insurance companies to set insurance rates, and by civil engineers to estimate stability and damage potential. This research first focuses on improving probabilistic seismic hazard map production. The result is a series of maps for different frequency bands at significantly increased resolution with much lower latency time, including a range of high-resolution sensitivity tests. Second, a method is developed for real-time earthquake intensity estimation using joint streaming analysis from physical and social sensors. Automatically calculated intensity estimates from physical sensors such as seismometers use empirical relationships between ground motion and intensity, while those from social sensors employ questionnaires that evaluate ground shaking levels based on personal observations. Neither is always sufficiently precise and/or timely. Results demonstrate that joint processing can significantly reduce the response time to a damaging earthquake and estimate preliminary intensity levels during the first ten minutes after an event. The combination of social media and network sensor data, in conjunction with innovative computing algorithms, provides a new paradigm for real-time earthquake detection, facilitating rapid and inexpensive risk reduction. In particular, streaming algorithms are an efficient method that addresses three major problems in hazard estimation by improving resolution, decreasing processing latency to near real-time standards, and providing more accurate results through the integration of multiple data sets.
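
    The one-pass streaming property described above can be illustrated with a short sketch: observations from physical and social sensors are folded into a running intensity estimate with constant memory, never storing the stream. The weights and the ground-motion-to-intensity conversion below are illustrative assumptions, not the paper's empirical relationships.

```python
# Minimal one-pass streaming sketch: fold physical and social observations
# into a running weighted intensity estimate with O(1) memory.
def streaming_intensity(observations):
    weighted_sum, weight_total = 0.0, 0.0
    for source, value in observations:  # single pass over the stream
        if source == "seismometer":
            intensity, w = 2.0 * value, 3.0  # placeholder conversion; trusted more
        else:  # "social": questionnaire-style shaking report on a 1..10 scale
            intensity, w = value, 1.0        # noisier, so weighted lower
        weighted_sum += w * intensity
        weight_total += w
        yield weighted_sum / weight_total    # current estimate after each item

stream = [("seismometer", 2.4), ("social", 6), ("social", 5), ("seismometer", 2.9)]
for estimate in streaming_intensity(stream):
    print(round(estimate, 2))
```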

    Real-Time Social Network Data Mining For Predicting The Path For A Disaster

    Traditional communication channels such as news media cannot provide spontaneous information about disasters the way social networks such as Twitter can. The present research work proposes a framework that mines real-time disaster data from Twitter to predict the path a disaster such as a tornado will take. Twitter users act as sensors that provide useful information about the disaster by posting first-hand experiences, warnings, or the location of a disaster. The steps involved in the framework are: data collection, data preprocessing, geo-locating the tweets, data filtering, and extrapolation of the disaster curve for prediction of susceptible locations. The framework is validated by analyzing past events. It has the potential to be developed into a full-fledged system to predict disasters and warn people about them; the warnings can be sent to news channels or broadcast for proactive action.
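
    The final step, extrapolating the disaster curve from geo-located tweets, can be sketched with a simple fit of position against time. A degree-1 (linear) polynomial and the coordinates below are illustrative assumptions; the framework's actual curve model is not specified in the abstract.

```python
# Minimal sketch of the extrapolation step: fit tweet positions as functions
# of time and project them forward to flag susceptible locations.
import numpy as np

# (minutes since first tweet, longitude, latitude) from geo-located tweets
t   = np.array([0, 10, 20, 30, 40])
lon = np.array([-97.5, -97.3, -97.1, -96.9, -96.7])
lat = np.array([35.20, 35.25, 35.31, 35.36, 35.42])

lon_fit = np.polyfit(t, lon, 1)  # linear fit: longitude vs. time
lat_fit = np.polyfit(t, lat, 1)  # linear fit: latitude vs. time

future = np.array([50, 60])      # extrapolate 10 and 20 minutes ahead
print(np.polyval(lon_fit, future), np.polyval(lat_fit, future))
```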

    Thematically analysing social network content during disasters through the lens of the disaster management lifecycle

    Social networks such as Twitter are often used for disseminating and collecting information during natural disasters, and their potential for use in disaster management has been acknowledged. However, a more nuanced understanding of the communications that take place on social networks is required to integrate this information more effectively into disaster management processes. The type and value of the information shared should be assessed to determine its benefits and drawbacks, with credibility and reliability as known concerns. Mapping tweets onto the modelled stages of a disaster can be a useful evaluation for determining the benefits and drawbacks of using data from social networks, such as Twitter, in disaster management.

    A thematic analysis of tweets' content, language and tone during the UK Storms and Floods of 2013/14 was conducted. Manual scripting was used to determine the official sequence of events and to classify the stages of the disaster into the phases of the Disaster Management Lifecycle, producing a timeline. Twenty-five topics discussed on Twitter emerged, and three key types of tweets, based on language and tone, were identified. The timeline represents the events of the disaster, according to the Met Office reports, classed into B. Faulkner's Disaster Management Lifecycle framework. Observing the analysed tweets against this timeline provides context and illustrates a potential basis and benefit for mapping tweets into the Disaster Management Lifecycle phases. Comparing the number of tweets submitted each month with the timeline suggests that users tweet more as an event heightens and persists, and that they generally express greater emotion and urgency in their tweets.

    This paper concludes that the thematic analysis of content on social networks, such as Twitter, can be useful in gaining additional perspectives for disaster management. It demonstrates that mapping tweets into the phases of a Disaster Management Lifecycle model can have benefits in the recovery phase, not just the response phase, potentially improving future policies and activities.
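
    Once a timeline of phase boundaries exists, mapping tweets onto lifecycle phases reduces to a timestamp lookup. The sketch below shows that step; the phase names and boundary dates are hypothetical, standing in for the Met Office event sequence classed into Faulkner's framework.

```python
# Minimal sketch: assign each tweet to a lifecycle phase by timestamp.
from bisect import bisect_right
from datetime import datetime

# Hypothetical phase start times, in chronological order.
boundaries = [datetime(2013, 12, 1), datetime(2013, 12, 5), datetime(2014, 1, 10)]
phases = ["prodromal", "emergency", "recovery"]

def phase_of(tweet_time):
    i = bisect_right(boundaries, tweet_time) - 1  # last boundary at or before the tweet
    return phases[i] if i >= 0 else "pre-event"

print(phase_of(datetime(2013, 12, 7)))  # -> "emergency"
```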

    Construction of a disaster-support dynamic knowledge chatbot

    This dissertation is aimed at devising a disaster-support chatbot system with the capacity to enhance citizens' and first responders' resilience in disaster scenarios, by gathering and processing information from crowd-sensing sources and informing its users with relevant knowledge about detected disasters and how to deal with them. The system is composed of two artifacts that interact via a mediator graph-structured knowledge base. The first artifact is a crowd-sourced disaster-related knowledge extraction system, which uses social media as a means to exploit humans behaving as sensors. It consists of a pipeline of natural language processing (NLP) tools and a mixture of convolutional neural networks (CNNs) and lexicon-based models for classifying and extracting disasters. It outputs the extracted information to the knowledge graph (KG) for presenting connected insights. The second artifact, the disaster-support chatbot, uses a state-of-the-art Dual Intent Entity Transformer (DIET) architecture to classify user intents, and makes use of several dialogue policies for managing user conversations, as well as storing relevant information to be used in later dialogue turns. To generate responses, the chatbot uses local and official disaster-related knowledge, and queries the knowledge graph for dynamic knowledge extracted by the first artifact. According to the results achieved, the devised system is on par with the state of the art in disaster extraction systems. Both artifacts have also been validated by field specialists, who considered them valuable assets in disaster management.
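
    The mediator role of the knowledge graph can be sketched as two functions sharing one graph structure: the extraction pipeline writes disaster nodes, and the chatbot reads them when answering. The schema, the intent label, and the dict-based graph below are illustrative assumptions; the dissertation's actual KG and DIET-based intent classifier are not reproduced here.

```python
# Minimal sketch of the mediator graph-structured knowledge base.
knowledge_graph = {"nodes": {}, "edges": []}

def add_disaster(kg, disaster_id, kind, location):
    """Called by the extraction pipeline for each classified disaster."""
    kg["nodes"][disaster_id] = {"type": kind, "location": location}
    kg["edges"].append((disaster_id, "located_in", location))

def answer(kg, intent):
    """Called by the chatbot's dialogue policy after intent classification."""
    if intent == "ask_active_disasters":  # hypothetical intent label
        return [f"{n['type']} in {n['location']}" for n in kg["nodes"].values()]
    return ["Sorry, I don't know how to help with that yet."]

add_disaster(knowledge_graph, "d1", "flood", "Lisbon")
print(answer(knowledge_graph, "ask_active_disasters"))  # ['flood in Lisbon']
```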

    Crowdsourcing of Twitter Social Media Data to Analyze the Hail Disaster in Surabaya

    The purpose of this study is to describe the use of crowdsourced data from social media, especially Twitter, in carrying out an initial analysis of an extreme weather event, in this case the hail that occurred in the city of Surabaya on February 21st, 2022. The method used is data mining on Twitter data retrieved with the keywords "hujan AND es" ("rain AND ice"). Data retrieval was carried out twice: the first pull collected only tweets sent from the Surabaya area and its surroundings, while the second collected tweets sent from all locations. Tweet count aggregation and early tweet detection were used to estimate the time of occurrence, and location data extracted from the tweets were used to map the location of the incident. Supporting data on weather conditions at the estimated time of the event were added to enrich and validate the information obtained through crowdsourcing on social media. A meteorological analysis was then carried out for the time and location identified by the social media analysis, using air temperature, air humidity, and wind speed data from several automatic weather stations. Based on the Twitter data crowdsourcing, this study finds that the hail in Surabaya on February 21st, 2022 occurred at around 14:50 Local Time (LT) in the western part of the city.
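
    The event-time estimation step, aggregating tweet counts and detecting the earliest burst of matching tweets, can be sketched briefly. The timestamps and threshold below are illustrative placeholders, not the study's data.

```python
# Minimal sketch: bucket keyword-matched tweets into per-minute counts and
# flag the earliest minute whose count reaches a burst threshold.
from collections import Counter
from datetime import datetime

tweet_times = [datetime(2022, 2, 21, 14, m) for m in
               (31, 42, 50, 50, 51, 51, 51, 52, 52, 53)]

per_minute = Counter(t.replace(second=0, microsecond=0) for t in tweet_times)

def first_burst(counts, threshold=3):
    for minute in sorted(counts):
        if counts[minute] >= threshold:
            return minute  # earliest minute with a burst of matching tweets
    return None

print(first_burst(per_minute))  # -> 2022-02-21 14:51:00
```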

    Earthquake reconnaissance data sources, a literature review

    Earthquakes are among the most catastrophic natural phenomena. After an earthquake, reconnaissance enables effective recovery by collecting data on building damage and other impacts. This paper aims to identify state-of-the-art data sources for building damage assessment and to provide guidance for more efficient data collection. We reviewed 39 articles that indicate the sources used by different authors to collect data related to damage and post-disaster recovery progress after earthquakes between 2014 and 2021. The current data collection methods are grouped into seven categories: fieldwork or ground surveys, omnidirectional imagery (OD), terrestrial laser scanning (TLS), remote sensing (RS), crowdsourcing platforms, social media (SM) and closed-circuit television videos (CCTV). The selection of a particular data source or collection technique for earthquake reconnaissance depends on different criteria, according to what questions these data are intended to answer. We conclude that modern reconnaissance missions cannot rely on a single data source. Different data sources should complement each other, validate collected data, or systematically quantify the damage. The recent increase in the number of crowdsourcing and SM platforms used to source earthquake reconnaissance data demonstrates that these are likely to become increasingly important data sources.