
    Two-Sample Testing for Event Impacts in Time Series

    In many application domains, time series are monitored to detect extreme events like technical faults, natural disasters, or disease outbreaks. Unfortunately, it is often non-trivial to select both a time series that is informative about events and a powerful detection algorithm: detection may fail because the detection algorithm is not suitable, or because there is no shared information between the time series and the events of interest. In this work, we thus propose a non-parametric statistical test for shared information between a time series and a series of observed events. Our test identifies time series that carry information on event occurrences without committing to a specific event detection methodology. In a nutshell, we test for divergences of the value distributions of the time series at increasing lags after event occurrences with a multiple two-sample testing approach. In contrast to related tests, our approach is applicable to time series over arbitrary domains, including multivariate numeric data, strings, or graphs. We perform a large-scale simulation study to show that it outperforms or is on par with related tests on our task for univariate time series. We also demonstrate the real-world applicability of our approach on datasets from social media and smart home environments. Comment: SIAM International Conference on Data Mining (SDM 2020) preprint; source code and supplementary material are available at https://github.com/diozaka/eites
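    The abstract describes the testing procedure compactly. The sketch below illustrates one way such a multiple two-sample test could look for a univariate numeric series: for each lag, the values observed shortly after events are compared against the remaining values with a Mann-Whitney U test, and a Bonferroni correction accounts for testing several lags. The test choice, function name, and parameters are illustrative assumptions, not the authors' implementation (which is linked above).

```python
# Minimal sketch of the idea, univariate case only; the authors' actual
# implementation is at https://github.com/diozaka/eites.
import numpy as np
from scipy.stats import mannwhitneyu

def event_impact_test(series, event_times, max_lag=5, alpha=0.05):
    """Test whether the value distribution of `series` differs at
    lags 1..max_lag after the events indexed by `event_times`."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    p_values = []
    for lag in range(1, max_lag + 1):
        idx = np.asarray(event_times) + lag
        idx = idx[idx < n]
        after = series[idx]              # values `lag` steps after an event
        mask = np.ones(n, dtype=bool)
        mask[idx] = False
        background = series[mask]        # all remaining values
        _, p = mannwhitneyu(after, background, alternative="two-sided")
        p_values.append(p)
    # Bonferroni correction over the max_lag individual two-sample tests
    reject = min(p_values) < alpha / max_lag
    return reject, p_values
```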

    Beyond data collection: Objectives and methods of research using VGI and geo-social media for disaster management

    This paper investigates research using VGI and geo-social media in the disaster management context. Relying on the method of systematic mapping, it develops a classification schema that captures three levels (main category, focus, and intended use) and analyzes the relationships with the employed data sources and analysis methods. The scope is limited to the pioneering field of disaster management, but the described approach and the developed classification schema are easily adaptable to other application domains or future developments. The results show that a hypothesized consolidation of research, characterized by the building of canonical bodies of knowledge and advanced application cases with refined methodology, has not yet happened. The majority of the studies investigate the challenges and potential solutions of data handling, with fewer studies focusing on socio-technological issues or advanced applications. This trend currently shows no sign of change, highlighting that VGI research is still very much technology-driven as opposed to theory- or application-driven. From the results of the systematic mapping study, the authors formulate and discuss several research objectives for future work, which could lead to a stronger, more theory-driven treatment of the topic of VGI in GIScience. Carlos Granell has been partly funded by the Ramón y Cajal Programme (grant number RYC-2014-16913).

    Use of the Knowledge-Based System LOG-IDEAH to Assess Failure Modes of Masonry Buildings, Damaged by L'Aquila Earthquake in 2009

    This article first discusses the decision-making process typically used by trained engineers to assess failure modes of masonry buildings, and then presents the rule-based model required to build a knowledge-based system for post-earthquake damage assessment. The acquisition of the engineering knowledge and the implementation of the rule-based model led to the development of the knowledge-based system LOG-IDEAH (Logic trees for Identification of Damage due to Earthquakes for Architectural Heritage), a web-based tool that assesses failure modes of masonry buildings by interpreting both the crack pattern and the damage severity recorded on site by visual inspection. Assuming that the failure modes detected by trained engineers for a sample of buildings are the correct ones, these are used to validate the predictions made by LOG-IDEAH. The prediction robustness of the proposed system is assessed by computing Precision and Recall measures for the failure modes predicted for a set of buildings selected in the city center of L'Aquila (Italy), damaged by an earthquake in 2009. To provide an independent means of verification for LOG-IDEAH, randomly generated outputs are created to obtain baselines of failure modes for the same case study. For the baseline output to be compatible and consistent with the observations on site, failure modes are randomly generated with the same probability of occurrence as observed for the building sample inspected in the city center of L'Aquila. The comparison between the Precision and Recall measures calculated on the output provided by LOG-IDEAH and on that produced by random generation underlines that the proposed knowledge-based system has a high ability to predict failure modes of masonry buildings and has the potential to support surveyors in post-earthquake assessments.
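    As a rough illustration of the validation protocol described above, the sketch below computes per-class Precision and Recall for categorical failure-mode labels and draws a random baseline with the same class frequencies as observed on site. Function names and structure are assumptions for illustration; the paper does not specify implementation details.

```python
# Hypothetical sketch of the validation: precision/recall of predicted
# failure modes against engineer-assigned labels, plus a random baseline
# drawn with the observed on-site frequencies.
from collections import Counter
import random

def precision_recall(predicted, observed):
    """Per-class (precision, recall) for categorical failure-mode labels."""
    scores = {}
    for c in set(observed) | set(predicted):
        tp = sum(p == c and o == c for p, o in zip(predicted, observed))
        fp = sum(p == c and o != c for p, o in zip(predicted, observed))
        fn = sum(p != c and o == c for p, o in zip(predicted, observed))
        scores[c] = (tp / (tp + fp) if tp + fp else 0.0,   # precision
                     tp / (tp + fn) if tp + fn else 0.0)   # recall
    return scores

def random_baseline(observed, seed=0):
    """Draw labels at random with the same frequencies as observed on site."""
    freq = Counter(observed)
    labels, weights = zip(*freq.items())
    return random.Random(seed).choices(labels, weights=weights, k=len(observed))
```

    Comparing `precision_recall(system_predictions, observed)` against `precision_recall(random_baseline(observed), observed)` then gives the chance-level scores that the system's scores are measured against.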

    Accuracy of a pre-trained sentiment analysis (SA) classification model on tweets related to emergency response and early recovery assessment: the case of 2019 Albanian earthquake

    Traditionally, earthquake impact assessments have been made via fieldwork and data collection sponsored by non-governmental organisations (NGOs); however, this approach is time-consuming, expensive, and often limited. Recently, social media (SM) has become a valuable tool for quickly collecting large amounts of first-hand data after a disaster and shows great potential for decision-making. Nevertheless, extracting meaningful information from SM is an ongoing area of research. This paper tests the accuracy of the pre-trained sentiment analysis (SA) model developed by the no-code machine learning platform MonkeyLearn on text data related to the emergency response and early recovery phase of the three major earthquakes that struck Albania on 26 November 2019. These events caused 51 deaths, 3000 injuries, and extensive damage. We obtained 695 tweets with the hashtags #Albania, #AlbanianEarthquake, and #albanianearthquake posted from 26 November 2019 to 3 February 2020. We used these data to test the accuracy of the pre-trained SA classification model developed by MonkeyLearn to identify polarity in text data. This test explores the feasibility of automating the classification process to extract meaningful information from SM text data in real time in the future. We evaluated the platform's performance using a confusion matrix. We obtained an overall accuracy (ACC) of 63% and a misclassification rate of 37%. We conclude that the ACC of the unsupervised classification is sufficient for a preliminary assessment, but further research is needed to determine whether the accuracy can be improved by customising the training model of the machine learning platform.
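    For concreteness, the snippet below shows how the two reported metrics follow from a confusion matrix. The per-cell counts are invented, chosen only so that the totals reproduce the reported 695 tweets, 63% accuracy, and 37% misclassification rate; they are not the paper's actual counts.

```python
# Illustrative computation of overall accuracy and misclassification rate
# from a confusion matrix; the 3x3 counts below are made up.
import numpy as np

def accuracy_and_misclassification(cm):
    """cm[i, j] = number of tweets with true class i predicted as class j."""
    acc = np.trace(cm) / cm.sum()     # correctly classified fraction
    return acc, 1.0 - acc             # misclassification rate

# Hypothetical counts for classes (negative, neutral, positive);
# rows are true classes, columns are predicted classes.
cm = np.array([[150,  40,  30],
               [ 45, 180,  55],
               [ 30,  60, 105]])
acc, err = accuracy_and_misclassification(cm)
print(f"accuracy = {acc:.0%}, misclassification = {err:.0%}")  # 63%, 37%
```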
