3,787 research outputs found

    Predicting catastrophes: the role of criticality

    Is prediction feasible in systems at criticality? While conventional scale-invariance arguments suggest a negative answer, evidence from simulations of driven-dissipative systems and from real systems, such as ruptures in materials and crashes in financial markets, suggests otherwise. In this dissertation, I address the question of predictability at criticality by investigating two non-equilibrium systems: the OFC model, a driven-dissipative model used to describe earthquakes, and damage spreading in the Ising model. Both systems display a phase transition at a critical point. Using machine learning, I show that in the OFC model the scaling events are indistinguishable from one another; only the large, non-scaling events can be distinguished from the small, scaling events. I also show that predictability falls as the critical point is approached. For damage spreading in the Ising model, the opposite behavior is seen: the accuracy of predicting whether damage will spread or heal increases as the critical point is approached. I also use machine learning to identify which precursors are useful for the prediction problem.
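    The driven-dissipative dynamics referred to above can be sketched in a few lines. The toy implementation below follows the standard Olami-Feder-Christensen (OFC) rules; the lattice size, dissipation parameter alpha, and event count are arbitrary illustrative choices, not the dissertation's settings.

```python
import random

def ofc_avalanche_sizes(L=20, alpha=0.2, n_events=200, seed=0):
    """Run a toy OFC (Olami-Feder-Christensen) model; return avalanche sizes.

    Sites hold stress in [0, 1). The lattice is driven uniformly until one
    site reaches the threshold 1; that site then topples, passing a fraction
    alpha of its stress to each of its four neighbours (open boundaries
    dissipate stress). alpha < 0.25 makes the model non-conservative.
    """
    rng = random.Random(seed)
    stress = [[rng.random() for _ in range(L)] for _ in range(L)]
    sizes = []
    for _ in range(n_events):
        # Uniform drive: raise every site so the maximum just hits threshold
        # (tiny epsilon guards against floating-point rounding).
        m = max(max(row) for row in stress)
        d = 1.0 - m + 1e-9
        for row in stress:
            for j in range(L):
                row[j] += d
        # Relaxation: topple every site at or above threshold until stable.
        size = 0
        active = [(i, j) for i in range(L) for j in range(L)
                  if stress[i][j] >= 1.0]
        while active:
            i, j = active.pop()
            if stress[i][j] < 1.0:
                continue  # already relaxed by an earlier pop
            s = stress[i][j]
            stress[i][j] = 0.0
            size += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < L and 0 <= nj < L:
                    stress[ni][nj] += alpha * s
                    if stress[ni][nj] >= 1.0:
                        active.append((ni, nj))
        sizes.append(size)
    return sizes
```

    After a transient, the distribution of avalanche sizes in such runs is approximately power-law over the scaling region; the large, off-scaling events are the ones the dissertation finds distinguishable from the rest.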

    Interpretable classification and summarization of crisis events from microblogs

    The widespread use of social media platforms has created convenient ways to obtain and spread up-to-date information during crisis events such as disasters. Time-critical analysis of crisis-related information helps humanitarian organizations and governmental bodies gain actionable information and plan the aid response. However, situational information is often immersed in a high volume of irrelevant content. Moreover, crisis-related messages also vary greatly in terms of information types, ranging from general situational awareness - such as information about warnings, infrastructure damage, and casualties - to individual needs. Different humanitarian organizations or governmental bodies usually demand different types of information for tasks such as crisis preparation, resource planning, and aid response. To cope with information overload and efficiently support stakeholders in crisis situations, it is necessary to (a) classify data posted during crisis events into fine-grained humanitarian categories and (b) summarize the situational data in near real time. In this thesis, we tackle the aforementioned problems and propose novel methods for the classification and summarization of user-generated posts from microblogs. Previous studies have introduced various machine learning techniques to assist humanitarian or governmental bodies, but they primarily focused on model performance. Unlike those works, we develop interpretable machine learning models that can explain their decisions. Generally, we focus on three methods for reducing information overload in crisis situations: (i) post classification, (ii) post summarization, and (iii) interpretable models for post classification and summarization. We evaluate our methods using posts from the microblogging platform Twitter, so-called tweets. First, we expand publicly available labeled datasets with rationale annotations. Each tweet is annotated with a class label and rationales, short snippets from the tweet that explain its assigned label. Using these data, we develop trustworthy classification methods that give the best tradeoff between model performance and interpretability. Rationale snippets usually convey essential information in the tweets. Hence, we propose an integer linear programming-based summarization method that maximizes the coverage of rationale phrases to generate summaries of class-level tweet data. Next, we introduce an approach that enhances latent embedding representations of tweets in vector space. Our approach helps improve the classification performance-interpretability tradeoff and detects near duplicates, enabling a summarization model with low computational complexity. Experiments show that rationale labels are helpful for developing interpretable-by-design models. However, annotations are not always available, especially in real-time situations for new tasks and crisis events. In the last part of the thesis, we propose a two-stage approach to extract the rationales under minimal human supervision.
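    The rationale-coverage objective described above can be illustrated with a simple greedy stand-in for the exact integer linear program: repeatedly add the tweet whose rationale phrases cover the most not-yet-covered phrases, up to a budget. The function name and the sample tweets below are hypothetical illustrations, not the thesis's implementation or data.

```python
def greedy_rationale_summary(tweets, budget):
    """Greedy approximation of a coverage-maximising summary.

    tweets: list of (text, rationale_phrases) pairs, where rationale_phrases
    is a set of short snippets; budget: maximum number of tweets to select.
    At each step, pick the tweet covering the most phrases not yet covered -
    a classic greedy surrogate for the exact ILP coverage formulation.
    """
    covered, summary = set(), []
    remaining = list(tweets)
    while remaining and len(summary) < budget:
        best = max(remaining, key=lambda t: len(set(t[1]) - covered))
        if not set(t[1] for t in [best])[0] if False else not (set(best[1]) - covered):
            break  # no tweet adds any new rationale phrase
        summary.append(best[0])
        covered |= set(best[1])
        remaining.remove(best)
    return summary
```

    Greedy selection gives the usual (1 - 1/e) approximation guarantee for coverage objectives, which is why it is a common fallback when solving the ILP exactly is too slow.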

    A Routine and Post-disaster Road Corridor Monitoring Framework for the Increased Resilience of Road Infrastructures

    Artificial intelligence to detect and forecast earthquakes

    Precursors to large earthquakes have been widely but not systematically identified. The ability of deep neural networks to solve complex tasks that involve generalisation makes them highly suited to earthquake and precursor detection. Large moment magnitude (Mw) earthquakes and associated tsunamis can have a huge economic and social impact. Detecting precursors could significantly improve seismic hazard preparedness, particularly if precursors can assist, within a more general probabilistic forecasting framework, in reducing the uncertainty interval on expected earthquakes' timing, location and Mw. Additionally, artificial intelligence has recently been used to improve the detection and location of smaller earthquakes, assisting in the completion and automation of seismic catalogues. This paper is the first to present a deep learning-based solution for detecting and identifying short-term changes in the raw seismic signal that are correlated with earthquake occurrence. Deep neural networks (DNNs) were employed to investigate the background seismic signal prior to 31 Mw >= 6 earthquakes in the Japan region. Instantaneous, precursor-related features (features correlated with the investigated earthquakes) were detected, as opposed to predicting future values from previously observed values as in time-series forecasting. The network achieved 98% train accuracy and 96% test accuracy in distinguishing noise unrelated to Mw >= 6 earthquakes from signal immediately prior to the investigated earthquakes. Additionally, the precursor-related features became increasingly systematic (more frequently detected prior to the investigated earthquakes) with earthquake proximity. Discriminative features appeared most dominant over a frequency range of ~0.1-0.9 Hz, coinciding with microseismic noise and recent observations of broadband slow earthquake signal (Masuda et al. 2020). In particular, frequencies of ~0.16 and ~0.21 Hz provided significant precursor-related information. Deep learning successfully detected features of the seismic data correlated with earthquake occurrence. Developing a better understanding of the origin of the precursor-related features and of their reliability is the next step towards establishing an earthquake forecasting system.
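    A feature of the kind highlighted above - how much of a window's spectral power falls in the ~0.1-0.9 Hz band - can be computed with a plain discrete Fourier transform. The helper below is a simplified, hypothetical illustration of such a band-power feature, not the paper's pipeline (which feeds raw signal to a DNN); the function name and defaults are assumptions.

```python
import cmath
import math

def band_power(signal, fs, f_lo=0.1, f_hi=0.9):
    """Fraction of a window's spectral power lying in [f_lo, f_hi] Hz.

    Naive single-window DFT; fs is the sampling rate in Hz. Only the
    positive half-spectrum is used, consistently for band and total power.
    A feature like this could serve as input to a precursor classifier.
    """
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]  # remove DC so it doesn't dominate
    total = band = 0.0
    for k in range(1, n // 2 + 1):
        f = k * fs / n  # frequency of bin k in Hz
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        p = abs(coeff) ** 2
        total += p
        if f_lo <= f <= f_hi:
            band += p
    return band / total if total else 0.0
```

    For a pure 0.5 Hz sinusoid the ratio is essentially 1, while for a 2 Hz sinusoid it is essentially 0, so the feature isolates energy in the microseism band the study points to.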