
    Using Physical and Social Sensors in Real-Time Data Streaming for Natural Hazard Monitoring and Response

    Technological breakthroughs in computing over the last few decades have resulted in important advances in natural hazards analysis. In particular, integration of a wide variety of information sources, including observations from spatially referenced physical sensors and new social media sources, enables better real-time hazard estimates. The main goal of this work is to use innovative streaming algorithms for improved real-time seismic hazard analysis by integrating different data sources and processing tools into cloud applications. In streaming algorithms, a sequence of items from physical and social sensors can be processed in as little as one pass, with no need to store the data locally. Massive data volumes can be analyzed in near-real time with reasonable limits on storage space, an important advantage for natural hazard analysis. Seismic hazard maps are used by policymakers to set earthquake-resistant construction standards, by insurance companies to set insurance rates, and by civil engineers to estimate stability and damage potential. This research first focuses on improving probabilistic seismic hazard map production. The result is a series of maps for different frequency bands at significantly increased resolution and much lower latency, including a range of high-resolution sensitivity tests. Second, a method is developed for real-time earthquake intensity estimation using joint streaming analysis from physical and social sensors. Automatically calculated intensity estimates from physical sensors such as seismometers use empirical relationships between ground motion and intensity, while those from social sensors employ questionnaires that evaluate ground-shaking levels based on personal observations. Neither is always sufficiently precise and/or timely. 
Results demonstrate that joint processing can significantly reduce the response time to a damaging earthquake and estimate preliminary intensity levels during the first ten minutes after an event. The combination of social media and network sensor data, in conjunction with innovative computing algorithms, provides a new paradigm for real-time earthquake detection, facilitating rapid and inexpensive risk reduction. In particular, streaming algorithms offer an efficient method that addresses three major problems in hazard estimation: improving resolution, decreasing processing latency to near-real-time standards, and providing more accurate results through the integration of multiple data sets.
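The defining property claimed for streaming algorithms here, one pass over the data with constant memory, can be illustrated with Welford's classic online algorithm for mean and variance. This is a generic sketch of the one-pass idea, not the authors' pipeline; the ground-motion amplitudes are invented for illustration.

```python
# One-pass (streaming) mean and variance via Welford's algorithm: each
# sample is seen exactly once and never stored, the property the abstract
# relies on for processing sensor streams in near-real time.

class RunningStats:
    """Accumulates count, mean, and variance in a single pass."""

    def __init__(self):
        self.n = 0        # samples seen so far
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # sum of squared deviations from the running mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for amplitude in [0.12, 0.15, 0.31, 0.28, 0.22]:  # hypothetical readings
    stats.update(amplitude)

print(round(stats.mean, 3))  # prints 0.216
```

The same pattern extends to any statistic with a constant-size update rule, which is what keeps storage bounded no matter how large the sensor stream grows.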

    Neural Network Pattern Recognition Experiments Toward a Fully Automatic Detection of Anomalies in InSAR Time Series of Surface Deformation

    We present a neural network-based method to detect anomalies in time-dependent surface deformation fields given a set of geodetic images of displacements collected from multiple viewing geometries. The presented methodology is based on a supervised classification approach using combinations of line-of-sight multitemporal, multi-geometry interferometric synthetic aperture radar (InSAR) time series of displacements. We demonstrate this method with a set of 170 million time series of surface deformation generated for the entire Italian territory and derived from the ERS, ENVISAT, and COSMO-SkyMed Synthetic Aperture Radar satellite constellations. We create a training dataset that has been compared with independently validated data and current state-of-the-art classification techniques. Compared to state-of-the-art algorithms, the presented framework provides increased detection accuracy, precision, and recall, and reduced processing times for critical infrastructure and landslide monitoring. This study highlights how the proposed approach can accelerate the anomalous-point identification step by up to 147 times compared to analytical and other artificial intelligence methods, and can in principle be extended to other geodetic measurements such as GPS, leveling data, or extensometers. Our results indicate that the proposed approach would make anomaly-identification post-processing times negligible compared to the InSAR time-series processing itself.
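The core task above, labeling a displacement time series as anomalous or not from learned features, can be sketched in miniature. The real framework trains a neural network on multi-geometry InSAR series; the two hand-crafted features (long-term trend versus recent slope), the threshold, and the sample series below are illustrative assumptions, not the authors' model.

```python
# A toy version of time-series anomaly classification: reduce each
# displacement series to simple trend features and flag series whose
# recent motion departs sharply from their long-term behavior.

def linear_slope(series):
    """Least-squares slope of a series sampled at unit time intervals."""
    n = len(series)
    mean_x = (n - 1) / 2
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(series))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def is_anomalous(series, window=4, accel_threshold=1.0):
    """Flag a series whose recent slope diverges from its overall trend."""
    overall = linear_slope(series)
    recent = linear_slope(series[-window:])
    return abs(recent - overall) > accel_threshold

# Hypothetical displacement series in mm: steady subsidence vs. a speed-up
steady = [0.0, -0.5, -1.0, -1.6, -2.1, -2.5, -3.0, -3.4]
accelerating = [0.0, -0.4, -0.9, -1.3, -3.0, -5.2, -7.9, -11.0]

print(is_anomalous(steady), is_anomalous(accelerating))  # prints False True
```

A supervised classifier replaces the fixed threshold with a decision boundary learned from labeled examples, which is what gives the reported gains in precision and recall.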

    Analyzing Twitter Feeds to Facilitate Crises Informatics and Disaster Response During Mass Emergencies

    It is common practice these days for the general public to use various micro-blogging platforms, predominantly Twitter, to share ideas, opinions, and information about daily life. Twitter is also increasingly used as a popular source of information sharing during natural disasters and mass emergencies: to communicate the extent of the geographic phenomena, report the affected population and casualties, request or provide volunteer services, and share the status of the disaster recovery process initiated by humanitarian-aid and disaster-management organizations. Recent research in this area has affirmed the potential of such social media data for various disaster response tasks. Even though social media data is massive, open, and free, making sense of it is significantly limited by its high volume, variety, velocity, value, variability, and veracity. The current work provides a comprehensive framework of text processing and analysis performed on several thousand tweets shared on Twitter during natural disaster events. Specifically, this work employs state-of-the-art machine learning techniques from natural language processing on tweet content to process the enormous data generated at the time of disasters. This study serves as a basis for providing useful, actionable information to crisis management and mitigation teams in planning and preparing an effective disaster response, and for facilitating the development of future automated systems for handling crisis situations.
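A first step in pipelines like this is separating disaster-related tweets from ordinary chatter. As a minimal sketch of that classification step, here is a bag-of-words Naive Bayes model; the training tweets and labels are invented for illustration, and the study itself uses far larger corpora and stronger NLP models.

```python
# Toy tweet classifier: multinomial Naive Bayes over a bag-of-words
# representation, with Laplace smoothing for unseen words.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return [w.strip(".,!?#@").lower() for w in text.split() if w.strip(".,!?#@")]

class NaiveBayes:
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # per-class word frequencies
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        scores = {}
        for label, n_docs in self.class_counts.items():
            score = math.log(n_docs / sum(self.class_counts.values()))
            total = sum(self.word_counts[label].values())
            for word in tokenize(text):
                count = self.word_counts[label][word] + 1  # Laplace smoothing
                score += math.log(count / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Invented training data for illustration only
train_texts = [
    "earthquake shaking buildings collapsed need rescue",
    "flood water rising evacuate now",
    "wildfire smoke spreading evacuate",
    "great coffee this morning",
    "watching the game tonight",
    "new phone arrived today",
]
train_labels = ["disaster", "disaster", "disaster", "other", "other", "other"]

model = NaiveBayes().fit(train_texts, train_labels)
print(model.predict("buildings shaking after the earthquake"))  # prints disaster
```

Production systems swap the bag-of-words model for transformer-based classifiers, but the pipeline shape, tokenize, featurize, classify, is the same.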

    Towards Advancing the Earthquake Forecasting by Machine Learning of Satellite Data

    Earthquakes have become one of the leading causes of death from natural hazards in the last fifty years. Continuous efforts have been made to understand the physical characteristics of earthquakes and the interaction between the physical hazards and the environment so that appropriate warnings can be generated before earthquakes strike. However, earthquake forecasting is far from trivial. Reliable forecasts require analysis of the signals that indicate a significant quake is coming. Unfortunately, such signals are rarely evident before earthquakes occur, and it is therefore challenging to detect these precursors in seismic analysis. Among the available technologies for earthquake research, remote sensing has been commonly used thanks to its unique features, such as fast imaging and a wide image-acquisition range. Nevertheless, early studies on pre-earthquake remote-sensing anomalies are mostly oriented towards anomaly identification and the analysis of a single physical parameter. Many analyses are based on singular events, which provide limited understanding of this complex natural phenomenon because the earthquake signals are usually hidden in environmental noise. The universality of such analyses has not yet been demonstrated on a worldwide scale. In this paper, we investigate physical and dynamic changes in seismic data and thereby develop a novel machine learning method, namely Inverse Boosting Pruning Trees (IBPT), to issue short-term forecasts based on the satellite data of 1371 earthquakes of magnitude six or above, selected for their environmental impact. We have analyzed and compared our proposed framework against several state-of-the-art machine learning methods using ten different infrared and hyperspectral measurements collected between 2006 and 2013. 
Our proposed method outperforms all six selected baselines and shows a strong capability to improve the likelihood of earthquake forecasting across different earthquake databases.
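IBPT itself is the authors' contribution and is not reproduced here; as a generic illustration of the boosted-tree family it belongs to, below is a minimal AdaBoost ensemble of one-feature decision stumps. The two "precursor" features and labels are entirely hypothetical.

```python
# Generic AdaBoost over decision stumps (NOT the IBPT algorithm): each round
# fits the best single-feature threshold rule on reweighted samples, then
# up-weights the samples that rule got wrong.
import math

def stump_predict(x, feature, threshold, polarity):
    return polarity if x[feature] > threshold else -polarity

def train_adaboost(X, y, rounds=5):
    n = len(X)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None  # (weighted error, feature, threshold, polarity)
        for feature in range(len(X[0])):
            for threshold in sorted({x[feature] for x in X}):
                for polarity in (1, -1):
                    err = sum(w for x, label, w in zip(X, y, weights)
                              if stump_predict(x, feature, threshold, polarity) != label)
                    if best is None or err < best[0]:
                        best = (err, feature, threshold, polarity)
        err, feature, threshold, polarity = best
        err = max(err, 1e-10)  # avoid division by zero on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, feature, threshold, polarity))
        weights = [w * math.exp(-alpha * label * stump_predict(x, feature, threshold, polarity))
                   for x, label, w in zip(X, y, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, f, t, p) for a, f, t, p in ensemble)
    return 1 if score > 0 else -1

# Hypothetical (thermal anomaly, radiance deviation) features;
# +1 = a significant quake followed, -1 = it did not
X = [(0.9, 0.8), (0.7, 0.9), (0.8, 0.6), (0.2, 0.1), (0.3, 0.3), (0.1, 0.4)]
y = [1, 1, 1, -1, -1, -1]
model = train_adaboost(X, y)
print([predict(model, x) for x in X])
```

The paper's method additionally inverts the boosting procedure and prunes the trees, so this sketch only conveys the family of models being compared, not the contribution itself.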