Using Physical and Social Sensors in Real-Time Data Streaming for Natural Hazard Monitoring and Response
Technological breakthroughs in computing over the last few decades have resulted in important advances in natural hazards analysis. In particular, integration of a wide variety of information sources, including observations from spatially-referenced physical sensors and new social media sources, enables better estimates of real-time hazard. The main goal of this work is to utilize innovative streaming algorithms for improved real-time seismic hazard analysis by integrating different data sources and processing tools into cloud applications. In streaming algorithms, a sequence of items from physical and social sensors can be processed in as little as one pass with no need to store the data locally. Massive data volumes can be analyzed in near-real time with reasonable limits on storage space, an important advantage for natural hazard analysis.
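The one-pass idea described above can be illustrated with a minimal sketch: a streaming estimator that updates summary statistics over incoming sensor readings in O(1) memory, never storing the stream itself. This is Welford's online algorithm applied to a hypothetical sequence of ground-motion readings; the class name and sample values are illustrative, not part of the study.

```python
# Illustrative one-pass streaming estimator: running mean and variance
# (Welford's online algorithm) over sensor readings, with O(1) memory.
class StreamingStats:
    """Maintains mean and variance of a stream without storing the data."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the current mean

    def update(self, x):
        """Fold one new reading into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        """Population variance of everything seen so far."""
        return self.m2 / self.n if self.n > 0 else 0.0


stats = StreamingStats()
# Hypothetical ground-acceleration readings (in g) arriving as a stream:
for reading in [0.01, 0.03, 0.02, 0.25, 0.18]:
    stats.update(reading)
```

Because each item is touched exactly once and then discarded, the same pattern scales to the massive near-real-time data volumes the abstract describes.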
Seismic hazard maps are used by policymakers to set earthquake-resistant construction standards, by insurance companies to set insurance rates, and by civil engineers to estimate stability and damage potential. This research first focuses on improving probabilistic seismic hazard map production. The result is a series of maps for different frequency bands at significantly increased resolution and much lower latency, together with a range of high-resolution sensitivity tests.
Second, a method is developed for real-time earthquake intensity estimation using joint streaming analysis from physical and social sensors. Automatically calculated intensity estimates from physical sensors such as seismometers use empirical relationships between ground motion and intensity, while those from social sensors employ questionnaires that evaluate ground shaking levels based on personal observations. Neither source alone is always sufficiently precise or timely. Results demonstrate that joint processing can significantly reduce the response time to a damaging earthquake and estimate preliminary intensity levels during the first ten minutes after an event. The combination of social media and network sensor data, in conjunction with innovative computing algorithms, provides a new paradigm for real-time earthquake detection, facilitating rapid and inexpensive risk reduction. In particular, streaming algorithms are an efficient method that addresses three major problems in hazard estimation: improving resolution, decreasing processing latency to near-real-time standards, and providing more accurate results through the integration of multiple data sets.
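The joint estimation step can be sketched as a weighted combination of two independent intensity estimates, one derived from a ground-motion measurement and one from a social-sensor signal. Everything here is a hypothetical illustration: the log-linear functional forms, the coefficients, and the fixed weight are placeholders, not the fitted relationships from the study.

```python
import math

# Hypothetical sketch of joint intensity estimation from a physical
# sensor (peak ground acceleration) and a social sensor (tweet rate).
# All coefficients below are placeholders for illustration only.

def intensity_from_pga(pga_cm_s2):
    """Toy ground-motion-to-intensity conversion: MMI = a + b*log10(PGA)."""
    a, b = 2.3, 3.0  # placeholder coefficients
    return a + b * math.log10(pga_cm_s2)

def intensity_from_tweet_rate(tweets_per_min):
    """Toy tweet-rate-to-intensity relation, also log-linear in form."""
    c, d = 1.5, 2.0  # placeholder coefficients
    return c + d * math.log10(1 + tweets_per_min)

def joint_intensity(pga_cm_s2, tweets_per_min, w_physical=0.7):
    """Weighted average of the two estimates; the weight is illustrative."""
    i_physical = intensity_from_pga(pga_cm_s2)
    i_social = intensity_from_tweet_rate(tweets_per_min)
    return w_physical * i_physical + (1 - w_physical) * i_social


estimate = joint_intensity(pga_cm_s2=100.0, tweets_per_min=9.0)
```

In practice the weights would reflect the relative uncertainty and latency of each source, which is the trade-off the abstract highlights: physical sensors are precise but sparse, social sensors are noisy but fast and dense.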
The Predictive Relationship between Earthquake Intensity and Tweets Rate for Real-Time Ground Motion Estimation
The standard measure for evaluating the immediate effects of an earthquake on people and man-made structures is intensity. Intensity estimates are widely used for emergency response, loss estimation, and distribution of public information after earthquake occurrence (Wood and Neumann, 1931; Brazee, 1976). Modern intensity assessment procedures process a variety of information sources, drawn primarily from two main categories: physical sensors (seismographs and accelerometers) and social sensors (witness reports). Acquiring new data sources in the second category can help to speed up the existing procedures for intensity calculations. One potentially important data source in this category is the widespread microblogging platform Twitter, ranked ninth worldwide as of January 2016 by number of active users, approximately 320 million (Twitter, 2016). In our previous studies, empirical relationships between tweet rate and observed modified Mercalli intensity (MMI) were developed using data from the M 6.0 South Napa, California, earthquake (Napa earthquake) that occurred on 24 August 2014 (Kropivnitskaya et al., 2016). These relationships allow us to stream data from social sensors, supplementing data from other sensors to produce more accurate real-time intensity maps. In this study, we validate empirical relationships between tweet rate and observed MMI using new data sets from earthquakes that occurred in California, Japan, and Chile during March-April 2014. The validation test and calibration process is complicated by the fact that the Twitter data stream is limited for open public access, reducing the number of available tweets. In addition, only spatially constrained positive tweets (those marked as being about the earthquake) are incorporated into the analysis, further limiting the data set and restricting our study to a historical data set.
In this work, the predictive relationship for California is recalibrated slightly, and a new set of relationships is estimated for Japan and Chile.
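The calibration described above can be sketched as an ordinary least-squares fit of a log-linear tweet-rate/intensity relation. The functional form MMI = a + b*log10(rate) and the data points below are assumptions for illustration; they are not the coefficients or observations reported in the study.

```python
import math

# Sketch of calibrating a log-linear tweet-rate/intensity relation,
# mmi = a + b*log10(rate), by ordinary least squares. The calibration
# pairs below are synthetic, for illustration only.

def fit_log_linear(rates, mmis):
    """Least-squares fit of mmi = a + b*log10(rate); returns (a, b)."""
    xs = [math.log10(r) for r in rates]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(mmis) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, mmis))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b


# Synthetic calibration set: tweet rate (tweets/min) vs. observed MMI
rates = [1.0, 10.0, 100.0, 1000.0]
mmis = [3.0, 4.0, 5.0, 6.0]
a, b = fit_log_linear(rates, mmis)
```

Refitting (a, b) separately per region is one natural way to express the region-specific relationships for California, Japan, and Chile mentioned above, since tweeting behavior and sensor coverage differ between regions.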