4,475 research outputs found

    Probabilistic prediction of rupture length, slip and seismic ground motions for an ongoing rupture: implications for early warning for large earthquakes

    Earthquake Early Warning (EEW) predicts future ground shaking based on presently available data. Long ruptures present the best opportunities for EEW, since many heavily shaken areas are distant from the earthquake epicentre and may receive long warning times. Predicting the shaking from large earthquakes, however, requires some estimate of the likelihood of the future evolution of an ongoing rupture. An EEW system that anticipates future rupture using the present magnitude (or rupture length) together with the Gutenberg-Richter frequency-size statistics will likely never predict a large earthquake, because of the rare occurrence of ‘extreme events’. However, it seems reasonable to assume that large slip amplitudes increase the probability of evolving into a large earthquake. To investigate the relationship between the slip and the eventual size of an ongoing rupture, we simulate suites of 1-D rupture series from stochastic models of spatially heterogeneous slip. We find that while large slip amplitudes increase the probability for the continuation of a rupture and the possible evolution into a ‘Big One’, the recognition that rupture is occurring on a spatially smooth fault has an even stronger effect. We conclude that an EEW system for large earthquakes needs some mechanism for the rapid recognition of the causative fault (e.g., from real-time GPS measurements) and consideration of its ‘smoothness’. An EEW system for large earthquakes on smooth faults, such as the San Andreas Fault, could be implemented in two ways: the system could issue a warning whenever slip on the fault exceeds a few metres, because the probability of a large earthquake is then high and strong shaking is expected to occur in large areas around the fault. A more sophisticated EEW system could use the present slip on the fault to estimate the future slip evolution and final rupture dimensions, and (using this information) could provide probabilistic predictions of seismic ground motions along the evolving rupture. The decision on whether an EEW system should be realized in the first or in the second way (or in a combination of both) is user-specific.
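    The probabilistic link between the slip observed so far and the eventual rupture size can be illustrated with a small Monte Carlo experiment. The sketch below is hypothetical and not the authors' code: a smoothed lognormal random series stands in for their stochastic slip models, and every function name, threshold and parameter value (simulate_slip_profile, rupture_length, arrest_level, s_obs) is an illustrative assumption.

```python
# Hypothetical sketch: Monte Carlo estimate of P(large final rupture | large early slip),
# using a simple correlated lognormal series as a stand-in for stochastic slip models.
import numpy as np

rng = np.random.default_rng(0)

def simulate_slip_profile(n=512, corr_len=20.0):
    """Draw a 1-D spatially correlated slip profile (arbitrary units).

    Gaussian-smoothed white noise is used as a toy model of heterogeneous slip;
    corr_len controls the 'smoothness' of the fault.
    """
    noise = rng.normal(size=n)
    k = np.exp(-0.5 * (np.arange(-3 * corr_len, 3 * corr_len + 1) / corr_len) ** 2)
    k /= k.sum()
    slip = np.convolve(noise, k, mode="same")
    slip /= np.sqrt((k ** 2).sum())          # renormalise to unit variance
    return np.exp(slip)                      # lognormal slip keeps amplitudes positive

def rupture_length(slip, arrest_level=0.6):
    """Rupture runs from the start until slip first drops below arrest_level."""
    below = np.where(slip < arrest_level)[0]
    return len(slip) if below.size == 0 else below[0]

# Estimate P(final length > L_big | rupture still running at L_obs and early slip > s_obs)
n_trials, L_obs, L_big, s_obs = 5000, 50, 300, 1.5
cond, big_given_cond = 0, 0
for _ in range(n_trials):
    slip = simulate_slip_profile()
    length = rupture_length(slip)
    if length > L_obs and slip[:L_obs].max() > s_obs:
        cond += 1
        big_given_cond += length > L_big
print(f"P(big | large early slip) ~ {big_given_cond / max(cond, 1):.3f}")
```

    Repeating the experiment with a larger corr_len (a smoother fault) illustrates the abstract's second point: smoothness raises the conditional probability of a ‘Big One’ more strongly than the early slip amplitude alone.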

    Testing earthquake predictions

    Statistical tests of earthquake predictions require a null hypothesis to model occasional chance successes. To define and quantify 'chance success' is knotty. Some null hypotheses ascribe chance to the Earth: seismicity is modeled as random. The null distribution of the number of successful predictions -- or any other test statistic -- is taken to be its distribution when the fixed set of predictions is applied to random seismicity. Such tests tacitly assume that the predictions do not depend on the observed seismicity. Conditioning on the predictions in this way sets a low hurdle for statistical significance. Consider this scheme: when an earthquake of magnitude 5.5 or greater occurs anywhere in the world, predict that an earthquake at least as large will occur within 21 days and within an epicentral distance of 50 km. We apply this rule to the Harvard centroid-moment-tensor (CMT) catalog for 2000--2004 to generate a set of predictions. The null hypothesis is that earthquake times are exchangeable conditional on their magnitudes and locations and on the predictions -- a common 'nonparametric' assumption in the literature. We generate random seismicity by permuting the times of events in the CMT catalog. We consider an event successfully predicted only if (i) it is predicted and (ii) there is no larger event within 50 km in the previous 21 days. The P-value for the observed success rate is <0.001: the method successfully predicts about 5% of earthquakes, far better than 'chance', because the predictor exploits the clustering of earthquakes -- occasional foreshocks -- which the null hypothesis lacks. Rather than condition on the predictions and use a stochastic model for seismicity, it is preferable to treat the observed seismicity as fixed, and to compare the success rate of the predictions to the success rate of simple-minded predictions like those just described. If the proffered predictions do no better than a simple scheme, they have little value. Comment: Published in the IMS Collections (http://www.imstat.org/publications/imscollections.htm) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/193940307000000509.
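    The permutation test described in the abstract can be made concrete with a short sketch. The code below is a hypothetical illustration on a synthetic toy catalog, not the authors' code and not the CMT data; the helper names (haversine_km, success_rate), the toy catalog parameters, and the choice of 200 permutations are all assumptions made for brevity.

```python
# Hypothetical sketch of the permutation test: success rate of the 21-day / 50-km
# prediction rule, compared with the rate obtained after shuffling event times.
import numpy as np

rng = np.random.default_rng(1)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

def success_rate(times, lats, lons, mags, window_days=21.0, radius_km=50.0):
    """Fraction of events that are (i) predicted by an earlier M>=5.5 event within
    the window/radius and (ii) not preceded by a larger event in that window/radius."""
    n, hits = len(times), 0
    for j in range(n):
        dt = times[j] - times                      # days since every other event
        near = (dt > 0) & (dt <= window_days) & \
               (haversine_km(lats, lons, lats[j], lons[j]) <= radius_km)
        predicted = np.any(near & (mags >= 5.5) & (mags <= mags[j]))
        spoiled = np.any(near & (mags > mags[j]))
        hits += bool(predicted) and not bool(spoiled)
    return hits / n

# Toy catalog: 400 events over five years (no built-in clustering).
n = 400
times = np.sort(rng.uniform(0, 5 * 365, n))
lats, lons = rng.normal(0, 5, n), rng.normal(0, 5, n)
mags = 5.5 + rng.exponential(0.4, n)

observed = success_rate(times, lats, lons, mags)

# Null distribution: reassign times to events at random, keeping magnitudes and
# locations fixed, exactly as the exchangeability hypothesis prescribes.
perm_rates = np.array([success_rate(rng.permutation(times), lats, lons, mags)
                       for _ in range(200)])
p_value = (np.sum(perm_rates >= observed) + 1) / (len(perm_rates) + 1)
print(f"observed success rate {observed:.3f}, permutation P-value {p_value:.3f}")
```

    On a clustered catalog such as the CMT data, the observed success rate exceeds nearly all permuted rates, which is precisely the effect the abstract warns is driven by foreshock clustering rather than genuine predictive skill.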

    Neural Network Algorithms for using Radon Emanations as an Earthquake Precursor

    Investigations throughout the world over the past two decades provide evidence that significant variations of radon and other soil gases may occur in association with major geophysical events such as earthquakes. The traditional statistical approach, in which regression is used to remove the effect of meteorological parameters from the measured radon and the periodicity of seasonal variations is computed using the Fast Fourier Transform, has been shown to improve the reliability of earthquake prediction. The present paper deals with the use of neural network algorithms that can learn the behaviour of radon with respect to known meteorological parameters. This method has the potential to track “changing patterns” in the dependence of radon on meteorological parameters and may adapt to such changes on its own in due course of time. Another neural network algorithm, using Probabilistic Neural Networks, that requires neither an explicit regression step nor the use of any specific period is also presented.
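    A minimal sketch of the two approaches the abstract contrasts is given below, on synthetic data. It is purely illustrative: the regression-plus-FFT step, the multilayer-perceptron stand-in (scikit-learn's MLPRegressor rather than the Probabilistic Neural Network the paper uses), and all synthetic series and coefficients are assumptions.

```python
# Hypothetical sketch: (a) regress meteorological effects out of measured radon and
# inspect the FFT of the residuals for periodicity; (b) let a small neural network
# learn radon from meteorological inputs, so large residuals flag candidate anomalies.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic daily series: radon driven by temperature, pressure and a
# semi-annual component not explained by the meteorological inputs.
days = np.arange(730)
temp = 20 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, days.size)
pres = 1013 + 5 * np.cos(2 * np.pi * days / 365) + rng.normal(0, 1, days.size)
radon = (50 + 0.8 * temp - 0.3 * (pres - 1013)
         + 4 * np.sin(2 * np.pi * days / 182.5) + rng.normal(0, 2, days.size))
X = np.column_stack([temp, pres])

# (a) Traditional approach: regression, then FFT of the residuals.
resid = radon - LinearRegression().fit(X, radon).predict(X)
spectrum = np.abs(np.fft.rfft(resid - resid.mean()))
freqs = np.fft.rfftfreq(resid.size, d=1.0)            # cycles per day
dominant_period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]
print(f"dominant residual period ~ {dominant_period:.0f} days")

# (b) Neural-network approach: learn radon directly from meteorology;
# no explicit regression step or fixed seasonal period is assumed.
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                                 random_state=0))
mlp.fit(X, radon)
anomaly = radon - mlp.predict(X)                      # residual = candidate precursor signal
print(f"NN residual std {anomaly.std():.2f} vs raw radon std {radon.std():.2f}")
```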

    Earthquake Probability Assessment for the Indian Subcontinent Using Deep Learning

    Earthquake prediction is a popular topic among earth scientists; however, the task is challenging and exhibits uncertainty, so probability assessment is indispensable at present. During the last decades, the volume of seismic data has increased exponentially, adding scalability issues to probability assessment models. Several machine learning methods, such as deep learning, have been applied to large-scale image, video, and text processing; however, they have rarely been utilized in earthquake probability assessment. Therefore, the present research leveraged advances in deep learning techniques to generate scalable earthquake probability mapping. To achieve this objective, this research used a convolutional neural network (CNN). Nine indicators, namely, proximity to faults, fault density, lithology with an amplification factor value, slope angle, elevation, magnitude density, epicenter density, distance from the epicenter, and peak ground acceleration (PGA) density, served as inputs. Meanwhile, 0 and 1 were used as outputs corresponding to non-earthquake and earthquake classes, respectively. The proposed classification model was tested at the country level on datasets gathered to update the probability map for the Indian subcontinent, using statistical measures such as overall accuracy (OA), F1 score, recall, and precision. The OA values of the model based on the training and testing datasets were 96% and 92%, respectively. The proposed model also achieved precision, recall, and F1 score values of 0.88, 0.99, and 0.93, respectively, for the positive (earthquake) class based on the testing dataset. The model predicted two classes and identified very-high (712,375 km²) and high-probability (591,240.5 km²) areas, comprising 19.8% and 16.43% of the study region, respectively. Results indicated that the proposed model is superior to traditional methods for earthquake probability assessment in terms of accuracy. Aside from facilitating the prediction of pixel values for probability assessment, the proposed model can also help urban planners and disaster managers make appropriate decisions regarding future plans and earthquake management.
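    A minimal sketch of such a nine-channel CNN classifier is shown below, written in PyTorch. It is not the authors' architecture: the layer sizes, patch dimensions, and the EqProbCNN name are illustrative assumptions; only the nine input indicators and the binary earthquake/non-earthquake output follow the abstract.

```python
# Hypothetical sketch: nine indicator rasters (fault proximity, fault density,
# lithology/amplification, slope, elevation, magnitude density, epicentre density,
# epicentre distance, PGA density) stacked as a 9-channel patch, mapped to a
# per-patch earthquake probability.
import torch
import torch.nn as nn

class EqProbCNN(nn.Module):
    def __init__(self, n_indicators=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_indicators, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)   # logit for P(earthquake)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Toy usage: a batch of 8 patches, 9 indicator channels, 32x32 pixels each.
model = EqProbCNN()
patches = torch.randn(8, 9, 32, 32)
labels = torch.randint(0, 2, (8, 1)).float()          # 1 = earthquake, 0 = non-earthquake
loss = nn.BCEWithLogitsLoss()(model(patches), labels) # binary cross-entropy training loss
loss.backward()
probs = torch.sigmoid(model(patches)).detach()         # per-patch earthquake probability
print(probs.squeeze())
```

    In practice each patch would be cut from co-registered indicator rasters, and the sigmoid outputs mosaicked back into a probability map from which very-high and high-probability zones can be delineated.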
    • …