    Reverse Detection of Short-Term Earthquake Precursors

    We introduce a new approach to short-term earthquake prediction based on the concept of self-organization of seismically active fault networks. The approach is named "Reverse Detection of Precursors" (RDP), since it considers precursors in reverse order of their appearance. This makes it possible to detect precursors that are undetectable by direct analysis. Possible mechanisms underlying RDP are outlined. RDP is described with a concrete example: we consider as short-term precursors the newly introduced chains of earthquakes reflecting the rise of an earthquake correlation range, and we detect (retrospectively) such chains a few months before two prominent Californian earthquakes, Landers, 1992, M = 7.6, and Hector Mine, 1999, M = 7.3, with one false alarm. Similar results (described elsewhere) are obtained by RDP for 21 more strong earthquakes in California (M >= 6.4), Japan (M >= 7.0), and the Eastern Mediterranean (M >= 6.5). Validation of the RDP approach requires, as always, prediction in advance, for which this study lays the groundwork. We have the first case of advance prediction: it was reported before the Tokachi-oki earthquake (near Hokkaido island, Japan) of Sept. 25, 2003, M = 8.1. RDP has potentially important applications to other precursors and to the prediction of other critical phenomena besides earthquakes. In particular, it might vindicate some short-term precursors previously rejected as giving too many false alarms. Comment: 17 pages, 5 figures.
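
    The chain construct lends itself to a compact illustration. Below is a minimal Python sketch that links a catalog of events into chains by space-time proximity; the thresholds (50 km, 30 days) and the omission of chain merging are simplifying assumptions for illustration, not the parameters or the full linking rule of the RDP algorithm.

    # Minimal sketch: group earthquakes into chains by space-time proximity,
    # in the spirit of the precursory chains used by RDP. Thresholds are
    # illustrative assumptions; chain merging is omitted for brevity.
    from math import radians, sin, cos, asin, sqrt

    def distance_km(a, b):
        # Great-circle (haversine) distance between two (lat, lon) points.
        lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
        h = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * asin(sqrt(h))

    def build_chains(events, r0_km=50.0, t0_days=30.0):
        # events: dicts with keys 't' (days), 'lat', 'lon'; sorted here by time.
        events = sorted(events, key=lambda e: e["t"])
        chain_id, chains = {}, []
        for i, e in enumerate(events):
            linked = None
            for j in range(i - 1, -1, -1):
                if e["t"] - events[j]["t"] > t0_days:
                    break  # all earlier events are too old to link
                p = events[j]
                if distance_km((e["lat"], e["lon"]), (p["lat"], p["lon"])) <= r0_km:
                    linked = chain_id[j]
                    break
            if linked is None:
                linked = len(chains)
                chains.append([])
            chain_id[i] = linked
            chains[linked].append(e)
        # Long chains signal a rising earthquake correlation range.
        return [c for c in chains if len(c) >= 2]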

    Predictability of extreme events in a branching diffusion model

    We propose a framework for studying the predictability of extreme events in complex systems. Major conceptual elements -- hierarchical structure, spatial dynamics, and external driving -- are combined in a classical branching diffusion with immigration. New elements -- observation space and observed events -- are introduced in order to formulate a prediction problem patterned after geophysical and environmental applications. The problem consists of estimating the likelihood of occurrence of an extreme event given observations of smaller events, while the complete internal dynamics of the system are unknown. We look for premonitory patterns that emerge as an extreme event approaches; those patterns are deviations from the system's long-term averages. We have found a single control parameter that governs multiple spatio-temporal premonitory patterns. For that purpose, we derive (i) a complete analytic description of the time- and space-dependent size distribution of particles generated by a single immigrant; (ii) the steady-state moments that correspond to multiple immigrants; and (iii) size- and space-based asymptotics for the particle size distribution. Our results suggest a mechanism for universal premonitory patterns and provide a natural framework for their theoretical and empirical study.
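
    A toy Monte Carlo version of the model helps fix ideas. The sketch below evolves the particle tree spawned by a single immigrant in one spatial dimension; the rate constants (branching 1.0, death 1.1, i.e. slightly subcritical) and the binary-branching rule are assumptions chosen for demonstration, not the paper's analytic setup.

    # Branching diffusion sketch: each particle diffuses (Brownian motion),
    # then either branches into two offspring or dies; the tree is followed
    # until the observation time t_max.
    import random

    def simulate_immigrant(branch_rate=1.0, death_rate=1.1, sigma=1.0, t_max=10.0):
        alive = [(0.0, 0.0)]   # (birth time, position) of the immigrant
        total_generated = 1    # proxy for the 'event size' of this tree
        survivors = []
        while alive:
            t, x = alive.pop()
            # Exponential clock for the next branching-or-death event.
            dt = random.expovariate(branch_rate + death_rate)
            if t + dt >= t_max:
                # Particle survives to t_max; diffuse it to the horizon.
                survivors.append(x + random.gauss(0.0, sigma * (t_max - t) ** 0.5))
                continue
            x_next = x + random.gauss(0.0, sigma * dt ** 0.5)
            if random.random() < branch_rate / (branch_rate + death_rate):
                # Binary branching: the parent is replaced by two offspring.
                alive.append((t + dt, x_next))
                alive.append((t + dt, x_next))
                total_generated += 2
            # Otherwise the particle dies with no offspring.
        return survivors, total_generated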

    Predictability in the ETAS Model of Interacting Triggered Seismicity

    As part of an effort to develop a systematic methodology for earthquake forecasting, we use a simple model of seismicity based on interacting events which may trigger a cascade of earthquakes, known as the Epidemic-Type Aftershock Sequence (ETAS) model. The ETAS model is built on a bare (unrenormalized) Omori law, the Gutenberg-Richter law, and the idea that large events trigger more numerous aftershocks. For simplicity, we do not use information on the spatial location of earthquakes and work only in the time domain. We offer an analytical approach that accounts for the as-yet-unobserved triggered seismicity, adapted to the problem of forecasting future seismic rates at varying horizons from the present. Tests presented on synthetic catalogs strongly validate the importance of taking into account all the cascades of still-unobserved triggered events in order to correctly predict the future level of seismicity beyond a few minutes. We find strong predictability if one is content to predict only a small fraction of the large-magnitude targets. However, the probability gains degrade rapidly when one attempts to predict a larger fraction of the targets, because a significant fraction of events remain uncorrelated with past seismicity. This delineates the fundamental limits on forecasting skill, stemming from an intrinsic stochastic component in these interacting triggered-seismicity models. Comment: LaTeX file of 20 pages + 15 EPS figures + 2 tables, in press in J. Geophys. Res.
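
    Since the abstract names the model's three ingredients (Omori law, Gutenberg-Richter law, magnitude-dependent productivity) and a purely time-domain setting, a minimal simulation is easy to sketch. All parameter values below are illustrative assumptions, not those used in the paper.

    # Time-domain ETAS simulation sketch (no spatial component).
    import math, random

    MU, K, C, P = 0.2, 0.05, 0.01, 1.2   # background rate; productivity; Omori c, p
    ALPHA, B, M0 = 0.8, 1.0, 3.0         # productivity exponent; GR b-value; cutoff

    def gr_magnitude():
        # Gutenberg-Richter: P(M > m) = 10^(-B (m - M0)).
        return M0 + random.expovariate(B * math.log(10))

    def omori_time():
        # Inverse-CDF sample from the normalized Omori density
        # f(t) = (P - 1) C^(P - 1) / (t + C)^P.
        return C * (random.random() ** (-1.0 / (P - 1.0)) - 1.0)

    def poisson(lam):
        # Knuth's method; adequate for the small means used here.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p < L:
                return k
            k += 1

    def simulate_etas(t_max=1000.0):
        events, queue, t = [], [], 0.0
        # Background ('immigrant') events: homogeneous Poisson process.
        while True:
            t += random.expovariate(MU)
            if t > t_max:
                break
            queue.append((t, gr_magnitude()))
        # Each event triggers a Poisson number of direct aftershocks, which
        # trigger their own in turn: the full ETAS cascade.
        while queue:
            t_par, m_par = queue.pop()
            events.append((t_par, m_par))
            for _ in range(poisson(K * 10 ** (ALPHA * (m_par - M0)))):
                t_kid = t_par + omori_time()
                if t_kid <= t_max:
                    queue.append((t_kid, gr_magnitude()))
        return sorted(events)

    With these values the mean branching ratio is K * B / (B - ALPHA) = 0.25, safely subcritical, so every cascade terminates.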

    Gambling scores in earthquake prediction analysis

    The number of successes 'n' and the normalized measure of space-time alarm 'tau' are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. To better evaluate the forecaster's skill, it has recently been suggested to use a new characteristic, the gambling score R, which incorporates the difficulty of guessing each target event by assigning different weights to different alarms. We expand the class of R-characteristics and apply them to the analysis of results of the M8 prediction algorithm. We show that the significance level 'alpha' depends strongly (1) on the choice of weighting alarm parameters, (2) on the partitioning of the entire alarm volume into component parts, and (3) on the accuracy of the spatial rate of target events, m(dg). These tools are at the disposal of the researcher and can affect the significance estimate in either direction. All the R-statistics discussed here corroborate that the prediction of 8.0 <= M < 8.5 events by the M8 method is nontrivial. However, conclusions based on the traditional characteristics (n, tau) are more reliable owing to two circumstances: 'tau' is stable since it is based on relative values of m(.), and the 'n' statistic enables constructing an upper estimate of 'alpha' that takes into account the uncertainty of m(.). Comment: 17 pages, 3 figures.
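
    One simple member of the R family can be sketched directly. In the betting rule below, the forecaster risks one reputation point per alarm and, on success, is rewarded in inverse proportion to the chance probability of that success; the reference probabilities are assumed inputs, and this is only one illustrative weighting, not the specific R-statistics analyzed in the paper.

    # Gambling-score sketch: rare successes earn large rewards, every failed
    # bet costs one point, so a large positive total indicates nontrivial skill.
    def gambling_score(bets):
        # bets: list of (hit, p_ref) pairs, where 'hit' records whether a
        # target event occurred inside the alarm and 'p_ref' is the chance
        # probability of that outcome under a reference (e.g. Poisson) model.
        r = 0.0
        for hit, p_ref in bets:
            r += (1.0 - p_ref) / p_ref if hit else -1.0
        return r

    # Example: alarms with chance probabilities 0.1, 0.2, 0.5; the first two
    # capture target events, the third is a false alarm.
    print(gambling_score([(True, 0.1), (True, 0.2), (False, 0.5)]))  # 12.0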

    Predicting Failure using Conditioning on Damage History: Demonstration on Percolation and Hierarchical Fiber Bundles

    We formulate the problem of probabilistic prediction of global failure in the simplest possible model based on site percolation and in one of the simplest models of time-dependent rupture, a hierarchical fiber bundle model. We show that conditioning the predictions on the knowledge of the current degree of damage (occupancy density p, or the number and sizes of cracks) and on some information about the largest cluster significantly improves prediction accuracy, in particular by allowing one to identify those realizations with anomalously small or large clusters (cracks). We quantify the prediction gains using two measures: the relative specific information gain (the variation of entropy obtained by adding new information) and the root-mean-square of the prediction errors over a large ensemble of realizations. The bulk of our simulations were obtained with the two-dimensional site percolation model on a lattice of size L × L = 20 × 20, and the results hold for other lattice sizes. For the hierarchical fiber bundle model, conditioning the measures of damage on information about the location and size of the largest crack significantly extends the critical region and the prediction skill. These examples illustrate how ongoing damage can be used as a revelation of both the realization-dependent pre-existing heterogeneity and the damage scenario undertaken by each specific sample. Comment: 7 pages + 11 figures.
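
    The conditioning idea for the percolation case can be demonstrated in a few lines. The sketch below grows damage site by site on a 20 × 20 lattice, observes the largest cluster at an intermediate density, and asks how often the system 'fails' (develops a spanning cluster) at a later density; the observation and failure densities and the spanning criterion are illustrative assumptions.

    # Conditioning failure prediction on damage history in 2D site percolation.
    import random

    L = 20

    def largest_and_spanning(occupied):
        # Largest 4-connected cluster size, and whether any cluster spans
        # from the top row to the bottom row (the failure criterion here).
        seen, best, spans = set(), 0, False
        for site in occupied:
            if site in seen:
                continue
            stack, comp = [site], []
            seen.add(site)
            while stack:
                r, c = stack.pop()
                comp.append((r, c))
                for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if nb in occupied and nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
            best = max(best, len(comp))
            rows = {r for r, _ in comp}
            spans = spans or (0 in rows and L - 1 in rows)
        return best, spans

    def realization(p_obs=0.45, p_final=0.62):
        # Damage history: sites fail one by one in a random order.
        order = [(r, c) for r in range(L) for c in range(L)]
        random.shuffle(order)
        s_max, _ = largest_and_spanning(set(order[: int(p_obs * L * L)]))
        _, failed = largest_and_spanning(set(order[: int(p_final * L * L)]))
        return s_max, failed

    # Conditioning: failure frequency given an anomalously large vs. small
    # observed cluster, over an ensemble of realizations.
    trials = [realization() for _ in range(2000)]
    median = sorted(s for s, _ in trials)[len(trials) // 2]
    for label, grp in (("large cluster", [f for s, f in trials if s > median]),
                       ("small cluster", [f for s, f in trials if s <= median])):
        print(label, sum(grp) / max(len(grp), 1))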

    Long-term premonitory seismicity patterns in Tibet and the Himalayas

    An attempt is made to identify seismicity patterns precursory to great earthquakes in most of Tibet as well as the central and eastern Himalayas. The region has considerable tectonic homogeneity and encompasses parts of China, India, Nepal, Bhutan, Bangladesh, and Burma. Two previously described seismicity patterns were used: (1) pattern Σ, a peak in the sum of earthquake energies raised to a power of about 2/3, taken over a sliding time window and within a magnitude range below that of the events we are trying to predict; and (2) pattern S (swarms), the spatial clustering of earthquakes during a time interval when seismicity is above average. Within the test region, distinct peaks in pattern Σ have occurred twice during the 78-year test period: in 1948-49, prior to the great 1950 Assam-Tibet earthquake (M = 8.6), and in 1976. Peaks in pattern S have occurred three times: in 1932-1933, prior to the great 1934 Bihar-Nepal earthquake (M = 8.3), in 1946, and in 1978. The 1934 and 1950 earthquakes were the only events in the region that exceeded M = 8.0 during the test period. On the basis of experience here and elsewhere, the current peaks in both Σ and S suggest the likelihood of an M = 8.0 event within 6 years or an M = 8.5 event within 14 years. Such a prognostication should be viewed more as an experimental long-term enhancement of the probability that a large earthquake will occur than as an actual prediction, in view of the exceedingly large area encompassed and the very long time window. Furthermore, the chances of a randomly occurring event as large as M = 8.0 in the region are perhaps 21% within the next 6 years, and the present state of the art is such that we can place only limited confidence in such forecasts. The primary impact of the study, in our opinion, should be to stimulate the search for medium- and short-term precursors in the region and for similar long-term precursors elsewhere.
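
    Pattern Σ reduces to a simple sliding-window sum. With the Gutenberg-Richter energy relation E ~ 10^(1.5 M + 4.8) (joules), E^(2/3) is proportional to 10^M, so the statistic is just a windowed sum of 10^(M_i). The window length and magnitude band in the Python sketch below are illustrative choices, not the values calibrated in the study.

    # Pattern-Sigma sketch: sum of earthquake energies^(2/3) in a sliding
    # time window, restricted to magnitudes below the target events.
    def sigma_curve(catalog, window_years=1.0, m_lo=5.0, m_hi=7.9):
        # catalog: list of (t_years, magnitude) pairs.
        sub = sorted((t, m) for t, m in catalog if m_lo <= m <= m_hi)
        curve = []
        for t, _ in sub:
            s = sum(10.0 ** m for u, m in sub if t - window_years < u <= t)
            curve.append((t, s))
        return curve  # a distinct peak in Sigma(t) is the candidate precursor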

    Prediction of Large Events on a Dynamical Model of a Fault

    We present results for long-term and intermediate-term prediction algorithms applied to a simple mechanical model of a fault. We use long-term prediction methods, based for example on the distribution of repeat times between large events, to establish a benchmark for predictability in the model. In comparison, intermediate-term prediction techniques, analogous to the pattern recognition algorithms CN and M8 introduced and studied by Keilis-Borok et al., are more effective at predicting impending large events. We consider the implications of several different quality functions Q which can be used to optimize the algorithms with respect to features such as space, time, and magnitude windows, and find that our results are not overly sensitive to variations in these algorithm parameters. We also study the intrinsic uncertainties associated with seismicity catalogs of restricted length. Comment: 33 pages, plain TeX with special macros included.
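
    The role of a quality function Q can be made concrete with a toy alarm rule. In the sketch below, an alarm of fixed duration is declared whenever the count of small events in a trailing window exceeds a threshold, and Q = (fraction of large events predicted) - (fraction of time under alarm); both the alarm rule and this particular Q are illustrative stand-ins for the CN/M8-style algorithms and the several quality functions studied.

    # Tuning a toy alarm-based predictor by grid search over a quality function Q.
    def merged_length(intervals):
        # Total length of the union of intervals (overlaps counted once).
        total, end = 0.0, float("-inf")
        for a, b in sorted(intervals):
            total += max(0.0, b - max(a, end))
            end = max(end, b)
        return total

    def quality(catalog, t_max, window, threshold, alarm_len, m_big=6.0):
        small = sorted(t for t, m in catalog if m < m_big)
        big = [t for t, m in catalog if m >= m_big]
        alarms = [(t, t + alarm_len) for t in small
                  if sum(1 for u in small if t - window < u <= t) > threshold]
        hits = sum(1 for tb in big if any(a <= tb <= b for a, b in alarms))
        n = hits / len(big) if big else 0.0   # fraction of targets predicted
        tau = merged_length(alarms) / t_max   # fraction of time under alarm
        return n - tau

    def tune(catalog, t_max, grid):
        # grid: iterable of (window, threshold, alarm_len) triples; the paper's
        # finding suggests Q should be fairly flat near its optimum.
        return max(grid, key=lambda prm: quality(catalog, t_max, *prm))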