    Earthquake forecasting and its verification

    No proven method is currently available for reliable short-term prediction of earthquakes (minutes to months). However, it is possible to make probabilistic hazard assessments of earthquake risk, based primarily on the association of small earthquakes with future large earthquakes. In this paper we discuss a new approach to earthquake forecasting, based on a pattern informatics (PI) method that quantifies temporal variations in seismicity. The output is a map of areas in a seismogenic region ("hotspots") where earthquakes are forecast to occur in a future 10-year time span. This approach has been successfully applied to California, to Japan, and on a worldwide basis. Because a sharp decision threshold is used, these forecasts are binary: an earthquake is forecast either to occur or not to occur. The standard approach to the evaluation of a binary forecast is the relative (or receiver) operating characteristic (ROC) diagram, which is a more restrictive test and less subject to bias than maximum likelihood tests. To test our PI method, we made two types of retrospective forecasts for California. The first uses the PI method; the second is a relative intensity (RI) forecast based on the hypothesis that future large earthquakes will occur where most smaller earthquakes have occurred in the recent past. While both retrospective forecasts are for the ten-year period 1 January 2000 to 31 December 2009, we performed an interim analysis 5 years into the forecast. The PI method outperforms the RI method under most circumstances. Comment: 10(+1) pages, 5 figures, 2 tables. Submitted to Nonlinear Processes in Geophysics on 5 August 200
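
    To make the ROC evaluation concrete, here is a minimal Python sketch (not the authors' code; the grid size, hotspot scores, and observed events are synthetic placeholders) of how a binary hotspot forecast can be scored by sweeping the decision threshold and tracing hit rate against false-alarm rate:

```python
# Minimal sketch: scoring a binary hotspot forecast with an ROC diagram.
# All inputs below are synthetic stand-ins, not PI output or a real catalogue.
import numpy as np

rng = np.random.default_rng(0)

n_cells = 1000                            # spatial cells in the seismogenic region
score = rng.random(n_cells)               # stand-in for PI hotspot intensities
observed = rng.random(n_cells) < 0.05     # cells where a large quake actually occurred

def roc_curve(score, observed, n_thresholds=101):
    """Sweep the decision threshold and return false-alarm vs. hit rates."""
    hits, falses = [], []
    for t in np.linspace(score.min(), score.max(), n_thresholds):
        forecast = score >= t                          # binary forecast at this threshold
        tp = np.sum(forecast & observed)               # forecast hotspots that produced a quake
        fp = np.sum(forecast & ~observed)              # forecast hotspots that did not
        hits.append(tp / max(observed.sum(), 1))       # hit rate H
        falses.append(fp / max((~observed).sum(), 1))  # false-alarm rate F
    return np.array(falses), np.array(hits)

F, H = roc_curve(score, observed)
order = np.argsort(F)
auc = np.trapz(H[order], F[order])        # 0.5 = no skill, 1.0 = perfect
print(f"area under the ROC curve: {auc:.3f}")
```

    Sweeping the threshold is what distinguishes the ROC test from a single contingency table: every binary forecast derivable from the same score map is evaluated at once, and the area under the curve summarizes skill relative to the no-skill diagonal.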

    Modification of the pattern informatics method for forecasting large earthquake events using complex eigenvectors

    Recent studies have shown that real-valued principal component analysis can be applied to earthquake fault systems for forecasting and prediction. In addition, theoretical analysis indicates that earthquake stresses may obey a wave-like equation, whose solutions have inverse frequencies for a given fault similar to the time intervals between the largest events on that fault. It is therefore desirable to apply complex principal component analysis to develop earthquake forecast algorithms. In this paper we modify the Pattern Informatics method of earthquake forecasting to take advantage of the wave-like properties of seismic stresses, utilizing the Hilbert transform to create complex eigenvectors from measured time series. We show that Pattern Informatics analyses using complex eigenvectors create short-term forecast hot-spot maps that differ from those created using only real-valued data, and we suggest methods for analyzing the differences and calculating the information gain. Comment: 13 pages, 1 figure. Submitted to Tectonophysics on 30 August 200
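
    As a sketch of the core construction (our illustration under stated assumptions, not the paper's code), the analytic signal of each cell's seismicity time series can be formed with the Hilbert transform, and the eigenvectors of the resulting Hermitian covariance matrix are then complex, carrying phase as well as amplitude:

```python
# Sketch: complex eigenvectors from seismicity time series via the Hilbert
# transform. The rates below are synthetic placeholders, not real data.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
n_cells, n_times = 50, 500
# Placeholder seismicity-rate time series, one row per spatial cell.
rates = rng.poisson(2.0, size=(n_cells, n_times)).astype(float)

# Analytic signal: real part is the demeaned data, imaginary part its
# Hilbert transform, so each series acquires an instantaneous phase.
analytic = hilbert(rates - rates.mean(axis=1, keepdims=True), axis=1)

# Hermitian covariance between cells; its eigenvectors are complex, so each
# mode carries amplitude *and* phase information across the region.
cov = analytic @ analytic.conj().T / n_times
eigvals, eigvecs = np.linalg.eigh(cov)

leading = eigvecs[:, -1]                  # dominant complex spatial mode
print("leading eigenvalue:", eigvals[-1])
print("phases of the first five cells:", np.angle(leading[:5]))
```

    The phases of the leading eigenvector are exactly what real-valued principal component analysis discards; hot-spot maps built from the complex modes can therefore differ from those built from the raw rates alone.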

    Global Seismic Nowcasting With Shannon Information Entropy

    Seismic nowcasting uses counts of small earthquakes as proxy data to estimate the current dynamical state of an earthquake fault system. The result is an earthquake potential score that characterizes the current state of progress of a defined geographic region through its nominal earthquake "cycle." The count of small earthquakes since the last large earthquake is the "natural time" that has elapsed since that event (Varotsos et al., 2006, https://doi.org/10.1103/PhysRevE.74.021123). In addition to natural time, earthquake sequences can also be analyzed using Shannon information entropy ("information"), an idea pioneered by Shannon (1948, https://doi.org/10.1002/j.1538-7305.1948.tb01338.x). As a first step toward adding seismic information entropy to the nowcasting method, we incorporate magnitude information into the natural time counts by using event self-information. We find in this first application of seismic information entropy that the earthquake potential score values are similar to those obtained using natural time alone. However, other characteristics of earthquake sequences, including the interevent time intervals, or the departure of higher magnitude events from the magnitude-frequency scaling line, may contain additional information.
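
    As an illustration of the two ingredients (a hedged sketch with a synthetic catalogue and an assumed Gutenberg-Richter b-value, not the authors' implementation), a potential score can be computed from natural-time counts, and each event's self-information follows from the magnitude-frequency relation:

```python
# Sketch: nowcast-style earthquake potential score from "natural time"
# counts, plus event self-information -log2 P(M >= m) under an assumed
# Gutenberg-Richter law. Catalogue and thresholds are illustrative only.
import numpy as np

def natural_time_counts(mags, m_small, m_large):
    """Counts of small events between successive large events, plus the
    open count since the most recent large event (the "natural time")."""
    closed, current = [], 0
    for m in mags:
        if m >= m_large:
            closed.append(current)
            current = 0
        elif m >= m_small:
            current += 1
    return np.array(closed), current

def potential_score(closed_counts, current_count):
    """Earthquake potential score: fraction of past cycles that finished
    with no more small events than the current cycle has accumulated."""
    return float(np.mean(closed_counts <= current_count))

def self_information(m, m_small, b=1.0):
    """-log2 P(M >= m) under Gutenberg-Richter, relative to threshold m_small."""
    return b * (m - m_small) * np.log2(10.0)

# Illustrative GR-like synthetic catalogue (b = 1), *not* real data.
rng = np.random.default_rng(2)
mags = 3.0 + rng.exponential(scale=1.0 / np.log(10.0), size=5000)
closed, current = natural_time_counts(mags, m_small=3.0, m_large=6.0)
print("EPS from counts alone:", potential_score(closed, current))
```

    Replacing each unit increment of the count with the event's self-information gives the magnitude-weighted variant of natural time described above.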

    The Theory of Earthquakes in Signalling Severe Political Events

    This research seeks to conceptualise the use of an earthquake forecasting theory to signal severe political risks such as wars, coups d'état, demonstrations and revolutions. The justification for linking the theoretical framework of an earthquake with severe political risks is twofold. Firstly, an earthquake is generally random in nature; however, there are some patterns which can help in predicting the occurrence of future earthquakes. Secondly, an earthquake is usually region-specific, i.e. there are geographical regions which are more prone to earthquakes than others, and there are regions where the odds of an earthquake occurring are minimal; however, under certain circumstances there is always some small possibility of such an event occurring. Severe political events are similar in nature, as they are also location-specific and random in their occurrence. In order to establish the link between these two phenomena, a clearer definition of each variable needs to be established. Thus this theoretical research will first define the nature of severe political risks in a globalised world, followed by a definition of an earthquake and its nature. Once clear definitions of these two variables have been established, the discussion will move to various models for signalling severe political risks and earthquakes. It will conclude by suggesting a new approach to signalling the possibility of an occurrence of severe political events, based on various assessment models and methods employed in forecasting the occurrence of an earthquake.

    The occupation of a box as a toy model for the seismic cycle of a fault

    We illustrate how a simple statistical model can describe the quasiperiodic occurrence of large earthquakes. The model idealizes the loading of elastic energy in a seismic fault by the stochastic filling of a box. The emptying of the box once it is full is analogous to the generation of a large earthquake, in which the fault relaxes after having been loaded to its failure threshold. The duration of the filling process is analogous to the seismic cycle, the time interval between two successive large earthquakes on a particular fault. The simplicity of the model enables us to derive the statistical distribution of its seismic cycle. We use this distribution to fit the series of earthquakes with magnitude around 6 that occurred at the Parkfield segment of the San Andreas fault in California. Using this fit, we estimate the probability of the next large earthquake at Parkfield and devise a simple forecasting strategy. Comment: Final version of the published paper, with an erratum and an unpublished appendix with some proof
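
    A hedged sketch of this kind of toy model (illustrative parameters, not the paper's Parkfield fit): if each loading step occupies one randomly chosen site of the box, filling it is a coupon-collector process, and Monte Carlo cycle lengths yield a conditional probability for the next event:

```python
# Toy-model sketch: stochastic filling of a box as a proxy for the seismic
# cycle. Box size, sample size, and elapsed times are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)

def one_cycle(n_sites):
    """Steps needed to fill the box when each step occupies one randomly
    chosen site (steps landing on occupied sites are wasted)."""
    occupied = np.zeros(n_sites, dtype=bool)
    steps = 0
    while not occupied.all():
        occupied[rng.integers(n_sites)] = True
        steps += 1
    return steps

# Monte Carlo estimate of the "seismic cycle" distribution for a 20-site box.
cycles = np.array([one_cycle(20) for _ in range(10000)])

def prob_event_within(cycles, elapsed, horizon):
    """P(cycle ends within `horizon` more steps | it has lasted `elapsed`)."""
    alive = cycles > elapsed
    if not alive.any():
        return 1.0
    return float(np.mean(cycles[alive] <= elapsed + horizon))

print("mean cycle length:", cycles.mean())
print("P(event within 10 steps | 60 elapsed):", prob_event_within(cycles, 60, 10))
```

    Conditioning on the time already elapsed since the last emptying is what turns the cycle distribution into a forecasting strategy.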

    A way to synchronize models with seismic faults for earthquake forecasting: Insights from a simple stochastic model

    Numerical models are starting to be used to determine the future behaviour of seismic faults and fault networks. Their ultimate goal is to forecast future large earthquakes. In order to use them for this task, it is necessary to synchronize each model with the current state of the actual fault or fault network it simulates (just as, for example, meteorologists synchronize their models with the atmosphere by incorporating current atmospheric data into them). However, lithospheric dynamics is largely unobservable: important parameters cannot be measured in nature, or can be measured only rarely. Earthquakes, though, provide indirect but measurable clues to the stress and strain state of the lithosphere, which should be helpful for synchronizing the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault models with one another and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture areas of the synthetic earthquakes of this model on other models, the latter become partially synchronized with the first one. We use these partially synchronized models to successfully forecast most of the largest earthquakes generated by the first model. This forecasting strategy outperforms others that take into account only the earthquake series. Our results suggest that a good way to synchronize more detailed models with real faults is probably to force them to reproduce the sequence of previous earthquake ruptures on the faults. This hypothesis could be tested in the future with more detailed models and actual seismic data. Comment: Revised version. Recommended for publication in Tectonophysics
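
    The following is a deliberately crude sketch of the synchronization idea (our own toy construction, not the model used in the paper): a "real" stochastic fault and a replica evolve under independent random loading, but the replica is forced to relax the rupture area observed on the real fault, which couples their otherwise hidden stress fields:

```python
# Sketch: partial synchronization of fault models by imposing rupture areas.
# The 1-D fault, loading law, and rupture rule are our own toy assumptions.
import numpy as np

N, THRESHOLD = 100, 1.0

def step(stress, rng):
    """Load one random cell; if it fails, the rupture spreads over contiguous
    cells already close to failure. Returns the ruptured cells (the
    observable 'rupture area')."""
    i = rng.integers(N)
    stress[i] += rng.uniform(0.0, 0.2)
    if stress[i] < THRESHOLD:
        return []
    area = [i]
    for j in range(i + 1, N):             # grow the rupture to the right
        if stress[j] <= 0.8 * THRESHOLD:
            break
        area.append(j)
    for j in range(i - 1, -1, -1):        # ...and to the left
        if stress[j] <= 0.8 * THRESHOLD:
            break
        area.append(j)
    for j in area:
        stress[j] = 0.0                   # relax every ruptured cell
    return area

real, replica = np.zeros(N), np.zeros(N)
rng_real, rng_rep = np.random.default_rng(4), np.random.default_rng(5)

for _ in range(100000):
    area = step(real, rng_real)
    step(replica, rng_rep)                # the replica evolves on its own...
    for j in area:
        replica[j] = 0.0                  # ...but must relax the observed rupture area

print(f"mean stress mismatch after forcing: {np.abs(real - replica).mean():.3f}")
```

    The residual mismatch measures how far the imposed rupture areas alone take the synchronization; in the paper's setting, the partially synchronized replicas are then used as an ensemble to forecast the first model's largest events.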