    Highlights from the first ten years of the New Zealand earthquake forecast testing center

    We present highlights from the first decade of operation of the New Zealand Earthquake Forecast Testing Center of the Collaboratory for the Study of Earthquake Predictability (CSEP). Most results are based on reprocessing using the best available catalog, because the testing center did not consistently capture the complete real-time catalog. Tests of models with daily updating show that aftershock models incorporating Omori-Utsu decay can outperform long-term smoothed seismicity models, with probability gains of up to 1000 during major aftershock sequences. Tests of models with 3-month updating show that several versions of the every earthquake a precursor according to scale (EEPAS) model, which incorporate the precursory scale increase phenomenon without Omori-Utsu decay, and the double-branching model, with both Omori-Utsu and exponential decay in time, outperformed a regularly updated smoothed seismicity model. In tests of 5-yr models over 10 yrs without updating, a smoothed seismicity model outperformed the earthquake source model of the New Zealand National Seismic Hazard Model. The performance of the 3-month and 5-yr models was strongly affected by the Canterbury earthquake sequence, which occurred in a region of previously low seismicity. Smoothed seismicity models were shown to perform better with more frequent updating. CSEP models were a useful resource for the development of hybrid time-varying models for practical forecasting after major earthquakes in the Canterbury and Kaikoura regions.
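
    The daily-updating result above rests on the modified Omori (Omori-Utsu) law, under which the aftershock rate decays as a power of time since the mainshock. As a minimal sketch in Python (with purely illustrative parameter values, not those of any tested model), the probability gain of such a model over a stationary long-term model can be viewed as the ratio of the two forecast rates at the times of the observed events:

        import numpy as np

        def omori_utsu_rate(t, K=100.0, c=0.05, p=1.1):
            # Modified Omori (Omori-Utsu) aftershock rate n(t) = K / (t + c)^p,
            # with t in days after the mainshock; K, c, p are illustrative values.
            return K / (t + c) ** p

        # Hypothetical event times (days after the mainshock) and a hypothetical
        # long-term background rate in events per day.
        t_events = np.array([0.1, 0.5, 2.0, 10.0, 30.0])
        background_rate = 0.02

        # The gain is largest immediately after the mainshock and decays as ~t^-p.
        gain = omori_utsu_rate(t_events) / background_rate
        print(gain)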

    The Forecasting Skill of Physics‐Based Seismicity Models during the 2010–2012 Canterbury, New Zealand, Earthquake Sequence

    The static Coulomb stress hypothesis is a widely known physical mechanism for earthquake triggering and thus a prime candidate for physics-based operational earthquake forecasting (OEF). However, the forecast skill of Coulomb-based seismicity models remains controversial, especially compared with empirical statistical models. A previous evaluation by the Collaboratory for the Study of Earthquake Predictability (CSEP) concluded that a suite of Coulomb-based seismicity models were less informative than empirical models during the aftershock sequence of the 1992 Mw 7.3 Landers, California, earthquake. Recently, a new generation of Coulomb-based and Coulomb/statistical hybrid models were developed that better account for uncertainties and secondary stress sources. Here, we report on the performance of this new suite of models compared with empirical epidemic-type aftershock sequence (ETAS) models during the 2010-2012 Canterbury, New Zealand, earthquake sequence. Comprising the 2010 M 7.1 Darfield earthquake and three subsequent M ≥ 5.9 shocks (including the February 2011 Christchurch earthquake), this sequence provides a wealth of data (394 M ≥ 3.95 shocks). We assessed models over multiple forecast horizons (1 day, 1 month, and 1 yr, updated after M ≥ 5.9 shocks). The results demonstrate substantial improvements in the Coulomb-based models. Purely physics-based models now perform comparably to the ETAS model, and the two Coulomb/statistical hybrids perform similarly to or better than the corresponding statistical model. On the other hand, an ETAS model with anisotropic (fault-based) aftershock zones is just as informative. These results provide encouraging evidence for the predictive power of Coulomb-based models. To assist with model development, we identify discrepancies between forecasts and observations.
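
    For reference, on a given receiver fault the static Coulomb stress hypothesis reduces to a simple combination of shear and normal stress changes. A minimal sketch with an assumed effective friction coefficient (the models evaluated in this study resolve full stress tensors and treat uncertainties far more carefully):

        def coulomb_stress_change(delta_shear, delta_normal, mu_eff=0.4):
            # Static Coulomb failure stress change on a receiver fault:
            # dCFS = d(shear stress in the slip direction) + mu' * d(normal stress),
            # with unclamping (reduced compression) taken as positive.
            # mu_eff = 0.4 is a commonly assumed effective friction value.
            return delta_shear + mu_eff * delta_normal

        # A positive dCFS (hypothetical values, in MPa) is taken to promote
        # failure on the receiver fault; a negative value inhibits it.
        print(coulomb_stress_change(delta_shear=0.1, delta_normal=-0.05))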

    The Collaboratory for the Study of Earthquake Predictability: Achievements and Priorities

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global cyberinfrastructure for prospective evaluations of earthquake forecast models and prediction algorithms. CSEP’s goals are to improve our understanding of earthquake predictability, advance forecasting model development, test key scientific hypotheses and their predictive power, and improve seismic hazard assessments. Since its inception in California in 2007, the global CSEP collaboration has been conducting forecast experiments in a variety of tectonic settings and at a global scale and now operates four testing centers on four continents to automatically and objectively evaluate models against prospective data. These experiments have provided a multitude of results that are informing operational earthquake forecasting systems and seismic hazard models, and they have provided new and, sometimes, surprising insights into the predictability of earthquakes and spurred model improvements. CSEP has also conducted pilot studies to evaluate ground-motion and hazard models. Here, we report on selected achievements from a decade of CSEP, and we present our priorities for future activities.

    How Useful Are Strain Rates for Estimating the Long-Term Spatial Distribution of Earthquakes?

    Strain rates have been included in multiplicative hybrid modelling of the long-term spatial distribution of earthquakes in New Zealand (NZ) since 2017. Previous modelling has shown a strain rate model to be the most informative input to explain earthquake locations over a fitting period from 1987 to 2006 and a testing period from 2012 to 2015. In the present study, three different shear strain rate models have been included separately as covariates in NZ multiplicative hybrid models, along with other covariates based on known fault locations, their associated slip rates, and proximity to the plate interface. Although the strain rate models differ in their details, there are similarities in their contributions to the performance of hybrid models in terms of information gain per earthquake (IGPE). The inclusion of each strain rate model improves the performance of hybrid models during the previously adopted fitting and testing periods. However, the hybrid models that include strain rates perform poorly in a reverse testing period from 1951 to 1986. Molchan error diagrams show that the correlations of the strain rate models with earthquake locations are lower over the reverse testing period than from 1987 onwards. Smoothed scatter plots of the strain rate covariates associated with target earthquakes versus time confirm the relatively low correlations before 1987. Moreover, these analyses show that other covariates of the multiplicative models, such as proximity to the plate interface and proximity to mapped faults, were better correlated with earthquake locations prior to 1987. These results suggest that strain rate models based on only a few decades of available geodetic data from a limited network of GNSS stations may not be good indicators of where earthquakes occur over a long time frame.
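
    To make the construction concrete: a multiplicative hybrid scales a baseline rate in each spatial cell by each covariate raised to a fitted power, renormalizes, and is then scored by the information gain per earthquake (IGPE). A minimal sketch in Python (the array layout and renormalization convention are assumptions for illustration, not the authors' exact formulation):

        import numpy as np

        def multiplicative_hybrid(baseline, covariates, weights):
            # Multiplicative hybrid: per-cell baseline rates scaled by each
            # covariate raised to a fitted power, then renormalized so the
            # total forecast rate is conserved.
            rate = baseline.copy()
            for cov, w in zip(covariates, weights):
                rate *= cov ** w
            return rate * baseline.sum() / rate.sum()

        def igpe(rate_a, rate_b, event_cells):
            # Information gain per earthquake of model A over model B: the mean
            # log ratio of the two rate densities at the cells containing the
            # target earthquakes (equal total rates assumed for both models).
            return np.mean(np.log(rate_a[event_cells]) - np.log(rate_b[event_cells]))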

    1.1 Formulation Of The Problem In

    We present statistical and interval techniques for evaluating the uncertainties associated with geophysical tomographic inversion problems, including estimation of data errors, model errors, and total solution uncertainties. These techniques are applied to the inversion of traveltime data collected in a crosswell seismic experiment. The inversion method uses the conjugate gradient technique, incorporating expert knowledge of data and model uncertainty to stabilize the solution. The technique produced smaller uncertainties than previous tomographic inversions of the same data.
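
    A minimal sketch of the damped, conjugate-gradient-type least-squares step at the core of such an inversion, using SciPy's LSQR on a synthetic problem (the sensitivity matrix is random stand-in data, and the damping weight standing in for expert knowledge of data and model uncertainty is an illustrative choice):

        import numpy as np
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        G = rng.normal(size=(200, 50))               # hypothetical ray-path sensitivity matrix
        m_true = rng.normal(size=50)                 # synthetic slowness perturbations
        d = G @ m_true + 0.1 * rng.normal(size=200)  # traveltimes with data noise

        # LSQR is a conjugate-gradient-type solver for damped least squares:
        # it minimizes ||G m - d||^2 + damp^2 ||m||^2, stabilizing the solution.
        m_est = lsqr(G, d, damp=1.0)[0]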

    Regional Earthquake Likelihood Models I: First-Order Results

    The Regional Earthquake Likelihood Models (RELM) working group designed a 5-year experiment to forecast the number, spatial distribution, and magnitude distribution of subsequent target earthquakes, defined to be those with magnitude ≥ 4.95 (M 4.95+) in a well-defined California testing region. Included in the experiment specification were the description of the data source, the methods for data processing, and the proposed evaluation metrics. The RELM experiment began on 1 January 2006 and involved 17 time-invariant forecasts constructed by seismicity modelers; by the end of the experiment on 1 January 2011, 31 target earthquakes had occurred. We analyze the experiment outcome by applying the proposed consistency tests based on likelihood measures and additional comparison tests based on a measure of information gain. We find that the smoothed seismicity forecast of Helmstetter et al. (2007), based on M 2+ earthquakes since 1981, is the best forecast, regardless of whether aftershocks are included in the analysis. The RELM experiment has helped to clarify ideas about testing that can be applied to more wide-ranging earthquake forecasting experiments conducted by the Collaboratory for the Study of Earthquake Predictability (CSEP).
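
    As an illustration of the likelihood-based consistency testing described here, the RELM-style number (N) test checks whether the observed count of target earthquakes is plausible under a Poisson forecast. A minimal sketch (the forecast value is hypothetical, not one of the 17 RELM forecasts):

        from scipy.stats import poisson

        def n_test(n_forecast, n_observed):
            # Two-sided number test under a Poisson forecast:
            # delta1 = P(N >= n_observed), delta2 = P(N <= n_observed).
            # A very small value on either tail flags an inconsistent forecast.
            delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)
            delta2 = poisson.cdf(n_observed, n_forecast)
            return delta1, delta2

        print(n_test(n_forecast=25.0, n_observed=31))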