15 research outputs found

    Automated Eruption Forecasting at Frequently Active Volcanoes Using Bayesian Networks Learned From Monitoring Data and Expert Elicitation: Application to Mt Ruapehu, Aotearoa, New Zealand

    Volcano observatory best practice recommends using probabilistic methods to forecast eruptions, to account for the complex natural processes leading up to an eruption and to communicate the inherent uncertainties in appropriate ways. Bayesian networks (BNs) are an artificial intelligence technique for modelling complex systems with uncertainties. A BN consists of a graphical representation of the system being modelled and robust statistics describing the joint probability distribution of all variables. BNs have been applied successfully in many domains, including risk assessment to support decision-making and the modelling of multiple data streams for eruption forecasting and volcanic hazard and risk assessment. However, they are not yet routinely or widely employed in volcano observatories. BNs provide a flexible framework to incorporate conceptual understanding of a volcano, to learn from data when available, and to incorporate expert elicitation in the absence of data. Here we describe a method to build a BN model to support decision-making, built on the risk-management process flow of the International Organization for Standardization. We have applied the method to develop a BN model to forecast the probability of eruption for Mt Ruapehu, Aotearoa New Zealand, in collaboration with the New Zealand volcano monitoring group (VMG). Since 2014, the VMG has regularly estimated the probability of volcanic eruptions at Mt Ruapehu that impact beyond the crater rim. The BN model structure was built with expert elicitation based on the conceptual understanding of Mt Ruapehu, with a focus on making use of the long eruption catalogue and the long-term monitoring data. The model parameterisation was partly done by data learning, complemented by expert elicitation. The retrospective BN model forecasts agree well with the VMG elicitations. The BN model is now implemented as a software tool that automatically calculates daily forecast updates.
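    As a sketch of the kind of inference such a BN performs, the snippet below propagates a single monitoring observation through a toy network with one latent "unrest" node. All variables, states and probabilities here are illustrative assumptions, not values from the Ruapehu model.

```python
# Toy two-layer Bayesian network: latent "unrest" node with two observable
# children (elevated tremor, eruption). Probabilities are made up for
# illustration, not elicited or learned values.

# Prior on the latent state (magmatic unrest present or not).
p_unrest = {True: 0.1, False: 0.9}

# Conditional probability of elevated volcanic tremor given unrest.
p_tremor_given_unrest = {True: 0.8, False: 0.2}

# Conditional probability of eruption given unrest.
p_eruption_given_unrest = {True: 0.3, False: 0.01}

def p_eruption_given_tremor(tremor_elevated: bool) -> float:
    """P(eruption | tremor observation), summing over the latent unrest node."""
    numerator = 0.0
    denominator = 0.0
    for unrest, prior in p_unrest.items():
        p_t = p_tremor_given_unrest[unrest]
        if not tremor_elevated:
            p_t = 1.0 - p_t
        denominator += prior * p_t
        numerator += prior * p_t * p_eruption_given_unrest[unrest]
    return numerator / denominator

print(round(p_eruption_given_tremor(True), 4))  # ≈ 0.0992
```

    Observing elevated tremor raises the eruption probability relative to the unconditional value, which is the basic mechanism by which multiple monitoring data streams update a forecast.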

    Bayesian Network Modeling and Expert Elicitation for Probabilistic Eruption Forecasting: Pilot Study for Whakaari/White Island, New Zealand

    Bayesian Networks (BNs) are probabilistic graphical models that provide a robust and flexible framework for understanding complex systems. Limited case studies have demonstrated the potential of BNs in modeling multiple data streams for eruption forecasting and volcanic hazard assessment. Nevertheless, BNs are not widely employed in volcano observatories. Motivated by their need to determine eruption-related fieldwork risks, we have worked closely with the New Zealand volcano monitoring team to appraise BNs for eruption forecasting, with the purpose, at this stage, of assessing the utility of the concept rather than developing a full operational framework. We adapted a previously published BN for a pilot study to forecast volcanic eruptions on Whakaari/White Island. Developing the model structure provided a useful framework for the members of the volcano monitoring team to share their knowledge and interpretation of the volcanic system. We aimed to capture the conceptual understanding of the volcanic processes and to represent all observables that are regularly monitored. The pilot model has a total of 30 variables, four of them describing the volcanic processes that can lead to three different types of eruption: phreatic, magmatic explosive, and magmatic effusive. The remaining 23 variables are grouped into observations related to seismicity, fluid geochemistry and surface manifestations. To estimate the model parameters, we held a workshop with 11 experts, including two from outside the monitoring team. To reduce the number of conditional probabilities the experts needed to estimate, each variable is described by only two states. However, experts were concerned about this limitation, in particular for continuous data, and were therefore reluctant to define thresholds to distinguish between states. We conclude that volcano monitoring requires BN modeling techniques that can accommodate continuous variables. More work is required to link unobservable (latent) processes with observables and with eruptive patterns, and to model dynamic processes. A provisional application of the pilot model revealed several key insights. Refining the BN modeling techniques will help advance understanding of volcanoes and improve capabilities for forecasting volcanic eruptions. We consider that BNs will become essential for handling ever-burgeoning observations and for assessing the evidential meaning of data for operational eruption forecasting.

    Highlights from the first ten years of the New Zealand earthquake forecast testing center

    We present highlights from the first decade of operation of the New Zealand Earthquake Forecast Testing Center of the Collaboratory for the Study of Earthquake Predictability (CSEP). Most results are based on reprocessing using the best available catalog, because the testing center did not consistently capture the complete real-time catalog. Tests of models with daily updating show that aftershock models incorporating Omori-Utsu decay can outperform long-term smoothed seismicity models, with probability gains up to 1000 during major aftershock sequences. Tests of models with 3-month updating show that several versions of the "Every Earthquake a Precursor According to Scale" (EEPAS) model, incorporating the precursory scale increase phenomenon and without Omori-Utsu decay, and the double-branching model, with both Omori-Utsu and exponential decay in time, outperformed a regularly updated smoothed seismicity model. In tests of 5-yr models over 10 yrs without updating, a smoothed seismicity model outperformed the earthquake source model of the New Zealand National Seismic Hazard Model. The performance of 3-month and 5-yr models was strongly affected by the Canterbury earthquake sequence, which occurred in a region of previously low seismicity. Smoothed seismicity models were shown to perform better with more frequent updating. CSEP models were a useful resource for the development of hybrid time-varying models for practical forecasting after major earthquakes in the Canterbury and Kaikoura regions. © 2018 Seismological Society of America. All rights reserved.
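    The Omori-Utsu decay underlying the daily-updated aftershock models can be sketched as follows; the parameter values (K, c, p) are illustrative placeholders, not those of the tested New Zealand models.

```python
# Omori-Utsu aftershock rate n(t) = K / (t + c)^p, with an expected-count
# helper from the closed-form integral (valid for p != 1).
# K, c and p below are illustrative, not fitted values.
def omori_utsu_rate(t_days, K=100.0, c=0.05, p=1.1):
    """Aftershock rate (events/day) at time t_days after the mainshock."""
    return K / (t_days + c) ** p

def expected_count(t1, t2, K=100.0, c=0.05, p=1.1):
    """Expected number of aftershocks between t1 and t2 days (p != 1)."""
    antiderivative = lambda t: K * (t + c) ** (1.0 - p) / (1.0 - p)
    return antiderivative(t2) - antiderivative(t1)

# A probability gain of the kind reported above compares a time-varying
# forecast rate against a long-term background rate over the same window.
def probability_gain(forecast_rate, background_rate):
    return forecast_rate / background_rate
```

    Early in a major sequence the forecast rate can exceed the background rate by orders of magnitude, which is where gains of up to 1000 arise.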

    The Forecasting Skill of Physics‐Based Seismicity Models during the 2010–2012 Canterbury, New Zealand, Earthquake Sequence

    The static Coulomb stress hypothesis is a widely known physical mechanism for earthquake triggering and thus a prime candidate for physics-based operational earthquake forecasting (OEF). However, the forecast skill of Coulomb-based seismicity models remains controversial, especially compared with empirical statistical models. A previous evaluation by the Collaboratory for the Study of Earthquake Predictability (CSEP) concluded that a suite of Coulomb-based seismicity models were less informative than empirical models during the aftershock sequence of the 1992 Mw 7.3 Landers, California, earthquake. Recently, a new generation of Coulomb-based and Coulomb/statistical hybrid models were developed that account better for uncertainties and secondary stress sources. Here, we report on the performance of this new suite of models compared with empirical epidemic-type aftershock sequence (ETAS) models during the 2010–2012 Canterbury, New Zealand, earthquake sequence. Comprising the 2010 M 7.1 Darfield earthquake and three subsequent M ≥ 5.9 shocks (including the February 2011 Christchurch earthquake), this sequence provides a wealth of data (394 M ≥ 3.95 shocks). We assessed models over multiple forecast horizons (1 day, 1 month, and 1 yr, updated after M ≥ 5.9 shocks). The results demonstrate substantial improvements in the Coulomb-based models. Purely physics-based models have a performance comparable to the ETAS model, and the two Coulomb/statistical hybrids perform better than or similarly to the corresponding statistical model. On the other hand, an ETAS model with anisotropic (fault-based) aftershock zones is just as informative. These results provide encouraging evidence for the predictive power of Coulomb-based models. To assist with model development, we identify discrepancies between forecasts and observations. © 2018 Seismological Society of America. All rights reserved.
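    For reference, the empirical ETAS benchmark used in such comparisons has a conditional intensity of roughly the following temporal form; the parameter values here are illustrative assumptions, not the fitted Canterbury values.

```python
import math

# Temporal ETAS conditional intensity:
#   lambda(t) = mu + sum over past events i of
#               K * exp(alpha * (m_i - m0)) / (t - t_i + c)^p
# mu is the background rate; all parameters are illustrative placeholders.
def etas_intensity(t, catalog, mu=0.2, K=0.02, alpha=1.5, c=0.01, p=1.1, m0=3.95):
    """Rate (events/day) at time t given past (time, magnitude) pairs."""
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:  # only earlier events contribute triggering
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate
```

    Each past shock contributes an Omori-type decaying term scaled exponentially by its magnitude, which is why a single M 7.1 mainshock dominates the forecast rate for days afterwards.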

    A 20-Year Journey of Forecasting with the “Every Earthquake a Precursor According to Scale” Model

    Nearly 20 years ago, the observation that major earthquakes are generally preceded by an increase in the seismicity rate on a timescale from months to decades was embedded in the "Every Earthquake a Precursor According to Scale" (EEPAS) model. EEPAS has since been successfully applied to regional real-world and synthetic earthquake catalogues to forecast future earthquake occurrence rates with time horizons up to a few decades. When combined with aftershock models, its forecasting performance is improved for short time horizons. As a result, EEPAS has been included as the medium-term component in public earthquake forecasts in New Zealand. EEPAS has been modified to advance its forecasting performance despite data limitations. One modification is to compensate for missing precursory earthquakes. Precursory earthquakes can be missing because of the time lag between the end of a catalogue and the time at which a forecast applies, or because of the limited lead time from the start of the catalogue to a target earthquake. An observed space-time trade-off in precursory seismicity, which affects the EEPAS scaling parameters for area and time, can also be used to improve forecasting performance. Systematic analysis of EEPAS performance on synthetic catalogues suggests that regional variations in EEPAS parameters can be explained by regional variations in the long-term earthquake rate. Integration of all these developments is needed to meet the challenge of producing a global EEPAS model.
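    The predictive scaling at the heart of EEPAS ties the time, area and magnitude of a coming major earthquake linearly to the magnitude of its precursory seismicity. A sketch of the form of these relations follows, with illustrative coefficients that are not the fitted New Zealand values.

```python
# EEPAS predictive scaling relations: precursor time T_P (days), precursory
# area A_P (km^2) and mainshock magnitude M_m all scale with the precursor
# magnitude M_p. Coefficients a_*, b_* are illustrative assumptions.
def precursor_time_days(m_p, a_T=1.0, b_T=0.5):
    """log10(T_P) = a_T + b_T * M_p: larger precursors, longer time scales."""
    return 10.0 ** (a_T + b_T * m_p)

def precursor_area_km2(m_p, a_A=0.5, b_A=1.0):
    """log10(A_P) = a_A + b_A * M_p."""
    return 10.0 ** (a_A + b_A * m_p)

def mainshock_magnitude(m_p, a_M=1.0, b_M=1.0):
    """M_m = a_M + b_M * M_p."""
    return a_M + b_M * m_p
```

    The space-time trade-off mentioned above corresponds to compensating adjustments between the time and area coefficients: a tighter precursory area can be traded against a longer precursor time.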

    Building self-consistent, short-term earthquake probability (STEP) models: improved strategies and calibration procedures

    We present here two self-consistent implementations of a short-term earthquake probability (STEP) model that produces daily seismicity forecasts for the area of the Italian national seismic network. Both implementations combine a time-varying and a time-invariant contribution, for which we assume that the instrumental Italian earthquake catalog provides the best information. For the time-invariant contribution, the catalog is declustered using the clustering technique of the STEP model; the smoothed seismicity model is generated from the declustered catalog. The time-varying contribution is what distinguishes the two implementations: 1) for one implementation (STEP-LG), the original model parameterization and estimation is used; 2) for the other (STEP-NG), the mean abundance method is used to estimate aftershock productivity. In the STEP-NG implementation, earthquakes with magnitudes up to ML 6.2 are expected to be less productive than in the STEP-LG implementation, whereas larger earthquakes are expected to be more productive. We have retrospectively tested the performance of these two implementations and applied likelihood tests to evaluate their consistency with observed earthquakes. Both implementations were consistent with the observed earthquake data in space; STEP-NG performed better than STEP-LG in terms of forecast rates. More generally, we found that testing earthquake forecasts issued at regular intervals does not test the full power of clustering models, and future experiments should allow for more frequent forecasts starting at the times of triggering events.
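    The likelihood tests referred to above score a gridded rate forecast against observed counts under a per-cell Poisson assumption; a minimal sketch (the grid, rates and counts in any usage are illustrative):

```python
import math

# Joint Poisson log-likelihood of observed per-cell counts given a gridded
# rate forecast -- the quantity compared in CSEP-style likelihood (L-) tests.
def poisson_log_likelihood(forecast_rates, observed_counts):
    ll = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        # log of the Poisson pmf: -lambda + n*log(lambda) - log(n!)
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return ll
```

    A forecast whose rates track the observed counts scores a higher joint log-likelihood than one that misplaces the activity, which is the basis for comparing STEP-NG against STEP-LG.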

    The Effect of Catalogue Lead Time on Medium-Term Earthquake Forecasting with Application to New Zealand Data

    ‘Every Earthquake a Precursor According to Scale’ (EEPAS) is a catalogue-based model to forecast earthquakes within the coming months, years and decades, depending on magnitude. EEPAS has been shown to perform well in seismically active regions like New Zealand (NZ). It is based on the observation that seismicity increases prior to major earthquakes. This increase follows predictive scaling relations. For larger target earthquakes, the precursor time is longer and precursory seismicity may have occurred prior to the start of the catalogue. Here, we derive a formula for the completeness of precursory earthquake contributions to a target earthquake as a function of its magnitude and lead time, where the lead time is the length of time from the start of the catalogue to the target earthquake's time of occurrence. We develop two new versions of EEPAS and apply them to NZ data. The Fixed Lead time EEPAS (FLEEPAS) model is used to examine the effect of the lead time on forecasting, and the Fixed Lead time Compensated EEPAS (FLCEEPAS) model compensates for incompleteness of precursory earthquake contributions. FLEEPAS reveals a space-time trade-off of precursory seismicity that requires further investigation. Both models improve forecasting performance at short lead times, although the improvement is achieved in different ways.

    Setting up an earthquake forecast experiment in Italy

    We describe here the setting up of the first earthquake forecasting experiment for Italy within the Collaboratory for the Study of Earthquake Predictability (CSEP). The CSEP conducts rigorous, truly prospective forecast experiments for different tectonic environments in several forecast-testing centers around the globe. These forecasts are issued for future periods, and are tested only against future observations, to avoid any possible bias. As such, the experiments need to be completely defined. This includes exact definitions of the testing area, of the learning data for the forecast models, and of the observation data against which the forecasts will be tested to evaluate their performance. We present the rules that were taken from the Regional Earthquake Likelihood Models experiments and extended and modified for the Italian experiment. We also present characterizations of the learning and observational catalogs that describe the completeness of these catalogs and reveal inhomogeneities in the magnitudes between them. A particular focus is the stability of the earthquake recordings of the observational network. These catalog investigations provide guidance to CSEP modelers for the development of earthquake forecasts for submission to the forecast experiments in Italy.
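    Catalog checks of the kind described (completeness, magnitude homogeneity) commonly rest on the Gutenberg-Richter relation, for example via the maximum-likelihood b-value estimate (Aki's formula) above an assumed completeness magnitude. The sample catalog below is purely illustrative.

```python
import math

# Maximum-likelihood b-value (Aki): b = log10(e) / (mean(M) - (Mc - dm/2)),
# using only magnitudes at or above the completeness magnitude Mc;
# dm is the magnitude binning width. A b-value that drifts across subperiods
# or subregions flags instability in the catalog.
def b_value(magnitudes, m_c, dm=0.1):
    above = [m for m in magnitudes if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2.0))
```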