
    On the Testing of Seismicity Models

    Recently a likelihood-based methodology has been developed by the Collaboratory for the Study of Earthquake Predictability (CSEP) with a view to testing and ranking seismicity models. We analyze this approach from the standpoint of possible applications to hazard analysis. We arrive at the conclusion that model testing can be made more efficient by focusing on some integral characteristics of the seismicity distribution. This is achieved either within the likelihood framework, with an economical and physically reasonable coarsening of the phase space, or by choosing a suitable measure of closeness between the empirical and model seismicity rates in this space. Comment: To appear in Acta Geophysica.
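The coarse-grained likelihood scoring described above can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it assumes a Poisson rate forecast over a handful of hypothetical space-magnitude bins and computes the joint log-likelihood of the observed counts.

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed bin counts under forecast rates."""
    total = 0.0
    for rate, n in zip(forecast_rates, observed_counts):
        # log of the Poisson pmf: n*log(rate) - rate - log(n!)
        total += n * math.log(rate) - rate - math.lgamma(n + 1)
    return total

# Hypothetical coarsened phase space: a few space-magnitude bins
rates = [0.5, 1.2, 0.1, 2.0]   # forecast: expected event counts per bin
obs   = [1, 1, 0, 3]           # observed event counts per bin
print(poisson_log_likelihood(rates, obs))
```

Coarsening the bins (fewer, physically meaningful cells) shortens the sum without changing the scoring rule, which is the efficiency argument the abstract makes.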

    Recent efforts in empirical ground motion modelling in New Zealand

    This presentation discusses recent empirical ground motion modelling efforts in New Zealand. Firstly, the active shallow crustal, subduction interface, and slab ground motion prediction equations (GMPEs) that are employed in the 2010 update of the national seismic hazard model (NSHM) are discussed. Other NZ-specific GMPEs that were developed but not incorporated in the 2010 update are then discussed, in particular the active shallow crustal model of Bradley (2010). A brief comparison of the NZ-specific GMPEs with the near-source ground motions recorded in the Canterbury earthquakes is then presented, given that these recordings collectively provide a significant increase in observed strong motions in the NZ catalogue. The ground motion prediction expert elicitation process that was undertaken following the Canterbury earthquakes for active shallow crustal earthquakes is then discussed. Finally, ongoing GMPE-related activities are discussed, including: ground motion and metadata database refinement, improved site characterization of strong motion stations, and predictions for subduction zone earthquakes.

    Rethinking PSHA

    Since the early 1980s seismic hazard assessment in New Zealand has been based on Probabilistic Seismic Hazard Analysis (PSHA). The most recent version of the New Zealand National Seismic Hazard Model, a PSHA model, was published by Stirling et al. in 2012. This model follows standard PSHA principles and combines a nation-wide model of active faults with a gridded point-source model based on the earthquake catalogue since 1840. These models are coupled with the ground-motion prediction equation of McVerry et al. (2006). Additionally, we have developed a time-dependent clustering-based PSHA model for the Canterbury region (Gerstenberger et al., 2014) in response to the Canterbury earthquake sequence. We are now in the process of revising that national model. In this process we are investigating several of the fundamental assumptions in traditional PSHA and in how we have modelled hazard in the past. For this project, we have three main areas of focus: 1) how do we design an optimal combination of multiple sources of information to produce the best forecast of earthquake rates in the next 50 years: can we improve upon a simple hybrid of fault sources and background sources, and can we better handle the uncertainties in the data and models (e.g., fault segmentation, frequency-magnitude distributions, time-dependence and clustering, low strain-rate areas, and subduction zone modelling)? 2) developing revised and new ground-motion prediction models, including better capturing of epistemic uncertainty; a key focus in this work is developing a new strong ground motion catalogue for model development; and 3) how can we best quantify whether changes we have made in our modelling are truly improvements? Throughout this process we are working toward incorporating numerical modelling results from physics-based synthetic seismicity and ground-motion models.
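The calculation that a combined fault-plus-background PSHA model ultimately feeds can be sketched as below. This is a generic textbook-style illustration with invented source rates and a lognormal ground-motion term, not the NSHM or the McVerry et al. (2006) model: the annual exceedance rate is the sum over sources of (event rate) times (probability the ground-motion measure exceeds the level, given an event), converted to a probability over an exposure window under a Poisson assumption.

```python
import math

def exceedance_probability(im_level, sources, t_years=50.0):
    """PSHA sketch: sum source exceedance rates under a lognormal
    ground-motion model, then convert the total annual rate to a
    probability of exceedance in t_years (Poisson assumption)."""
    def normal_sf(z):
        # Survival function of the standard normal distribution
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    rate = 0.0
    for nu, ln_median, sigma in sources:
        # P(IM > im_level | one event) under a lognormal GMPE
        z = (math.log(im_level) - ln_median) / sigma
        rate += nu * normal_sf(z)
    return 1.0 - math.exp(-rate * t_years)

# Hypothetical sources: (annual event rate, ln median PGA in g, sigma)
sources = [(0.02, math.log(0.3), 0.6), (0.005, math.log(0.5), 0.7)]
print(exceedance_probability(0.4, sources))  # P(PGA > 0.4 g in 50 years)
```

The hazard curve is obtained by evaluating this for a range of intensity levels; higher levels always yield lower exceedance probabilities.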

    First Results of the Regional Earthquake Likelihood Models Experiment

    The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were designed for a 5-year application period), we find interesting results: most of the models are consistent with the observations, and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.
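The likelihood-based ranking used to decide which forecast best matches the observed earthquakes can be illustrated schematically. The rates, bins, and counts below are hypothetical, and the per-bin Poisson assumption is a simplification of the actual RELM/CSEP test suite: each model's joint log-likelihood is computed on the same observation, and the model with the higher score is preferred.

```python
import math

def log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed bin counts under a rate forecast."""
    return sum(n * math.log(r) - r - math.lgamma(n + 1)
               for r, n in zip(rates, counts))

# Two hypothetical forecasts over the same spatial bins, one observed catalog
model_a  = [0.8, 1.5, 0.2]   # forecast A: expected events per bin
model_b  = [1.0, 1.0, 0.5]   # forecast B: expected events per bin
observed = [1, 2, 0]         # observed target-earthquake counts per bin

scores = {"A": log_likelihood(model_a, observed),
          "B": log_likelihood(model_b, observed)}
best = max(scores, key=scores.get)
print(best, scores)
```

In practice the significance of such a ranking is assessed against the sampling variability of the forecasts themselves, which is one of the "potential pitfalls" the abstract alludes to.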
