
    Utsu aftershock productivity law explained from geometric operations on the permanent static stress field of mainshocks

    The aftershock productivity law, first described by Utsu in 1970, is an exponential function of the form K = K0 exp(αM), where K is the number of aftershocks, M the mainshock magnitude, and α the productivity parameter. The Utsu law remains empirical in nature, although it has also been retrieved in static stress simulations. Here, we explain this law based on Solid Seismicity, a geometrical theory of seismicity in which seismicity patterns are described by mathematical expressions obtained from geometric operations on a permanent static stress field. We recover the exponential form, but with a break in scaling predicted between small and large magnitudes M, with α = 1.5 ln(10) and α = ln(10), respectively, in agreement with results from previous static stress simulations. We suggest that the lack of a break in scaling observed in seismicity catalogues (with α = ln(10)) could be an artefact of existing aftershock selection methods, which assume a continuous behaviour over the full magnitude range. While the possibility of such an artefact is verified in simulations, the existence of the theoretical kink remains to be proven. Comment: 18 pages, 4 figures (low resolution).
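
    As a simple numerical illustration of the two-branch productivity law described above, the sketch below evaluates K = K0 exp(αM) with α = 1.5 ln(10) below an assumed break magnitude and α = ln(10) above it, matching the two branches at the break; the reference productivity K0 and the break magnitude are placeholder assumptions, not values from the paper.

```python
import numpy as np

# Two-branch Utsu productivity law K = K0 * exp(alpha * M), with the break in
# scaling discussed in the abstract: alpha = 1.5*ln(10) below the break and
# alpha = ln(10) above it. K0 and M_BREAK are illustrative assumptions.
K0 = 1e-3
M_BREAK = 6.0
ALPHA_SMALL = 1.5 * np.log(10.0)
ALPHA_LARGE = 1.0 * np.log(10.0)

def aftershock_productivity(M):
    """Expected number of aftershocks K for a mainshock of magnitude M."""
    M = np.asarray(M, dtype=float)
    K_break = K0 * np.exp(ALPHA_SMALL * M_BREAK)   # keeps the curve continuous at the break
    return np.where(M <= M_BREAK,
                    K0 * np.exp(ALPHA_SMALL * M),
                    K_break * np.exp(ALPHA_LARGE * (M - M_BREAK)))

if __name__ == "__main__":
    for M in (4.0, 5.0, 6.0, 7.0, 8.0):
        print(f"M = {M:.1f}  ->  K ≈ {float(aftershock_productivity(M)):.3g}")
```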

    STREST – Exploitation plan

    The present deliverable contains the detailed dissemination and exploitation plan for the project results, with particular emphasis on communicating the outcomes of STREST on the enhancement of societal resilience through infrastructure stress tests to stakeholders and user communities. This deliverable presents the objectives of the dissemination activities, the identification of stakeholders, and a detailed description of the tasks concerning the use and dissemination of the STREST project foreground. JRC.G.4 - European Laboratory for Structural Assessment.

    Optimization of a large-scale microseismic monitoring network in northern Switzerland

    We have developed a network optimization method for regional-scale microseismic monitoring networks and applied it to optimize the densification of the existing seismic network in northeastern Switzerland. The new network will form the backbone of a 10-yr study on the neotectonic activity of this area that will help to better constrain the seismic hazard imposed on nuclear power plants and waste repository sites. This task defined the requirements regarding location precision (0.5 km in epicentre and 2 km in source depth) and detection capability [magnitude of completeness Mc = 1.0 (ML)]. The goal of the optimization was to find the geometry and size of the network that met these requirements. Existing stations in Switzerland, Germany and Austria were considered in the optimization procedure. We based the optimization on the simulated annealing approach proposed by Hardt & Scherbaum, which aims to minimize the volume of the error ellipsoid of the linearized earthquake location problem (D-criterion). We have extended their algorithm to: (1) calculate traveltimes of seismic body waves using a finite-difference ray tracer and the 3-D velocity model of Switzerland; (2) calculate seismic body-wave amplitudes at arbitrary stations, assuming the Brune source model and using scaling and attenuation relations recently derived for Switzerland; and (3) estimate the noise level at arbitrary locations within Switzerland using a first-order ambient seismic noise model based on 14 land-use classes defined by the EU project CORINE and open GIS data. We calculated optimized geometries for networks with 10-35 added stations and tested the stability of the optimization result by repeated runs with changing initial conditions. Further, we estimated the attainable magnitude of completeness (Mc) for the different-sized optimal networks using the Bayesian Magnitude of Completeness (BMC) method introduced by Mignan et al. The algorithm developed in this study is also applicable to smaller optimization problems, for example small local monitoring networks; possible applications include volcano monitoring and the surveillance of induced seismicity associated with geotechnical operations. Our algorithm is especially useful for optimizing networks in populated areas with heterogeneous noise conditions and where complex velocity structures or existing stations have to be considered.
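
    The optimization strategy summarized above can be sketched as a Metropolis simulated annealing loop over candidate station coordinates that minimizes the (log-)volume of the error ellipsoid of the linearized location problem, summed over a set of target hypocentres (the D-criterion). The sketch below is heavily simplified and purely illustrative: it assumes a homogeneous velocity model with straight rays, P arrivals only and unit picking errors, instead of the finite-difference ray tracing, 3-D velocity model, Brune-source amplitudes and land-use-based noise model used in the study; all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Heavily simplified setting; every number below is an illustrative assumption ---
VP = 6.0                          # homogeneous P-wave velocity (km/s), straight rays
XMIN, XMAX = 0.0, 100.0           # square study region (km)
N_NEW = 10                        # number of stations to add
SRC_DEPTH = 5.0                   # trial source depth (km)

# A few existing (fixed) stations and target hypocentres whose location precision matters.
fixed_stations = np.array([[10.0, 10.0], [90.0, 15.0], [50.0, 95.0]])
targets = np.column_stack([rng.uniform(XMIN, XMAX, size=(25, 2)),
                           np.full(25, SRC_DEPTH)])

def design_matrix(stations, hypo):
    """Jacobian of P travel times w.r.t. (origin time, x, y, z) of one hypocentre."""
    dx, dy, dz = hypo[0] - stations[:, 0], hypo[1] - stations[:, 1], hypo[2]
    dist = np.sqrt(dx**2 + dy**2 + dz**2)          # stations sit at the surface
    return np.column_stack([np.ones_like(dist),
                            dx / (dist * VP), dy / (dist * VP), dz / (dist * VP)])

def objective(new_stations):
    """Summed log-volume of the location error ellipsoids (D-criterion, unit data variance)."""
    stations = np.vstack([fixed_stations, new_stations])
    total = 0.0
    for hypo in targets:
        G = design_matrix(stations, hypo)
        sign, logdet = np.linalg.slogdet(G.T @ G)  # ellipsoid volume ~ det((G^T G)^-1)^0.5
        if sign <= 0:
            return np.inf
        total -= 0.5 * logdet
    return total

def anneal(n_iter=5000, t_start=1.0, cooling=0.999, step=5.0):
    """Metropolis simulated annealing over the coordinates of the new stations."""
    state = rng.uniform(XMIN, XMAX, size=(N_NEW, 2))
    cost, temp = objective(state), t_start
    best_state, best_cost = state.copy(), cost
    for _ in range(n_iter):
        cand = state.copy()
        i = rng.integers(N_NEW)                    # perturb one station at a time
        cand[i] = np.clip(cand[i] + rng.normal(0.0, step, size=2), XMIN, XMAX)
        c = objective(cand)
        if c < cost or rng.random() < np.exp((cost - c) / temp):
            state, cost = cand, c
            if cost < best_cost:
                best_state, best_cost = state.copy(), cost
        temp *= cooling
    return best_state, best_cost

if __name__ == "__main__":
    geometry, cost = anneal()
    print("optimized coordinates of the added stations (km):")
    print(np.round(geometry, 1))
```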

    Risk assessment of Tunguska-type airbursts

    The Tunguska airburst, which devastated a taiga forest over an area greater than 2,000 km² in a remote region of Central Siberia in 1908, is a classic example of an extraterrestrial encounter discussed in the asteroid/comet impact hazard and risk assessment literature (e.g. Longo 2007; Carusi et al. 2007). Although it is generally agreed that the cosmic body caused damage by bursting in the air rather than through direct impact on the Earth's surface, the Tunguska event is often referred to as an impact event. To the best of our knowledge, no detailed studies have been performed to quantify the risk of a similar-sized event over a populated region. We propose here a straightforward probabilistic risk model for Tunguska-type events over the continental United States and use established risk metrics to determine the property (buildings and contents) and human losses. We find an annual average property loss of ~USD 200,000/year, a rate of ~0.3 fatalities/year and ~1.0 injuries/year, ranging from a factor of 3 below to a factor of 3 above the indicated values when a reasonable rate uncertainty for Tunguska-type events is taken into account. We then illustrate the case of an extreme event over the New York metropolitan area. While we estimate that this "nightmare" scenario would lead to ~USD 1.5 trillion of property loss, ~3.9 million fatalities and ~4.7 million injuries, such an event is almost impossible (occurring once every ~30 million years) and should only be considered as an illustrative example.
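
    As a rough illustration of the kind of annualized risk metric quoted above, the sketch below multiplies an assumed Poisson rate of Tunguska-type airbursts over the region by an assumed per-event property loss, and brackets the result with the factor-of-3 rate uncertainty mentioned in the abstract; both input numbers are placeholders, not the values derived in the paper.

```python
# Back-of-the-envelope annualized loss for a rare, high-consequence event.
# Both inputs are placeholder assumptions for illustration, not the paper's values.
RATE_PER_YEAR = 1.0 / 1000.0      # assumed rate of a damaging airburst over the region (1/yr)
LOSS_PER_EVENT = 2.0e8            # assumed property loss per event (USD)
RATE_UNCERTAINTY_FACTOR = 3.0     # rate assumed known only to within a factor of 3

expected_annual_loss = RATE_PER_YEAR * LOSS_PER_EVENT
low = expected_annual_loss / RATE_UNCERTAINTY_FACTOR
high = expected_annual_loss * RATE_UNCERTAINTY_FACTOR

print(f"average annual property loss ≈ {expected_annual_loss:,.0f} USD/yr "
      f"(range {low:,.0f}-{high:,.0f} USD/yr)")
```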

    Relationship between accelerating seismicity and quiescence, two precursors to large earthquakes

    The Non-Critical Precursory Accelerating Seismicity Theory (PAST) has been proposed recently to explain the formation of accelerating seismicity (an increase of the a-value) observed before large earthquakes. In particular, it predicts that precursory accelerating seismicity should occur in the same spatiotemporal window as quiescence. In this first combined study, we start by determining the spatiotemporal extent of the quiescence observed prior to the 1997 Mw = 6 Umbria-Marche earthquake, Italy, using the RTL (Region-Time-Length) algorithm. We then show that background events located in that spatiotemporal window form a clear acceleration, as predicted by the Non-Critical PAST. This result is a step forward in the understanding of precursory seismicity, as it relates two of the principal patterns that can precede large earthquakes. Citation: Mignan, A., and R. Di Giovambattista (2008), Relationship between accelerating seismicity and quiescence, two precursors to large earthquakes, Geophys. Res. Lett., 35, L15306.
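
    The RTL measure used above to map quiescence can be sketched as a product of three weighted sums over past nearby events, following the commonly cited Sobolev-Tyupkin formulation; the characteristic distance and time constants, the magnitude-to-rupture-length scaling, and the omission of the usual detrending/normalization of each factor are all simplifying assumptions in the sketch below.

```python
import numpy as np

def rtl(x, y, t, cat, r0=30.0, t0=1.0, r_max=60.0, t_max=2.0):
    """Weighted RTL value at location (x, y) and time t for a catalogue `cat`.

    `cat` holds arrays x, y (km), t (years) and mag. A lower-than-usual RTL is
    commonly interpreted as seismic quiescence. Detrending and normalization of
    each factor, part of the full algorithm, are omitted in this sketch.
    """
    r = np.hypot(cat["x"] - x, cat["y"] - y)
    dt = t - cat["t"]
    sel = (r < r_max) & (dt > 0) & (dt < t_max)    # past events near (x, y)
    if not np.any(sel):
        return 0.0
    r, dt, mag = r[sel], dt[sel], cat["mag"][sel]
    rupture_len = 10.0 ** (0.5 * mag - 1.8)        # assumed magnitude-length scaling (km)
    R = np.sum(np.exp(-r / r0))                    # epicentral-distance weight
    T = np.sum(np.exp(-dt / t0))                   # elapsed-time weight
    L = np.sum(rupture_len / np.maximum(r, 1e-3))  # rupture-size weight
    return R * T * L

# Tiny synthetic catalogue, purely illustrative.
cat = {
    "x": np.array([5.0, 12.0, 40.0]),
    "y": np.array([3.0, -8.0, 10.0]),
    "t": np.array([0.2, 0.9, 1.4]),      # years
    "mag": np.array([3.1, 2.7, 4.0]),
}
print(f"RTL at the origin, t = 1.5 yr: {rtl(0.0, 0.0, 1.5, cat):.3f}")
```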

    The quantification of low-probability-high-consequences events: part I. A generic multi-risk approach

    Dynamic risk processes, which involve interactions at the hazard and risk levels, have yet to be clearly understood and properly integrated into probabilistic risk assessment. While much attention has been given to this aspect lately, most studies remain limited to a small number of site-specific multi-risk scenarios. We present a generic probabilistic framework based on the sequential Monte Carlo method to implement coinciding events and triggered chains of events (using a variant of a Markov chain), as well as time-variant vulnerability and exposure. We consider generic perils based on analogies with real ones, both natural and man-made. Each simulated time series corresponds to one risk scenario, and the analysis of multiple time series allows for the probabilistic assessment of losses and for the recognition of more or less probable risk paths, including extreme or low-probability-high-consequences chains of events. We find that extreme events can be captured by adding more knowledge on potential interaction processes using a brick-by-brick approach. We introduce the concept of a risk migration matrix to evaluate how multi-risk contributes to the emergence of extremes, and we show that risk migration (i.e., clustering of losses) and risk amplification (i.e., loss amplification at higher losses) are the two main causes of their occurrence.
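
    The simulation logic described above can be illustrated with a deliberately generic sketch: independent Poisson occurrences of two perils, a simple conditional (Markov-chain-like) probability that one peril triggers the other, and aggregation of losses per simulated year, where one simulated year stands in for one risk scenario. All rates, triggering probabilities and loss distributions below are placeholder assumptions, not the paper's generic perils.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic perils with placeholder annual rates, mean losses and a one-step
# triggering probability (peril A may trigger peril B); purely illustrative.
PERILS = {
    "A": {"rate": 0.20, "mean_loss": 5.0},    # a frequent, moderate-loss peril
    "B": {"rate": 0.05, "mean_loss": 20.0},   # a rarer, higher-loss peril
}
P_TRIGGER_A_TO_B = 0.3     # assumed conditional probability that an A event triggers a B event

def simulate_year():
    """One simulated year (one risk scenario); returns its total loss."""
    loss = 0.0
    for name, p in PERILS.items():
        n = rng.poisson(p["rate"])                          # coinciding background events
        loss += rng.exponential(p["mean_loss"], size=n).sum()
        if name == "A":                                     # triggered chain of events: A -> B
            n_triggered = rng.binomial(n, P_TRIGGER_A_TO_B)
            loss += rng.exponential(PERILS["B"]["mean_loss"], size=n_triggered).sum()
    return loss

losses = np.array([simulate_year() for _ in range(100_000)])
print(f"mean annual loss: {losses.mean():.2f}")
print(f"99.9th-percentile loss (low-probability-high-consequences tail): "
      f"{np.percentile(losses, 99.9):.2f}")
```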