547 research outputs found

    Software infrastructure for solving non-linear partial differential equations and its application to modelling crustal fault systems

    In this paper we give a brief introduction to the Python-based modelling language escript. We present a model for the dynamics of fault systems in the Earth's crust and then show how escript is used to implement solution algorithms for both a dynamic and a quasi-static scenario.
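The distinction between the two regimes can be illustrated without escript itself (whose API is not reproduced here): a quasi-static problem reduces to an equilibrium solve at each load step, while a dynamic problem is advanced by explicit time stepping. A minimal 1D sketch with hypothetical helper names, not taken from the paper:

```python
import numpy as np

def quasi_static(f, h):
    """Equilibrium solve of -u'' = f with u = 0 at both ends (tridiagonal system)."""
    n = len(f)
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, f)

def dynamic_step(u, u_prev, c, dt, h):
    """One explicit leapfrog step of the wave equation u_tt = c^2 u_xx, fixed ends."""
    u_next = np.zeros_like(u)
    lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    u_next[1:-1] = 2.0 * u[1:-1] - u_prev[1:-1] + (c * dt)**2 * lap
    return u_next

h = 1.0 / 50
u_eq = quasi_static(np.ones(49), h)   # -u'' = 1 gives u = x(1-x)/2, peak 0.125
```

In a real crustal-fault model the matrix solve is replaced by a PDE solver on an unstructured mesh, but the split (one linear solve per quasi-static step versus many cheap explicit dynamic steps) is the same.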

    Multicycle dynamics of fault systems and static and dynamic triggering of earthquakes

    Dynamic simulations of rupture propagation and multiple earthquake cycles for varying fault geometries are presented. We investigate the role of both dynamic and static stress changes in earthquake triggering. Dynamic stress triggering is caused by the passage of seismic waves, whereas static stress triggering is due to the net slip on a fault produced by an earthquake. Static stress changes, expressed through a Coulomb failure function, and their relationship to changes in seismicity rate are a relatively well-understood mechanism, whereas the physical origin of dynamic triggering remains one of the least understood aspects of earthquake nucleation. We investigate these mechanisms by analysing seismicity patterns for varying fault separations and geometries, both with and without dynamic triggering present.
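The static-triggering bookkeeping rests on the Coulomb failure function: a change ΔCFS = Δτ + μ′Δσn on a receiver fault moves it toward failure when positive. A minimal sketch (the sign convention, with positive normal-stress change meaning unclamping, and the effective friction value are illustrative choices, not taken from the paper):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Change in Coulomb failure stress (MPa) on a receiver fault.
    d_shear: shear-stress change resolved in the slip direction.
    d_normal: normal-stress change, positive = unclamping."""
    return d_shear + mu_eff * d_normal

# Example: 0.5 MPa more shear stress, 0.3 MPa of clamping.
print(coulomb_stress_change(0.5, -0.3))   # 0.5 + 0.4*(-0.3) = 0.38
```

A positive result, as here, brings the receiver fault closer to failure and is the quantity correlated with seismicity-rate change in static-triggering studies.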

    Effect of rolling on dissipation in fault gouges

    Sliding and rolling are two distinct deformation modes in granular media. The former induces frictional dissipation, whereas the latter involves deformation with negligible resistance. Using numerical simulations of two-dimensional shear cells, we investigate the effect of grain rotation on the energy dissipation and the strength of granular materials under quasistatic shear deformation. Rolling and sliding are quantified in terms of the so-called Cosserat rotations. The observed spontaneous formation of vorticity cells and clusters of rotating bearings may provide an explanation for the long-standing heat-flow paradox of earthquake dynamics.
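The sliding/rolling split can be made concrete at a single disk-disk contact: friction acts on the tangential relative surface velocity, which vanishes for pure rolling. A 2D sketch (not the paper's code; the sign conventions are one common choice):

```python
import numpy as np

def slip_velocity(v_i, v_j, w_i, w_j, r_i, r_j, n):
    """Tangential relative surface velocity at a 2D disk-disk contact.
    n is the unit normal from disk i to disk j; w_* are scalar (z-axis)
    angular velocities. Zero slip means pure rolling: no frictional work."""
    t = np.array([-n[1], n[0]])                        # unit tangent
    v_rel = float(np.dot(np.asarray(v_j) - np.asarray(v_i), t))
    return v_rel - (r_i * w_i + r_j * w_j)

n = np.array([1.0, 0.0])
# Counter-rotating neighbours roll on each other like bearings: no slip.
print(slip_velocity([0, 0], [0, 0], 1.0, -1.0, 1.0, 1.0, n))   # 0.0
# A translating neighbour with no spin slides: friction dissipates.
print(slip_velocity([0, 0], [0, 1], 0.0, 0.0, 1.0, 1.0, n))    # 1.0
```

The first case is exactly the bearing-like state the abstract describes: grains rotate, the material deforms, yet no slip occurs at the contacts and hence little frictional heat is generated.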

    Segmentation of Fault Networks Determined from Spatial Clustering of Earthquakes

    We present a new method of data clustering applied to earthquake catalogs, with the goal of reconstructing the seismically active part of fault networks. We first use an original method to separate clustered events from uncorrelated seismicity, using the distribution of the volumes of tetrahedra defined by closest-neighbor events in the original and randomized seismic catalogs. The spatial disorder of the complex geometry of fault networks is then taken into account by defining faults as probabilistic anisotropic kernels, whose structures are motivated by properties of discontinuous tectonic deformation and by previous empirical observations of the geometry of faults and of earthquake clusters at many spatial and temporal scales. Combining this a priori knowledge with information-theoretical arguments, we propose a Gaussian mixture approach implemented in an Expectation-Maximization (EM) procedure. A cross-validation scheme is then used to determine the number of kernels that provides an optimal clustering of the catalog. This three-step approach is applied to a high-quality relocated catalog of the seismicity following the 1986 Mount Lewis (Ml = 5.7) event in California and reveals that events cluster along planar patches of about 2 km^2, i.e. comparable to the size of the main event. The finite thickness of these clusters (about 290 m) suggests that events do not occur on well-defined Euclidean fault core surfaces, but rather that the damage zone surrounding faults may be seismically active at depth. Finally, we propose a connection between our methodology and multi-scale spatial analysis, based on the derivation of a spatial fractal dimension of about 1.8 for the set of hypocenters in the Mount Lewis area, consistent with recent observations on relocated catalogs.
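The mixture and model-selection machinery can be sketched with a toy EM fit: spherical Gaussians stand in for the paper's anisotropic fault kernels, and held-out log-likelihood plays the role of the cross-validation step. All names and simplifications here are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_gmm(X, k, iters=100):
    """Toy EM for a spherical Gaussian mixture (farthest-point init)."""
    n, d = X.shape
    mu = [X[rng.integers(n)]]
    for _ in range(k - 1):                        # spread out the initial means
        dist = ((X[:, None] - np.array(mu)[None]) ** 2).sum(-1).min(1)
        mu.append(X[np.argmax(dist)])
    mu = np.array(mu, float)
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)           # (n, k)
        logp = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - d2 / (2 * var)
        logp -= logp.max(1, keepdims=True)                       # E-step
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        nk = r.sum(0)                                            # M-step
        pi, mu = nk / n, (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(0) / (d * nk)
    return pi, mu, var

def loglik(X, pi, mu, var):
    """Log-likelihood; evaluated on held-out events to pick the kernel count."""
    d = X.shape[1]
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    logp = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - d2 / (2 * var)
    m = logp.max(1, keepdims=True)
    return float((m[:, 0] + np.log(np.exp(logp - m).sum(1))).sum())

# Two well-separated synthetic "fault patches"
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(10, 1, (100, 2))])
pi, mu, var = fit_gmm(X, 2)
```

In the paper the kernels are elongated (fault-like) rather than spherical, and the selection is done by cross-validation over a relocated catalog rather than on the training events, but the E/M alternation and the likelihood-based choice of k are the same ingredients.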

    Comparing open-source DEM frameworks for simulations of common bulk processes

    Multiple software frameworks based on the Discrete Element Method (DEM) are available for simulating granular materials. All of them employ the same principles of explicit time integration, with each time step consisting of three main steps: contact detection, calculation of interactions, and integration of the equations of motion. However, there exist significant algorithmic differences, such as the choice of contact models, particle and wall shapes, and data analysis methods. Further differences can be observed in the practical implementation, including data structures, architecture, parallelization and domain decomposition techniques, user interaction, and the documentation of resources. This study compares, verifies, and benchmarks nine widely-used software frameworks. Only open-source packages were considered, as these are freely available and their underlying algorithms can be reviewed, edited, and tested. The benchmark consists of three common bulk processes: silo emptying, drum mixing, and particle impact. To keep it simple and comparable, only standard features were used, such as spherical particles and the Hertz-Mindlin model for dry contacts. Scripts for running the benchmarks in each software are provided as a dataset.
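The three sub-steps named above map directly onto a loop. A deliberately tiny sketch, with a linear normal spring standing in for the Hertz-Mindlin contact model and no friction or damping (unlike the benchmarked frameworks):

```python
import numpy as np

def dem_step(pos, vel, radius, mass, k_n, dt):
    """One DEM time step: contact detection, interactions, integration."""
    n = len(pos)
    force = np.zeros_like(pos)
    # 1) contact detection: brute-force sphere-overlap test
    for i in range(n):
        for j in range(i + 1, n):
            branch = pos[j] - pos[i]
            dist = np.linalg.norm(branch)
            overlap = radius[i] + radius[j] - dist
            if overlap > 0:
                # 2) interactions: linear repulsive normal force
                normal = branch / dist
                f = k_n * overlap * normal
                force[i] -= f
                force[j] += f
    # 3) integration: explicit (symplectic) Euler
    vel = vel + dt * force / mass[:, None]
    pos = pos + dt * vel
    return pos, vel

pos = np.array([[0.0, 0.0], [3.0, 0.0]])
vel = np.array([[1.0, 0.0], [-1.0, 0.0]])
radius = np.array([1.0, 1.0])
mass = np.array([1.0, 1.0])
for _ in range(5000):                      # head-on impact and rebound
    pos, vel = dem_step(pos, vel, radius, mass, k_n=1000.0, dt=1e-3)
```

Real frameworks replace the O(n^2) pair loop with cell lists or trees, add tangential springs and dissipation, and parallelize over domains; those are exactly the implementation differences the study benchmarks.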

    ImpZ: a new photometric redshift code for galaxies and quasars

    We present a combined galaxy-quasar approach to template-fitting photometric redshift techniques and show the method to be a powerful one. The code (ImpZ) is presented, developed and applied to two spectroscopic redshift catalogues, namely the Isaac Newton Telescope Wide Angle Survey ELAIS N1 and N2 fields and the Chandra Deep Field North. In particular, optical size information is used to improve the redshift determination. The success of the code is shown to be very good, with Delta z/(1+z) constrained to within 0.1 for 92 per cent of the galaxies in our sample. The extension of template fitting to quasars is found to be reasonable, with Delta z/(1+z) constrained to within 0.25 for 68 per cent of the quasars in our sample. Various template extensions into the far-UV are also tested. Comment: 21 pages. MNRAS in press. Minor alterations to match MNRAS final proof.
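Template fitting of this kind reduces, at its core, to a chi-squared scan over a (template, redshift) grid with an analytically fitted amplitude. A generic sketch (not the ImpZ code, and omitting the priors and the optical-size information the paper adds; the toy templates below are invented):

```python
import numpy as np

def best_fit_z(fluxes, errors, template_grid, z_grid):
    """template_grid[t][i]: model fluxes of template t at redshift z_grid[i]
    in the observed bands. Returns (min chi^2, template index, photo-z)."""
    fluxes = np.asarray(fluxes, float)
    w = 1.0 / np.asarray(errors, float) ** 2
    best = (np.inf, -1, np.nan)
    for t, models in enumerate(template_grid):
        for i, model in enumerate(models):
            model = np.asarray(model, float)
            # least-squares amplitude for this template at this redshift
            a = np.sum(w * fluxes * model) / np.sum(w * model ** 2)
            chi2 = np.sum(w * (fluxes - a * model) ** 2)
            if chi2 < best[0]:
                best = (chi2, t, z_grid[i])
    return best

z_grid = [0.0, 0.5, 1.0]
template_grid = [
    [[1, 2, 3], [1, 3, 2], [3, 2, 1]],   # hypothetical "galaxy" template
    [[2, 2, 1], [1, 1, 3], [2, 1, 2]],   # hypothetical "quasar" template
]
chi2, t, z = best_fit_z([2, 2, 6], [0.1, 0.1, 0.1], template_grid, z_grid)
print(t, z)   # 1 0.5 : the quasar template at z = 0.5, scaled by 2, fits exactly
```

The combined galaxy-quasar approach amounts to including both template families in the same scan, so quasars are no longer forced onto galaxy templates.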

    The association of the ankle-brachial index with incident coronary heart disease: the Atherosclerosis Risk In Communities (ARIC) study, 1987-2001

    Background Peripheral arterial disease (PAD), defined by a low ankle-brachial index (ABI), is associated with an increased risk of cardiovascular events, but the risk of coronary heart disease (CHD) over the range of the ABI is not well characterized, nor described for African Americans. Methods The ABI was measured in 12186 white and African American men and women in the Atherosclerosis Risk in Communities Study in 1987–89. Fatal and non-fatal CHD events were ascertained through annual telephone contacts, surveys of hospital discharge lists and death certificate data, and clinical examinations, including electrocardiograms, every 3 years. Participants were followed for a median of 13.1 years. Age- and field-center-adjusted hazard ratios (HRs) were estimated using Cox regression models. Results Over a median 13.1 years of follow-up, 964 fatal or non-fatal CHD events accrued. In whites, the age- and field-center-adjusted CHD hazard ratio (HR, 95% CI) for PAD (ABI 1.0, in all race-gender subgroups. The association between the ABI and CHD relative risk was similar for men and women in both race groups. A 0.10 lower ABI increased the CHD hazard by 25% (95% CI 17–34%) in white men, by 20% (8–33%) in white women, by 34% (19–50%) in African American men, and by 32% (17–50%) in African American women. Conclusion African American members of the ARIC cohort had higher prevalences of PAD and greater risk of CHD associated with ABI-defined PAD than did white participants. Unlike in other cohorts, in ARIC the CHD risk failed to increase at high (>1.3) ABI values. We conclude that at this time high ABI values should not be routinely considered a marker for increased CVD risk in the general population. Further research is needed on the value of the ABI at specific cutpoints for risk stratification in the context of traditional risk factors.
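Under the proportional-hazards (Cox) model used here, a per-0.10-ABI hazard ratio compounds multiplicatively over larger decrements, so for white men a 0.30 lower ABI corresponds to 1.25^3 ≈ 1.95 times the hazard. A sketch of that arithmetic:

```python
import math

def hr_for_decrement(hr_per_unit, unit, decrement):
    """Hazard ratio for an arbitrary ABI decrement, given the HR per `unit`.
    Cox model: HR(d) = exp(beta * d) with beta = log(HR per unit) / unit."""
    beta = math.log(hr_per_unit) / unit
    return math.exp(beta * decrement)

# White men: HR 1.25 per 0.10 lower ABI -> a 0.30 lower ABI gives
print(round(hr_for_decrement(1.25, 0.10, 0.30), 2))   # 1.95
```

This log-linearity is an assumption of the model; the abstract's finding that risk fails to rise at ABI values above 1.3 is precisely a departure from it at the high end of the range.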

    Discovery of a compact gas-rich DLA galaxy at z = 2.2: evidence for a starburst-driven outflow

    We present the detection of Ly-alpha, [OIII] and H-alpha emission associated with an extremely strong DLA system (N(HI) = 10^22.10 cm^-2) at z=2.207 towards the quasar SDSS J113520-001053. This is the largest HI column density ever measured along a QSO line of sight, though typical of what is seen in GRB-DLAs. This absorption system also classifies as an ultra-strong MgII system, with Wr(2796) = 3.6 A. The mean metallicity of the gas ([Zn/H]=-1.1) and the dust depletion factors ([Zn/Fe]=0.72, [Zn/Cr]=0.49) are consistent with (and only marginally larger than) the mean values found in the general QSO-DLA population. The [OIII]-H-alpha emitting region has a very small impact parameter with respect to the QSO line of sight, b=0.1", and is unresolved. From the H-alpha line, we measure SFR=25 Msun/yr. The Ly-alpha line is double-peaked and spatially extended. More strikingly, the blue and red Ly-alpha peaks arise from distinct regions extended over a few kpc on either side of the star-forming region. We propose that this is the consequence of Ly-alpha transfer in outflowing gas. The presence of starburst-driven outflows is also in agreement with the large SFR together with the small size and low mass of the galaxy (Mvir~10^10 Msun). From the stellar UV continuum luminosity of the galaxy, we estimate an age of at most a few 10^7 yr, again consistent with a recent starburst scenario. We interpret the data as the observation of a young, gas-rich, compact starburst galaxy, from which material is expelled through collimated winds powered by the vigorous star-formation activity. We substantiate this picture by modelling the radiative transfer of Ly-alpha photons in the galactic counterpart. Though our model (a spherical galaxy with bipolar outflowing jets) is a simplistic representation of the true gas distribution and velocity field, the agreement between the observed and simulated properties is particularly good. [abridged] Comment: 15 pages, 18 figures, 4 tables, accepted for publication in Astronomy and Astrophysics.

    Of mongooses and mitigation: ecological analogues to geoengineering

    Anthropogenic global warming is a growing environmental problem resulting from unintentional human intervention in the global climate system. If employed as a response strategy, geoengineering would represent an additional intentional human intervention in the climate system, with the intent of decreasing net climate impacts. There is a rich and fascinating history of human intervention in environmental systems, with many specific examples from ecology of deliberate human intervention aimed at correcting or decreasing the impact of previous unintentionally created problems. Additional interventions do not always bring the intended results, and in many cases there is evidence that net impacts have increased with the degree of human intervention. In this letter, we report some of the examples in the scientific literature that have documented such human interventions in environmental systems, which may serve as analogues to geoengineering. We argue that a high degree of system understanding is required for increased intervention to lead to decreased impacts. Given our current level of understanding of the climate system, it is likely that the result of at least some geoengineering efforts would follow previous ecological examples where increased human intervention has led to an overall increase in negative environmental consequences.