
    Rigorous statistical detection and characterization of a deviation from the Gutenberg-Richter distribution above magnitude 8 in subduction zones

    We present a quantitative statistical test for the presence of a crossover c0 in the Gutenberg-Richter distribution of earthquake seismic moments, separating the usual power-law regime for seismic moments below c0 from another, faster-decaying regime beyond c0. Our method is based on transforming the ordered sample of seismic moments into a series that is uniformly distributed under the condition of no crossover. The bootstrap method allows us to estimate the statistical significance of the null hypothesis H0 of an absence of crossover (c0 = infinity). When H0 is rejected, we estimate the crossover c0 by the bootstrap method, using two different competing models for the second regime beyond c0. For the catalog obtained by aggregating 14 subduction zones of the Circum-Pacific Seismic Belt, our estimate of the crossover point is log(c0) = 28.14 ± 0.40 (c0 in dyne-cm), corresponding to a crossover magnitude mW = 8.1 ± 0.3. For separate subduction zones, the corresponding estimates are much more uncertain, so the null hypothesis of an identical crossover for all subduction zones cannot be rejected. Such a large value of the crossover magnitude makes it difficult to associate it directly with a seismogenic thickness, as proposed by many authors in the past. Our measure of c0 may substantiate the concept that the localization of strong shear deformation could propagate significantly into the lower crust and upper mantle, thus increasing the effective size beyond which one should expect a change of regime.
    Comment: PDF document of 40 pages including 5 tables and 19 figures
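    To connect the quoted numbers: with c0 in dyne-cm, the Hanks-Kanamori relation mW = (2/3) log10(c0) - 10.7 turns log(c0) = 28.14 into mW ≈ 8.1. The sketch below illustrates the overall testing logic with a parametric bootstrap of a likelihood-ratio statistic, comparing a pure Pareto law (H0, no crossover) with a tapered Pareto beyond c0 on a synthetic catalog; the cutoff m_t, the sample, and the likelihood-ratio statistic are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of a parametric-bootstrap crossover test, assuming a pure
# Pareto law under H0 and a tapered Pareto beyond c0 under H1. Illustrative
# only; not the paper's exact procedure.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m_t = 1e25                                  # lower moment cutoff (dyne-cm), assumed

def loglik_pareto(beta, m):
    # log-density of the Pareto law: f(m) = (beta/m) (m_t/m)^beta
    return np.sum(np.log(beta / m) + beta * np.log(m_t / m))

def loglik_tapered(params, m):
    # tapered Pareto: survival S(m) = (m_t/m)^beta * exp((m_t - m)/c0)
    beta, log_c0 = params
    if beta <= 0 or not 25.0 < log_c0 < 35.0:
        return -np.inf
    c0 = 10.0 ** log_c0
    return np.sum(np.log(beta / m + 1.0 / c0)
                  + beta * np.log(m_t / m) + (m_t - m) / c0)

def lr_stat(m):
    beta0 = len(m) / np.sum(np.log(m / m_t))            # Pareto MLE
    fit = minimize(lambda p: -loglik_tapered(p, m),
                   x0=[beta0, 28.0], method="Nelder-Mead")
    return 2.0 * (-fit.fun - loglik_pareto(beta0, m)), beta0

# A tapered Pareto variate is the minimum of a Pareto variate and a shifted
# exponential variate; use that to fake a catalog with a crossover.
def sample_tapered(n, beta, c0):
    pareto = m_t * rng.uniform(size=n) ** (-1.0 / beta)
    return np.minimum(pareto, m_t + rng.exponential(c0, size=n))

m_obs = sample_tapered(500, beta=0.65, c0=10**28.1)
lr_obs, beta0 = lr_stat(m_obs)

# Bootstrap the statistic under H0; c0 = infinity sits on the parameter
# boundary, which is why bootstrap rather than chi-square calibration is used.
lr_boot = [lr_stat(m_t * rng.uniform(size=len(m_obs)) ** (-1.0 / beta0))[0]
           for _ in range(200)]
p_value = np.mean(np.array(lr_boot) >= lr_obs)
print(f"LR = {lr_obs:.2f}, bootstrap p-value for H0 = {p_value:.3f}")
```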

    Measurement Method for Evaluating the Probability Distribution of the Quality Factor of Mode-Stirred Reverberation Chambers

    An original experimental method for determining the empirical probability distribution function (PDF) of the quality factor (Q) of a mode-stirred reverberation chamber is presented. Spectral averaging of S-parameters across a relatively narrow frequency interval, at a single pair of locations for the transmitting and receiving antennas, is applied to estimate the stored and dissipated energy in the cavity, avoiding the need for spatial scanning to obtain spatial volume or surface averages. The effective number of simultaneously excited cavity modes per stir state, M, can be estimated by fitting the empirical distribution to the parametrized theoretical distribution. The measured results support a previously developed theoretical model for the PDF of Q and show that spectral averaging over a bandwidth as small as a few hundred kHz is sufficient to obtain accurate results.
    Comment: submitted for publication
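    As a rough illustration of how such an estimate can be formed, the sketch below applies the widely used reverberation-chamber relation Q ≈ (16π²V/λ³)⟨|S21|²⟩ to a spectral average of the transmission S-parameter over a narrow band, one stir state at a time, to build an empirical Q distribution. The chamber volume, band, and placeholder S21 data are assumptions; the paper's exact estimator and antenna corrections are not reproduced here.

```python
# Sketch: per-stir-state Q estimates from spectral averaging of |S21|^2.
import numpy as np

c = 299_792_458.0                       # speed of light (m/s)
V = 50.0                                # chamber volume (m^3), assumed
f = np.linspace(1.0e9, 1.0005e9, 101)   # ~500 kHz band near 1 GHz, assumed
lam = c / f.mean()                      # wavelength at band center

rng = np.random.default_rng(1)
n_stir = 300
# Placeholder data: in a real measurement, S21[s, k] is the transmission
# S-parameter recorded at stir state s and frequency point k.
S21 = (rng.normal(size=(n_stir, f.size))
       + 1j * rng.normal(size=(n_stir, f.size))) * 3e-3

# Spectral averaging over the narrow band replaces spatial scanning, so a
# single fixed antenna pair suffices.
q = (16 * np.pi**2 * V / lam**3) * np.mean(np.abs(S21)**2, axis=1)

# Empirical distribution of Q across stir states; fitting it against the
# parametrized theoretical PDF would yield the effective mode number M.
print(f"mean Q ~ {q.mean():.3g}, coefficient of variation ~ {q.std()/q.mean():.3f}")
```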

    Multi-Epoch HST Observations of IZw18: Characterization of Variable Stars at Ultra-Low Metallicities

    Variable stars have been identified for the first time in the very metal-poor Blue Compact Dwarf galaxy IZw18, using deep multi-band (F606W, F814W) time-series photometry obtained with the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). We detected 34 candidate variable stars in the galaxy. We classify three of them as Classical Cepheids, with periods of 8.71, 125.0 and 130.3 days, respectively, and two others as long-period variables with periodicities longer than a hundred days. These are the lowest-metallicity Classical Cepheids known so far, thus providing the opportunity to explore and fit models of stellar pulsation for Classical Cepheids at previously inaccessible metallicities. The period distribution of the confirmed Cepheids is markedly different from what is seen in other nearby galaxies, which is likely related to the starbursting nature of IZw18. By applying to the 8.71-day Cepheid theoretical Wesenheit (V,I) relations based on new pulsation models of Classical Cepheids specifically computed for the extremely low metallicity of this galaxy (Z=0.0004, Y=0.24), we estimate the distance modulus of IZw18 to be mu_0 = 31.4 ± 0.2 mag (D = 19.0^{+1.8}_{-1.7} Mpc) for canonical models of Classical Cepheids, and mu_0 = 31.2 ± 0.2 mag (D = 17.4^{+1.6}_{-1.6} Mpc) for overluminous models. Theoretical modeling of the star's light curves provides mu_0 = 31.4 ± 0.1 mag (D = 19.0^{+0.9}_{-0.9} Mpc), in good agreement with the results from the theoretical Wesenheit relations. These pulsation distances bracket the distance of 18.2 ± 1.5 Mpc inferred by Aloisi et al. (2007) using the galaxy's Red Giant Branch Tip.
    Comment: 13 pages, 6 figures, accepted, Ap
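    As a quick check of the quoted numbers, a distance modulus converts to a linear distance via D[pc] = 10^((mu_0 + 5)/5):

```python
# Distance modulus to distance: D[pc] = 10 ** ((mu + 5) / 5).
mu0 = 31.4
d_mpc = 10 ** ((mu0 + 5.0) / 5.0) / 1.0e6
print(f"mu_0 = {mu0} mag  ->  D = {d_mpc:.1f} Mpc")   # ~19 Mpc, as in the text
```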

    Risk Assessment for National Natural Resource Conservation Programs

    This paper reviews the risk assessments prepared by the U.S. Department of Agriculture (USDA) in support of regulations implementing the Conservation Reserve Program (CRP) and the Environmental Quality Incentives Program (EQIP). These two natural resource conservation programs were authorized as part of the 1996 Farm Bill. The risk assessments were required under the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994. The framework used for the assessments was appropriate, but the assessments could be improved in the areas of assessment endpoint selection, definition, and estimation. Many of the assessment endpoints were too diffuse or ill-defined to provide an adequate characterization of the program benefits. Two reasons for this lack of clarity were apparent: 1) the large, unprioritized set of natural resource conservation objectives for the two programs, and 2) the lack of agreement about which changes in environmental attributes caused by agriculture should be considered adverse and which may be considered negligible. There is also some "double counting" of program benefits: although the CRP and EQIP are, in part, intended to assist agricultural producers with regulatory compliance, the resultant environmental benefits would occur even absent the programs. The paper concludes with a set of recommendations for continuing efforts to conduct regulatory analyses of these major conservation programs. The central recommendation is that future risk assessments go beyond efforts to identify the natural resources at greatest risk from agricultural production activities and instead provide scientific input for analyses of the cost-effectiveness of the conservation programs.

    The role of learning on industrial simulation design and analysis

    The capability of modeling real-world system operations has turned simulation into an indispensable problem-solving methodology for business system design and analysis. Today, simulation supports decisions ranging from sourcing to operations to finance, starting at the strategic level and proceeding to the tactical and operational levels of decision-making. In such a dynamic setting, the practice of simulation goes beyond a static problem-solving exercise and requires integration with learning. This article discusses the role of learning in simulation design and analysis, motivated by the needs of industrial problems, and describes how selected tools of statistical learning can be utilized for this purpose.
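    As one concrete example of the kind of statistical-learning tool alluded to above, the sketch below fits a Gaussian-process metamodel to a handful of runs of a stand-in simulator, so that further design points can be evaluated cheaply and with quantified uncertainty; the toy simulator and the design are assumptions, not drawn from the article.

```python
# Toy example: learn a cheap Gaussian-process surrogate from a few
# "expensive" simulation runs, then query it at new design points.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulate(x):                         # placeholder for an expensive run
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 2, size=(12, 1))            # 12 pilot runs
y_train = simulate(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_new = np.linspace(0, 2, 5).reshape(-1, 1)
y_pred, y_std = gp.predict(X_new, return_std=True)   # cheap prediction + uncertainty
for x, m, s in zip(X_new.ravel(), y_pred, y_std):
    print(f"x={x:.2f}: predicted output {m:.3f} ± {2*s:.3f}")
```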

    Global Sensitivity Analysis of Stochastic Computer Models with joint metamodels

    The global sensitivity analysis method, used to quantify the influence of uncertain input variables on the response variability of a numerical model, is applicable to deterministic computer codes (for which the same set of input variables always gives the same output value). This paper proposes a global sensitivity analysis methodology for stochastic computer codes (whose variability is induced by uncontrollable variables). The framework of the joint modeling of the mean and dispersion of heteroscedastic data is used. To deal with the complexity of computer experiment outputs, nonparametric joint models (based on Generalized Additive Models and Gaussian processes) are discussed. The relevance of these new models is analyzed in terms of the obtained variance-based sensitivity indices in two case studies. Results show that the joint modeling approach leads to accurate sensitivity index estimates even when clear heteroscedasticity is present.
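    A minimal sketch of the joint-modeling idea, under toy assumptions (a made-up stochastic simulator, replicated runs, and off-the-shelf Gaussian-process regressors standing in for the paper's GAM/GP joint models): fit one metamodel to the conditional mean and another to the log conditional variance, then split the output variance by the law of total variance before computing variance-based indices on each part.

```python
# Joint mean/dispersion metamodels for a stochastic simulator (toy version).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def stochastic_code(x, rng):
    # Toy stochastic simulator: mean and noise level both depend on x,
    # so the output is heteroscedastic.
    return np.sin(np.pi * x[0]) + (0.1 + 0.4 * x[1]) * rng.normal()

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(40, 2))      # design over the controllable inputs
reps = 25                                # replicates per design point
Y = np.array([[stochastic_code(x, rng) for _ in range(reps)] for x in X])

mu_hat = Y.mean(axis=1)                           # conditional-mean estimates
log_var_hat = np.log(Y.var(axis=1, ddof=1))       # conditional-dispersion estimates

gp_mean = GaussianProcessRegressor(normalize_y=True).fit(X, mu_hat)
gp_disp = GaussianProcessRegressor(normalize_y=True).fit(X, log_var_hat)

# Law of total variance: Var[Y] = Var_x(E[Y|x]) + E_x(Var[Y|x]); sensitivity
# indices can then be computed on each metamodel by Monte Carlo.
X_mc = rng.uniform(0, 1, size=(5000, 2))
var_mean_part = gp_mean.predict(X_mc).var()
mean_disp_part = np.exp(gp_disp.predict(X_mc)).mean()
total = var_mean_part + mean_disp_part
print(f"share from mean component: {var_mean_part / total:.2f}, "
      f"from dispersion: {mean_disp_part / total:.2f}")
```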

    Reliability analysis and micromechanics: A coupled approach for composite failure prediction

    This work aims at associating two classical approaches to the design of composite materials: first, reliability methods, which account for the various uncertainties involved in the behaviour of composite materials and lead to a rational estimation of their reliability level; second, micromechanics, which derives macroscopic constitutive laws from micromechanical features. Such an approach relies on the introduction of variabilities defined at the microscale and on the investigation of their consequences for the material's macroscopic response through a homogenization scheme. Specifically, we propose a systematic treatment of variability that establishes a strong link between the micro- and macroscales and provides a more exhaustive analysis of the influence of uncertainties. The paper explains the main steps of such a coupling and demonstrates its interest for material engineering, especially for constitutive modelling and the optimization of composite materials. An application case on the failure of unidirectional carbon-fibre-reinforced composites is developed throughout, with a comparative analysis of experimental data and simulation results.
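    To illustrate the coupling in miniature, the sketch below propagates assumed microscale variabilities (fibre volume fraction, Weibull fibre strength) through a rule-of-mixtures homogenization and estimates a macroscopic failure probability by crude Monte Carlo. Every distribution and the homogenization rule are placeholders, far simpler than the micromechanical model used in the paper.

```python
# Micro-to-macro reliability sketch: random microscale properties ->
# homogenized ply strength -> Monte Carlo failure probability.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Microscale random variables (assumed distributions).
vf = rng.normal(0.60, 0.02, n)               # fibre volume fraction
sigma_f = rng.weibull(10.0, n) * 3500.0      # fibre strength (MPa), Weibull
E_f, E_m = 230e3, 3.5e3                      # fibre / matrix moduli (MPa)

# Homogenization (rule of mixtures): longitudinal ply strength, assuming
# fibre-dominated failure, with the matrix at the fibre failure strain.
sigma_ult = vf * sigma_f + (1 - vf) * (E_m / E_f) * sigma_f

# Macroscale limit state: applied stress exceeds homogenized strength.
applied = rng.normal(1400.0, 100.0, n)       # service load (MPa), assumed
p_f = np.mean(applied >= sigma_ult)
print(f"estimated failure probability: {p_f:.2e}")
```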