
    Comparative analysis of the seismic hazard of Central China

    Seismic hazard assessment is globally recognised as a tool for identifying levels of earthquake ground shaking within an area. However, methodologies for seismic hazard calculation are wide-ranging and produce variations in results and maps. As a case study, seismic hazard is determined using Gumbel’s method of extremes for the area of greatest intraplate seismicity in China, covering the provinces of Gansu, Sichuan and Yunnan and termed the North-South Seismic Zone. Devastating earthquakes in this zone include the 1920 Haiyuan earthquake (MS 8.4), which caused over 220,000 deaths, and the 1996 Lijiang earthquake; most recently, the 2008 Wenchuan earthquake caused over 69,000 deaths, with more than 18,000 people missing. These results and seismic hazard maps are compared with the publicly available GSHAP maps and the national seismic hazard map of China at the level of 10% probability of exceedance in a 50-year period. The distributions of high and low hazard areas are similar and lie adjacent to the major thrust and strike-slip faults that dominate this area. However, results from the Gumbel method of extremes suggest that hazard levels within certain areas differ slightly from the other two models. This is primarily because the Gumbel methodology determines hazard from earthquakes that have already taken place, whereas the other two models determine maximum hazard levels in areas which may exhibit no previous strong hazard. Additionally, the Chinese national hazard map does not indicate levels of ground shaking intensity greater than IX in detail, whereas such zones are identified using the extreme value method. This work should be used to strengthen the seismic hazard analysis of this area of China.
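
    A minimal sketch of the extreme-value step described above, assuming a Gumbel (Type I) fit to annual maximum magnitudes and a 10%-in-50-years exceedance query; the catalogue values below are illustrative placeholders, not the North-South Seismic Zone data or code of the study.

```python
# Sketch: fit a Gumbel (Type I) extreme value distribution to hypothetical
# annual maximum magnitudes and query the 10%-in-50-years exceedance level.
import numpy as np
from scipy import stats

# Hypothetical annual maximum magnitudes for a region (placeholder values).
annual_max_mag = np.array([5.1, 5.8, 4.9, 6.2, 5.5, 7.0, 5.3, 6.6, 5.0, 5.9,
                           6.1, 5.4, 5.7, 6.8, 5.2, 6.0, 5.6, 6.4, 5.8, 7.2])

# Fit the right-skewed Gumbel distribution to the annual maxima.
loc, scale = stats.gumbel_r.fit(annual_max_mag)

def prob_exceedance(m, years=50):
    """Probability that magnitude m is exceeded at least once in `years` years,
    assuming independent, identically distributed annual maxima."""
    p_annual = 1.0 - stats.gumbel_r.cdf(m, loc=loc, scale=scale)
    return 1.0 - (1.0 - p_annual) ** years

# Magnitude with roughly a 10% chance of being exceeded in 50 years,
# the exceedance level used by the hazard maps discussed above.
grid = np.linspace(4.0, 9.0, 2001)
probs = np.array([prob_exceedance(m) for m in grid])
m_10pc_50yr = grid[np.argmin(np.abs(probs - 0.10))]
print(f"Magnitude with ~10% exceedance probability in 50 years: {m_10pc_50yr:.2f}")
```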

    A journey into e-resource administration hell

    The author discusses the administrative problems which can still occur when looking after a large and complex portfolio of electronic resources, and focuses on some of the recurring ‘nightmares’ involving e-journals in particular. Amongst the subjects discussed are lost archives, activation codes which change without anyone being told, unreasonable expiry dates, poor service, wandering URLs, lack of publicity, failure to keep licensing conditions, and title changes. The article ends with a look at some emerging examples of excellent practice in e-journal management, proving that all parties involved can work together to ensure a smooth and efficient service.

    Predicting Leptodactylus (Amphibia, Anura, Leptodactylidae) Distributions: Broad-Ranging Versus Patchily Distributed Species Using a Presence-Only Environmental Niche Modeling Technique

    Locality data available for many, if not most, species of Neotropical frogs are based on written descriptions of the collecting sites, not on coordinates determined with GPS devices. The pre-GPS data are imprecise relative to GPS data. Niche modeling is a powerful technique for predicting geographic distributions that provides the best results when the locality data are precise. The purpose of this study is to determine whether imprecise historical locality data are sufficient for niche modeling techniques to yield realistic new insights into species-level distributions. Two sets of frogs of the genus Leptodactylus with known, different kinds of distributions are evaluated: two species with broad, presumably continuous distributions, and four species known to occur in patchy, disjunct habitats in South America. BIOCLIM, a presence-only environmental niche modeling algorithm, was used to define suitable occupancy areas based on multiple sets of environmental parameters, including monthly mean, maximum, and minimum temperatures and monthly precipitation. A Nature Conservancy-NatureServe ecoregion layer and a high-resolution elevation layer were also included in the analyses. Our analyses yield realistic new insights and questions regarding distributions of the Leptodactylus species we evaluated. We recommend incorporating the Nature Conservancy-NatureServe layer when evaluating Neotropical distributions, as it gave much more robust results than analyses using only the climatic variables.
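
    A minimal sketch of a BIOCLIM-style presence-only envelope, assuming suitability is defined by percentile bounds of each environmental variable at the presence localities; the variables, thresholds and data below are illustrative, not the layers or localities used in the study.

```python
# Sketch: a BIOCLIM-style envelope marks a grid cell as suitable when every
# environmental variable lies within the percentile range observed at presences.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environmental values at 40 presence localities
# (columns: mean temperature (C), minimum temperature (C), precipitation (mm)).
presence_env = rng.normal(loc=[24.0, 15.0, 120.0], scale=[2.0, 3.0, 30.0], size=(40, 3))

# Envelope: 5th-95th percentile of each variable across the presence records.
lower = np.percentile(presence_env, 5, axis=0)
upper = np.percentile(presence_env, 95, axis=0)

def suitable(cell_env):
    """True if all environmental values of a grid cell fall inside the envelope."""
    cell_env = np.asarray(cell_env, dtype=float)
    return bool(np.all((cell_env >= lower) & (cell_env <= upper)))

# Score two hypothetical grid cells; a categorical ecoregion layer could be
# added as an extra condition (the cell's ecoregion must match one seen at presences).
print(suitable([23.5, 14.0, 110.0]))
print(suitable([31.0, 25.0, 300.0]))
```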

    Seismic hazard and risk in Shanghai and estimation of expected building damage

    The People's Republic of China is in the process of rapid demographic, economic and urban change, including nationwide engineering and building construction at an unprecedented scale. The mega-city of Shanghai is at the centre of China's modernisation. Rapid urbanisation and building growth have increased the exposure of people and property to natural disasters. The seismic hazard of Shanghai and its vicinity is presented using a seismogenic free-zone methodology. A PGA value of 49 cm s⁻² and a maximum intensity of VII on the Chinese Seismic Intensity Scale (a scale similar to the Modified Mercalli) are determined for Shanghai city for a 99% probability of non-exceedance in 50 years. The potential building damage for three independent districts of the city centre (Putuo, Nanjing Road and Pudong) is calculated using damage vulnerability matrices. It is found that old civil houses of brick and timber are the most vulnerable buildings, with a mean probability of 7.4% that this building structure type exhibits the highest damage grade at intensity VII.
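
    A minimal sketch of how a damage vulnerability matrix can be applied at a given intensity, assuming one row of damage-grade probabilities per intensity for a single building type; the matrix values and building counts below are illustrative placeholders, not the matrices used for Putuo, Nanjing Road or Pudong.

```python
# Sketch: expected damage-grade distribution for one building type at a given
# intensity, read from a (hypothetical) damage vulnerability matrix.
import numpy as np

damage_grades = ["D1 slight", "D2 moderate", "D3 heavy", "D4 partial collapse", "D5 collapse"]

# Hypothetical vulnerability matrix for old brick-and-timber civil houses:
# one row per intensity, columns D1-D5, each row summing to 1.
vuln_brick_timber = {
    "VI":   np.array([0.55, 0.30, 0.10, 0.04, 0.01]),
    "VII":  np.array([0.30, 0.33, 0.20, 0.10, 0.07]),
    "VIII": np.array([0.10, 0.25, 0.30, 0.20, 0.15]),
}

n_buildings = 5000          # hypothetical stock of this building type in a district
intensity = "VII"

expected = n_buildings * vuln_brick_timber[intensity]
for grade, count in zip(damage_grades, expected):
    print(f"{grade:>20s}: {count:7.0f} buildings expected at intensity {intensity}")
```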

    An Inversion Method for Measuring Beta in Large Redshift Surveys

    A precision method for determining the value of Beta = Omega_m^{0.6}/b, where b is the galaxy bias parameter, is presented. In contrast to other existing techniques that estimate this quantity by measuring distortions in the redshift-space galaxy-galaxy correlation function or power spectrum, this method removes the distortions by reconstructing the real-space density field and determining the value of Beta that results in a symmetric signal. To remove the distortions, the method modifies the amplitudes of a Fourier plane-wave expansion of the survey data parameterized by Beta. This technique does not depend on the small-angle/plane-parallel approximation and can make full use of large redshift survey data. It has been tested using simulations with four different cosmologies and returns the value of Beta to within +/- 0.031, more than a factor of two improvement over existing techniques. Comment: 16 pages including 6 figures; submitted to The Astrophysical Journal.
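
    A minimal sketch of the symmetrisation idea, assuming for brevity the plane-parallel Kaiser factor (1 + Beta mu^2) rather than the paper's full plane-wave treatment of the survey geometry; the density field, grid and trial values below are synthetic and purely illustrative.

```python
# Sketch: imprint redshift-space distortions on a synthetic field, then recover
# Beta as the value whose correction leaves the power isotropic (symmetric).
import numpy as np

rng = np.random.default_rng(1)
n = 64
true_beta = 0.5

# Synthetic real-space Gaussian density field and its Fourier transform.
delta_r = rng.normal(size=(n, n, n))
delta_r_k = np.fft.rfftn(delta_r)

# mu = k_z / |k| on the FFT grid (line of sight along z).
kx = np.fft.fftfreq(n)[:, None, None]
ky = np.fft.fftfreq(n)[None, :, None]
kz = np.fft.rfftfreq(n)[None, None, :]
kmag = np.sqrt(kx**2 + ky**2 + kz**2)
kmag[0, 0, 0] = 1.0                       # avoid division by zero at k = 0
mu2 = (kz / kmag) ** 2

# "Observed" redshift-space field with Kaiser-like distortions imprinted.
delta_s_k = (1.0 + true_beta * mu2) * delta_r_k

def asymmetry(beta_trial):
    """Residual anisotropy after removing the distortions with a trial Beta."""
    corrected = delta_s_k / (1.0 + beta_trial * mu2)
    power = np.abs(corrected) ** 2
    # Compare power along vs. transverse to the line of sight; ~zero if isotropic.
    radial = power[mu2 > 0.9].mean()
    transverse = power[mu2 < 0.1].mean()
    return abs(radial - transverse)

betas = np.linspace(0.0, 1.0, 101)
best_beta = betas[np.argmin([asymmetry(b) for b in betas])]
print(f"Recovered Beta ~ {best_beta:.2f} (true value {true_beta})")
```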

    Time-modified Confounding

    According to the authors, time-modified confounding occurs when the causal relation between a time-fixed or time-varying confounder and the treatment or outcome changes over time. A key difference between previously described time-varying confounding and the proposed time-modified confounding is that, in the former, the values of the confounding variable change over time while, in the latter, the effects of the confounder change over time. Using marginal structural models, the authors propose an approach to account for time-modified confounding when the relation between the confounder and treatment is modified over time. An illustrative example and simulation show that, when time-modified confounding is present, a marginal structural model with inverse probability-of-treatment weights specified to account for time-modified confounding remains approximately unbiased with appropriate confidence limit coverage, while models that do not account for time-modified confounding are biased. Correct specification of the treatment model, including accounting for potential variation over time in confounding, is an important assumption of marginal structural models. When the effect of confounders on either the treatment or outcome changes over time, time-modified confounding should be considered.
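
    A minimal sketch of an inverse-probability-of-treatment-weighted marginal structural model whose treatment model includes a confounder-by-time interaction, simplified to a per-visit treatment and outcome rather than the longitudinal, cumulative-exposure setting of the paper; the simulated data and coefficients are illustrative only.

```python
# Sketch: the confounder's effect on treatment changes with time; weights from a
# treatment model with an L-by-time interaction recover the marginal effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for i in range(2000):
    for t in range(2):
        l = rng.normal()                                  # confounder
        # Effect of L on treatment grows with time (time-modified confounding).
        p_a = 1.0 / (1.0 + np.exp(-(0.2 + (0.3 + 0.6 * t) * l)))
        a = rng.binomial(1, p_a)
        y = 1.0 * a + 0.8 * l + rng.normal()              # true treatment effect = 1.0
        rows.append(dict(id=i, t=t, L=l, A=a, Y=y))
df = pd.DataFrame(rows)

# Treatment model with an L-by-time interaction to reflect time-modified confounding.
ps = smf.logit("A ~ L * C(t)", data=df).fit(disp=0).predict(df)
# Stabilized weights: marginal treatment probability over conditional probability.
p_marg = smf.logit("A ~ C(t)", data=df).fit(disp=0).predict(df)
num = np.where(df["A"] == 1, p_marg, 1 - p_marg)
den = np.where(df["A"] == 1, ps, 1 - ps)
df["w"] = num / den

# Weighted outcome model (the marginal structural model).
msm = smf.wls("Y ~ A", data=df, weights=df["w"]).fit()
print(msm.params["A"])   # should be close to the true effect of 1.0
```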

    Overadjustment Bias and Unnecessary Adjustment in Epidemiologic Studies

    Overadjustment is defined inconsistently. The term is meant to describe control (e.g., by regression adjustment, stratification, or restriction) for a variable that either increases net bias or decreases precision without affecting bias. We define overadjustment bias as control for an intermediate variable (or a descending proxy for an intermediate variable) on a causal path from exposure to outcome. We define unnecessary adjustment as control for a variable that does not affect the bias of the causal relation between exposure and outcome but may affect its precision. We use causal diagrams and an empirical example (the effect of maternal smoking on neonatal mortality) to illustrate and clarify the definition of overadjustment bias, and to distinguish overadjustment bias from unnecessary adjustment. Using simulations, we quantify the amount of bias associated with overadjustment. Moreover, we show that this bias is based on a different causal structure from confounding or selection biases. Overadjustment bias is not a finite-sample bias, whereas inefficiencies due to control for unnecessary variables are a function of sample size.
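
    A minimal sketch of the simulation idea, assuming a single intermediate on the causal path and a covariate that affects only the outcome; the coefficients are illustrative and are not those of the maternal-smoking example.

```python
# Sketch: adjusting for an intermediate biases the total-effect estimate
# (overadjustment bias); adjusting for an outcome-only covariate leaves the
# estimate unbiased and only changes its precision.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
E = rng.binomial(1, 0.5, n)                   # exposure
M = 1.5 * E + rng.normal(size=n)              # intermediate on the path E -> M -> Y
C = rng.normal(size=n)                        # affects only Y (unnecessary to adjust for)
Y = 0.5 * E + 1.0 * M + 0.7 * C + rng.normal(size=n)   # total effect of E = 0.5 + 1.5 = 2.0
df = pd.DataFrame(dict(E=E, M=M, C=C, Y=Y))

crude       = smf.ols("Y ~ E", data=df).fit()      # unbiased for the total effect (~2.0)
overadj     = smf.ols("Y ~ E + M", data=df).fit()  # overadjustment: recovers only ~0.5
unnecessary = smf.ols("Y ~ E + C", data=df).fit()  # unbiased (~2.0), smaller standard error

for name, fit in [("crude", crude), ("adjusted for intermediate", overadj),
                  ("adjusted for outcome-only covariate", unnecessary)]:
    print(f"{name:>37s}: E coefficient = {fit.params['E']:.2f} (SE {fit.bse['E']:.3f})")
```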