
    Habitat Fragmentation, Variable Edge Effects, and the Landscape-Divergence Hypothesis

    Edge effects are major drivers of change in many fragmented landscapes, but are often highly variable in space and time. Here we assess variability in edge effects altering Amazon forest dynamics, plant community composition, invading species, and carbon storage, in the world's largest and longest-running experimental study of habitat fragmentation. Despite detailed knowledge of local landscape conditions, spatial variability in edge effects was only partially foreseeable: relatively predictable effects were caused by the differing proximity of plots to forest edge and varying matrix vegetation, but windstorms generated much random variability. Temporal variability in edge phenomena was also only partially predictable: forest dynamics varied somewhat with fragment age, but also fluctuated markedly over time, evidently because of sporadic droughts and windstorms. Given the acute sensitivity of habitat fragments to local landscape and weather dynamics, we predict that fragments within the same landscape will tend to converge in species composition, whereas those in different landscapes will diverge in composition. This ‘landscape-divergence hypothesis’, if generally valid, will have key implications for biodiversity-conservation strategies and for understanding the dynamics of fragmented ecosystems.

    Variations in hospital standardised mortality ratios (HSMR) as a result of frequent readmissions

    BACKGROUND: We investigated the impact that variations in the frequency of readmissions had upon a hospital's standardised mortality ratio (HSMR), using an adapted HSMR model. Our calculations were based on the admissions of 70 hospitals in the Netherlands during the years 2005 to 2009. METHODS: Through a retrospective analysis of routinely collected hospital data, we calculated standardised in-hospital mortality ratios both by hospital and by diagnostic group (H/SMRs) using two different models: the Dutch 2010 model, and the same model with an additional adjustment for readmission frequency. We compared H/SMR outcomes and the corresponding quality metrics in order to test discrimination (c-statistics), calibration (Hosmer-Lemeshow) and explanatory power (pseudo-R2 statistic) for both models. RESULTS: The SMR outcomes for model 2 compared to model 1 varied between -39% and +110%. At the HSMR level these variations ranged from -12% to +11%. There was substantial disagreement between the models (~20%) as to which mortality outcomes were significantly deviating, at both the SMR and the HSMR level. All quality metrics comparing both models were in favour of model 2. The susceptibility to adjustment for readmission increased for longer review periods. CONCLUSIONS: The 2010 HSMR model for the Netherlands was sensitive to adjustment for the frequency of readmissions. A model without this adjustment produced substantially different HSMR outcomes from a model with it, and the uncertainty introduced by these differences exceeded that indicated by the 95% confidence intervals. An adjustment for the frequency of readmissions should therefore be considered in the Netherlands, since the adjusted model showed more favourable quality metric characteristics. Other countries could well benefit from a similar adjustment to their models. A review period covering at least the last three years of collected data is advisable.
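    As a rough sketch of the arithmetic behind an (H)SMR — observed deaths divided by case-mix-expected deaths, times 100 — here is a toy example with hypothetical admission records. The two risk columns stand in for the expected mortality probabilities produced by a model without and a model with a readmission adjustment; none of the numbers come from the study.

    ```python
    import numpy as np

    def hsmr(observed_deaths, expected_risks):
        """Standardised mortality ratio: total observed deaths over total
        model-expected deaths, scaled to 100 (100 = as expected)."""
        return 100.0 * np.sum(observed_deaths) / np.sum(expected_risks)

    # Hypothetical admissions for one hospital: whether the patient died,
    # and the expected death risk under each of two case-mix models.
    died    = np.array([1, 0, 0, 1, 0, 0, 0, 1])
    risk_m1 = np.array([0.30, 0.05, 0.10, 0.40, 0.05, 0.10, 0.20, 0.30])
    # Model 2 assigns higher expected risk to (hypothetically) readmitted patients.
    risk_m2 = np.array([0.35, 0.05, 0.12, 0.45, 0.06, 0.10, 0.22, 0.35])

    print(round(hsmr(died, risk_m1), 1))  # SMR under model 1
    print(round(hsmr(died, risk_m2), 1))  # SMR under model 2: the adjustment shifts the ratio
    ```

    The point of the toy is only that changing the expected-risk model changes the denominator, and hence the SMR, even though the observed deaths are identical.
    
    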

    Feasibility of a liver transcriptomics approach to assess bovine treatment with the prohormone dehydroepiandrosterone (DHEA)

    BACKGROUND: Within the European Union the use of growth-promoting agents in animal production is prohibited. Illegal use of natural prohormones like dehydroepiandrosterone (DHEA) is hard to prove since prohormones are strongly metabolized in vivo. In the present study we investigated the feasibility of a novel effect-based approach for monitoring abuse of DHEA. Changes in gene expression profiles were studied in the livers of bull calves treated orally (PO) or intramuscularly (IM) with 1000 mg DHEA versus two control groups, using bovine 44K DNA microarrays. In contrast to controlled genomics studies, this work involved bovines purchased at the local market on three different occasions, with ages ranging from 6 to 14 months, thereby reflecting real-life inter-animal variability due to differences in age, individual physiology, season and diet. RESULTS: As determined by principal component analysis (PCA), large differences in liver gene expression profiles were observed between treated and control animals, as well as between the two control groups. When comparing the gene expression profiles of PO and IM treated animals with those of all control animals, the numbers of significantly regulated genes (p-value <0.05 and fold change >1.5) were 23 and 37 respectively. For IM and PO treated calves, gene sets were generated of genes significantly regulated relative to one control group, and validated against the other control group using Gene Set Enrichment Analysis (GSEA). This cross-validation showed that 6 of the 8 gene sets were significantly enriched in DHEA-treated animals when compared with an 'independent' control group. CONCLUSIONS: This study showed that the identification and application of genomic biomarkers for screening of (pro)hormone abuse in livestock production is substantially hampered by biological variation. On the other hand, it demonstrated that comparing pre-defined gene sets against the whole-genome expression profile of an animal makes it possible to distinguish DHEA treatment effects from variation in gene expression due to inherent biological variation. DNA-microarray expression profiling, together with statistical tools like GSEA, therefore represents a promising approach to screening for (pro)hormone abuse in livestock production. However, better insight into the genomic variability of the control population is a prerequisite for defining growth-promoter-specific gene sets that can be used as robust biomarkers in daily practice.
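    The GSEA step can be illustrated with a simplified, unweighted running-sum enrichment score. The full GSEA statistic also weights genes by their correlation with the phenotype and assesses significance by permutation; the gene names below are hypothetical.

    ```python
    import numpy as np

    def enrichment_score(ranked_genes, gene_set):
        """Simplified Kolmogorov-Smirnov-style running-sum statistic: walk down
        the ranked gene list, stepping up at gene-set members and down at
        non-members; the enrichment score (ES) is the maximum deviation of the
        running sum from zero. A large positive ES means the set's members are
        concentrated near the top of the ranking."""
        in_set = np.array([g in gene_set for g in ranked_genes])
        n, n_hit = len(ranked_genes), int(in_set.sum())
        step_hit = 1.0 / n_hit            # increment per gene-set member
        step_miss = 1.0 / (n - n_hit)     # decrement per non-member
        running = np.cumsum(np.where(in_set, step_hit, -step_miss))
        return running[np.argmax(np.abs(running))]

    # Hypothetical ranking of genes by fold change between treated and control.
    ranked = ["g1", "g2", "g3", "g4", "g5", "g6", "g7", "g8"]
    treatment_set = {"g1", "g2", "g4"}  # members concentrated near the top
    print(round(enrichment_score(ranked, treatment_set), 3))
    ```

    Comparing a pre-defined gene set against a whole-genome ranking in this way is what allows a treatment signature to stand out against gene-by-gene biological variation.
    
    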

    Scenario planning for the Edinburgh city region

    This paper examines the application of scenario planning techniques to the detailed and daunting challenge of city re-positioning when policy makers are faced with a heavy history and a complex future context. It reviews a process of scenario planning undertaken in the Edinburgh city region, exploring the scenario process and its contribution to strategies and policies for city repositioning. Strongly rooted in the recent literature on urban and regional economic development, the text outlines how key individuals and organisations involved in the process participated in far-reaching analyses of the possible future worlds in which the Edinburgh city region might find itself.

    Do coefficients of variation of response propensities approximate non‐response biases during survey data collection?

    We evaluate the utility of coefficients of variation of response propensities (CVs) as measures of risks of survey variable non‐response biases when monitoring survey data collection. CVs quantify variation in sample response propensities estimated given a set of auxiliary attribute covariates observed for all subjects. If auxiliary covariates and survey variables are correlated, low levels of propensity variation imply low bias risk. CVs can also be decomposed to measure associations between auxiliary covariates and propensity variation, informing collection method modifications and post‐collection adjustments to improve dataset quality. Practitioners are interested in such approaches to managing bias risks, but risk indicator performance has received little attention. We describe relationships between CVs and expected biases and how they inform quality improvements during and after data collection, expanding on previous work. Next, given auxiliary information from the concurrent 2011 UK census and details of interview attempts, we use CVs to quantify the representativeness of the UK Labour Force Survey dataset during data collection. Following this, we use survey data to evaluate inference based on CVs concerning survey variables with analogues measuring the same quantities among the auxiliary covariate set. Given our findings, we then offer advice on using CVs to monitor survey data collection.
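    A minimal sketch of the CV computation, assuming the response propensities have already been estimated (e.g. by a logistic regression of response on the auxiliary covariates); the propensity values below are invented.

    ```python
    import numpy as np

    def cv_of_propensities(p):
        """Coefficient of variation of estimated response propensities:
        standard deviation over mean. When auxiliaries correlate with the
        survey variables, a low CV implies low non-response bias risk."""
        p = np.asarray(p, dtype=float)
        return p.std(ddof=1) / p.mean()

    # Invented propensities for six sample members at two stages of fieldwork.
    uniform_response = [0.40, 0.42, 0.38, 0.41, 0.39, 0.40]  # little variation -> low CV
    uneven_response  = [0.10, 0.80, 0.20, 0.90, 0.15, 0.85]  # strong variation -> higher bias risk

    print(round(cv_of_propensities(uniform_response), 3))
    print(round(cv_of_propensities(uneven_response), 3))
    ```

    Tracking this single number over successive call attempts is what makes the CV usable as a fieldwork monitoring dashboard statistic.
    
    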

    Data set representativeness during data collection in three UK social surveys: generalizability and the effects of auxiliary covariate choice

    We consider the use of representativeness indicators to monitor risks of non‐response bias during survey data collection. The analysis benefits from use of a unique data set linking call record paradata from three UK social surveys to census auxiliary attribute information on sample households. We investigate the utility of census information for this purpose and the performance of representativeness indicators (the R‐indicator and the coefficient of variation of response propensities) in monitoring representativeness over call records. We also investigate the extent and effects of misspecification of the auxiliary covariate sets used in indicator computation; we further identify design phase capacity points in call records beyond which survey data set improvements are minimal, and ask whether such points are generalizable across surveys. Given our findings, we then offer guidance to survey practitioners on the use of such methods and the implications for optimizing data collection and efficiency savings.
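    For reference, the sample-based R-indicator mentioned above is commonly defined as R = 1 − 2·S(ρ̂), where S(ρ̂) is the standard deviation of the estimated response propensities. A minimal sketch with invented propensities:

    ```python
    import numpy as np

    def r_indicator(p):
        """Sample-based R-indicator: R = 1 - 2 * s.d. of the estimated response
        propensities. R = 1 means every sample member is equally likely to
        respond (a maximally representative response set); lower values signal
        increasing non-response bias risk."""
        p = np.asarray(p, dtype=float)
        return 1.0 - 2.0 * p.std(ddof=1)

    # Invented propensities at two points in the data collection period.
    early = [0.40, 0.42, 0.38, 0.41, 0.39, 0.40]
    late  = [0.10, 0.80, 0.20, 0.90, 0.15, 0.85]
    print(round(r_indicator(early), 3), round(r_indicator(late), 3))
    ```

    Because both the R-indicator and the CV are functions of the same propensity spread, plotting either over successive call attempts reveals the "capacity points" beyond which additional calls barely improve representativeness.
    
    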

    Properties of the bridge sampler with a focus on splitting the MCMC sample

    Computation of normalizing constants is a fundamental mathematical problem in various disciplines, particularly in Bayesian model selection problems. A sampling-based technique known as bridge sampling (Meng and Wong in Stat Sin 6(4):831–860, 1996) has been found to produce accurate estimates of normalizing constants and is shown to possess good asymptotic properties. For small to moderate sample sizes (as in situations with limited computational resources), we demonstrate that the (optimal) bridge sampler produces biased estimates. Specifically, when one density (which we denote p2) is constructed to be close to the target density (which we denote p1) using the method of moments, our simulation-based results indicate that the correlation-induced bias introduced by the moment-matching procedure is non-negligible. More crucially, the bias amplifies as the dimensionality of the problem increases. Thus, a series of theoretical as well as empirical investigations is carried out to identify the nature and origin of the bias. We then examine the effect of sample size allocation on the accuracy of bridge sampling estimates, and find that both the bias and the standard error can be reduced, with a small increase in computational effort, by drawing extra samples from the moment-matched density p2 (which we assume is easy to sample from), provided that the evaluation of p1 is not too expensive. We proceed to show how the simple adaptive approach we term “splitting” manages to alleviate the correlation-induced bias at the expense of a higher standard error, irrespective of the dimensionality involved. We also slightly modify the strategy suggested by Wang et al. (Warp bridge sampling: the next generation, Preprint, 2019. arXiv:1609.07690) to address the increase in standard error due to splitting, which is later generalized to further improve the efficiency. We conclude the paper by offering our insights, based on the preceding investigations, into combining these adaptive methods to improve the accuracy of bridge sampling estimates in Bayesian applications (where posterior samples are typically expensive to generate), with an application to a practical example.
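    The (optimal) bridge sampler discussed above can be sketched as the fixed-point iteration of Meng and Wong (1996) for the ratio of normalizing constants r = c1/c2. The toy target and proposal below are illustrative stand-ins, not the paper's examples: an unnormalized standard normal as q1 and a wider, already-normalized normal as q2, so that r should recover c1 = √(2π) ≈ 2.507.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bridge_sampling(q1, q2, x1, x2, iters=50):
        """Iterative optimal bridge sampler for r = c1/c2, given unnormalized
        densities q1, q2 and draws x1 ~ p1 (e.g. MCMC) and x2 ~ p2.
        Each sweep applies the fixed-point update for the optimal bridge
        function until r stabilizes."""
        n1, n2 = len(x1), len(x2)
        s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)
        l1 = q1(x1) / q2(x1)   # density ratios at the p1 draws
        l2 = q1(x2) / q2(x2)   # density ratios at the p2 draws
        r = 1.0
        for _ in range(iters):
            num = np.mean(l2 / (s1 * l2 + s2 * r))
            den = np.mean(1.0 / (s1 * l1 + s2 * r))
            r = num / den
        return r

    q1 = lambda x: np.exp(-0.5 * x**2)       # unnormalized N(0, 1); c1 = sqrt(2*pi)
    sigma2 = 1.5                             # proposal spread (hypothetical choice)
    q2 = lambda x: np.exp(-0.5 * (x / sigma2)**2) / (sigma2 * np.sqrt(2 * np.pi))

    x1 = rng.normal(0.0, 1.0, size=5000)     # stand-in for MCMC draws from p1
    x2 = rng.normal(0.0, sigma2, size=5000)  # cheap extra draws from the proposal p2
    r = bridge_sampling(q1, q2, x1, x2)
    print(round(r, 3))                       # close to sqrt(2*pi) ~ 2.507
    ```

    The ease of drawing the extra `x2` samples in this sketch is exactly why reallocating effort toward p2, as the paper proposes, is cheap whenever q1 itself is not expensive to evaluate.
    
    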
