492 research outputs found

    Contribution of ecotoxicological tests in the evaluation of soil bioremediation efficiency

    Clean-up of contaminated soils has become a high priority only recently. Several techniques have been developed for this purpose, such as chemical, physical, thermal or microbiological methods. The efficiency of remediation can be estimated using two approaches: a chemical-specific approach and a toxicity-based approach. So far, the efficiency of the decontamination process has been assessed essentially through chemical analyses, which do not integrate the toxicity of all the soil contaminants and give no indication of the effects caused by the bioavailable fraction of these contaminants, as the toxicity-based approach does. In the present study, the bioremediation efficiency of a soil contaminated with 4-chlorobiphenyl was evaluated using chemical and biological analyses. Experiments were carried out in microcosms contaminated at a rate of 1 g/kg. Control microcosms without the specific degrader were run simultaneously. Acute toxicity to earthworms and inhibition of barley root growth were selected, from previous work, as relevant ecotoxicological tests.
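    The growth-inhibition endpoint mentioned above can be quantified as a percentage relative to an uncontaminated control; the sketch below uses hypothetical root-length measurements, not data from the study:

    ```python
    def percent_inhibition(control_lengths, treated_lengths):
        """Percent inhibition of root growth relative to the control mean."""
        mean_control = sum(control_lengths) / len(control_lengths)
        mean_treated = sum(treated_lengths) / len(treated_lengths)
        return 100.0 * (mean_control - mean_treated) / mean_control

    # Hypothetical barley root lengths (mm) from control and contaminated microcosms
    control = [52.0, 48.0, 50.0]
    treated = [26.0, 24.0, 25.0]
    print(percent_inhibition(control, treated))  # 50.0
    ```

    A decreasing inhibition percentage over the course of bioremediation would indicate that the bioavailable toxic fraction is being removed, complementing the chemical analyses.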

    Carcass conformation and fat cover scores in beef cattle: A comparison of threshold linear models vs grouped data models

    Background: Beef carcass conformation and fat cover scores are assigned by subjective grading performed by trained technicians. The discrete nature of these scores is taken into account in genetic evaluations using a threshold model, which assumes an underlying continuous distribution, called liability, that can be modelled by different methods. Methods: Five threshold models were compared in this study: three threshold linear models, i.e. one including slaughterhouse and sex effects, along with other systematic effects, with homogeneous thresholds, and two extensions with heterogeneous thresholds that vary across slaughterhouses, or across slaughterhouse and sex; a generalised linear model with reverse extreme value errors, for which the underlying variable follows a Weibull distribution and which is both a log-linear model and a grouped data model; and, as the fifth model, an extension of grouped data models with score-dependent effects, which allows for heterogeneous thresholds that vary across slaughterhouse and sex. Goodness-of-fit of these models was tested using the bootstrap methodology. Field data included 2,539 carcasses of the Bruna dels Pirineus beef cattle breed. Results: Differences in carcass conformation and fat cover scores among slaughterhouses could not be fully captured by a systematic slaughterhouse effect, as fitted in the threshold linear model with homogeneous thresholds, so different thresholds per slaughterhouse were estimated using a slaughterhouse-specific threshold model. This model corrected most of the deficiencies when stratification was done by slaughterhouse, but it still failed to fit frequencies stratified by sex correctly, especially for fat cover, as 5 of the 8 observed percentages were not included within the bootstrap interval. This indicates that scoring varied with sex, and a sex-per-slaughterhouse-specific threshold linear model should be used in order to guarantee the goodness-of-fit of the genetic evaluation model. The same was observed for grouped data models, which avoided fitting deficiencies when slaughterhouse and sex effects were score-dependent. Conclusions: Both threshold linear models and grouped data models can guarantee the goodness-of-fit of the genetic evaluation for carcass conformation and fat cover, but our results highlight the need for thresholds specific to sex and slaughterhouse in order to avoid fitting deficiencies.
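    The bootstrap goodness-of-fit check described above can be sketched as a parametric bootstrap of category frequencies: simulate score counts from the fitted model many times and test whether the observed frequency falls inside the resulting percentile interval. The category probabilities and the observed frequency below are hypothetical, not the fitted values from the study:

    ```python
    import random

    def bootstrap_freq_intervals(probs, n, reps=2000, alpha=0.05, seed=1):
        """Percentile bootstrap intervals for category frequencies under a
        fitted multinomial model with category probabilities `probs`."""
        rng = random.Random(seed)
        k = len(probs)
        sims = [[0.0] * reps for _ in range(k)]
        for r in range(reps):
            counts = [0] * k
            for _ in range(n):  # draw n carcass scores from the fitted model
                u, acc = rng.random(), 0.0
                for c, p in enumerate(probs):
                    acc += p
                    if u < acc:
                        counts[c] += 1
                        break
            for c in range(k):
                sims[c][r] = counts[c] / n
        lo_i, hi_i = int(reps * alpha / 2), int(reps * (1 - alpha / 2)) - 1
        return [(sorted(s)[lo_i], sorted(s)[hi_i]) for s in sims]

    # Hypothetical fitted probabilities for 4 fat cover scores in one stratum
    iv = bootstrap_freq_intervals([0.1, 0.4, 0.35, 0.15], n=500)
    observed = 0.52  # hypothetical observed share of score 2 in that stratum
    print(iv[1][0] <= observed <= iv[1][1])  # False: the model fails to fit
    ```

    An observed frequency outside its bootstrap interval, as here, is the kind of fitting deficiency the authors report when stratifying by sex.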

    Accounting for genomic pre-selection in national BLUP evaluations in dairy cattle

    Background: In future Best Linear Unbiased Prediction (BLUP) evaluations of dairy cattle, genomic selection of young sires will cause evaluation biases and loss of accuracy once the selected ones get progeny. Methods: To avoid such bias in the estimation of breeding values, we propose to include information on all genotyped bulls, including the culled ones, in BLUP evaluations. Estimated breeding values based on genomic information were converted into genomic pseudo-performances and then analyzed simultaneously with actual performances. Using simulations based on actual data from the French Holstein population, bias and accuracy of BLUP evaluations were computed for young sires undergoing progeny testing or genomic pre-selection. For bulls pre-selected based on their genomic profile, three different types of information can be included in the BLUP evaluations: (1) data from pre-selected genotyped candidate bulls with actual performances on their daughters, (2) data from bulls with both actual and genomic pseudo-performances, or (3) data from all the genotyped candidates with genomic pseudo-performances. The effects of different levels of heritability, genomic pre-selection intensity and accuracy of genomic evaluation were considered. Results: Including information from all the genotyped candidates, i.e. genomic pseudo-performances for both selected and culled candidates, removed bias from the genetic evaluation and increased accuracy. This approach was effective regardless of the magnitude of the initial bias and as long as the accuracy of the genomic evaluations was sufficiently high. Conclusions: The proposed method can be easily and quickly implemented in BLUP evaluations at the national level, although some improvement is necessary to propagate genomic information more accurately from genotyped to non-genotyped animals. In addition, it is a convenient method to combine direct genomic, phenotypic and pedigree-based information in a multiple-step procedure.
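    The abstract does not specify how genomic EBVs were converted into pseudo-performances. One common illustrative formulation (a classical deregression-style weighting, assumed here, not necessarily the authors' exact method) turns a genomic EBV with reliability r² into a pseudo-record plus a record weight:

    ```python
    def pseudo_record(gebv, reliability, h2):
        """Convert a genomic EBV into a pseudo-performance and a record weight.

        ASSUMPTION: this follows the classical deregression-style weighting
        w = r2 / (1 - r2) * lambda, with lambda = (1 - h2) / h2, where the
        de-regressed proof is treated as a phenotype in the BLUP evaluation.
        """
        lam = (1.0 - h2) / h2
        weight = reliability / (1.0 - reliability) * lam
        pseudo = gebv / reliability  # de-regressed proof used as the record
        return pseudo, weight

    # Hypothetical young bull: GEBV 0.5, genomic reliability 0.5, h2 = 0.25
    p, w = pseudo_record(gebv=0.5, reliability=0.5, h2=0.25)
    print(round(p, 3), round(w, 3))  # 1.0 3.0
    ```

    A pseudo-record built this way can be processed by existing national BLUP software alongside actual daughter performances, which is why the method is quick to implement.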

    The 8 and 9 September 2002 flash flood event in France: a model intercomparison

    Within the framework of the European Interreg IIIb Medocc program, the HYDROPTIMET project aims at the optimization of hydrometeorological forecasting tools in the context of intense precipitation within complex topography. To this end, several meteorological forecast models and hydrological models were tested on four Mediterranean flash-flood events. One of them occurred in France, where the south-eastern ridge of the French “Massif Central”, the Gard region, experienced a devastating flood on 8 and 9 September 2002. 24 people were killed during this event and the economic damage was estimated at 1.2 billion euros. To build the next generation of hydrometeorological forecasting chains able to capture such localized and fast events and the resulting discharges, the forecasted rain fields must be improved to be relevant for hydrological purposes. In this context, this paper presents the results of the evaluation methodology proposed by Yates et al. (2005), which highlights the relevant hydrological scales of a simulated rain field. Simulated rain fields of 7 meteorological model runs concerning the French event are therefore evaluated for different accumulation times. The dynamics of these models are based on either non-hydrostatic or hydrostatic equation systems. Moreover, these models were run under different configurations (resolution, initial conditions). The classical score analysis and the areal evaluation of the simulated rain fields are then performed in order to identify the main simulation characteristics that improve the quantitative precipitation forecast. The conclusions offer recommendations on the value of quantitative precipitation forecasts and ways to use them for quantitative discharge forecasts in mountainous areas.
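    The "classical score analysis" of simulated versus observed rain fields typically rests on a 2×2 contingency table per rain threshold, from which detection scores are derived. A minimal sketch with hypothetical accumulations (not data from the intercomparison):

    ```python
    def contingency(sim, obs, threshold):
        """2x2 contingency counts for exceeding a rain threshold at each grid point."""
        hits = misses = false_alarms = 0
        for s, o in zip(sim, obs):
            se, oe = s >= threshold, o >= threshold
            if se and oe:
                hits += 1
            elif not se and oe:
                misses += 1
            elif se and not oe:
                false_alarms += 1
        return hits, misses, false_alarms

    def scores(hits, misses, false_alarms):
        pod = hits / (hits + misses)                # probability of detection
        far = false_alarms / (hits + false_alarms)  # false alarm ratio
        csi = hits / (hits + misses + false_alarms) # critical success index
        return pod, far, csi

    # Hypothetical 24 h accumulations (mm) at 8 grid points
    obs = [5, 60, 120, 30, 80, 10, 200, 45]
    sim = [8, 40, 150, 70, 60, 5, 90, 20]
    h, m, f = contingency(sim, obs, threshold=50)
    print(scores(h, m, f))  # (0.75, 0.25, 0.6)
    ```

    Repeating this over increasing thresholds and accumulation times shows at which hydrological scales a simulated rain field remains usable for discharge forecasting.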

    β blockers and mortality after myocardial infarction in patients without heart failure: multicentre prospective cohort study

    Objective: To assess the association between early and prolonged β blocker treatment and mortality after acute myocardial infarction. Design: Multicentre prospective cohort study. Setting: Nationwide French registry of Acute ST- and non-ST-elevation Myocardial Infarction (FAST-MI) (at 223 centres) at the end of 2005. Participants: 2679 consecutive patients with acute myocardial infarction and without heart failure or left ventricular dysfunction. Main outcome measures: Mortality was assessed at 30 days in relation to early use of β blockers (≤48 hours of admission), at one year in relation to discharge prescription, and at five years in relation to one year use. Results: β blockers were used early in 77% (2050/2679) of patients, were prescribed at discharge in 80% (1783/2217), and were still being used in 89% (1230/1383) of those alive at one year. Thirty day mortality was lower in patients taking early β blockers (adjusted hazard ratio 0.46, 95% confidence interval 0.26 to 0.82), whereas the hazard ratio for one year mortality associated with β blockers at discharge was 0.77 (0.46 to 1.30). Persistence of β blockers at one year was not associated with lower five year mortality (hazard ratio 1.19, 0.65 to 2.18). In contrast, five year mortality was lower in patients continuing statins at one year (hazard ratio 0.42, 0.25 to 0.72) compared with those discontinuing statins. Propensity score and sensitivity analyses showed consistent results. Conclusions: Early β blocker use was associated with reduced 30 day mortality in patients with acute myocardial infarction, and discontinuation of β blockers at one year was not associated with higher five year mortality. These findings question the utility of prolonged β blocker treatment after acute myocardial infarction in patients without heart failure or left ventricular dysfunction. Trial registration: ClinicalTrials.gov NCT00673036.
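    As a simplified illustration of the hazard-ratio scale reported above, a crude (unadjusted) incidence-rate ratio can be computed under a constant-hazard assumption. The event counts and person-years below are hypothetical, not FAST-MI data, and the study's estimates were additionally adjusted for covariates:

    ```python
    import math

    def rate_ratio_ci(d1, t1, d0, t0, z=1.96):
        """Crude hazard (incidence-rate) ratio with a ~95% CI, assuming
        constant hazards in each group (d = events, t = person-time)."""
        rr = (d1 / t1) / (d0 / t0)
        se_log = math.sqrt(1.0 / d1 + 1.0 / d0)  # SE of log rate ratio
        lo = math.exp(math.log(rr) - z * se_log)
        hi = math.exp(math.log(rr) + z * se_log)
        return rr, lo, hi

    # Hypothetical 30-day follow-up: early beta-blocker group vs none
    rr, lo, hi = rate_ratio_ci(d1=40, t1=2000.0, d0=50, t0=1150.0)
    print(round(rr, 2), round(lo, 2), round(hi, 2))
    ```

    A confidence interval lying entirely below 1, as in the study's 30 day result (0.46, 0.26 to 0.82), indicates lower mortality in the treated group; an interval spanning 1, as for the one and five year estimates, does not.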

    The Evolution of Bat Vestibular Systems in the Face of Potential Antagonistic Selection Pressures for Flight and Echolocation

    PMCID: PMC3634842. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

    Early diagnosis of acute coronary syndrome.

    The diagnostic evaluation of acute chest pain has been augmented in recent years by advances in the sensitivity and precision of cardiac troponin assays, new biomarkers, improvements in imaging modalities, and the release of new clinical decision algorithms. This progress has enabled physicians to diagnose or rule out acute myocardial infarction earlier after the initial patient presentation, usually in emergency department settings, which may facilitate prompt initiation of evidence-based treatments, investigation of alternative diagnoses for chest pain, or discharge, and permit better utilization of healthcare resources. A non-trivial proportion of patients fall into an indeterminate category according to rule-out algorithms, and minimal evidence-based guidance exists for the optimal evaluation, monitoring, and treatment of these patients. Following a review of recent advances in the early diagnosis of acute coronary syndrome, the Cardiovascular Round Table of the ESC proposes approaches for the optimal application of early strategies in clinical practice to improve patient care. The following specific 'indeterminate' patient categories were considered: (i) patients with symptoms and high-sensitivity cardiac troponin 99th percentile but without dynamic change; and (iv) patients with symptoms and high-sensitivity troponin >99th percentile and dynamic change but without coronary plaque rupture/erosion/dissection. Definitive evidence is currently lacking on how to manage these patients whose early diagnosis is 'indeterminate', and these areas of uncertainty should be assigned a high priority for research.
