Dispelling urban myths about default uncertainty factors in chemical risk assessment - Sufficient protection against mixture effects?
Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we show that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors depends on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can both compensate for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data and recommend adopting a pragmatic, but scientifically better founded, approach to mixture risk assessment. © 2013 Martin et al.; licensee BioMed Central Ltd.
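The point that probabilistic multiplication of sub-factors depends on the assumed distributions can be illustrated with a short Monte Carlo sketch. The lognormal shapes, medians of 10, and log10 spreads used below are hypothetical choices made purely for illustration; they are not values taken from the article.

```python
# Illustrative Monte Carlo sketch (hypothetical assumptions, not from the
# article): the combined uncertainty factor obtained by multiplying two
# sub-factors depends on the probability distributions assumed for them.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 1_000_000

def combined_factor(sigma_log10):
    """Sample two sub-factors, each lognormal with a median of 10 and the
    given spread on the log10 scale, multiply them, and report the 95th
    percentile of the product."""
    sub1 = 10 ** rng.normal(np.log10(10), sigma_log10, n)
    sub2 = 10 ** rng.normal(np.log10(10), sigma_log10, n)
    return np.percentile(sub1 * sub2, 95)

# Deterministic default: 10 * 10 = 100.
# The probabilistic 95th percentile shifts with the assumed spread:
for sigma in (0.15, 0.30, 0.45):
    print(f"log10 SD = {sigma}: 95th percentile of product = "
          f"{combined_factor(sigma):.0f}")
```

Running the sketch with different spreads gives noticeably different combined factors, which is the sense in which the outcome of a probabilistic approach hinges on the distributional choices rather than on the nominal sub-factor values alone.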
Vitamin D status predicts 30 day mortality in hospitalised cats
Vitamin D insufficiency, defined as low serum concentrations of the major circulating form of vitamin D, 25-hydroxyvitamin D (25(OH)D), has been associated with the development of numerous infectious, inflammatory, and neoplastic disorders in humans. In addition, vitamin D insufficiency has been found to be predictive of mortality for many disorders. However, interpretation of human studies is difficult since vitamin D status is influenced by many factors, including diet, season, latitude, and exposure to UV radiation. In contrast, domesticated cats do not produce vitamin D cutaneously, and most cats are fed a commercial diet containing a relatively standard amount of vitamin D. Consequently, domesticated cats are an attractive model system in which to examine the relationship between serum 25(OH)D and health outcomes. The hypothesis of this study was that vitamin D status would predict short-term, all-cause mortality in domesticated cats. Serum concentrations of 25(OH)D, together with a wide range of other clinical, hematological, and biochemical parameters, were measured in 99 consecutively hospitalised cats. Cats that died within 30 days of initial assessment had significantly lower serum 25(OH)D concentrations than cats that survived. In a logistic regression model including 12 clinical variables, serum 25(OH)D concentration in the lower tertile was significantly predictive of mortality. The odds ratio of mortality within 30 days was 8.27 (95% confidence interval 2.54-31.52) for cats with a serum 25(OH)D concentration in the lower tertile. In conclusion, this study demonstrates that low serum 25(OH)D concentration is an independent predictor of short-term mortality in cats.
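For readers unfamiliar with how an odds ratio and its confidence interval are formed, the sketch below works through a crude 2x2 calculation. The counts are hypothetical and chosen only so the totals match the cohort size of 99; they are not the study's data, and the study's reported estimate of 8.27 came from a multivariable model rather than a crude table.

```python
# Illustrative calculation with hypothetical counts (not the study's data):
# crude odds ratio and 95% Wald confidence interval for 30 day mortality,
# comparing cats in the lower 25(OH)D tertile with the upper two tertiles.
import math

# Hypothetical 2x2 table: rows = lower tertile / upper tertiles,
# columns = died within 30 days / survived (33 + 66 = 99 cats).
a, b = 12, 21   # lower tertile: died, survived
c, d = 5, 61    # upper tertiles: died, survived

odds_ratio = (a * d) / (b * c)

# 95% Wald confidence interval on the log odds ratio.
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

An adjusted odds ratio from a multivariable model is interpreted the same way, but each covariate's estimate is conditioned on the other 11 variables in the model rather than read off a single table.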