
    Dispelling urban myths about default uncertainty factors in chemical risk assessment - Sufficient protection against mixture effects?

    Assessing the detrimental health effects of chemicals requires the extrapolation of experimental data in animals to human populations. This is achieved by applying a default uncertainty factor of 100 to doses not found to be associated with observable effects in laboratory animals. It is commonly assumed that the toxicokinetic and toxicodynamic sub-components of this default uncertainty factor represent worst-case scenarios and that the multiplication of those components yields conservative estimates of safe levels for humans. It is sometimes claimed that this conservatism also offers adequate protection from mixture effects. By analysing the evolution of uncertainty factors from a historical perspective, we expose that the default factor and its sub-components are intended to represent adequate rather than worst-case scenarios. The intention of using assessment factors for mixture effects was abandoned thirty years ago. It is also often ignored that the conservatism (or otherwise) of uncertainty factors can only be considered in relation to a defined level of protection. A protection equivalent to an effect magnitude of 0.001-0.0001% over background incidence is generally considered acceptable. However, it is impossible to say whether this level of protection is in fact realised with the tolerable doses that are derived by employing uncertainty factors. Accordingly, it is difficult to assess whether uncertainty factors overestimate or underestimate the sensitivity differences in human populations. It is also often not appreciated that the outcome of probabilistic approaches to the multiplication of sub-factors is dependent on the choice of probability distributions. Therefore, the idea that default uncertainty factors are overly conservative worst-case scenarios which can account both for the lack of statistical power in animal experiments and protect against potential mixture effects is ill-founded. We contend that precautionary regulation should provide an incentive to generate better data and recommend adopting a pragmatic, but scientifically better founded approach to mixture risk assessment.
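    The abstract's point that probabilistic results hinge on the choice of distribution can be illustrated with a minimal Monte Carlo sketch in Python (an illustration under assumptions, not the authors' analysis): each sub-factor is treated as lognormal with median 10, matching the conventional 10 x 10 = 100 decomposition, and the geometric standard deviations (2.0 and 3.0) are hypothetical values chosen only to show how the upper percentile of the combined factor shifts with that choice.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Deterministic default: 10 (toxicokinetics) x 10 (toxicodynamics) = 100.
    deterministic = 10 * 10

    # Probabilistic alternative: each sub-factor as a lognormal with median 10.
    # The geometric standard deviations (GSDs) below are illustrative assumptions.
    for gsd in (2.0, 3.0):
        sigma = np.log(gsd)
        toxicokinetic = rng.lognormal(mean=np.log(10), sigma=sigma, size=n)
        toxicodynamic = rng.lognormal(mean=np.log(10), sigma=sigma, size=n)
        combined = toxicokinetic * toxicodynamic
        p95 = np.percentile(combined, 95)
        print(f"GSD {gsd}: 95th percentile of combined factor ~ {p95:.0f} "
              f"(deterministic default: {deterministic})")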

    Left gaze bias in humans, rhesus monkeys and domestic dogs

    While viewing faces, human adults often demonstrate a natural gaze bias towards the left visual field; that is, the right side of the viewee’s face is often inspected first and for longer periods. Using a preferential looking paradigm, we demonstrate that this bias is neither uniquely human nor limited to primates, and provide evidence to help elucidate its biological function within a broader social cognitive framework. We observed that 6-month-old infants showed a broader tendency for left gaze preference towards objects and faces of different species and orientation, whereas in adults the bias appears only towards upright human faces. Rhesus monkeys showed a left gaze bias towards upright human and monkey faces, but not towards inverted faces. Domestic dogs demonstrated a left gaze bias only towards human faces, not towards monkey or dog faces or inanimate object images. Our findings suggest that face- and species-sensitive gaze asymmetry is more widespread in the animal kingdom than previously recognised, is not constrained by attentional or scanning bias, and could be shaped by experience to develop adaptive behavioural significance.

    Prevalence of inflammatory bowel disease among coeliac disease patients in a Hungarian coeliac centre

    BACKGROUND: Coeliac disease, Crohn's disease and ulcerative colitis are inflammatory disorders of the gastrointestinal tract with some common genetic, immunological and environmental factors involved in their pathogenesis. Several studies have shown that patients with coeliac disease have an increased risk of developing inflammatory bowel disease compared with the general population. The aim of this study was to determine the prevalence of inflammatory bowel disease in our coeliac patient cohort over a 15-year study period. METHODS: Coeliac disease was diagnosed using serological tests, and duodenal biopsy samples were taken to determine the degree of mucosal injury. The diagnosis of inflammatory bowel disease was established using clinical parameters, imaging techniques, colonoscopy and histology. DEXA scanning to measure bone mineral density was performed on every patient. RESULTS: In our material, 8/245 (3.2%) coeliac disease patients presented with inflammatory bowel disease (four males, mean age 37, range 22-67): 6/8 Crohn's disease and 2/8 ulcerative colitis. In 7/8 patients the diagnosis of coeliac disease was made first and inflammatory bowel disease was identified during follow-up. The average time between the two diagnoses was 10.7 years. Coeliac disease serology was positive in all cases. The distribution of histology results according to the Marsh classification was: 1/8 M1, 2/8 M2, 3/8 M3a, 2/8 M3b. According to the Montreal classification, 4/6 Crohn's disease patients were B1, 2/6 were B2, and 2/2 ulcerative colitis patients were S2. Normal bone mineral density was detected in 2/8 cases, osteopenia in 4/8 and osteoporosis in 2/8 patients. CONCLUSIONS: Within our cohort of patients with coeliac disease, inflammatory bowel disease was significantly more common (3.2%) than in the general population.

    Implementation of a program for type 2 diabetes based on the Chronic Care Model in a hospital-centered health care system: "the Belgian experience"

    Background: Most research publications on Chronic Care Model (CCM) implementation originate from organizations or countries with a well-structured primary health care system. Information about efforts made in countries with a less well-organized primary health care system is scarce. In 2003, the Belgian National Institute for Health and Disability Insurance commissioned a pilot study to explore how care for type 2 diabetes patients could be organized more efficiently in the Belgian healthcare setting, a setting where the organisational framework for chronic care is mainly hospital-centered. Methods: Process evaluation of an action research project (2003-2007) guided by the CCM in a well-defined geographical area with 76,826 inhabitants and an estimated 2,300 type 2 diabetes patients. In consultation with the region, a program for type 2 diabetes patients was developed. The degree of implementation of the CCM in the region was assessed using the Assessment of Chronic Illness Care (ACIC) survey. A multimethod approach was used to evaluate the implementation process, and the resulting data were triangulated to identify the main facilitators and barriers encountered during implementation. Results: The overall ACIC score improved from 1.45 (limited support) at the start of the study to 5.5 (basic support) at the end. The establishment of a local steering group and the appointment of a program manager were crucial steps in strengthening primary care. The willingness of a group of well-trained and motivated care providers to invest in quality improvement was an important facilitator. Important barriers were the complexity of the intervention, the lack of quality data, inadequate information technology support, the lack of commitment procedures and the uncertainty about sustainable funding. Conclusion: Guided by the CCM, this study highlights the opportunities and the bottlenecks for adapting chronic care delivery in a primary care system with limited structure. The study achieved a considerable improvement in overall support for diabetes patients, but further improvement requires a shift towards system thinking among policy makers. Currently, primary care providers lack the opportunity to take full responsibility for chronic care.

    Change in dominance determines herbivore effects on plant biodiversity

    Herbivores alter plant biodiversity (species richness) in many of the world’s ecosystems, but the magnitude and the direction of herbivore effects on biodiversity vary widely within and among ecosystems. One current theory predicts that herbivores enhance plant biodiversity at high productivity but have the opposite effect at low productivity. Yet, empirical support for the importance of site productivity as a mediator of these herbivore impacts is equivocal. Here, we synthesize data from 252 large-herbivore exclusion studies, spanning a 20-fold range in site productivity, to test an alternative hypothesis: that herbivore-induced changes in the competitive environment determine the response of plant biodiversity to herbivory irrespective of productivity. Under this hypothesis, when herbivores reduce the abundance (biomass, cover) of dominant species (for example, because the dominant plant is palatable), additional resources become available to support new species, thereby increasing biodiversity. By contrast, if herbivores promote high dominance by increasing the abundance of herbivory-resistant, unpalatable species, then resource availability for other species decreases, reducing biodiversity. We show that herbivore-induced change in dominance, independent of site productivity or precipitation (a proxy for productivity), is the best predictor of herbivore effects on biodiversity in grassland and savannah sites. Given that most herbaceous ecosystems are dominated by one or a few species, altering the competitive environment via herbivores or by other means may be an effective strategy for conserving biodiversity in grasslands and savannahs globally.
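    As a rough illustration of the dominance mechanism described above (toy, hypothetical site records, not data from the 252-study synthesis), the Python sketch below expresses the herbivore effect on richness as a log response ratio and relates it to the change in the relative abundance of the dominant species; under the hypothesis, the fitted slope should be negative.

    import numpy as np

    def log_response_ratio(grazed, fenced):
        # Herbivore effect expressed as ln(grazed / fenced).
        return np.log(grazed / fenced)

    # Hypothetical per-site records: species richness and dominance (relative
    # abundance of the most abundant species) in grazed plots vs exclosures.
    sites = [
        # richness_grazed, richness_fenced, dominance_grazed, dominance_fenced
        (14, 10, 0.35, 0.60),  # herbivores suppress a palatable dominant -> richness up
        (12, 11, 0.40, 0.55),
        (7, 10, 0.70, 0.45),   # herbivores promote an unpalatable dominant -> richness down
        (8, 11, 0.65, 0.50),
    ]

    richness_effect = [log_response_ratio(rg, rf) for rg, rf, *_ in sites]
    dominance_change = [dg - df for *_, dg, df in sites]

    # If change in dominance drives the biodiversity response, the slope is negative.
    slope, intercept = np.polyfit(dominance_change, richness_effect, 1)
    print(f"slope of richness effect vs dominance change: {slope:.2f}")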

    An integrative review of the methodology and findings regarding dietary adherence in end stage kidney disease
