
    Prescriptive variability of drugs by general practitioners

    Prescription drug spending is growing faster than any other sector of healthcare, yet very little is known about how prescribing patterns and prescribing costs vary between general practices. In this study, we examined variation in prescription rates and prescription costs over time for 55 GP surgeries in the Western Health and Social Care Trust, Northern Ireland. Temporal changes in the variability of prescribing rates and costs were assessed using the Mann–Kendall test. Outlier practices contributing to between-practice variation in prescribing rates were identified with the interquartile range (IQR) outlier detection method. The relationship between prescribing rates and costs was explored with Spearman's rank correlation. Differences in the variability and the mean of prescribing rates associated with practice setting and socioeconomic deprivation were tested using t-tests and F-tests. The largest between-practice difference in prescribing rates was observed for Apr-Jun 2015, with the number of prescriptions ranging from 3.34 to 8.36 per patient. Practices with outlier prescribing rates contributed substantially to between-practice variability. The largest difference in prescribing costs was reported for Apr-Jun 2014, with the prescription cost per patient ranging from £26.4 to £64.5. In addition, the variability of prescribing rates and costs showed an upward trend over time. Practice setting and socioeconomic deprivation accounted for some of the between-practice variation in prescribing: rural practices had higher between-practice variability than urban practices at all time points, and practices situated in more deprived areas had higher prescribing rates but lower variability than those in less deprived areas. Further analysis is recommended to assess whether variation in prescribing can be explained by the demographic characteristics of the patient population and by practice features. Identifying other factors contributing to prescribing variability can help address potential inappropriateness of prescribing.
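    A minimal illustrative sketch (not the authors' code) of the techniques mentioned above, assuming hypothetical per-practice arrays of prescriptions and costs per patient: the interquartile-range rule for flagging outlier practices, a hand-rolled Mann–Kendall S statistic for assessing a monotonic temporal trend, and Spearman's correlation between prescribing rates and costs.

        import numpy as np
        from scipy.stats import spearmanr

        def iqr_outliers(values, k=1.5):
            """Boolean mask of values outside [Q1 - k*IQR, Q3 + k*IQR]."""
            values = np.asarray(values, dtype=float)
            q1, q3 = np.percentile(values, [25, 75])
            iqr = q3 - q1
            return (values < q1 - k * iqr) | (values > q3 + k * iqr)

        def mann_kendall_s(series):
            """Mann-Kendall S statistic: sum of signs of all forward pairwise
            differences; positive S suggests an upward trend, negative a downward one."""
            series = np.asarray(series, dtype=float)
            return sum(np.sign(series[i + 1:] - series[i]).sum()
                       for i in range(len(series) - 1))

        # Hypothetical data for five practices in one quarter (not the study's figures).
        rates = np.array([5.1, 3.3, 8.4, 4.9, 5.0])       # prescriptions per patient
        costs = np.array([30.2, 26.4, 64.5, 31.0, 29.8])  # cost per patient, GBP
        print(iqr_outliers(rates))                   # flags the unusually low and high prescribers
        print(mann_kendall_s([0.8, 0.9, 1.1, 1.3]))  # > 0: variability rising over time
        rho, p = spearmanr(rates, costs)             # rate-cost association across practices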

    Generic Escherichia coli Contamination of Spinach at the Preharvest Stage: Effects of Farm Management and Environmental Factors

    The objective of this study was to determine the effects of farm management and environmental factors on preharvest spinach contamination with generic Escherichia coli as an indicator of fecal contamination. A repeated cross-sectional study was conducted by visiting spinach farms up to four times per growing season over a period of 2 years (2010 to 2011). Spinach samples (n = 955) were collected from 12 spinach farms in Colorado and Texas as representative states of the Western and Southwestern United States, respectively. During each farm visit, farmers were surveyed about farm-related management and environmental factors using a questionnaire. Associations between the prevalence of generic E. coli in spinach and farm-related factors were assessed using a multivariable logistic regression model that included random effects for farm and farm visit. Overall, 6.6% of spinach samples were positive for generic E. coli. Significant risk factors for spinach contamination with generic E. coli were the proximity (within 10 miles) of a poultry farm, the use of pond water for irrigation, a >66-day period since the planting of spinach, farming on fields previously used for grazing, the production of hay before spinach planting, and farm location in the Southwestern United States. Contamination with generic E. coli was significantly reduced with an irrigation lapse time of >5 days, as well as by several factors related to field workers, including the use of portable toilets, training in their use, and the use of hand-washing stations. To our knowledge, this is the first report of an association between field workers' personal hygiene and produce contamination with generic E. coli at the preharvest level. Collectively, our findings indicate that good personal hygiene and other good farm management practices may reduce produce contamination with generic E. coli at the preharvest stage.
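    A minimal sketch of the fixed-effects part of such a model using the statsmodels formula interface; the random effects for farm and farm visit reported in the study are omitted here for brevity, and the file and column names are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # One row per spinach sample; ecoli_positive is 0/1, predictors come from the survey.
        df = pd.read_csv("spinach_samples.csv")   # hypothetical dataset

        model = smf.logit(
            "ecoli_positive ~ poultry_within_10mi + pond_irrigation"
            " + days_since_planting + portable_toilets + handwash_stations",
            data=df,
        )
        result = model.fit()
        print(result.summary())
        print(np.exp(result.params))   # odds ratios for each risk or protective factor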

    Simple rules can guide whether land or ocean based conservation will best benefit marine ecosystems

    Coastal marine ecosystems can be managed by actions undertaken both on land and in the ocean. Quantifying and comparing the costs and benefits of actions in both realms is therefore necessary for efficient management. Here, we quantify the link between terrestrial sediment run-off and a downstream coastal marine ecosystem, and contrast the cost-effectiveness of marine and land-based conservation actions. We use a dynamic land- and seascape model to determine whether limited funds should be directed to one of four alternative conservation actions – protection on land, protection in the ocean, restoration on land, or restoration in the ocean – to maximise the extent of light-dependent marine benthic habitats across decadal time-scales. We apply the model to a case study seagrass meadow in Australia. We find that marine restoration is the most cost-effective action over decadal time-scales in this system, based on a conservative estimate of the rate at which seagrass can expand into new habitat. The optimal decision will vary in different social-ecological contexts, but some basic information can guide optimal investments to counteract land- and ocean-based stressors: (1) marine restoration should be prioritised if the rates of marine ecosystem decline and expansion are similar and low; (2) marine protection should take precedence if the rate of marine ecosystem decline is high, or if the adjacent catchment is relatively intact and has a low rate of vegetation decline; (3) land-based actions are optimal when the ratio of marine ecosystem expansion to decline is >1.4, with terrestrial restoration typically the most cost-effective; and (4) land protection should be prioritised if the catchment is relatively intact but the rate of vegetation decline is high. These rules of thumb illustrate how cost-effective conservation outcomes for connected land-ocean systems can be achieved without complex modelling.
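    The four rules of thumb can be read as a simple decision procedure. The sketch below is one possible encoding, not the authors' model: only the 1.4 expansion-to-decline ratio comes from the abstract, while the rule ordering and the low/high thresholds are illustrative placeholders.

        def recommend_action(expansion, decline, catchment_intact, veg_decline,
                             low=0.02, high=0.10):
            """Coarse conservation priority for a linked land-ocean system.
            Rates are proportional changes per year; thresholds are placeholders."""
            if expansion / decline > 1.4:                  # rule 3: land-based actions pay off
                return "terrestrial restoration"
            if catchment_intact and veg_decline >= high:   # rule 4
                return "land protection"
            if decline >= high or (catchment_intact and veg_decline < low):  # rule 2
                return "marine protection"
            if decline < high and abs(expansion - decline) <= low:           # rule 1
                return "marine restoration"
            return "needs case-specific modelling"

        # Similar, low rates of marine decline and expansion -> marine restoration (rule 1).
        print(recommend_action(expansion=0.03, decline=0.03,
                               catchment_intact=False, veg_decline=0.05))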

    Long-term accumulation and transport of anthropogenic phosphorus in three river basins

    Global food production depends on phosphorus. Phosphorus is broadly applied as fertilizer, but excess phosphorus contributes to eutrophication of surface water bodies and coastal ecosystems [1]. Here we present an analysis of phosphorus fluxes in three large river basins, including published data on fertilizer, harvested crops, sewage, food waste and river fluxes [2-4]. Our analyses reveal that the magnitude of phosphorus accumulation has varied greatly over the past 30–70 years in the mixed agricultural–urban landscapes of the Thames Basin, UK, and the Yangtze Basin, China, and in the rural Maumee Basin, USA. Fluxes of phosphorus in fertilizer, harvested crops, food waste and sewage dominate over the river fluxes. Since the late 1990s, net exports from the Thames and Maumee Basins have exceeded inputs, suggesting net mobilization of the phosphorus pool accumulated in earlier decades. In contrast, the Yangtze Basin has consistently accumulated phosphorus since 1980. Infrastructure modifications such as sewage treatment and dams may explain more recent declines in total phosphorus fluxes from the Thames and Yangtze Rivers [3,4]. We conclude that human-dominated river basins may undergo a prolonged but finite accumulation phase when phosphorus inputs exceed agricultural demand, and that this accumulated phosphorus may continue to mobilize long after inputs decline.
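    The basin-scale accounting behind this conclusion reduces to a simple mass balance: anthropogenic inputs minus exports gives the annual change in the stored phosphorus pool. The figures below are hypothetical placeholders, not the paper's data.

        # Hypothetical annual fluxes for one basin, in kilotonnes of phosphorus per year.
        inputs  = {"fertilizer": 20.0, "sewage": 6.0, "food_waste": 3.0}
        exports = {"harvested_crops": 18.0, "river_flux": 4.0}

        net = sum(inputs.values()) - sum(exports.values())
        print(f"Net change in the basin's phosphorus pool: {net:+.1f} kt/yr")
        # net > 0: accumulation phase (e.g. the Yangtze since 1980)
        # net < 0: mobilization of legacy phosphorus (e.g. Thames and Maumee since the late 1990s)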