227 research outputs found

    The impact of a post-take ward round pharmacist on the risk score and enactment of medication-related recommendations

    There is a scarcity of published research describing the impact of a pharmacist on the post-take ward round (PTWR) in addition to ward-based pharmacy services. The aim of this paper was to evaluate the impact of clinical pharmacists' participation in the PTWR on the risk assessment scores of medication-related recommendations made with and without a pharmacist, including recommendations made on the PTWR and those made by the ward-based pharmacist on the inpatient ward. A pre-post intervention study compared the addition of a pharmacist to the PTWR with ward-based pharmacist services alone. A panel reviewed the risk of not acting on medication recommendations made on the PTWR and those recorded by the ward-based pharmacist. The relationship between the risk scores and the number and proportion of recommendations that led to action was compared between study groups. There were more medication-related recommendations on the PTWR in the intervention group, when a pharmacist was present, and proportionately fewer were in the "very high and extreme" risk category. Although there was no difference in the number of ward pharmacist recommendations between groups, a significantly higher proportion of ward pharmacist recommendations were in the "very high and extreme" category for patients who had been seen on a PTWR attended by a pharmacist than when a pharmacist was not present. A greater proportion of "low and medium" risk actionable medication recommendations were actioned on the PTWR in the intervention group, and there was no difference between groups in the risk scores of ward pharmacist recommendations actioned. Overall, the proportion of recommendations actioned was higher for those made on the PTWR than for those made on the ward. The addition of a pharmacist to the PTWR resulted in an increase in low, medium, and high risk recommendations on the PTWR, more very high and extreme risk recommendations made by the ward-based pharmacist, and an increased number of recommendations being actioned during the patients' admission.
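The headline comparison in this abstract is a difference in the proportion of recommendations actioned between two groups. As a minimal illustration of how such a comparison is tested statistically, here is a standard two-proportion z-test; the counts are invented placeholders, not figures from the study:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions, e.g. recommendations
    actioned with vs without a PTWR pharmacist (x = actioned, n = total)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p_value

# Hypothetical counts: 80/100 recommendations actioned vs 60/100
z, p = two_proportion_z(80, 100, 60, 100)
```

A full analysis of data like these would adjust for confounders (e.g. via logistic regression); this unadjusted test only sketches the headline comparison.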

    Aboriginal Capacity Building Achievements for Sustainable Natural Resource Development

    This knowledge synthesis report provides the first bibliometric profile and systematic review of research on Aboriginal capacity building for natural resource development in Canada. This report was supported by the Social Sciences and Humanities Research Council of Canada and The University of Winnipeg.

    Toute une montagne d’affaires louches [A whole mountain of shady dealings]


    Intracranial bleeding in patients with traumatic brain injury: A prognostic study

    BACKGROUND: Intracranial bleeding (IB) is a common and serious consequence of traumatic brain injury (TBI). IB can be classified according to location into epidural haemorrhage (EDH), subdural haemorrhage (SDH), intraparenchymal haemorrhage (IPH) and subarachnoid haemorrhage (SAH). Studies involving repeated CT scanning of TBI patients have found that IB can develop or expand in the 48 hours after injury. If IB enlarges after hospital admission and larger bleeds have a worse prognosis, this would provide a therapeutic rationale for treatments to prevent an increase in the extent of bleeding. We analysed data from the Trauma Audit & Research Network (TARN), a large European trauma registry, to evaluate the association between the size of IB and mortality in patients with TBI. METHODS: We analysed 13,962 patients presenting to TARN participating hospitals between 2001 and 2008 with a Glasgow Coma Score (GCS) less than 15 at presentation, or any head injury with Abbreviated Injury Scale (AIS) severity code 3 and above. The extent of intracranial bleeding was determined by the AIS code. Potential confounders were age, presenting GCS, mechanism of injury, presence and nature of other brain injuries, and presence of extra-cranial injuries. The outcomes were in-hospital mortality and haematoma evacuation. We conducted a multivariable logistic regression analysis to evaluate the independent effect of large and small IB, in comparison with no bleeding, on patient outcomes, and a further multivariable logistic regression analysis to assess the independent effect on mortality of large IB in comparison with small IB. RESULTS: Almost 46% of patients had some type of IB. Subdural haemorrhages were present in 30% of patients, with epidural and intraparenchymal haemorrhages present in approximately 22% each.
After adjusting for potential confounders, we found that large IB, wherever located, was associated with increased mortality in comparison with no bleeding. We also found that large IB was associated with an increased risk of mortality in comparison with small IB. The odds ratios for mortality for large SDH, IPH and EDH, in comparison with small bleeds, were 3.41 (95% CI: 2.68–4.33), 3.47 (95% CI: 2.26–5.33) and 2.86 (95% CI: 1.86–4.38) respectively. CONCLUSION: Large EDH, SDH and IPH are associated with a substantially higher probability of hospital mortality in comparison with small IB. However, the limitations of our data, such as the large proportion of missing data and the lack of data on other confounding factors, such as localization of the bleeding, make the results of this report only exploratory. Future studies should also evaluate the effect of IB size on functional outcomes.
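The adjusted odds ratios above come from multivariable logistic regression on registry data. As a minimal sketch of the underlying quantity, here is how an unadjusted odds ratio and its 95% confidence interval are derived from a 2x2 table; the counts are invented for illustration, not TARN data:

```python
import math

def odds_ratio_ci(a, b, c, d, z_crit=1.96):
    """Unadjusted odds ratio and 95% CI from a 2x2 table:
    a deaths / b survivors with a large bleed,
    c deaths / d survivors with a small bleed."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(odds_ratio) - z_crit * se_log_or)
    hi = math.exp(math.log(odds_ratio) + z_crit * se_log_or)
    return odds_ratio, lo, hi

# Invented counts for illustration only:
or_, lo, hi = odds_ratio_ci(120, 180, 60, 300)
```

Multivariable adjustment (as in the study) would instead exponentiate the fitted regression coefficient for bleed size, but the confidence-interval construction on the log-odds scale is the same idea.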

    Use of the Oxford Handicap Scale at hospital discharge to predict Glasgow Outcome Scale at 6 months in patients with traumatic brain injury

    BACKGROUND: Traumatic brain injury (TBI) is an important cause of acquired disability. In evaluating the effectiveness of clinical interventions for TBI it is important to measure disability accurately. The Glasgow Outcome Scale (GOS) is the most widely used outcome measure in randomised controlled trials (RCTs) in TBI patients. However, the GOS is generally collected at 6 months after discharge, by which time loss to follow-up could have occurred. The objectives of this study were to evaluate the association and predictive validity between a simple disability scale at hospital discharge, the Oxford Handicap Scale (OHS), and the GOS at 6 months among TBI patients. METHODS: The study was a secondary analysis of a randomised clinical trial among TBI patients (MRC CRASH Trial). A Spearman correlation was estimated to evaluate the association between the OHS and GOS. The validity of different dichotomies of the OHS for predicting GOS at 6 months was assessed by calculating sensitivity, specificity and the C statistic. Uni- and multivariate logistic regression models were fitted with the OHS as an explanatory variable, and for each model we analysed its discrimination and calibration. RESULTS: We found that the OHS is highly correlated with GOS at 6 months (Spearman correlation 0.75), with evidence of a linear relationship between the two scales. The OHS dichotomy that separates patients with severe dependency or death showed the greatest discrimination (C statistic: 0.843). Among survivors at hospital discharge the OHS showed very good discrimination (C statistic 0.78) and excellent calibration when used to predict GOS outcome at 6 months. CONCLUSION: We have shown that the OHS, a simple disability scale available at hospital discharge, can predict disability at 6 months accurately, according to the GOS. The OHS could be used to improve the design and analysis of clinical trials in TBI patients and may also provide a valuable clinical tool for physicians to improve communication with patients and relatives when assessing a patient's prognosis at hospital discharge.
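The evaluation described above dichotomises an ordinal scale and scores it against a binary outcome using sensitivity, specificity, and the C statistic. A minimal sketch of those three measures; the scores, outcomes, and cutoff are invented for illustration, not CRASH Trial data:

```python
def c_statistic(scores, outcomes):
    """Concordance (C statistic): probability that a randomly chosen
    patient with the outcome scores higher than one without (ties count 0.5)."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return concordant / (len(pos) * len(neg))

def sens_spec(scores, outcomes, cutoff):
    """Sensitivity and specificity of the dichotomy score >= cutoff."""
    tp = sum(s >= cutoff and y == 1 for s, y in zip(scores, outcomes))
    fn = sum(s < cutoff and y == 1 for s, y in zip(scores, outcomes))
    tn = sum(s < cutoff and y == 0 for s, y in zip(scores, outcomes))
    fp = sum(s >= cutoff and y == 0 for s, y in zip(scores, outcomes))
    return tp / (tp + fn), tn / (tn + fp)

# Invented OHS-style scores (0-5) and binary unfavourable outcomes:
ohs = [0, 1, 1, 2, 3, 4, 4, 5]
bad = [0, 0, 0, 0, 1, 1, 0, 1]
```

Sweeping `cutoff` over all scale values, as the study did for different OHS dichotomies, traces out the trade-off between sensitivity and specificity that the C statistic summarises.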

    Developing and enhancing biodiversity monitoring programmes: a collaborative assessment of priorities

    1. Biodiversity is changing at unprecedented rates, and it is increasingly important that these changes are quantified through monitoring programmes. Previous recommendations for developing or enhancing these programmes focus either on the end goals, that is the intended use of the data, or on how these goals are achieved, for example through volunteer involvement in citizen science, but not both. These recommendations are rarely prioritized.
    2. We used a collaborative approach, involving 52 experts in biodiversity monitoring in the UK, to develop a list of attributes of relevance to any biodiversity monitoring programme and to order these attributes by their priority. We also ranked the attributes according to their importance in monitoring biodiversity in the UK. Experts involved included data users, funders, programme organizers and participants in data collection. They covered expertise in a wide range of taxa.
    3. We developed a final list of 25 attributes of biodiversity monitoring schemes, ordered from the most elemental (those essential for monitoring schemes, e.g. articulate the objectives and gain sufficient participants) to the most aspirational (e.g. electronic data capture in the field, reporting change annually). This ordered list is a practical framework which can be used to support the development of monitoring programmes.
    4. People's ranking of attributes revealed a difference between those who considered attributes with benefits to end users to be most important (e.g. people from governmental organizations) and those who considered attributes with greatest benefit to participants to be most important (e.g. people involved with volunteer biological recording schemes). This reveals a distinction between focussing on aims and the pragmatism in achieving those aims.
    5. Synthesis and applications.
The ordered list of attributes developed in this study will assist in prioritizing resources to develop biodiversity monitoring programmes (including citizen science). The potential conflict between end users of data and participants in data collection that we discovered should be addressed by involving the diversity of stakeholders at all stages of programme development. This will maximize the chance of successfully achieving the goals of biodiversity monitoring programmes

    Historic landscape management: a validation of quantitative soil thin-section analyses

    The archaeological interpretation of past land management practices can be greatly enhanced through examination of soil thin sections. Features relating to manuring practice are among those key to interpreting agricultural practices, and the sources and processes leading to the distribution of these manure materials may further improve knowledge of past landscape utilisation. The use of quantitative analyses to examine soil thin sections opens the possibility of considering the relationships between manured areas in greater detail and of extracting more subtle spatial and temporal changes in past management. In this study the methodology was validated using quantitative image analysis methods to examine manure inputs to the well-documented historical landscape of Papa Stour, Shetland, where intensive manuring was practised until the 1960s. By using both historic and ethnographic evidence to validate the image analysis protocol, differences in spatial and temporal distribution are examined for the practices of manuring with both fuel residues and with turf. The validation supports the hypotheses, expected from ethnographic and historical data, that quantitative soils-based evidence allows the definition of variations in manuring strategies and provides a more secure basis from which to interpret manuring management strategies in archaeological landscapes.

    Cold sintering of bioglass and bioglass/polymer composites

    Bioactive glasses are widely utilized to regenerate bone tissue and aid bonding of orthopedic implants. Forming composites of bioglass with bioactive polymers allows the mechanical properties and biological response to be tailored. Although several methods for creating bioglass–polymer composites exist, they require dissolution of the polymer and controlled phase separation, and appear to have an upper limit of ∼30 vol.% bioglass. Cold sintering is a novel technique for the densification of ceramics and glasses which utilizes a liquid phase and pressure to allow the production of components at reduced temperatures. We demonstrate cold sintering (100°C) of Bioglass 45S5 powder produced via flame spray pyrolysis, and the fabrication of Bioglass 45S5–polymer composites. Assessment of the in vitro response revealed that the composites were not cytotoxic. Solid-state 31P and 29Si MAS NMR studies of the silicon and phosphorus speciation in as-received, wetted, and sintered samples of the glass powder show similarities to the reactions expected when bioglass is implanted in the body, which, along with Raman spectroscopy data, gave insight into the cold sintering densification mechanism.

    Enriching the shared socioeconomic pathways to co-create consistent multi-sector scenarios for the UK

    As the pressure to take action against global warming grows in urgency, scenarios that incorporate multiple social, economic and environmental drivers become increasingly critical to support governments and other stakeholders in planning climate change mitigation and adaptation actions. This has led to a recent explosion of future scenario analyses at multiple scales, further accelerated since the development of the Intergovernmental Panel on Climate Change (IPCC) research community's Shared Socioeconomic Pathways (SSPs) and Representative Concentration Pathways (RCPs). While RCPs have been widely applied to climate models to produce climate scenarios at multiple scales for investigating climate change impacts, adaptation and vulnerabilities (CCIAV), SSPs are only recently being scaled for different geographical and sectoral applications. This is seen in the UK, where significant investment has produced the RCP-based UK Climate Projections (UKCP18), but no equivalent UK version of the SSPs exists. We address this need by developing a set of multi-driver qualitative and quantitative UK-SSPs, following a state-of-the-art scenario methodology that integrates national stakeholder knowledge on locally relevant drivers and indicators with higher-level information from the European and global SSPs. This was achieved through an intensive participatory process that combined bottom-up and top-down approaches to develop a set of UK-specific SSPs that are locally comprehensive, yet consistent with the global and European SSPs. The resulting scenarios balance the demands of consistency and legitimacy, demonstrating that divergence is not necessarily the result of inconsistency, but can reflect a deliberate choice to contextualise narratives at the appropriate scale.

    An evaluation of pharmacology curricula in Australian science and health-related degree programs

    Background: Pharmacology is a biomedical discipline taught in basic science and professional degree programs. In order to provide information that would facilitate pharmacology curricula to be refined and developed, and approaches to teaching to be updated, a national survey was undertaken in Australia that investigated pharmacology course content, teaching and summative assessment methods. Methods: Twenty-two institutions participated in a purpose-built online questionnaire, which enabled an evaluation of 147 courses taught in 10 different degrees. To enable comparison, degrees were grouped into four major degree programs, namely science, pharmacy, medicine and nursing. The pharmacology content was then classified into 16 lecture themes, with 2-21 lecture topics identified per theme. The resultant data were analysed for similarities and differences in pharmacology curricula across the degree programs. Results: While all lecture themes were taught across degree programs, curriculum content differed with respect to the breadth and hours of coverage. Overall, lecture themes were taught most broadly in medicine and with greatest coverage in pharmacy. Reflecting a more traditional approach, lectures were a dominant teaching method (at least 90% of courses). Sixty-three percent of science courses provided practical classes but such sessions occurred much less frequently in other degree programs, while tutorials were much more common in pharmacy degree programs (70%). Notably, problem-based learning was common across medical programs. Considerable diversity was found in the types of summative assessment tasks employed. In science courses the most common form of in-semester assessment was practical reports, whereas in other programs pen-and-paper quizzes predominated. End-of-semester assessment contributed 50-80% to overall assessment across degree programs. 
Conclusion: The similarity in lecture themes taught across the four different degree programs shows that common knowledge- and competency-based learning outcomes can be defined for pharmacology. The authors contend that it is the differences in breadth and coverage of material for each lecture theme, together with the differing teaching modes and assessment, that characterise particular degree programs. Adoption of pharmacology knowledge-based learning outcomes that could be tailored to suit individual degree programs would better facilitate the sharing of expertise and teaching practice than the current model, where pharmacology curricula are degree-specific.
Hilary Lloyd, Tina Hinton, Shane Bullock, Anna-Marie Babey, Elizabeth Davis, Lynette Fernandes, Joanne Hart, Ian Musgrave and James Zioga