The challenges of implementing packaged hospital electronic prescribing and medicine administration systems in UK hospitals: premature purchase of immature solutions?
The UK National Health Service is making major efforts to implement Hospital Electronic Prescribing and Medicine Administration (HEPMA) to improve patient safety and quality of care. Substantial public investments have attracted a wide range of UK and overseas suppliers offering Commercial-Off-The-Shelf (COTS) solutions. A lack of (UK) implementation experience and weak supplier-user relationships are reflected in systems with limited configurability, poorly matched to the needs and practices of English hospitals. This situation echoes the history of comparable corporate information infrastructures - Enterprise Resource Planning systems - in the 1980s/1990s. UK government intervention prompted a similar swarming of immature, often unfinished, products into the market. This resulted, in both cases, in protracted and difficult implementation processes as vendors and adopters struggled to get the systems to work and match the circumstances of the adopting organisations. An analysis of the influence of the Installed Base on Information Infrastructures should explore how the evolution of COTS solutions is conditioned by the structure of adopter and vendor "communities".
Investigating and learning lessons from early experiences of implementing ePrescribing systems into NHS hospitals:a questionnaire study
Background: ePrescribing systems have significant potential to improve the safety and efficiency of healthcare, but they need to be carefully selected and implemented to maximise benefits. Implementations in English hospitals are in the early stages, and there is a lack of standards guiding the procurement, functional specifications, and expected benefits. We sought to provide an updated overview of the current picture in relation to implementation of ePrescribing systems, explore existing strategies, and identify early lessons learned. Methods: A descriptive questionnaire-based study, which included closed and free-text questions and involved both quantitative and qualitative analysis of the data generated. Results: We obtained responses from 85 of 108 NHS staff (78.7% response rate). At least 6% (n = 10) of the 168 English NHS Trusts have already implemented ePrescribing systems, 2% (n = 4) have no plans to implement, and 34% (n = 55) are planning to implement, with intended rapid implementation timelines driven by high expectations surrounding improved safety and efficiency of care. The majority are unclear as to which system to choose, but integration with existing systems and sophisticated decision support functionality are important decisive factors. Participants highlighted the need for increased guidance in relation to implementation strategy, system choice and standards, as well as the need for top-level management support to adequately resource the project. Although some early benefits were reported by hospitals that had already implemented, the hoped-for benefits relating to improved efficiency and cost savings remain elusive due to a lack of system maturity. Conclusions: Whilst few have begun implementation, there is considerable interest in ePrescribing systems, with ambitious timelines amongst those hospitals that are planning implementations.
To ensure the best chance of realising benefits, there is a need for increased guidance in relation to implementation strategy, system choice and standards, as well as increased financial resources to fund local activities.
Identification of undiagnosed atrial fibrillation patients using a machine learning risk prediction algorithm and diagnostic testing (PULsE-AI): Study protocol for a randomised controlled trial.
Atrial fibrillation (AF) is associated with an increased risk of stroke, enhanced stroke severity, and other comorbidities. However, AF is often asymptomatic, and frequently remains undiagnosed until complications occur. Current screening approaches for AF lack either cost-effectiveness or diagnostic sensitivity; thus, there is interest in tools that could be used for population screening. An AF risk prediction algorithm, developed using machine learning from a UK dataset of 2,994,837 patients, was found to be more effective than existing models at identifying patients at risk of AF. Therefore, the aim of the trial is to assess the effectiveness of this risk prediction algorithm combined with diagnostic testing for the identification of AF in a real-world primary care setting. Eligible participants (aged ≥30 years and without an existing AF diagnosis) registered at participating UK general practices will be randomised into intervention and control arms. Intervention arm participants identified at highest risk of developing AF (algorithm risk score ≥7.4%) will be invited for a 12-lead electrocardiogram (ECG) followed by two weeks of home-based ECG monitoring with a KardiaMobile device. Control arm participants will be used for comparison and will be managed routinely. The primary outcome is the number of AF diagnoses in the intervention arm compared with the control arm during the research window. If the trial is successful, there is potential for the risk prediction algorithm to be implemented throughout primary care for narrowing the population considered at highest risk for AF who could benefit from more intensive screening for AF. Trial Registration: NCT04045639
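The trial's eligibility step amounts to a simple cut-off applied to each participant's algorithm risk score. A minimal sketch of that selection logic, assuming a hypothetical risk-score record format (the names and data below are illustrative, not the trial's actual code):

```python
# Hypothetical sketch of the PULsE-AI eligibility step: participants whose
# algorithm risk score meets the 7.4% threshold are flagged for a 12-lead
# ECG and two weeks of home-based monitoring. Illustrative only.

THRESHOLD = 0.074  # algorithm risk-score cut-off (>= 7.4%)

def flag_for_ecg(participants):
    """Return IDs of participants invited for ECG follow-up."""
    return [p["id"] for p in participants if p["risk_score"] >= THRESHOLD]

cohort = [
    {"id": "P001", "risk_score": 0.031},
    {"id": "P002", "risk_score": 0.074},  # exactly at threshold: included
    {"id": "P003", "risk_score": 0.120},
]
print(flag_for_ecg(cohort))  # ['P002', 'P003']
```

Participants below the threshold remain in routine care, matching the control-arm comparison described above.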
A Serological Survey of Infectious Disease in Yellowstone National Parkβs Canid Community
BACKGROUND: Gray wolves (Canis lupus) were reintroduced into Yellowstone National Park (YNP) after a >70-year absence, and as part of recovery efforts, the population has been closely monitored. In 1999 and 2005, pup survival was significantly reduced, suggestive of disease outbreaks. METHODOLOGY/PRINCIPAL FINDINGS: We analyzed sympatric wolf, coyote (Canis latrans), and red fox (Vulpes vulpes) serologic data from YNP, spanning 1991-2007, to identify long-term patterns of pathogen exposure, identify associated risk factors, and examine evidence for disease-induced mortality among wolves for which there were survival data. We found high, constant exposure to canine parvovirus (wolf seroprevalence: 100%; coyote: 94%), canine adenovirus-1 (wolf pups [0.5-0.9 yr]: 91%, adults [≥1 yr]: 96%; coyote juveniles [0.5-1.5 yrs]: 18%, adults [≥1.6 yrs]: 83%), and canine herpesvirus (wolf: 87%; coyote juveniles: 23%, young adults [1.6-4.9 yrs]: 51%, old adults [≥5 yrs]: 87%), suggesting that these pathogens were enzootic within YNP wolves and coyotes. An average of 50% of wolves exhibited exposure to the protozoan parasite Neospora caninum, although individuals' odds of exposure tended to increase with age and were temporally variable. Wolf, coyote, and fox exposure to canine distemper virus (CDV) was temporally variable, with evidence for distinct multi-host outbreaks in 1999 and 2005, and perhaps a smaller, isolated outbreak among wolves in the interior of YNP in 2002. The years of high wolf-pup mortality in 1999 and 2005 in the northern region of the park were correlated with peaks in CDV seroprevalence, suggesting that CDV contributed to the observed mortality. CONCLUSIONS/SIGNIFICANCE: Of the pathogens we examined, none appear to jeopardize the long-term population of canids in YNP. However, CDV appears capable of causing short-term population declines.
Additional information on how and where CDV is maintained, and on the frequency with which future epizootics might be expected, would be useful for future management of the Northern Rocky Mountain wolf population.
Rationale and design of the oral HEMe iron polypeptide Against Treatment with Oral Controlled Release Iron Tablets trial for the correction of anaemia in peritoneal dialysis patients (HEMATOCRIT trial)
Background: The main hypothesis of this study is that oral heme iron polypeptide (HIP; Proferrin® ES) administration will more effectively augment iron stores in erythropoietic stimulatory agent (ESA)-treated peritoneal dialysis (PD) patients than conventional oral iron supplementation (Ferrogradumet®).
Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy
Background
A reliable system for grading operative difficulty of laparoscopic cholecystectomy would standardise description of findings and reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets.
Methods
Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale, using Kendall's tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis.
Results
A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was found to be a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001).
Conclusion
We have shown that an operative difficulty scale can standardise the description of operative findings by surgeons of multiple grades to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
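The discrimination statistic used in the study above (AUROC, the probability that a randomly chosen positive case carries a higher score than a randomly chosen negative one) can be illustrated with a small, self-contained sketch; the data and function below are hypothetical, not drawn from the CholeS or single-surgeon datasets:

```python
# Illustrative AUROC computation of the kind used to assess how well a
# difficulty grade predicts a dichotomised outcome (e.g. conversion to
# open surgery). Toy data, not the study's own.

def auroc(labels, scores):
    """Probability a random positive outranks a random negative (ties = 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Nassar-style difficulty grades 1-5 as the score; 1 = converted to open.
grades    = [1, 2, 3, 3, 4, 5, 2, 3, 4, 5]
converted = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1]
print(round(auroc(converted, grades), 3))  # 0.896
```

An AUROC near 1.0 (such as the 0.903 reported for conversion to open surgery) indicates that higher grades almost always accompany the adverse outcome.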
Undertaking multi-centre randomised controlled trials in primary care: learnings and recommendations from the PULsE-AI trial researchers.
BACKGROUND: Conducting effective and translational research can be challenging, and few trials undertake formal reflection exercises and disseminate learnings from them. Following completion of our multicentre randomised controlled trial, which was impacted by the COVID-19 pandemic, we sought to reflect on our experiences and share our thoughts on challenges, lessons learned, and recommendations for researchers undertaking or considering research in primary care. METHODS: Researchers involved in the Prediction of Undiagnosed atriaL fibrillation using a machinE learning AlgorIthm (PULsE-AI) trial, conducted in England from June 2019 to February 2021, were invited to participate in a qualitative reflection exercise. Members of the Trial Steering Committee (TSC) were invited to attend a semi-structured focus group session, while Principal Investigators and their research teams at practices involved in the trial were invited to participate in semi-structured interviews. Following transcription, reflexive thematic analysis was undertaken based on the pre-specified themes of recruitment, challenges, lessons learned, and recommendations that formed the structure of the focus group/interview sessions, whilst also allowing the exploration of new themes that emerged from the data. RESULTS: Eight of 14 members of the TSC and one of six practices involved in the trial participated in the reflection exercise. Recruitment was highlighted as a major challenge encountered by trial researchers, even prior to disruption due to the COVID-19 pandemic. Researchers also commented on themes such as the need to consider incentivisation, and challenges associated with using technology in trials, especially in older age groups. CONCLUSIONS: Undertaking a formal reflection exercise following the completion of the PULsE-AI trial enabled us to review experiences encountered whilst undertaking a prospective randomised trial in primary care.
In sharing our learnings, we hope to support other clinicians undertaking research in primary care to ensure that future trials are of optimal value for furthering knowledge, streamlining pathways, and benefitting patients.
A Synthesis of Tagging Studies Examining the Behaviour and Survival of Anadromous Salmonids in Marine Environments
This paper synthesizes tagging studies to highlight the current state of knowledge concerning the behaviour and survival of anadromous salmonids in the marine environment. Scientific literature was reviewed to quantify the number and type of studies that have investigated behaviour and survival of anadromous forms of Pacific salmon (Oncorhynchus spp.), Atlantic salmon (Salmo salar), brown trout (Salmo trutta), steelhead (Oncorhynchus mykiss), and cutthroat trout (Oncorhynchus clarkii). We examined three categories of tags, including electronic (e.g. acoustic, radio, archival), passive (e.g. external marks, Carlin, coded wire, passive integrated transponder [PIT]), and biological (e.g. otolith, genetic, scale, parasites). Based on 207 papers, survival rates and behaviour in marine environments were found to be extremely variable spatially and temporally, with some of the most influential factors being temperature, population, physiological state, and fish size. Salmonids at all life stages were consistently found to swim at an average speed of approximately one body length per second, which likely corresponds with the speed at which transport costs are minimal. We found that there is relatively little research conducted on open-ocean migrating salmonids, and some species (e.g. masu [O. masou] and amago [O. rhodurus]) are underrepresented in the literature. The most common forms of tagging used across life stages were various forms of external tags, coded wire tags, and acoustic tags; however, the majority of studies did not measure tagging/handling effects on the fish, tag loss/failure, or tag detection probabilities when estimating survival. Through the interdisciplinary application of existing and novel technologies, future research examining the behaviour and survival of anadromous salmonids could incorporate important drivers such as oceanography, tagging/handling effects, predation, and physiology.
Identification of undiagnosed atrial fibrillation using a machine learning risk prediction algorithm and diagnostic testing (PULsE-AI) in primary care: cost-effectiveness of a screening strategy evaluated in a randomised controlled trial in England.
OBJECTIVE: The PULsE-AI trial sought to determine the effectiveness of a screening strategy that included a machine learning risk prediction algorithm in conjunction with diagnostic testing for identification of undiagnosed atrial fibrillation (AF) in primary care. This study aimed to evaluate the cost-effectiveness of implementing the screening strategy in a real-world setting. METHODS: Data from the PULsE-AI trial - a prospective, randomized, controlled trial conducted across six general practices in England from June 2019 to February 2021 - were used to inform a cost-effectiveness analysis that included a hybrid screening decision tree and Markov AF disease progression model. Model outcomes were reported at both individual- and population-level (estimated UK population ≥30 years of age at high-risk of undiagnosed AF) and included number of patients screened, number of AF cases identified, mean total and incremental costs (screening, events, treatment), quality-adjusted-life-years (QALYs), and incremental cost-effectiveness ratio (ICER). RESULTS: The screening strategy was estimated to result in 45,493 new diagnoses of AF across the high-risk population in the UK (3.3 million), and an estimated additional 14,004 lifetime diagnoses compared with routine care only. Per-patient costs for high-risk individuals who underwent the screening strategy were estimated at £1,985 (vs £1,888 for individuals receiving routine care only). At a population-level, the screening strategy was associated with a cost increase of approximately £322 million and an increase of 81,000 QALYs. The screening strategy demonstrated cost-effectiveness versus routine care only at an accepted ICER threshold of £20,000 per QALY-gained, with an ICER of £3,994/QALY.
Compared with routine care only, it is cost-effective to target individuals at high risk of undiagnosed AF, through an AF risk prediction algorithm, who should then undergo diagnostic testing. This AF risk prediction algorithm can reduce the number of patients needed to be screened to identify undiagnosed AF, thus alleviating primary care burden.
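The headline cost-effectiveness result reduces to a single ratio: incremental cost divided by incremental QALYs. A minimal sketch using the abstract's rounded population-level figures (the function name is illustrative; the small gap versus the reported £3,994/QALY reflects rounding of the published inputs, not the model itself):

```python
# ICER = (extra cost of screening) / (extra QALYs gained), compared with
# routine care. Inputs are the rounded population-level figures from the
# abstract; the modelled ICER reported in the study is £3,994/QALY.

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

per_qaly = icer(322_000_000, 81_000)  # ~£322m extra cost, ~81,000 extra QALYs
print(round(per_qaly))       # 3975
print(per_qaly < 20_000)     # True: well below the £20,000/QALY threshold
```

Because the ratio sits far below the accepted £20,000/QALY willingness-to-pay threshold, the screening strategy is judged cost-effective versus routine care.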