
    Can early warning scores identify deteriorating patients in pre-hospital settings? A systematic review

    OBJECTIVE: To evaluate the effectiveness and predictive accuracy of early warning scores (EWS) in predicting deteriorating patients in pre-hospital settings. METHODS: Systematic review. Seven databases were searched to August 2017. Study quality was assessed using QUADAS-2. A narrative synthesis is presented. ELIGIBILITY: Studies that evaluated EWS predictive accuracy, or that compared outcomes between populations that did or did not use EWS, in any pre-hospital setting were eligible for inclusion. EWS were included if they aggregated three or more physiological parameters. RESULTS: Seventeen studies (157,878 participants) of predictive accuracy were included (16 in ambulance services and 1 in a nursing home). AUCs ranged from 0.50 (CI not reported) to 0.89 (95% CI 0.82, 0.96). AUCs were generally higher (>0.80) for prediction of mortality within short time frames or for combination outcomes that included mortality and ICU admission. Few patients with low scores died at any time point. Patients with high scores were at risk of deterioration. Results were less clear for intermediate thresholds (≥4 or 5). Five studies were judged at low or unclear risk of bias; all others were judged at high risk of bias. CONCLUSIONS: Very low and high EWS are able to discriminate between patients who are not likely and those who are likely to deteriorate in the pre-hospital setting. No study compared outcomes pre- and post-implementation of EWS, so there is no evidence on whether patient outcomes differ between pre-hospital settings that do and do not use EWS. Further studies are required to address this question and to evaluate EWS in pre-hospital settings.
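    The headline figures above are ROC areas for an aggregate physiological score against a binary deterioration outcome. As a rough illustration of how such an AUC and its confidence interval are obtained, here is a minimal sketch using simulated data; the variable names `ews_score` and `died_48h` are invented for the example, not taken from the review.

```python
# Sketch: AUC (with a percentile-bootstrap 95% CI) for an aggregate early
# warning score against a binary deterioration outcome. Data are simulated
# and hypothetical, not from the review.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: integer EWS (0-20) and a 48-hour mortality flag whose
# probability rises with the score.
ews_score = rng.integers(0, 21, size=1000)
died_48h = rng.binomial(1, p=np.clip(ews_score / 40, 0.01, 0.5))

auc = roc_auc_score(died_48h, ews_score)

# Simple percentile bootstrap for the confidence interval.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(ews_score), len(ews_score))
    if died_48h[idx].min() == died_48h[idx].max():
        continue  # skip resamples containing a single outcome class
    boot.append(roc_auc_score(died_48h[idx], ews_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"AUC {auc:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```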

    Long COVID in children and young people after infection or reinfection with the Omicron variant: a prospective observational study

    To describe the prevalence of long COVID in children infected for the first time (n = 332) or reinfected (n = 243) with Omicron, compared with test-negative children (n = 311). Overall, 12%-16% of those infected with Omicron met the research definition of long COVID at 3 and 6 months after infection, with no evidence of a difference between first infections and reinfections (Pχ2 = 0.17).
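    The Pχ2 value quoted above is a chi-squared test of long COVID prevalence between the first-infection and reinfection groups. A minimal sketch of that test, using illustrative cell counts rather than the study's actual data:

```python
# Sketch: chi-squared test comparing long COVID prevalence between
# first-infection and reinfection groups. The counts below are illustrative
# only, not the study's data.
from scipy.stats import chi2_contingency

#                  met definition   did not
first_infection = [48, 284]   # n = 332 (hypothetical split)
reinfection     = [28, 215]   # n = 243 (hypothetical split)

chi2, p, dof, expected = chi2_contingency([first_infection, reinfection])
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")
```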

    Associations between attainment of incentivised primary care indicators and incident diabetic retinopathy in England: a population-based historical cohort study

    Background: The associations between England’s incentivised primary care-based diabetes prevention activities and hard clinical endpoints remain unclear. We aimed to examine the associations between attainment of primary care indicators and incident diabetic retinopathy (DR) among people with type 2 diabetes. Methods: A historical cohort (n = 60,094) of people aged ≥ 18 years with type 2 diabetes and no DR at baseline was obtained from the UK Clinical Practice Research Datalink (CPRD). Exposures included attainment of the Quality and Outcomes Framework (QOF) HbA1c (≤ 7.5% or 59 mmol/mol), blood pressure (≤ 140/80 mmHg), and cholesterol (≤ 5 mmol/L) indicators, and the number of National Diabetes Audit (NDA) care processes completed (categorised as 0–3, 4–6, or 7–9), in 2010–2011. Outcomes were time to development of DR and sight-threatening diabetic retinopathy (STDR). Nearest neighbour propensity score matching was undertaken and Cox proportional hazards models were then fitted using the matched samples. Concordance statistics were calculated for each model. Results: 8263 DR and 832 STDR diagnoses were observed over mean follow-up periods of 3.5 (SD 2.1) and 3.8 (SD 2.0) years, respectively. HbA1c and blood pressure (BP) indicator attainment were associated with lower rates of DR (adjusted hazard ratios (aHRs) 0.94 (95% CI 0.89–0.99) and 0.87 (0.83–0.92), respectively), whereas cholesterol indicator attainment was not (aHR 1.03 (0.97–1.10)). All QOF indicators were associated with lower rates of STDR (aHRs 0.74 (0.62–0.87) for HbA1c, 0.78 (0.67–0.91) for BP, and 0.82 (0.67–0.99) for cholesterol). Completion of 7–9 vs. 0–3 NDA processes was associated with fewer STDR diagnoses (aHR 0.72 (0.55–0.94)). Conclusions: Attainment of key primary care indicators is associated with a lower incidence of DR and STDR among patients with type 2 diabetes in England.
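    The analysis pairs nearest-neighbour propensity score matching (on the probability of attaining an indicator) with Cox proportional hazards models fitted to the matched sample. The sketch below outlines that pipeline in general terms; the data frame and column names are hypothetical, not CPRD fields, and matching is done with replacement for brevity.

```python
# Sketch: 1-to-1 nearest-neighbour propensity score matching on the
# probability of indicator attainment, then a Cox model on the matched
# sample. All data-frame and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

def match_and_fit(df, exposure, covariates, duration_col, event_col):
    # 1. Propensity score: P(exposure = 1 | covariates).
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df[exposure])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])

    treated = df[df[exposure] == 1]
    control = df[df[exposure] == 0]

    # 2. Match each treated record to its nearest control on the score
    #    (with replacement, for simplicity).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    matched = pd.concat([treated, control.iloc[idx.ravel()]]).reset_index(drop=True)

    # 3. Cox proportional hazards model on the matched sample.
    cph = CoxPHFitter()
    cph.fit(matched[[duration_col, event_col, exposure] + covariates],
            duration_col=duration_col, event_col=event_col)
    return cph  # cph.concordance_index_ gives the concordance statistic

# Example call with hypothetical columns:
# cph = match_and_fit(df, "hba1c_target_met", ["age", "sex", "imd"],
#                     "years_followup", "dr_event")
# print(cph.hazard_ratios_["hba1c_target_met"])
```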

    Systematic review of interventions for the prevention and treatment of postoperative urinary retention

    Background: Postoperative urinary retention (PO‐UR) is an acute and painful inability to void after surgery that can lead to complications and delayed hospital discharge. Standard treatment with a urinary catheter is associated with a risk of infection and can be distressing, undignified and uncomfortable. This systematic review aimed to identify effective interventions for the prevention and treatment of PO‐UR that might be alternatives to urinary catheterization. Methods: Electronic databases were searched from inception to September 2017. Randomized trials of interventions for the prevention or treatment of PO‐UR were eligible for inclusion. Studies were assessed for risk of bias using the Cochrane risk-of-bias tool (version 2.0). Two reviewers were involved at all review stages. Where possible, data were pooled using random‐effects meta‐analysis. The overall quality of the body of evidence was rated using the GRADE approach. Results: Some 48 studies involving 5644 participants were included. Most interventions were pharmacological strategies to prevent PO‐UR. Based on GRADE, there was high‐certainty evidence to support replacing morphine in a regional anaesthetic regimen, using alpha‐blockers (number needed to treat to prevent one case of PO‐UR (NNT) 5, 95 per cent c.i. 5 to 7), the antispasmodic drug drotaverine (NNT 9, 7 to 30) and early postoperative mobilization (NNT 5, 4 to 8) for prevention, and employing hot packs or gauze soaked in warm water for treatment (NNT 2, 2 to 4). Very few studies reported on the secondary outcomes of pain, incidence of urinary tract infection or duration of hospital stay. Conclusion: Promising interventions exist for PO‐UR, but they need to be evaluated in randomized trials investigating comparative clinical and cost effectiveness, and acceptability to patients.
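    The numbers needed to treat reported above follow directly from the pooled absolute risk reduction, NNT = 1 / ARR. A small worked example with made-up event counts, purely to show the arithmetic:

```python
# Sketch: NNT from an absolute risk reduction. Event counts are illustrative,
# not data from the review.
control_events, control_n = 60, 200   # PO-UR cases with standard care
treated_events, treated_n = 20, 200   # PO-UR cases with the intervention

arr = control_events / control_n - treated_events / treated_n  # 0.30 - 0.10 = 0.20
nnt = 1 / arr
print(f"ARR = {arr:.2f}, NNT = {nnt:.0f}")  # ARR = 0.20, NNT = 5
```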

    Development and validation of resource-driven risk prediction models for incident chronic kidney disease in type 2 diabetes

    Prediction models for population-based screening need, for global usage, to be resource-driven, relying on predictors that can be obtained affordably. Here, we report the development and validation of three resource-driven risk models to identify people with type 2 diabetes (T2DM) at risk of stage 3 CKD, defined by a decline in estimated glomerular filtration rate (eGFR) to below 60 mL/min/1.73 m². The observational study cohort used for model development consisted of data from a primary care dataset of 20,510 multi-ethnic individuals with T2DM from London, UK (2007–2018). Discrimination and calibration of the resulting prediction models, developed using Cox regression, were assessed using the c-statistic and calibration slope, respectively. Models were internally validated using tenfold cross-validation and externally validated on 13,346 primary care individuals from Wales, UK. The simplest model was converted into a risk score to enable implementation in community-based medicine. The derived full model included demographic and laboratory parameters, medication use, cardiovascular disease history (CVD) and sight-threatening retinopathy status (STDR). Two less resource-intensive models were developed by excluding CVD and STDR in the second model, and HbA1c and HDL in the third model. All three 5-year risk models had good internal discrimination and calibration (optimism-adjusted C-statistics were each 0.85, and calibration slopes 0.999–1.002). In Wales, the models achieved excellent discrimination (c-statistics 0.82–0.83). Calibration slopes at 5 years suggested the models over-predicted risk; however, they were successfully updated to accommodate the lower incidence of stage 3 CKD in Wales, which improved their alignment with observed rates (E/O ratios close to 1). The risk score demonstrated similar performance to direct evaluation of the Cox model. These resource-driven risk prediction models may enable universal screening for stage 3 CKD, allowing targeted early optimisation of risk factors for CKD.
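    Turning a fitted Cox model into an individual 5-year risk uses risk = 1 - S0(5)^exp(linear predictor), which is also what a simplified points-based risk score approximates. A minimal sketch with assumed coefficients, cohort means and baseline survival (not the published equations):

```python
# Sketch: 5-year stage 3 CKD risk from a Cox model, risk = 1 - S0(5)**exp(lp).
# The coefficients, means and baseline survival below are assumed for
# illustration only; they are not the published model.
import math

coef = {"age": 0.045, "hba1c": 0.02, "egfr": -0.03}   # assumed log hazard ratios
mean = {"age": 62.0, "hba1c": 58.0, "egfr": 85.0}     # assumed cohort means
s0_5y = 0.90                                          # assumed baseline 5-year survival

def ckd_risk_5y(patient):
    # Centred linear predictor, then the Cox risk transformation.
    lp = sum(coef[k] * (patient[k] - mean[k]) for k in coef)
    return 1 - s0_5y ** math.exp(lp)

print(f"{ckd_risk_5y({'age': 70, 'hba1c': 64, 'egfr': 72}):.1%}")
```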

    Development and validation of predictive risk models for sight threatening diabetic retinopathy in patients with type 2 diabetes to be applied as triage tools in resource limited settings.

    Background: Delayed diagnosis and treatment of sight threatening diabetic retinopathy (STDR) is a common cause of visual impairment in people with type 2 diabetes. Therefore, systematic regular retinal screening is recommended, but global coverage of such services is challenging. We aimed to develop and validate predictive models for STDR to identify the 'at-risk' population for retinal screening. Methods: Models were developed using datasets obtained from general practices in inner London, United Kingdom (UK) on adults with type 2 diabetes during the period 2007-2017. Three models were developed using Cox regression, and model performance was assessed using the C statistic, calibration slope and observed-to-expected ratio. Models were externally validated in cohorts from Wales, UK and India. Findings: A total of 40,334 people were included in the model development phase, of whom 1427 (3·54%) developed STDR. Age, gender, diabetes duration, antidiabetic medication history, glycated haemoglobin (HbA1c), and history of retinopathy were included as predictors in Model 1; Model 2 excluded retinopathy status, and Model 3 further excluded HbA1c. All three models attained strong discrimination in the model development dataset, with C statistics ranging from 0·778 to 0·832, and in the external validation datasets (C statistics 0·685 to 0·823), with calibration slopes closer to 1 following re-calibration of the baseline survival. Interpretation: We have developed new risk prediction equations to identify those at risk of STDR in people with type 2 diabetes in any resource setting, so that they can be screened and treated early. Further testing and piloting are required before implementation. Funding: This study was funded by the GCRF UKRI (MR/P207881/1) and supported by the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology.
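    External validation here rests on discrimination (C statistic) and calibration (slope and observed-to-expected ratio), with the baseline survival re-calibrated in each new cohort. A minimal sketch of the discrimination and O/E checks on simulated, hypothetical predicted risks and outcomes (not the Wales or India data):

```python
# Sketch: external-validation checks for a survival risk model. The
# predicted risks, times and events below are simulated and hypothetical.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
predicted_risk = rng.uniform(0.01, 0.30, 5000)                   # predicted 5-year STDR risk
time_to_event = rng.exponential(scale=1 / (0.05 + predicted_risk))  # simulated event times
event = (time_to_event < 5).astype(int)                          # STDR within 5 years

# Discrimination: higher predicted risk should correspond to earlier events,
# so the risk is negated to act as a survival-ordered score.
c_stat = concordance_index(time_to_event, -predicted_risk, event)

# Calibration in the large: observed events over expected events at 5 years.
oe_ratio = event.sum() / predicted_risk.sum()
print(f"C statistic {c_stat:.3f}, O/E ratio {oe_ratio:.2f}")
# An O/E ratio far from 1 points to re-estimating the baseline survival in
# the validation cohort, as the study describes.
```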

    Abstracts from the NIHR INVOLVE Conference 2017


    Need to improve rubber yields
