
    The long-term impact of vaginal surgical mesh devices in UK primary care: a cohort study in the Clinical Practice Research Datalink

    Purpose: Stress urinary incontinence (SUI) and pelvic organ prolapse (POP) may be treated with surgical mesh devices; evidence of their long-term complications is lacking. Patients and Methods: Rates of diagnoses of depression, anxiety, or self-harm (composite measure) and sexual dysfunction, and rates of prescriptions for antibiotics and opioids, were estimated in women with and without mesh surgery who had a diagnostic SUI/POP code and were registered in the Clinical Practice Research Datalink (CPRD) GOLD database. Results: There were 220,544 women eligible for inclusion; 74% (n = 162,687) had SUI, 37% (n = 82,123) had POP, and 11% (n = 24,266) had both. Women undergoing mesh surgery for SUI or POP had about 1.1 times higher rates of antibiotic use. Women with no previous history of the outcome who underwent mesh surgery had 2.43 (95% CI 2.19–2.70) and 1.47 (95% CI 1.19–1.81) times higher rates of depression, anxiety, or self-harm; 1.88 (95% CI 1.50–2.36) and 1.64 (95% CI 1.02–2.63) times higher rates of sexual dysfunction; and 1.40 (95% CI 1.26–1.56) and 1.23 (95% CI 1.01–1.49) times higher opioid use, for SUI and POP respectively. Women with a history of depression, anxiety, or self-harm had about 30% lower rates of these outcomes after SUI or POP mesh surgery (HR for SUI 0.70, 95% CI 0.67–0.73; HR for POP 0.72, 95% CI 0.65–0.79). Women with a history of opioid use who had POP mesh surgery had about 9% lower rates of prescriptions (HR 0.91, 95% CI 0.86–0.96). Negative control outcome analyses showed no evidence of an association between asthma consultations and mesh surgery in women with POP, but the rate was 9% lower (HR 0.91, 95% CI 0.87–0.94) in women with SUI mesh surgery, suggesting that study results are subject to some residual confounding.
Conclusion: Mesh surgery was associated with poor mental and sexual health outcomes, alongside increased opioid and antibiotic use, in women with no history of these outcomes, and with improved mental health and lower opioid use in women with a previous history of these outcomes. Although our results suggest an influence of residual confounding, careful consideration of the benefits and risks of mesh surgery on an individual basis is required for women with SUI or POP.

    Use of cognitive and behavioral strategies during a weight loss program: a secondary analysis of the Doctor Referral of Overweight People to Low-Energy Total Diet Replacement Treatment (DROPLET) trial

    Background: Achieving a sustained energy deficit is essential for weight loss, but the cognitive and behavioral strategies that support this goal are unclear. Objective: The goal of this study was to investigate the number and type of cognitive and behavioral strategies used by participants who were enrolled in a 1-year weight loss trial and to explore associations between strategies and magnitude of weight loss at 3 months and 1 year. Design: The study is a secondary post-hoc exploratory analysis of data collected as part of the Doctor Referral of Overweight People to Low-Energy total diet replacement Treatment (DROPLET), a randomized controlled trial conducted in general practices in England, United Kingdom, between January 2016 and August 2017. Participants/setting: This study involved 164 participants from both intervention and control groups of the DROPLET trial who completed the Oxford Food and Behaviours (OxFAB) questionnaire to assess the use of 115 strategies, grouped into 21 domains, used to manage their weight. Interventions: Participants were randomized to either a behavioral weight loss program involving 8 weeks of total diet replacement (TDR) and 4 weeks of food reintroduction, or a program delivered by a medical practice nurse over a 3-month period (usual care [UC]). Main outcome measures: Weight was objectively measured at baseline, 3 months, and 1 year. Cognitive and behavioral strategies used to support weight loss were assessed using the OxFAB questionnaire at 3 months. Statistical analysis performed: Exploratory factor analysis was used to generate data-driven patterns of strategy use, and a linear mixed-effects model was used to examine associations between use of these patterns and weight change.
Results: No evidence was found of a difference in the number of strategies used (mean difference, 2.41; 95% confidence interval [CI], −0.83 to 5.65) or the number of domains used (mean difference, −0.23; 95% CI, −0.69 to 0.23) between the TDR and UC groups. The number of strategies was not associated with weight loss at either 3 months (−0.02 kg; 95% CI, −0.11 to 0.06) or 1 year (−0.05 kg; 95% CI, −0.14 to 0.02). Similarly, the number of domains used was not associated with weight loss at 3 months (−0.02 kg; 95% CI, −0.53 to 0.49) or 1 year (−0.07 kg; 95% CI, −0.60 to 0.46). Factor analysis identified four coherent patterns of strategy use: Physical Activity, Motivation, Planned Eating, and Food Purchasing. Greater use of strategies in the Food Purchasing (−2.6 kg; 95% CI, −4.42 to −0.71) and Planned Eating (−3.20 kg; 95% CI, −4.94 to −1.46) patterns was associated with greater weight loss at 1 year. Conclusions: The number of cognitive and behavioral strategies or domains used does not appear to influence weight loss; the types of strategies used appear more important. Supporting people to adopt strategies linked to planned eating and food purchasing may aid long-term weight loss.

    Opportunities for earlier diagnosis of type 1 diabetes in children: A case-control study using routinely collected primary care records.

    BACKGROUND: The epidemiology of type 1 diabetes mellitus (T1DM) suggests diagnostic delays may contribute to children developing diabetic ketoacidosis at diagnosis. We sought to quantify opportunities for earlier diagnosis of T1DM in primary care. METHODS: A matched case-control study of children (0-16 years) presenting to UK primary care, examining routinely collected primary care consultation types and National Institute for Health and Care Excellence (NICE) warning signs in the 13 weeks before diagnosis. RESULTS: Our primary analysis included 1920 new T1DM cases and 7680 controls. In the week before diagnosis, more cases than controls had medical record entries (663, 34.5% vs 1014, 13.6%; odds ratio 3.46, 95% CI 3.07-3.89; p<0.0001), and the incidence rate of face-to-face consultations was higher in cases (mean 0.32 vs 0.11; incidence rate ratio 2.90, 95% CI 2.61-3.21; p<0.0001). In the week before that, record entries were found in 330 cases and 943 controls (17.2% vs 12.3%; OR 1.49, 95% CI 1.3-1.7; p<0.0001), but face-to-face consultation rates did not differ (IRR 1.08, 95% CI 0.9-1.29; p=0.42). INTERPRETATION: There may be opportunities to reduce time to diagnosis by up to two weeks for up to one third of cases. Diagnostic opportunities might be maximised by measures that improve access to primary care and public awareness of T1DM.

    Accuracy of monitors used for blood pressure checks in English retail pharmacies: a cross-sectional observational study

    BACKGROUND: Free blood pressure (BP) checks offered by community pharmacies provide a potentially useful opportunity to diagnose and/or manage hypertension, but the accuracy of the sphygmomanometers in use is currently unknown. AIM: To assess the accuracy of validated automatic BP monitors used for BP checks in a UK retail pharmacy chain. DESIGN AND SETTING: Cross-sectional, observational study in 52 pharmacies from one chain in a range of locations (inner city, suburban, and rural) in central England. METHOD: Monitor accuracy was compared with a calibrated reference device (Omron PA-350) at 50 mmHg intervals across the range 0–300 mmHg (static pressure test), with a difference from the reference of more than +/−3 mmHg at any interval considered a failure. Results were analysed by usage rates and length of time in service. RESULTS: Of 61 BP monitors tested, eight (13%) failed (that is, differed from the reference by >3 mmHg), all of which underestimated BP. Failure rate varied by length of time in use (2/38, 5% for <18 months vs 4/14, 29% for >18 months; P = 0.038) and to some extent, although non-significantly, by usage rate (4/22, 18% in monitors used more than once daily vs 2/33, 6% in those used less frequently; P = 0.204). CONCLUSION: BP monitors in a pharmacy setting fail at similar rates to those in general practice. Annual calibration checks for blood pressure monitors are needed, even for new monitors, as these data indicate declining performance from 18 months onwards.
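The ±3 mmHg static pressure pass/fail rule described in this abstract (and the ACCU-RATE studies below) can be sketched as a simple check. This is a minimal illustration only; the reference points and readings are invented, not study data.

```python
# Sketch of a static pressure test: compare a monitor's readings against a
# calibrated reference at 50 mmHg intervals; any deviation beyond the
# tolerance at any single interval fails the whole monitor.
REFERENCE_POINTS = [0, 50, 100, 150, 200, 250, 300]  # mmHg test intervals
TOLERANCE = 3  # mmHg

def static_pressure_test(readings, references=REFERENCE_POINTS, tol=TOLERANCE):
    """Return True if every reading is within tol mmHg of its reference."""
    return all(abs(r - ref) <= tol for r, ref in zip(readings, references))

# A monitor within 2 mmHg everywhere passes; one underestimating by
# 5 mmHg at a single interval (195 vs 200) fails.
passing = static_pressure_test([0, 50, 101, 149, 200, 252, 298])  # True
failing = static_pressure_test([0, 50, 100, 150, 195, 250, 300])  # False
```

Note that a monitor fails on a single out-of-tolerance interval, which matches the study's definition of failure.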

    Physician support of smoking cessation after diagnosis of lung, bladder, or upper aerodigestive tract cancer

    PURPOSE Smoking cessation after a diagnosis of lung, bladder, or upper aerodigestive tract cancer appears to improve survival, and support to quit improves cessation rates. The aims of this study were to assess how often general practitioners provide active smoking cessation support for these patients and whether physician behavior is influenced by incentive payments. METHODS Using electronic primary care records from the UK Clinical Practice Research Datalink, 12,393 patients with incident cancer diagnosed between 1999 and 2013 were matched 1 to 1 to patients with incident coronary heart disease (CHD) diagnosed during the same period. We assessed differences in the proportion of patients for whom physicians updated smoking status, advised quitting, and prescribed cessation medications, as well as the proportion of patients who stopped smoking within a year of diagnosis. We further examined whether any differences arose because physicians were offered incentives to address smoking in patients with CHD but not in those with cancer. RESULTS At diagnosis, 32.0% of patients with cancer and 18.2% of patients with CHD smoked tobacco. Patients with cancer were less likely than patients with CHD to have their general practitioners update smoking status (OR = 0.18; 95% CI, 0.17–0.19), advise quitting (OR = 0.38; 95% CI, 0.36–0.40), or prescribe medication (OR = 0.67; 95% CI, 0.63–0.73), and they were less likely to have stopped smoking (OR = 0.76; 95% CI, 0.69–0.84). One year after diagnosis, 61.7% of patients with cancer and 55.4% of patients with CHD who were smoking at diagnosis were still smoking. Introducing incentive payments was associated with more frequent interventions, but not for patients with CHD specifically. CONCLUSIONS General practitioners were less likely to support smoking cessation in patients with cancer than in those with CHD, and patients with cancer were less likely to stop smoking. This finding is not explained by the difference in incentive payments.
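The odds ratios reported above compare the odds of an event between two groups; a crude OR can be computed directly from a 2×2 contingency table. The counts below are hypothetical, chosen only to illustrate a value near the reported OR of 0.18 for updating smoking status; they are not study data.

```python
# Crude odds ratio from a 2x2 table: OR = (a/b) / (c/d), where a/b are
# event/no-event counts in the exposed group and c/d in the comparison group.
def odds_ratio(exposed_event, exposed_no_event,
               unexposed_event, unexposed_no_event):
    """Return the crude odds ratio for a 2x2 contingency table."""
    return (exposed_event / exposed_no_event) / (unexposed_event / unexposed_no_event)

# Hypothetical counts: 120 of 500 cancer patients had smoking status updated,
# versus 320 of 500 CHD patients.
or_update = odds_ratio(120, 380, 320, 180)
print(round(or_update, 2))  # → 0.18
```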

    Accuracy of blood pressure monitors owned by patients with hypertension (ACCU-RATE study)

    Background Home blood pressure (BP) monitoring is recommended in guidelines and increasingly popular with patients and health care professionals, but the accuracy of patients’ own monitors in real-world use is not known. Aim To assess the accuracy of home BP monitors used by people with hypertension, and to investigate factors affecting accuracy. Design and Setting Patients on the hypertension register at seven practices in central England were surveyed to ascertain whether they owned a monitor and wanted it tested. Method Monitor accuracy was compared with a calibrated reference device at 50 mmHg intervals between 0-280/300 mmHg (static pressure test), with a difference from the reference of more than +/-3 mmHg at any interval considered a failure. Cuff performance was also assessed. Results were analysed by usage rates, length of time in service, make and model, monitor validation status, cost, and any previous testing. Results 251 (76%, 95% CI 71-80%) of 331 tested devices passed all tests (monitors and cuffs) and 86% passed the static pressure test, with deficiencies primarily due to overestimation. 40% of testable monitors were unvalidated. The pass rate on the static pressure test was greater in validated monitors (96% [95% CI 94-98%] vs 64% [95% CI 58-69%]), in those retailing for over £10, and in those in use for less than four years. 12% of cuffs failed. Conclusion Patients’ own BP monitor failure rate was similar to that in studies performed in professional settings, although cuff failure was more frequent. Clinicians can be confident of the accuracy of patients’ own BP monitors, if validated and less than five years old.
This work represents independent research commissioned by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research funding scheme (RP-PG-1209-10051). The views expressed in this study are those of the authors and not necessarily those of the NHS, the NIHR, or the Department of Health. RJM was supported by an NIHR Professorship (NIHR-RP-02-12-015) and by the NIHR Collaboration for Leadership in Applied Health Research and Care (CLAHRC) Oxford at Oxford Health NHS Foundation Trust. FDRH is part funded as Director of the National Institute for Health Research (NIHR) School for Primary Care Research (SPCR), Theme Leader of the NIHR Oxford Biomedical Research Centre (BRC), and Director of the NIHR CLAHRC Oxford. JM is an NIHR Senior Investigator. No funding for this study was received from any monitor manufacturer.

    Accuracy of blood-pressure monitors owned by patients with hypertension (ACCU-RATE study): a cross-sectional, observational study in central England.

    BACKGROUND: Home blood-pressure (BP) monitoring is recommended in guidelines and is increasingly popular with patients and health professionals, but the accuracy of patients' own monitors in real-world use is not known. AIM: To assess the accuracy of home BP monitors used by people with hypertension, and to investigate factors affecting accuracy. DESIGN AND SETTING: Cross-sectional, observational study in urban and suburban settings in central England. METHOD: Patients (n = 6891) on the hypertension register at seven practices in the West Midlands, England, were surveyed to ascertain whether they owned a BP monitor and wanted it tested. Monitor accuracy was compared with a calibrated reference device at 50 mmHg intervals between 0-280/300 mmHg (static pressure test); a difference from the reference of more than +/-3 mmHg at any interval was considered a failure. Cuff performance was also assessed. Results were analysed by frequency of use, length of time in service, make and model, monitor validation status, purchase price, and any previous testing. RESULTS: In total, 251 (76%, 95% confidence interval [CI] = 71 to 80%) of 331 tested devices passed all tests (monitors and cuffs), and 86% (95% CI = 82 to 90%) passed the static pressure test; deficiencies were primarily because of monitors overestimating BP. A total of 40% of testable monitors were not validated. The pass rate on the static pressure test was greater in validated monitors (96%, 95% CI = 94 to 98%) than in unvalidated monitors (64%, 95% CI = 58 to 69%); in those retailing for >£10 (90%, 95% CI = 86 to 94%) than in those retailing for ≤£10 (66%, 95% CI = 51 to 80%); and in those in use for ≤4 years (95%, 95% CI = 91 to 98%) than in those in use for >4 years (74%, 95% CI = 67 to 82%). In all, 12% of cuffs failed. CONCLUSION: Patients' own BP monitor failure rate was similar to that demonstrated in studies performed in professional settings, although cuff failure was more frequent. Clinicians can be confident of the accuracy of patients' own BP monitors if the devices are validated and ≤4 years old.

    The association between antihypertensive treatment and serious adverse events by age and frailty: A cohort study

    BACKGROUND: Antihypertensives are effective at reducing the risk of cardiovascular disease, but limited data exist quantifying their association with serious adverse events, particularly in older people with frailty. This study aimed to examine this association using nationally representative electronic health record data. METHODS AND FINDINGS: This was a retrospective cohort study utilising linked data from 1,256 general practices across England held within the Clinical Practice Research Datalink between 1998 and 2018. Included patients were aged 40+ years, with a systolic blood pressure reading between 130 and 179 mm Hg, and not previously prescribed antihypertensive treatment. The main exposure was defined as a first prescription of antihypertensive treatment. The primary outcome was hospitalisation or death from falls within 10 years. Secondary outcomes were hypotension, syncope, fractures, acute kidney injury, electrolyte abnormalities, and primary care attendance with gout. The association between treatment and these serious adverse events was examined by Cox regression adjusted for propensity score. This propensity score was generated from a multivariable logistic regression model with patient characteristics, medical history, and medication prescriptions as covariates, and new antihypertensive treatment as the outcome. Subgroup analyses were undertaken by age and frailty. Of 3,834,056 patients followed for a median of 7.1 years, 484,187 (12.6%) were prescribed new antihypertensive treatment in the 12 months before the index date (baseline).
Antihypertensives were associated with an increased risk of hospitalisation or death from falls (adjusted hazard ratio [aHR] 1.23, 95% confidence interval [CI] 1.21 to 1.26), hypotension (aHR 1.32, 95% CI 1.29 to 1.35), syncope (aHR 1.20, 95% CI 1.17 to 1.22), acute kidney injury (aHR 1.44, 95% CI 1.41 to 1.47), electrolyte abnormalities (aHR 1.45, 95% CI 1.43 to 1.48), and primary care attendance with gout (aHR 1.35, 95% CI 1.32 to 1.37). The absolute risk of serious adverse events with treatment was very low, with 6 fall events per 10,000 patients treated per year. In older patients (80 to 89 years) and those with severe frailty, this absolute risk was higher, with 61 and 84 fall events per 10,000 patients treated per year, respectively. Findings were consistent in sensitivity analyses using different approaches to address confounding and taking into account the competing risk of death. A strength of this analysis is that it provides evidence regarding the association between antihypertensive treatment and serious adverse events in a population of patients more representative than those enrolled in previous randomised controlled trials. Although treatment effect estimates fell within the 95% CIs of those from such trials, these analyses were observational in nature, so bias from unmeasured confounding cannot be ruled out. CONCLUSIONS: Antihypertensive treatment was associated with serious adverse events. Overall, the absolute risk of this harm was low, with the exception of older patients and those with moderate to severe frailty, in whom the risks were similar to the likelihood of benefit from treatment. In these populations, physicians may want to consider alternative approaches to management of blood pressure and refrain from prescribing new treatment.

    Host Biomarkers Reflect Prognosis in Patients Presenting With Moderate Coronavirus Disease 2019: A Prospective Cohort Study

    Efficient resource allocation is essential for effective pandemic response. We measured host biomarkers in 420 patients presenting with moderate coronavirus disease 2019 and found that different biomarkers predict distinct clinical outcomes. Interleukin (IL)-1ra, IL-6, IL-10, and IL-8 exhibit dose-response relationships with subsequent disease progression and could potentially be useful for multiple use cases.

    Predicting the risk of acute kidney injury in primary care: derivation and validation of STRATIFY-AKI

    BACKGROUND: Antihypertensives reduce the risk of cardiovascular disease (CVD) but are also associated with harms, including acute kidney injury (AKI). Few data exist to guide clinical decision making regarding these risks. AIM: To develop a prediction model estimating the risk of AKI in people potentially indicated for antihypertensive treatment. DESIGN AND SETTING: Observational cohort study using routine primary care data from the Clinical Practice Research Datalink (CPRD) in England. METHOD: People aged ≥40 years with at least one blood pressure measurement between 130 mmHg and 179 mmHg were included. Outcomes were admission to hospital or death with AKI within 1, 5, and 10 years. The model was derived with data from CPRD GOLD (n = 1 772 618) using a Fine-Gray competing risks approach, with subsequent recalibration using pseudo-values. External validation used data from CPRD Aurum (n = 3 805 322). RESULTS: The mean age of participants was 59.4 years and 52% were female. The final model consisted of 27 predictors and showed good discrimination at 1, 5, and 10 years (C-statistic for 10-year risk 0.821, 95% confidence interval [CI] = 0.818 to 0.823). There was some overprediction at the highest predicted probabilities (ratio of observed to expected event probability for 10-year risk 0.633, 95% CI = 0.621 to 0.645), affecting patients at the highest risk. Most patients (>95%) had a low 1- to 5-year risk of AKI, and at 10 years only 0.1% of the population had a high AKI risk alongside a low CVD risk. CONCLUSION: This clinical prediction model enables GPs to accurately identify patients at high risk of AKI, which will aid treatment decisions. As the vast majority of patients were at low risk, such a model may provide useful reassurance that most antihypertensive treatment is safe and appropriate, while flagging the few for whom this is not the case.
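The observed-to-expected (O/E) ratio quoted above is a standard calibration measure: total observed events divided by the sum of model-predicted probabilities, with O/E below 1 indicating overprediction. A minimal sketch with invented values, not study data:

```python
# Observed-to-expected ratio for assessing model calibration in a group.
def observed_expected_ratio(observed_events, predicted_probs):
    """O/E = total observed events / sum of model-predicted probabilities."""
    return observed_events / sum(predicted_probs)

# Hypothetical high-risk group: the model predicts a 0.2 probability for each
# of 100 patients (20 expected events), but only 13 events are observed,
# giving O/E = 0.65 — overprediction in this group.
probs = [0.2] * 100
ratio = observed_expected_ratio(13, probs)
print(round(ratio, 3))  # → 0.65
```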