
    Association of treatments for acute myocardial infarction and survival for seven common comorbidity states: a nationwide cohort study

    BACKGROUND: Comorbidity is common and has a substantial negative impact on the prognosis of patients with acute myocardial infarction (AMI). Whilst receipt of guideline-indicated treatment for AMI is associated with improved prognosis, the extent to which comorbidities influence treatment provision and its efficacy is unknown. Therefore, we investigated the association between treatment provision for AMI and survival for seven common comorbidities. METHODS: We used data from 693,388 AMI patients recorded in the Myocardial Ischaemia National Audit Project (MINAP), 2003-2013. We investigated the association between comorbidities and receipt of optimal care for AMI (receipt of all eligible guideline-indicated treatments), and the effect of receipt of optimal care for comorbid AMI patients on long-term survival, using flexible parametric survival models. RESULTS: A total of 412,809 [59.5%] patients with AMI had at least one comorbidity, including hypertension (302,388 [48.7%]), diabetes (122,228 [19.4%]), chronic obstructive pulmonary disease (COPD, 89,221 [14.9%]), cerebrovascular disease (51,883 [8.6%]), chronic heart failure (33,813 [5.6%]), chronic renal failure (31,029 [5.0%]) and peripheral vascular disease (27,627 [4.6%]). Receipt of optimal care was associated with the greatest survival benefit for patients without comorbidities (HR 0.53, 95% CI 0.51-0.56), followed by patients with hypertension (HR 0.60, 95% CI 0.58-0.62), diabetes (HR 0.83, 95% CI 0.80-0.87), peripheral vascular disease (HR 0.85, 95% CI 0.79-0.91), renal failure (HR 0.89, 95% CI 0.84-0.94) and COPD (HR 0.90, 95% CI 0.87-0.94). For patients with heart failure and cerebrovascular disease, optimal care for AMI was not associated with improved survival. CONCLUSIONS: Overall, guideline-indicated care was associated with improved long-term survival. However, this was not the case for AMI patients with concomitant heart failure or cerebrovascular disease. There is therefore a need for novel treatments to improve outcomes for AMI patients with pre-existing heart failure or cerebrovascular disease.
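
    As an illustration of the type of analysis described above, the sketch below fits a survival model for the association between receipt of optimal care and long-term mortality within one comorbidity stratum. It is a minimal sketch, not the authors' code: the study used flexible parametric (Royston-Parmar) survival models, whereas a Cox proportional hazards model is used here as a simpler stand-in, and the file and column names (follow_up_years, died, optimal_care, and so on) are hypothetical.

```python
# Minimal sketch (not the authors' analysis): association between receipt of
# optimal guideline-indicated care and survival within a comorbidity subgroup.
# A Cox proportional hazards model stands in for the paper's flexible
# parametric survival models. All file and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("minap_cohort.csv")  # hypothetical analysis file

# Restrict to one comorbidity stratum, e.g. patients with diabetes
diabetes = df[df["diabetes"] == 1]

cph = CoxPHFitter()
cph.fit(
    diabetes[["follow_up_years", "died", "optimal_care", "age", "sex"]],
    duration_col="follow_up_years",
    event_col="died",
)
cph.print_summary()  # exp(coef) for optimal_care approximates the reported HR
```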

    Multimorbidity clusters among people with serious mental illness: a representative primary and secondary data linkage cohort study

    BACKGROUND: People with serious mental illness (SMI) experience higher mortality, partially attributable to a higher prevalence of long-term conditions (LTCs). However, little is known about how multiple LTCs (MLTCs) cluster in this population. METHODS: People from South London with SMI and two or more existing LTCs aged 18+ at diagnosis were included using linked primary and mental healthcare records, 2012-2020. Latent class analysis (LCA) determined MLTC classes, and multinomial logistic regression examined associations between demographic/clinical characteristics and latent class membership. RESULTS: The sample included 1924 patients (mean (s.d.) age 48.2 (17.3) years). Five latent classes were identified: 'substance related' (24.9%), 'atopic' (24.2%), 'pure affective' (30.4%), 'cardiovascular' (14.1%), and 'complex multimorbidity' (6.4%). Patients had on average 7-9 LTCs in each cluster. Males were at increased odds of MLTCs in all four clusters compared to the 'pure affective' cluster. Compared to the largest cluster ('pure affective'), the 'substance related' and 'atopic' clusters were younger [odds ratios (OR) per year increase 0.99 (95% CI 0.98-1.00) and 0.96 (0.95-0.97), respectively], and the 'cardiovascular' and 'complex multimorbidity' clusters were older (ORs 1.09 (1.07-1.10) and 1.16 (1.14-1.18), respectively). The 'substance related' cluster was more likely to be White, the 'cardiovascular' cluster more likely to be Black (compared to White; OR 1.75, 95% CI 1.10-2.79), and both were more likely to have schizophrenia, compared to other clusters. CONCLUSION: The current study identified five latent class MLTC clusters among patients with SMI. An integrated care model for treating MLTCs in this population is recommended to improve multimorbidity care.
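
    The sketch below illustrates the second analytic step described above: a multinomial logistic regression of latent class membership on demographic and clinical characteristics. It is illustrative only; the latent classes themselves would first be derived by latent class analysis (e.g. with poLCA in R), which is not reproduced here, and the class coding and predictor names are hypothetical.

```python
# Minimal sketch (illustrative only): multinomial logistic regression of
# latent class membership on demographic/clinical characteristics.
# File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("smi_mltc_cohort.csv")  # hypothetical analysis file
# latent_class coded 0..4, with 0 = 'pure affective' as the reference cluster
model = smf.mnlogit(
    "latent_class ~ age + male + ethnicity_black + schizophrenia", data=df
).fit()
print(np.exp(model.params))  # odds ratios relative to the reference class
```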

    Serum Thyroid Function, Mortality and Disability in Advanced Old Age: The Newcastle 85+ Study

    Context: Perturbations in thyroid function are common in older individuals, but their significance in the very old is not fully understood. Objective: This study sought to determine whether thyroid hormone status and variation of thyroid hormones within the reference range correlated with mortality and disability in a cohort of 85-year-olds. Design: A cohort of 85-year-old individuals was assessed in their own homes (community or institutional care) for health status and thyroid function, and followed for mortality and disability for up to 9 years. Setting and Participants: Six hundred and forty-three 85-year-olds registered with participating general practices in Newcastle and North Tyneside, United Kingdom. Main Outcomes: All-cause mortality, cardiovascular mortality, and disability according to thyroid disease status and baseline thyroid hormone parameters (serum TSH, FT4, FT3, and rT3). Models were adjusted for age, sex, education, body mass index, smoking, and disease count. Results: After adjustment for age and sex, all-cause mortality was associated with baseline serum rT3 and FT3 (both P < .001), but not FT4 or TSH. After additional adjustment for potential confounders, only rT3 remained significantly associated with mortality (P = .001). Baseline serum TSH and rT3 predicted future disability trajectories in men and women, respectively. Conclusions: Our study provides reassurance that individuals aged 85 years with either subclinical hypothyroidism or subclinical hyperthyroidism do not have significantly worse survival over 9 years than their euthyroid peers. However, thyroid function tests did predict disability, with higher serum TSH levels predicting better outcomes. These data strengthen the argument for routine use of age-specific thyroid function reference ranges.
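
    The sketch below shows one simple way to compare 9-year survival between euthyroid participants and those with subclinical thyroid dysfunction, in the spirit of the findings above. It is a minimal, unadjusted illustration: the study itself used models adjusted for age, sex, education, body mass index, smoking and disease count, and the file and column names here are hypothetical.

```python
# Minimal sketch (illustrative, not the study's analysis): Kaplan-Meier
# survival by thyroid status with a log-rank test. Names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("newcastle85_thyroid.csv")  # hypothetical analysis file
eu = df[df["thyroid_status"] == "euthyroid"]
sub = df[df["thyroid_status"] == "subclinical"]

kmf = KaplanMeierFitter()
kmf.fit(eu["years"], event_observed=eu["died"], label="euthyroid")
ax = kmf.plot_survival_function()
kmf.fit(sub["years"], event_observed=sub["died"], label="subclinical")
kmf.plot_survival_function(ax=ax)

result = logrank_test(eu["years"], sub["years"], eu["died"], sub["died"])
print(result.p_value)
```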

    Burden of non-communicable diseases among adolescents aged 10–24 years in the EU, 1990–2019: a systematic analysis of the Global Burden of Diseases Study 2019

    Background: The disability and mortality burden of non-communicable diseases (NCDs) has risen worldwide; however, the NCD burden among adolescents remains poorly described in the EU. Methods: Estimates were retrieved from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019. Causes of NCDs were analysed at three different levels of the GBD 2019 hierarchy, for which mortality, years of life lost (YLLs), years lived with disability (YLDs), and disability-adjusted life-years (DALYs) were extracted. Estimates, with 95% uncertainty intervals (UIs), were retrieved for EU Member States from 1990 to 2019, for three age subgroups (10–14 years, 15–19 years, and 20–24 years), and by sex. Spearman's correlation was conducted between DALY rates for NCDs and the Socio-demographic Index (SDI) of each EU Member State. Findings: In 2019, NCDs accounted for 86·4% (95% uncertainty interval 83·5–88·8) of all YLDs and 38·8% (37·4–39·8) of total deaths in adolescents aged 10–24 years. For NCDs in this age group, neoplasms were the leading cause of both mortality (4·01 [95% uncertainty interval 3·62–4·25] per 100 000 population) and YLLs (281·78 [254·25–298·92] per 100 000 population), whereas mental disorders were the leading cause of YLDs (2039·36 [1432·56–2773·47] per 100 000 population) and DALYs (2040·59 [1433·96–2774·62] per 100 000 population) in all EU Member States and in all studied age groups. In 2019, among adolescents aged 10–24 years, males had a higher mortality rate per 100 000 population due to NCDs than females (11·66 [11·04–12·28] vs 7·89 [7·53–8·23]), whereas females had a higher DALY rate per 100 000 population due to NCDs (8003·25 [5812·78–10 701·59] vs 6083·91 [4576·63–7857·92]). From 1990 to 2019, the mortality rate due to NCDs in adolescents aged 10–24 years decreased substantially (–40·41% [–43·00 to –37·61]), as did the YLL rate (–40·56% [–43·16 to –37·74]), except for mental disorders (which increased by 32·18% [1·67 to 66·49]), whereas the YLD rate increased slightly (1·44% [0·09 to 2·79]). Positive correlations were observed between DALY rates and SDI for substance use disorders (rs=0·58, p=0·0012) and skin and subcutaneous diseases (rs=0·45, p=0·017), whereas negative correlations were found between DALY rates and SDI for cardiovascular diseases (rs=–0·46, p=0·015), neoplasms (rs=–0·57, p=0·0015), and sense organ diseases (rs=–0·61, p=0·0005).
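
    The correlation analysis described above can be illustrated with a short sketch: a Spearman correlation between country-level DALY rates for one NCD cause and the Socio-demographic Index across EU Member States. The input file and column names are hypothetical; GBD 2019 estimates are publicly available from the GBD results tool.

```python
# Minimal sketch (illustrative only): Spearman correlation between DALY rates
# for a given cause and SDI across EU Member States. Names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

# hypothetical extract: one row per EU Member State per cause
df = pd.read_csv("gbd2019_eu_adolescents.csv")
cause = df[df["cause"] == "Substance use disorders"]

rho, p = spearmanr(cause["daly_rate"], cause["sdi"])
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```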

    Deep sclerectomy versus trabeculectomy: a morphological study with anterior segment optical coherence tomography

    Purpose: To investigate the intraocular pressure (IOP) lowering mechanisms of deep sclerectomy (DS) with anterior segment optical coherence tomography (AS-OCT). Methods: In a prospective cross-sectional study, AS-OCT parameters were compared between DS, trabeculectomy and control cases. Associations with IOP and success (IOP ≤16 mm Hg without medication) were investigated. Results: 18 DS (15 patients), 17 trabeculectomy (16 patients) and 15 control (15 patients) eyes were examined. Successful cases had a taller intrascleral lake (IL) and thicker conjunctival/Tenon's layer (CTL) than non-successful cases (513.3 vs 361.1 µm, p=0.027 and 586.7 vs 251.1 µm, p<0.001, respectively). CTL thickness correlated with IOP (r=−0.6407, p=0.004). CTL thickness was significantly different between controls, DS and trabeculectomy (mean (SD): 203.3 (62.6) vs 418.9 (261.9) vs 604.1 (220.7) µm, p<0.0001). Successful trabeculectomy cases had a taller bleb cavity (BC) than non-successful cases (607.5 vs 176.7 µm, p=0.041). CTL microcysts were detected in 50% of DS and 52.9% of trabeculectomy cases (p=1). Conclusions: Trans-conjunctival aqueous percolation was identified as a novel DS drainage route. DS had a fluid reservoir below the scleral flap, the IL, analogous to the trabeculectomy BC. A tall postoperative IL and a thick CTL were associated with a good outcome.
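
    For orientation, the sketch below reproduces the general form of the comparisons reported above: a correlation between CTL thickness and IOP, and a comparison of CTL thickness between successful and non-successful cases. It is illustrative only; the column names are hypothetical and the study's exact statistical tests are not reproduced.

```python
# Minimal sketch (illustrative only): correlation and group comparison for
# AS-OCT measurements. File and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr, mannwhitneyu

df = pd.read_csv("asoct_measurements.csv")  # hypothetical: one row per operated eye

r, p = pearsonr(df["ctl_thickness_um"], df["iop_mmhg"])
print(f"CTL thickness vs IOP: r = {r:.2f}, p = {p:.3f}")

success = df[df["success"] == 1]["ctl_thickness_um"]
failure = df[df["success"] == 0]["ctl_thickness_um"]
stat, p = mannwhitneyu(success, failure)
print(f"CTL thickness, success vs failure: p = {p:.3f}")
```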

    Trifocal versus extended depth of focus (EDOF) intraocular lenses after cataract extraction

    BACKGROUND: Cataract, defined as an opacity of the lens in one or both eyes, is the leading cause of blindness worldwide. Cataract may initially be treated with new spectacles, but often surgery is required, which involves removing the cataract and placing a new artificial lens, usually made from hydrophobic acrylic. Recent advancements in intraocular lens (IOL) technology have led to the emergence of a diverse array of implantable lenses that aim to minimise spectacle dependence at all distances (near, intermediate, and distance). To assess the relative merits of these lenses, measurements of visual acuity are needed. Visual acuity is a measurement of the sharpness of vision at a distance of 6 metres (or 20 feet). Normal vision is 6/6 (or 20/20). The Jaeger eye card is used to measure near visual acuity. J1 is the smallest text and J2 is considered equivalent to 6/6 (or 20/20) for near vision. OBJECTIVES: To compare visual outcomes after implantation of trifocal intraocular lenses (IOLs) to those of extended depth of focus (EDOF) IOLs. To produce a brief economic commentary summarising recent economic evaluations that compare trifocal IOLs with EDOF IOLs. SEARCH METHODS: We searched CENTRAL (which contains the Cochrane Eyes and Vision Trials Register), MEDLINE, Embase, and three trial registries on 15 June 2022. For our economic evaluation, we also searched MEDLINE and Embase using economic search filters to 15 June 2022, and the NHS Economic Evaluation Database (EED) from 1968 up to and including 31 December 2014. We did not use any date or language restrictions in the electronic searches. SELECTION CRITERIA: We included studies comparing trifocal and EDOF IOLs in adults undergoing cataract surgery. We did not include studies involving people receiving IOLs for correction of refractive error alone (or refractive lens exchange in the absence of cataract). DATA COLLECTION AND ANALYSIS: We used standard Cochrane methods. Two review authors working independently selected studies for inclusion and extracted data from the reports. We assessed the risk of bias in the studies, and we assessed the certainty of the evidence using the GRADE approach. MAIN RESULTS: We included five studies that compared trifocal and EDOF lenses in people undergoing cataract surgery. Three trifocal lenses (AcrySof IQ PanOptix, ATLISA Tri 839MP, FineVision Micro F) and one EDOF lens (TECNIS Symfony ZXR00) were evaluated. The studies took place in Europe and North America. Follow-up ranged from three to six months. Of the 239 enrolled participants, 233 (466 eyes) completed follow-up and were included in the analyses. The mean age of participants was 68.2 years, and 64% of participants were female. In general, the risk of bias in the studies was unclear, as methods for random sequence generation and allocation concealment were poorly reported, and we judged one study to be at high risk of performance and detection bias. We assessed the certainty of the evidence for all outcomes as low, downgrading for the risk of bias and for imprecision. In two studies involving a total of 254 people, there was little or no difference between trifocal and EDOF lenses for uncorrected and corrected distance visual acuity worse than 6/6. Sixty per cent of participants in both groups had uncorrected distance visual acuity worse than 6/6 (risk ratio (RR) 1.06, 95% confidence interval (CI) 0.88 to 1.27). 
Thirty-one per cent of the trifocal group and 38% of the EDOF group had corrected distance visual acuity worse than 6/6 (RR 1.04, 95% CI 0.78 to 1.39). In one study of 60 people, there were fewer cases of uncorrected near visual acuity worse than J2 in the trifocal group (3%) compared with the EDOF group (30%) (RR 0.08, 95% CI 0.01 to 0.65). In two studies, participants were asked about spectacle independence using subjective questionnaires. There was no evidence of either lens type being superior. One further study of 60 participants reported, "overall, 90% of patients achieved spectacle independence", but did not categorise this by lens type. All studies included postoperative patient-reported visual function, which was measured using different questionnaires. Irrespective of the questionnaire used, both types of lenses scored well, and there was little evidence of any important differences between them. Two studies included patient-reported ocular aberrations (glare and halos). The outcomes were reported in different ways and could not be pooled; individually, these studies were too small to detect meaningful differences in glare and halos between groups. One study reported no surgical complications. Three studies did not mention surgical complications. One study reported YAG capsulotomy for posterior capsular opacification (PCO) in one participant (one eye) in each group. One study reported no PCO. Two studies did not report PCO. One study reported that three participants (one trifocal and two EDOF) underwent laser-assisted subepithelial keratectomy (LASEK) to correct residual myopic refractive error or astigmatism. One study reported a subset of participants who were considering laser enhancement at the end of the study period (nine trifocal and two EDOF). Two studies did not report laser enhancement rates. No economic evaluation studies were identified for inclusion in this review. AUTHORS' CONCLUSIONS: Distance visual acuity after cataract surgery may be similar whether the lenses implanted are trifocal IOLs or EDOF (TECNIS Symfony) IOLs. People receiving trifocal IOLs may achieve better near vision and may be less dependent on spectacles for near vision. Both lenses were reported to have adverse subjective visual phenomena, such as glare and halos, with no meaningful difference detected between lenses.
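
    The risk ratios quoted above can be computed from 2x2 counts of a dichotomised outcome (visual acuity worse than 6/6: yes/no) in each lens group. The sketch below shows the standard log-scale (Katz) calculation of a risk ratio and its 95% confidence interval; the counts used are placeholders, not the review's data.

```python
# Minimal sketch (illustrative only): risk ratio with a Wald (log-scale) 95% CI
# from 2x2 counts. The counts passed below are placeholders.
import numpy as np

def risk_ratio(a, n1, c, n2):
    """Risk ratio of group 1 vs group 2 with a 95% CI.
    a, c = number of events; n1, n2 = group sizes."""
    rr = (a / n1) / (c / n2)
    se = np.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of log(RR), Katz method
    z = 1.96  # ~97.5th percentile of the standard normal for a 95% CI
    lo, hi = np.exp(np.log(rr) - z * se), np.exp(np.log(rr) + z * se)
    return rr, lo, hi

print(risk_ratio(a=76, n1=127, c=72, n2=127))  # placeholder counts
```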

    Deconstructing Complex Multimorbidity in the Very Old: Findings from the Newcastle 85+ Study

    Objectives. To examine the extent and complexity of the morbidity burden in 85-year-olds; identify patterns within multimorbidity; and explore associations with medication and healthcare use. Participants. 710 men and women; mean (SD) age 85.5 (0.4) years. Methods. Data on 20 chronic conditions (diseases and geriatric conditions) were ascertained from general practice records and participant assessment. Cluster analysis within the multimorbid sample identified subgroups sharing morbidity profiles. Clusters were compared on medication and healthcare use. Results. 92.7% (658/710) of participants had multimorbidity; median number of conditions: 4 (IQR 3–6). Cluster analysis (multimorbid sample) identified five subgroups sharing similar morbidity profiles; 60.0% (395/658) of participants belonged to one of two high morbidity clusters, with only 4.9% (32/658) in the healthiest cluster. Healthcare use was high, with polypharmacy (≥5 medications) in 69.8% (459/658). Between-cluster differences were found in medication count (p=0.0001); hospital admissions (p=0.022); and general practitioner (p=0.034) and practice nurse consultations (p=0.011). Morbidity load was related to medication burden and use of some, but not all, healthcare services. Conclusions. The majority of 85-year-olds had extensive and complex morbidity. Characterising participant clusters sharing similar morbidity profiles will help inform future healthcare provision and the identification of common underlying biological mechanisms.
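
    The sketch below illustrates the general shape of such a cluster analysis: grouping multimorbid participants by their profile across ~20 binary chronic-condition indicators. It is illustrative only; the condition column names are hypothetical, and the study's exact clustering method and distance measure are not reproduced.

```python
# Minimal sketch (illustrative only): hierarchical clustering of participants
# on binary chronic-condition indicators. File and column names are hypothetical.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

df = pd.read_csv("newcastle85_conditions.csv")  # hypothetical: one row per participant
conditions = df.filter(like="cond_")            # binary 0/1 columns, one per condition

# Jaccard distance suits binary presence/absence data; average linkage on the
# condensed distance matrix, then cut the tree into five clusters
dist = pdist(conditions.values.astype(bool), metric="jaccard")
tree = linkage(dist, method="average")
df["cluster"] = fcluster(tree, t=5, criterion="maxclust")

# Condition prevalence per cluster, to characterise the morbidity profiles
print(df.groupby("cluster")[conditions.columns].mean().round(2))
```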

    Quality of acute myocardial infarction care in England and Wales during the COVID-19 pandemic: linked nationwide cohort study.

    BACKGROUND AND OBJECTIVE: The impact of the COVID-19 pandemic on the quality of care for patients with acute myocardial infarction (AMI) is uncertain. We aimed to compare the quality of AMI care in England and Wales during and before the COVID-19 pandemic using the 2020 European Society of Cardiology Association for Acute Cardiovascular Care quality indicators (QIs) for AMI. METHODS: Cohort study of linked data from the AMI and percutaneous coronary intervention registries in England and Wales between 1 January 2017 and 27 May 2020 (representing 236 743 patients from 186 hospitals). At the patient level, the likelihood of attainment for each QI during, compared with before, the COVID-19 pandemic was calculated using logistic regression. The date of the first national lockdown in England and Wales (23 March 2020) was chosen for time series comparisons. RESULTS: There were 10 749 admissions with AMI after 23 March 2020. Compared with before the lockdown, patients admitted with AMI during the first wave were of similar age (mean 68.0 vs 69.0 years), had no major differences in baseline characteristics (history of diabetes (25% vs 26%), renal failure (6.4% vs 6.9%), heart failure (5.8% vs 6.4%) and previous myocardial infarction (22.9% vs 23.7%)), and less frequently had high Global Registry of Acute Coronary Events risk scores (43.6% vs 48.6%). There was an improvement in attainment for 10 (62.5%) of the 16 measured QIs, including a composite QI (43.8% to 45.2%, OR 1.06, 95% CI 1.02 to 1.10), during, compared with before, the lockdown. CONCLUSION: During the first wave of the COVID-19 pandemic in England and Wales, quality of care for AMI as measured against international standards did not worsen, but improved modestly.
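
    The patient-level comparison described above can be sketched as a logistic regression of QI attainment on an indicator for admission during versus before the lockdown. The sketch is illustrative only: the column names are hypothetical and the covariate set is abbreviated relative to the study's models.

```python
# Minimal sketch (illustrative only): likelihood of QI attainment during vs
# before the first lockdown, via logistic regression. Names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("minap_pci_linked.csv")  # hypothetical linked analysis file
df["during_lockdown"] = (
    pd.to_datetime(df["admission_date"]) >= "2020-03-23"
).astype(int)

model = smf.logit(
    "qi_composite_attained ~ during_lockdown + age + diabetes + heart_failure",
    data=df,
).fit()
odds_ratios = np.exp(model.params)
print(odds_ratios["during_lockdown"])  # cf. the reported OR of 1.06 for the composite QI
```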