
    Body mass index and survival in people with heart failure

    Aims: In people with heart failure (HF), a high body mass index (BMI) has been linked with better outcomes (‘obesity paradox’), but there is limited evidence in community populations across long-term follow-up. We aimed to examine the association between BMI and long-term survival in patients with HF in a large primary care cohort. Methods: We included patients with incident HF aged ≥45 years from the Clinical Practice Research Datalink (2000-2017). We used Kaplan-Meier curves, Cox regression, and penalised spline methods to assess the association of pre-diagnostic BMI, based on WHO classification, with all-cause mortality. Results: There were 47,531 participants with HF (median age 78.0 years (IQR 70-84), 45.8% female, 79.0% white ethnicity, median BMI 27.1 (IQR 23.9-31.0)) and 25,013 (52.6%) died during follow-up. Compared to healthy weight, people with overweight (HR 0.78, 95%CI 0.75-0.81, risk difference (RD) -4.1%), obesity class I (HR 0.76, 95%CI 0.73-0.80, RD -4.5%) and class II (HR 0.76, 95%CI 0.71-0.81, RD -4.5%) were at decreased risk of death, whereas people with underweight were at increased risk (HR 1.59, 95%CI 1.45-1.75, RD 11.2%). Among those underweight, this risk was greater in men than in women (p-value for interaction = 0.02). Class III obesity was associated with increased risk of all-cause mortality compared to overweight (HR 1.23, 95%CI 1.17-1.29). Conclusion: The U-shaped relationship between BMI and long-term all-cause mortality suggests that a personalised approach to identifying optimal weight may be needed for patients with HF in primary care. Underweight people have the poorest prognosis and should be recognised as high-risk.
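
    The Cox model over WHO BMI categories described in the methods can be sketched in a few lines. The snippet below is illustrative only, using the lifelines library on synthetic data: every column name and value is an assumption, not the CPRD source.

```python
# Illustrative sketch only: Cox regression of all-cause mortality on
# WHO BMI categories, as in the methods above. The data are synthetic
# and every column name is an assumption, not the CPRD source.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "bmi": np.clip(rng.normal(27.1, 5.0, n), 15, 55),
    "time": rng.exponential(8.0, n),   # years of follow-up
    "died": rng.integers(0, 2, n),     # 1 = death observed
})

# WHO classification; healthy weight is the reference category.
bins = [0, 18.5, 25, 30, 35, 40, np.inf]
labels = ["underweight", "healthy", "overweight",
          "obese_I", "obese_II", "obese_III"]
df["bmi_cat"] = pd.cut(df["bmi"], bins=bins, labels=labels)

X = pd.get_dummies(df[["time", "died", "bmi_cat"]],
                   columns=["bmi_cat"], dtype=float)
X = X.drop(columns=["bmi_cat_healthy"])    # drop the reference group
cph = CoxPHFitter().fit(X, duration_col="time", event_col="died")
print(cph.summary[["exp(coef)",             # hazard ratios vs healthy
                   "exp(coef) lower 95%",
                   "exp(coef) upper 95%"]])
```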

    Disparities in COVID-19 mortality amongst the immunosuppressed: a systematic review and meta-analysis for enhanced disease surveillance

    Background: Effective disease surveillance, including that for COVID-19, is compromised without a standardised method for categorising the immunosuppressed as a clinical risk group. Methods: We conducted a systematic review and meta-analysis to evaluate whether excess COVID-associated mortality, compared to the immunocompetent, could meaningfully subdivide the immunosuppressed. Our study adhered to UK Immunisation against infectious disease (Green Book) criteria for defining and categorising immunosuppression. Using OVID (EMBASE, MEDLINE, Transplant Library, and Global Health), PubMed, and Google Scholar, we examined relevant literature published between the start of 2020 and the end of 2022. We selected cohort studies that provided mortality data for immunosuppressed subgroups and immunocompetent controls. Meta-analyses, grey literature and any original works that failed to provide comparator data or that reported all-cause or paediatric outcomes were excluded. Odds ratios (ORs) and 95% confidence intervals (CIs) of COVID-19 mortality were meta-analysed by immunosuppressed category and subcategory. Subgroup analyses differentiated estimates by effect measure, country income, study setting, level of adjustment, use of matching and publication year. Study screening, extraction and bias assessment were performed blinded and independently by two researchers; conflicts were resolved with the oversight of a third researcher. PROSPERO registration number: CRD42022360755. Findings: We identified 99 unique studies, incorporating data from 1,542,097 immunosuppressed and 56,248,181 immunocompetent patients with COVID-19 infection. Compared to immunocompetent people (pooled OR, 95% CI), solid organ transplant (2.12, 1.50-2.99) and malignancy (2.02, 1.69-2.42) patients had a very high risk of COVID-19 mortality. Patients with rheumatological conditions (1.28, 1.13-1.45) and HIV (1.20, 1.05-1.36) had only slightly higher risks than the immunocompetent baseline. Case type, study setting, country income, and the matching and adjustment of mortality data were significant modifiers of excess mortality for some immunosuppressed subgroups. Interpretation: Excess COVID-associated mortality among the immunosuppressed, compared to the immunocompetent, varied significantly across subgroups. This novel means of subdivision has prospective benefit for targeting patient triage, shielding and vaccination policies during periods of high disease transmission. Funding: Supported by EMIS Health and the UK Medical Research Council. Grant number: MR/R015708/1.
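
    Pooled subgroup estimates like those above typically come from inverse-variance random-effects meta-analysis. Below is a minimal sketch of DerSimonian-Laird pooling of odds ratios from their 95% CIs; the three input studies are invented for illustration and this is not the paper's analysis code.

```python
# DerSimonian-Laird random-effects pooling of odds ratios.
# The three input studies are invented for illustration.
import numpy as np

def pool_random_effects(ors, lowers, uppers):
    """Pool ORs given their 95% CIs; return pooled OR and 95% CI."""
    y = np.log(ors)                                  # log odds ratios
    se = (np.log(uppers) - np.log(lowers)) / (2 * 1.96)
    w = 1.0 / se**2                                  # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed)**2)                 # heterogeneity Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    return np.exp([mu, mu - 1.96 * se_mu, mu + 1.96 * se_mu])

# Three hypothetical transplant cohorts (ORs, lower CIs, upper CIs):
print(pool_random_effects(np.array([2.4, 1.8, 2.1]),
                          np.array([1.6, 1.2, 1.5]),
                          np.array([3.6, 2.7, 2.9])))
```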

    Thrombocytopenic, thromboembolic and haemorrhagic events following second dose with BNT162b2 and ChAdOx1: self-controlled case series analysis of the English national sentinel cohort

    Thrombosis associated with thrombocytopenia was a matter of concern after the first and second doses of the BNT162b2 and ChAdOx1 COVID-19 vaccines. It is therefore important to investigate the risk of thrombocytopenic, thromboembolic and haemorrhagic events following a second dose of these vaccines. We conducted a large-scale self-controlled case series (SCCS) analysis, using routine primary care data linked to hospital data, among 12.3 million individuals (aged 16 years and above) in England. We used the nationally representative Oxford-Royal College of General Practitioners (RCGP) sentinel network database, with baseline and risk periods between 8 December 2020 and 11 June 2022. We included individuals who received two primary doses of either the BNT162b2 mRNA (Pfizer-BioNTech) vaccine or the ChAdOx1 nCoV-19 (Oxford-AstraZeneca) vaccine. We carried out an SCCS analysis for each outcome using a conditional Poisson regression model with an offset for the length of the risk period. We reported the incidence rate ratios (IRRs) and 95% confidence intervals (CIs) of thrombocytopenic, thromboembolic (including arterial and venous) and haemorrhagic events in the 0-27 days after a second dose of BNT162b2 or ChAdOx1, compared to the baseline period (14 or more days before the first dose, 28 or more days after the second dose, and the period from 28 days after the first dose to 14 days before the second). We adjusted for a range of potential confounders, including age, sex, comorbidities and deprivation. Between 8 December 2020 and 11 February 2022, 6,306,306 individuals were vaccinated with two doses of BNT162b2 and 6,046,785 individuals were vaccinated with two doses of ChAdOx1. Compared to baseline, our analysis shows no increased risk of venous thromboembolic events (VTE) for either BNT162b2 (IRR 0.71, 95% CI: 0.65-0.77) or ChAdOx1 (IRR 0.91, 95% CI: 0.84-0.98), and similarly no increased risk of cerebral venous sinus thrombosis (CVST) for either BNT162b2 (IRR 0.87, 95% CI: 0.41-1.85) or ChAdOx1 (IRR 1.73, 95% CI: 0.82-3.68). We additionally report no increase in IRRs for pulmonary embolus, deep vein thrombosis, thrombocytopenia (including idiopathic thrombocytopenic purpura (ITP)) or haemorrhagic events after the second dose of either vaccine. Reassuringly, we found no association between a second dose of either vaccine and increased risk of thrombocytopenic, thromboembolic or haemorrhagic events. Data and Connectivity: COVID-19 Vaccines Pharmacovigilance study.
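
    Methodologically, the SCCS estimate is a Poisson regression conditioned on each case's total event count, with an offset for the length of each observation interval. The toy sketch below runs on invented data and uses person fixed effects, which for Poisson models reproduce the conditional estimates; none of the numbers reflect the sentinel cohort.

```python
# Toy self-controlled case series: each case contributes a baseline
# interval and a 0-27 day post-dose risk interval. All data invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "person": [1, 1, 2, 2, 3, 3, 4, 4],
    "risk":   [0, 1, 0, 1, 0, 1, 0, 1],   # 1 = post-dose risk window
    "days":   [300, 28, 250, 28, 400, 28, 350, 28],
    "events": [1, 1, 2, 0, 1, 1, 1, 0],
})

model = smf.glm(
    "events ~ risk + C(person)",           # fixed effect per case
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["days"]),             # interval-length offset
).fit()
print(np.exp(model.params["risk"]))        # incidence rate ratio
```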

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy, and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    Comparative efficacy of drugs for treating giardiasis: a systematic update of the literature and network meta-analysis of randomized clinical trials

    Background: Giardiasis is the commonest intestinal protozoal infection worldwide. The current first-choice therapy is metronidazole. Recently, other drugs with potentially higher efficacy or with fewer and milder side effects have increased in popularity, but evidence is limited by a scarcity of randomized controlled trials (RCTs) comparing the many treatment options available. Network meta-analysis (NMA) is a useful tool to compare multiple treatments when there is limited or no direct evidence available. Objectives: To compare the efficacy and side effects of all available drugs for the treatment of giardiasis. Methods: We selected all RCTs included in systematic reviews and expert reviews of all treatments for giardiasis published until 2014, extended the systematic literature search until 2016, and identified new studies by scanning reference lists for relevant studies. We then conducted an NMA of all available treatments of giardiasis by comparing parasitological cure (efficacy) and side effects. Results: We identified 60 RCTs from 58 reports (46 from published systematic reviews, 8 from reference lists and 4 from the updated systematic search). Data from 6,714 patients, 18 treatments and 42 treatment comparisons were available. Tinidazole was associated with higher parasitological cure than metronidazole (RR: 1.23, 95% CI: 1.12-1.35) and albendazole (RR: 1.35, 95% CI: 1.21-1.50). Taking into consideration clinical efficacy, side effects and the size of the evidence base, tinidazole was found to be the most effective drug. Conclusions: We provide additional evidence that single-dose tinidazole is the best available treatment for giardiasis in symptomatic and asymptomatic children and adults.
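
    The indirect-comparison logic underlying NMA can be illustrated with the Bucher method on the two direct estimates reported above: from tinidazole vs metronidazole (RR 1.23, 95% CI 1.12-1.35) and tinidazole vs albendazole (RR 1.35, 95% CI 1.21-1.50), an indirect albendazole-vs-metronidazole estimate follows on the log scale, with the variances of the two comparisons adding. A minimal sketch (not the paper's NMA code, which would model the full network) is:

```python
# Bucher-style indirect comparison: derive RR of C vs B from direct
# T-vs-B and T-vs-C estimates, since RR_CB = RR_TB / RR_TC.
import numpy as np

def indirect_rr(rr_tb, ci_tb, rr_tc, ci_tc):
    """Indirect RR of C vs B with a 95% CI, via common comparator T."""
    log_rr = np.log(rr_tb) - np.log(rr_tc)
    se_tb = (np.log(ci_tb[1]) - np.log(ci_tb[0])) / (2 * 1.96)
    se_tc = (np.log(ci_tc[1]) - np.log(ci_tc[0])) / (2 * 1.96)
    se = np.sqrt(se_tb**2 + se_tc**2)        # variances add
    return (np.exp(log_rr),
            np.exp(log_rr - 1.96 * se),
            np.exp(log_rr + 1.96 * se))

# Albendazole vs metronidazole, inferred without a head-to-head trial,
# using the two direct estimates from the abstract above:
print(indirect_rr(1.23, (1.12, 1.35), 1.35, (1.21, 1.50)))
# -> roughly (0.91, 0.79, 1.05)
```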

    Diagnostic value of symptoms and signs for identifying urinary tract infection in older adult outpatients: systematic review and meta-analysis

    Objectives: To critically appraise and evaluate the diagnostic value of symptoms and signs in identifying UTI in older adult outpatients, using evidence from observational studies. Methods: We searched Medline and Medline In-Process, Embase and Web of Science, from inception up to September 2017. We included studies assessing the diagnostic accuracy of symptoms and/or signs in predicting UTI in outpatients aged 65 years and above. Study quality was assessed using the QUADAS-2 tool. Results: We identified 15 eligible studies of variable quality, with a total of 12,039 participants (range 65–4,259), and assessed the diagnostic accuracy of 66 different symptoms and signs in predicting UTI. A number of symptoms and signs typically associated with UTI, such as nocturia, urgency and abnormal vital signs, were of limited use in older adult outpatients. Inability to perform a number of activities of daily living was a predictor of UTI: for example, disability in feeding oneself (positive likelihood ratio (LR+) 11.8, 95% CI 5.51–25.2) and disability in washing one's hands and face (LR+ 6.84, 95% CI 4.08–11.5). Conclusions: The limited evidence of varying quality shows that a number of symptoms and signs traditionally associated with UTI may have limited diagnostic value in older adult outpatients.

    Interactive visualisation for interpreting diagnostic test accuracy study results

    Information about the performance of diagnostic tests is typically presented in the form of measures of test accuracy such as sensitivity and specificity. These measures may be difficult to translate directly into decisions about patient treatment, for which information presented in the form of probabilities of disease after a positive or a negative test result may be more useful. These probabilities depend on the prevalence of the disease, which is likely to vary between populations. This article aims to clarify the relationship between pre-test (prevalence) and post-test probabilities of disease, and presents two free, online, interactive tools to illustrate this relationship. These tools allow probabilities of disease to be compared to decision thresholds above and below which different treatment decisions may be indicated. They are intended to help those involved in communicating information about diagnostic test performance and are likely to be of benefit when teaching these concepts. A substantive example is presented using C-reactive protein as a diagnostic marker for bacterial infection in the older adult population. The tools may also be useful for manufacturers of clinical tests in planning product development, for authors of test evaluation studies to improve reporting, and for users of test evaluations to facilitate interpretation and application of the results.
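
    The relationship the article visualises is Bayes' theorem applied on the odds scale via likelihood ratios. A compact sketch follows; the sensitivity and specificity values are invented for illustration, not taken from the article.

```python
# Pre-test probability (prevalence) to post-test probability of
# disease, via likelihood ratios on the odds scale.
def post_test_probability(prevalence, sensitivity, specificity,
                          positive_result=True):
    """Post-test probability of disease after a test result."""
    pre_odds = prevalence / (1 - prevalence)
    if positive_result:
        lr = sensitivity / (1 - specificity)   # LR+
    else:
        lr = (1 - sensitivity) / specificity   # LR-
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# The same test looks very different in low- vs high-prevalence settings:
for prev in (0.05, 0.20, 0.50):
    p_pos = post_test_probability(prev, 0.80, 0.90, True)
    p_neg = post_test_probability(prev, 0.80, 0.90, False)
    print(f"prevalence {prev:.2f}: post-test {p_pos:.2f} (+), {p_neg:.2f} (-)")
```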

    Impact of prediagnostic smoking and smoking cessation on colorectal cancer prognosis: a meta-analysis of individual patient data from cohorts within the CHANCES consortium

    Background: Smoking has been associated with colorectal cancer (CRC) incidence and mortality in previous studies and might also be associated with prognosis after CRC diagnosis. However, current evidence on smoking in association with CRC prognosis is limited. Patients and methods: For this individual patient data meta-analysis, sociodemographic and smoking behavior information on 12,414 incident CRC patients (median age at diagnosis: 64.3 years), recruited within 14 prospective cohort studies among previously cancer-free adults, was collected at baseline and harmonized across studies. Vital status and causes of death were collected over a mean follow-up time of 5.1 years following cancer diagnosis. Associations of smoking behavior with overall and CRC-specific survival were evaluated using Cox regression and standard meta-analysis methodology. Results: A total of 5,229 participants died, 3,194 from CRC. Cox regression revealed significant associations between former [hazard ratio (HR) = 1.12; 95% confidence interval (CI) = 1.04-1.20] and current smoking (HR = 1.29; 95% CI = 1.04-1.60) and poorer overall survival compared with never smoking. Compared with current smoking, smoking cessation was associated with improved overall survival (HR <10 years = 0.78, 95% CI = 0.69-0.88; HR ≥10 years = 0.78, 95% CI = 0.63-0.97) and CRC-specific survival (HR ≥10 years = 0.76, 95% CI = 0.67-0.85). Conclusion: In this large meta-analysis of primary data on incident CRC patients from 14 prospective cohort studies, former and current smoking were associated with poorer CRC prognosis compared with never smoking. Smoking cessation was associated with improved survival compared with current smoking. Future studies should further quantify the benefits of not smoking, both for cancer prevention and for improving survival among CRC patients, particularly in terms of treatment response.
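
    Individual patient data meta-analyses of this kind are often run in two stages: fit a Cox model within each cohort, then pool the log hazard ratios by inverse variance. The sketch below, assuming the lifelines library, shows that shape on invented data; it is not the CHANCES pipeline, and all cohort sizes, column names and effect sizes are made up.

```python
# Two-stage IPD meta-analysis sketch: per-cohort Cox fits, then
# inverse-variance pooling of log hazard ratios. All data invented.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)

def simulate_cohort(n):
    smoker = rng.integers(0, 2, n)
    time = rng.exponential(5.0 / (1 + 0.3 * smoker))  # smokers die sooner
    died = rng.integers(0, 2, n)                       # 1 = death observed
    return pd.DataFrame({"smoker": smoker, "time": time, "died": died})

log_hrs, variances = [], []
for n in (400, 600, 800):                    # three hypothetical cohorts
    cph = CoxPHFitter().fit(simulate_cohort(n),
                            duration_col="time", event_col="died")
    log_hrs.append(cph.params_["smoker"])
    variances.append(cph.standard_errors_["smoker"] ** 2)

w = 1.0 / np.array(variances)                # inverse-variance weights
pooled = np.sum(w * np.array(log_hrs)) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled HR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*se):.2f}-{np.exp(pooled + 1.96*se):.2f})")
```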