    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process, including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy, and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
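    The distinction drawn above between autophagosome numbers and autophagic flux can be made concrete with a turnover-style calculation: marker levels (e.g., LC3-II) are compared with and without a lysosomal inhibitor such as bafilomycin A1, and flux is inferred from the difference rather than from the steady-state level. A minimal sketch in Python; the function name and all densitometry values are hypothetical illustrations, not taken from the guidelines:

    ```python
    # Sketch of a flux calculation for an LC3-II turnover assay: flux is inferred
    # from how much LC3-II accumulates when lysosomal degradation is blocked,
    # not from the steady-state LC3-II level alone. All numbers are hypothetical
    # densitometry readings normalized to a loading control.

    def flux_index(lc3_ii_vehicle: float, lc3_ii_inhibitor: float) -> float:
        """Difference in LC3-II with vs. without a lysosomal inhibitor
        (e.g., bafilomycin A1). Larger values suggest higher flux."""
        return lc3_ii_inhibitor - lc3_ii_vehicle

    # Condition A: autophagosomes form and are degraded; blocking the lysosome
    # causes strong accumulation, indicating high flux.
    print(flux_index(lc3_ii_vehicle=1.0, lc3_ii_inhibitor=3.0))  # 2.0

    # Condition B: the same steady-state LC3-II, but trafficking to the lysosome
    # is already blocked, so inhibition adds little; autophagosomes accumulate
    # without an increase in autophagy.
    print(flux_index(lc3_ii_vehicle=1.0, lc3_ii_inhibitor=1.2))  # 0.2
    ```

    The two conditions have identical baseline LC3-II, yet only the first reflects active flux, which is exactly why the guidelines warn that more autophagosomes do not equate with more autophagy.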

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
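    The reported posterior probabilities of harm can be roughly reproduced from the published odds ratios and credible intervals, assuming an approximately normal posterior on the log-odds scale. This is a back-of-the-envelope check, not the trial's actual bayesian cumulative logistic model:

    ```python
    # Approximate Pr(harm) = Pr(OR < 1) from a reported median odds ratio and
    # 95% credible interval, assuming the log-OR posterior is roughly normal.
    # (ORs greater than 1 represent improved outcomes in this trial.)
    import numpy as np
    from scipy.stats import norm

    def prob_harm(or_median: float, cri_low: float, cri_high: float) -> float:
        mu = np.log(or_median)
        sigma = (np.log(cri_high) - np.log(cri_low)) / (2 * 1.96)  # 95% interval
        return float(norm.cdf((0.0 - mu) / sigma))  # Pr(log OR < 0)

    # ACE inhibitor: OR 0.77 (0.58-1.06); the trial reported Pr(harm) = 94.9%.
    print(prob_harm(0.77, 0.58, 1.06))  # ~0.95
    # ARB: OR 0.76 (0.56-1.05); the trial reported Pr(harm) = 95.4%.
    print(prob_harm(0.76, 0.56, 1.05))  # ~0.96
    ```

    The approximation lands within about a percentage point of the published probabilities, as expected when the posterior is close to normal on the log scale.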

    Using research to prepare for outbreaks of severe acute respiratory infection

    Severe acute respiratory infections (SARI) remain one of the leading causes of mortality around the world in all age groups. There is large global variation in epidemiology, clinical management and outcomes, including mortality. We performed a short-period observational data collection in critical care units distributed globally during regional peak SARI seasons from 1 January 2016 until 31 August 2017, using standardised data collection tools. Data were collected for 1 week on all admitted patients who met the inclusion criteria for SARI, with follow-up to hospital discharge. Proportions of patients across regions were compared for microbiology, management strategies and outcomes. Regions were divided geographically and economically according to World Bank definitions. Data were collected for 682 patients from 95 hospitals and 23 countries. The overall mortality was 9.5%. Of the patients, 21.7% were children, with a case fatality proportion of 1% for those less than 5 years old. The highest mortality was in those above 60 years, at 18.6%. Case fatality varied by region: East Asia and Pacific 10.2% (21 of 206), Sub-Saharan Africa 4.3% (8 of 188), South Asia 0% (0 of 35), North America 13.6% (25 of 184), and Europe and Central Asia 14.3% (9 of 63). Mortality in low-income and low-middle-income countries combined was 4%, compared with 14% in high-income countries. In the 560 patients for whom full data were available, Sequential Organ Failure Assessment (SOFA) scores calculated on presentation were significantly associated with mortality and hospital length of stay. East Asia and Pacific (48%) and North America (24%) had the highest proportions of patients with SOFA scores >12. Multivariable analysis demonstrated that initial SOFA score and age were independent predictors of hospital survival. There was variability across regions and income groupings in the critical care management and outcomes of SARI. Intensive care unit-specific factors, geography and management features were less reliable than baseline severity for predicting ultimate outcome. These findings may help in planning future outbreak severity assessments, but more globally representative data are required.
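    The multivariable finding (initial SOFA score and age as independent predictors of hospital survival) corresponds to a standard logistic regression. A sketch on synthetic data, assuming statsmodels is available; the coefficients and simulated values are illustrative and not derived from the study:

    ```python
    # Illustrative multivariable logistic regression of hospital mortality on
    # initial SOFA score and age, mirroring the analysis structure described
    # above. Data are synthetic; only the model form reflects the study.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 560  # patients with full data, per the abstract
    sofa = rng.integers(0, 20, size=n).astype(float)
    age = rng.normal(55, 18, size=n).clip(18, 95)

    # Assumed data-generating slopes: log-odds of death rise with SOFA and age.
    logit = -6.0 + 0.25 * sofa + 0.03 * age
    died = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(np.column_stack([sofa, age]))  # intercept, SOFA, age
    fit = sm.Logit(died, X).fit(disp=0)
    print(fit.params)   # estimated coefficients
    print(fit.pvalues)  # both predictors recovered as independent predictors
    ```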

    Erratum to: Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition) (Autophagy, 12, 1, 1-222, 10.1080/15548627.2015.1100356)


    Long-term (180-day) outcomes in critically ill patients with COVID-19 in the REMAP-CAP randomized clinical trial

    Importance The longer-term effects of therapies for the treatment of critically ill patients with COVID-19 are unknown. Objective To determine the effect of multiple interventions for critically ill adults with COVID-19 on longer-term outcomes. Design, Setting, and Participants Prespecified secondary analysis of an ongoing adaptive platform trial (REMAP-CAP) testing interventions within multiple therapeutic domains in which 4869 critically ill adult patients with COVID-19 were enrolled between March 9, 2020, and June 22, 2021, from 197 sites in 14 countries. The final 180-day follow-up was completed on March 2, 2022. Interventions Patients were randomized to receive 1 or more interventions within 6 treatment domains: immune modulators (n = 2274), convalescent plasma (n = 2011), antiplatelet therapy (n = 1557), anticoagulation (n = 1033), antivirals (n = 726), and corticosteroids (n = 401). Main Outcomes and Measures The main outcome was survival through day 180, analyzed using a bayesian piecewise exponential model. A hazard ratio (HR) less than 1 represented improved survival (superiority), while an HR greater than 1 represented worsened survival (harm); futility was represented by a relative improvement of less than 20% in outcome, shown by an HR greater than 0.83. Results Among 4869 randomized patients (mean age, 59.3 years; 1537 [32.1%] women), 4107 (84.3%) had known vital status and 2590 (63.1%) were alive at day 180. IL-6 receptor antagonists had a greater than 99.9% probability of improving 6-month survival (adjusted HR, 0.74 [95% credible interval {CrI}, 0.61-0.90]) and antiplatelet agents had a 95% probability of improving 6-month survival (adjusted HR, 0.85 [95% CrI, 0.71-1.03]) compared with the control, while the probability of trial-defined statistical futility (HR >0.83) was high for therapeutic anticoagulation (99.9%; HR, 1.13 [95% CrI, 0.93-1.42]), convalescent plasma (99.2%; HR, 0.99 [95% CrI, 0.86-1.14]), and lopinavir-ritonavir (96.6%; HR, 1.06 [95% CrI, 0.82-1.38]), and the probabilities of harm from hydroxychloroquine (96.9%; HR, 1.51 [95% CrI, 0.98-2.29]) and the combination of lopinavir-ritonavir and hydroxychloroquine (96.8%; HR, 1.61 [95% CrI, 0.97-2.67]) were high. The corticosteroid domain was stopped early prior to reaching a predefined statistical trigger; there was a 57.1% to 61.6% probability of improving 6-month survival across varying hydrocortisone dosing strategies. Conclusions and Relevance Among critically ill patients with COVID-19 randomized to receive 1 or more therapeutic interventions, treatment with an IL-6 receptor antagonist had a greater than 99.9% probability of improved 180-day survival compared with patients randomized to the control, and treatment with an antiplatelet agent had a 95.0% probability of improved 180-day survival compared with patients randomized to the control. Overall, when considered with previously reported short-term results, the findings indicate that initial in-hospital treatment effects were consistent for most therapies through 6 months.
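    The futility rule quoted above follows from the 20% threshold: a relative improvement of less than 20% corresponds to an HR above 1/1.2 ≈ 0.83. As with the trial above, the reported futility probabilities can be approximated from the published HRs and credible intervals under a normal approximation on the log scale; this is my reconstruction, not the trial's piecewise exponential model:

    ```python
    # Approximate the trial-defined futility probability Pr(HR > 0.83) from a
    # reported median hazard ratio and 95% credible interval, assuming the
    # log-HR posterior is roughly normal.
    import numpy as np
    from scipy.stats import norm

    FUTILITY_HR = 0.83  # relative improvement < 20%, i.e., HR > 1/1.2

    def prob_futility(hr_median: float, cri_low: float, cri_high: float) -> float:
        mu = np.log(hr_median)
        sigma = (np.log(cri_high) - np.log(cri_low)) / (2 * 1.96)
        return float(1.0 - norm.cdf((np.log(FUTILITY_HR) - mu) / sigma))

    # Therapeutic anticoagulation: HR 1.13 (0.93-1.42); reported futility 99.9%.
    print(prob_futility(1.13, 0.93, 1.42))  # ~0.998
    # Convalescent plasma: HR 0.99 (0.86-1.14); reported futility 99.2%.
    print(prob_futility(0.99, 0.86, 1.14))  # ~0.99
    ```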

    A Bayesian reanalysis of the Standard versus Accelerated Initiation of Renal-Replacement Therapy in Acute Kidney Injury (STARRT-AKI) trial

    Background Timing of initiation of kidney-replacement therapy (KRT) in critically ill patients remains controversial. The Standard versus Accelerated Initiation of Renal-Replacement Therapy in Acute Kidney Injury (STARRT-AKI) trial compared two strategies of KRT initiation (accelerated versus standard) in critically ill patients with acute kidney injury and found neutral results for 90-day all-cause mortality. Probabilistic exploration of the trial endpoints may enable greater understanding of the trial findings. We aimed to perform a reanalysis using a Bayesian framework. Methods We performed a secondary analysis of all 2927 patients randomized in the multinational STARRT-AKI trial, conducted at 168 centers in 15 countries. The primary endpoint, 90-day all-cause mortality, was evaluated using hierarchical Bayesian logistic regression. A spectrum of priors was used, including optimistic, neutral, and pessimistic priors, along with priors informed by earlier clinical trials. Secondary endpoints (KRT-free days and hospital-free days) were assessed using zero-one inflated beta regression. Results The posterior probability of benefit comparing an accelerated versus a standard KRT initiation strategy for the primary endpoint suggested no important difference, regardless of the prior used (absolute difference of 0.13% [95% credible interval (CrI) −3.30%; 3.40%], −0.39% [95% CrI −3.46%; 3.00%], and 0.64% [95% CrI −2.53%; 3.88%] for neutral, optimistic, and pessimistic priors, respectively). There was a very low probability that the effect size was equal to or larger than a consensus-defined minimal clinically important difference. Patients allocated to the accelerated strategy had fewer KRT-free days (median absolute difference of −3.55 days [95% CrI −6.38; −0.48]); the probability that the accelerated strategy was associated with more KRT-free days was 0.008. Hospital-free days were similar between strategies: the accelerated strategy had a median absolute difference of 0.48 more hospital-free days (95% CrI −1.87; 2.72) compared with the standard strategy, and the probability that the accelerated strategy had more hospital-free days was 0.66. Conclusions In a Bayesian reanalysis of the STARRT-AKI trial, we found a very low probability that an accelerated strategy has clinically important benefits compared with the standard strategy. Patients receiving the accelerated strategy probably have fewer days alive and KRT-free. These findings do not support the adoption of an accelerated strategy of KRT initiation.
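    The prior-sensitivity analysis can be given intuition with a normal-normal approximation on the log-odds-ratio scale: the posterior mean is a precision-weighted average of the prior and the data summary, so a near-null likelihood from a large trial dominates optimistic, neutral, and pessimistic priors alike. This sketch uses assumed prior and data values and a simple conjugate update, not the trial's hierarchical logistic model:

    ```python
    # Illustrative Bayesian prior sensitivity on the log-odds-ratio scale.
    # Posterior for a normal prior N(m0, s0^2) combined with a normal data
    # summary N(m1, s1^2); all numerical values below are assumptions.
    from scipy.stats import norm

    def posterior_prob_benefit(m0, s0, m1, s1):
        """Pr(log OR < 0 | data): precision-weighted normal-normal update."""
        w0, w1 = 1 / s0**2, 1 / s1**2           # prior and data precisions
        mean = (w0 * m0 + w1 * m1) / (w0 + w1)  # posterior mean
        sd = (1 / (w0 + w1)) ** 0.5             # posterior standard deviation
        return float(norm.cdf((0.0 - mean) / sd))

    data_mean, data_sd = 0.01, 0.04  # assumed near-null summary from a large trial
    for name, m0, s0 in [("optimistic", -0.20, 0.20),
                         ("neutral",     0.00, 0.35),
                         ("pessimistic", 0.20, 0.20)]:
        print(name, round(posterior_prob_benefit(m0, s0, data_mean, data_sd), 2))
    # Pr(benefit) stays between roughly 0.3 and 0.5 under all three priors:
    # the near-null data dominate, echoing the reanalysis's robustness to priors.
    ```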