    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and that evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
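    The distinction between steady-state autophagosome numbers and autophagic flux can be made concrete with a small calculation. Below is a minimal sketch, assuming flux is operationalized as LC3-II turnover measured with and without a lysosomal inhibitor (e.g., bafilomycin A1); the function name and the densitometry values are hypothetical illustrations, not prescriptions from the guidelines themselves.

```python
# Illustrative sketch: autophagic flux as LC3-II turnover.
# Inputs are hypothetical densitometry readings (LC3-II normalized to a
# loading control) from matched samples with and without a lysosomal
# inhibitor such as bafilomycin A1. Names and numbers are assumptions
# made for illustration only.

def autophagic_flux(lc3_ii_with_inhibitor: float, lc3_ii_without_inhibitor: float) -> float:
    """Flux estimated as the LC3-II that accumulates when lysosomal degradation is blocked."""
    return lc3_ii_with_inhibitor - lc3_ii_without_inhibitor

# Two scenarios that steady-state LC3-II alone cannot distinguish:
# 1) induced autophagy: low baseline LC3-II, large increase once degradation is blocked
induced = autophagic_flux(lc3_ii_with_inhibitor=3.0, lc3_ii_without_inhibitor=1.0)
# 2) blocked degradation: elevated baseline LC3-II, little further increase with the inhibitor
blocked = autophagic_flux(lc3_ii_with_inhibitor=2.6, lc3_ii_without_inhibitor=2.5)

print(f"Induced autophagy: flux ~ {induced:.1f}")   # large difference -> active flux
print(f"Blocked degradation: flux ~ {blocked:.1f}") # autophagosomes pile up, but flux is low
```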

    Using research to prepare for outbreaks of severe acute respiratory infection

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients were 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB initiation worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
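    The reported posterior probabilities (e.g., 94.9% that ACE inhibitors worsened organ support–free days) summarize how much of the posterior distribution of the odds ratio lies on the harmful side of 1. The sketch below shows that calculation under an assumption made purely for illustration: posterior draws of the log odds ratio are simulated from a normal approximation roughly matching the reported ACE inhibitor interval, which stands in for (and is not) the trial's actual bayesian cumulative logistic model.

```python
# Illustrative sketch: reading a credible interval and a posterior
# probability of harm off posterior draws of an odds ratio.
# The draws are simulated from a normal approximation on the log-odds scale
# roughly matching the reported ACE inhibitor result (OR 0.77, 95% CrI 0.58-1.06);
# this is an assumption for illustration, not the trial's model or data.
import numpy as np

rng = np.random.default_rng(0)

# Normal approximation: mean = log(0.77); sd chosen so the 95% interval spans ~log(0.58)..log(1.06)
sd = (np.log(1.06) - np.log(0.58)) / (2 * 1.96)
log_or_draws = rng.normal(loc=np.log(0.77), scale=sd, size=100_000)
or_draws = np.exp(log_or_draws)

median_or = np.median(or_draws)
cri_low, cri_high = np.percentile(or_draws, [2.5, 97.5])
prob_harm = np.mean(or_draws < 1.0)  # OR > 1 means improvement, so OR < 1 counts as harm

print(f"median OR {median_or:.2f}, 95% CrI {cri_low:.2f}-{cri_high:.2f}, P(harm) = {prob_harm:.1%}")
```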

    Sleep spindles and rapid eye movement sleep as predictors of next morning cognitive performance in healthy middle-aged and older participants

    Spindles and slow waves are hallmarks of non-rapid eye movement sleep. Both of these oscillations are markers of neuronal plasticity and play a role in memory and cognition. Normal ageing is associated with spindle and slow wave decline and cognitive changes. The present study aimed to assess whether spindle and slow wave characteristics during a baseline night predict cognitive performance in healthy older adults the next morning. Specifically, we examined performance on tasks measuring selective and sustained visual attention, declarative verbal memory, working memory and verbal fluency. Fifty-eight healthy middle-aged and older adults (aged 50-91 years) without sleep disorders underwent baseline polysomnographic sleep recording followed by neuropsychological assessment the next morning. Spindles and slow waves were detected automatically on artefact-free non-rapid eye movement sleep electroencephalogram. All-night stage N2 spindle density (no./min) and mean frequency (Hz), and all-night non-rapid eye movement sleep slow wave density (no./min) and mean slope (µV/s), were analysed. Pearson's correlations were performed between spindles, slow waves, polysomnography and cognitive performance. Higher spindle density predicted better performance on verbal learning, visual attention and verbal fluency, whereas spindle frequency and slow wave density or slope predicted fewer cognitive performance variables. In addition, rapid eye movement sleep duration was associated with better verbal learning potential. These results suggest that spindle density is a marker of cognitive functioning in older adults and may reflect neuroanatomic integrity. Rapid eye movement sleep may be a marker of age-related changes in acetylcholine transmission, which plays a role in the encoding of new information.
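    Spindle density (no./min) is simply the number of detected spindles divided by the minutes of artefact-free stage N2 sleep, and the study relates such measures to cognition with Pearson's correlations. Below is a minimal sketch of those two steps using made-up participant values; the automatic spindle-detection stage used in the study is not reproduced here.

```python
# Illustrative sketch: spindle density (spindles per minute of stage N2 sleep)
# and its Pearson correlation with a cognitive score. Counts, N2 durations and
# scores are hypothetical; spindle detection itself is assumed to have been done.
import numpy as np
from scipy.stats import pearsonr

def spindle_density(n_spindles: int, n2_minutes: float) -> float:
    """All-night stage N2 spindle density in spindles per minute."""
    return n_spindles / n2_minutes

# Hypothetical participants: (spindle count, artefact-free N2 minutes, verbal learning score)
participants = [
    (620, 210.0, 52), (480, 195.0, 47), (710, 230.0, 58),
    (350, 180.0, 41), (560, 205.0, 50), (400, 220.0, 44),
]

densities = np.array([spindle_density(n, m) for n, m, _ in participants])
scores = np.array([s for _, _, s in participants])

r, p = pearsonr(densities, scores)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a positive r would mirror the reported association
```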

    Unravelling Semi-Presidentialism: Democracy and Government Performance in Four Distinct Regime Types

    Do semi-presidential regimes perform worse than other regime types? Semi-presidentialism has become a preferred choice among constitution makers worldwide, yet the semi-presidential category contains anything but a coherent set of regimes: we need to distinguish between its two subtypes, premier-presidentialism and president-parliamentarism. Following Linz's argument that presidentialism and semi-presidentialism are less conducive to democracy than parliamentarism, a number of studies have empirically analyzed the functioning and performance of semi-presidentialism. However, these studies have investigated the performance of the semi-presidential subtypes in isolation from other constitutional regimes. Using indicators of regime performance and democracy, the aim of this study is to examine the performance of premier-presidential and president-parliamentary regimes in relation to parliamentarism and presidentialism. Premier-presidential regimes show performance records on par with parliamentarism, and on some measures even better. President-parliamentary regimes, by contrast, perform worse than all other regime types on most of our included measures. The results of this novel study provide a strong call to constitution makers to stay away from president-parliamentarism, and against the idea of treating semi-presidentialism as a single, coherent type of regime.