16 research outputs found

    Multiorgan MRI findings after hospitalisation with COVID-19 in the UK (C-MORE): a prospective, multicentre, observational cohort study

    Introduction: The multiorgan impact of moderate to severe coronavirus infections in the post-acute phase is still poorly understood. We aimed to evaluate the excess burden of multiorgan abnormalities after hospitalisation with COVID-19, evaluate their determinants, and explore associations with patient-related outcome measures.

    Methods: In a prospective, UK-wide, multicentre MRI follow-up study (C-MORE), adults (aged ≥18 years) discharged from hospital following COVID-19 who were included in Tier 2 of the Post-hospitalisation COVID-19 study (PHOSP-COVID), and contemporary controls with no evidence of previous COVID-19 (SARS-CoV-2 nucleocapsid antibody negative), underwent multiorgan MRI (lungs, heart, brain, liver, and kidneys) with quantitative and qualitative assessment of images and clinical adjudication when relevant. Individuals with end-stage renal failure or contraindications to MRI were excluded. Participants also underwent detailed recording of symptoms, and physiological and biochemical tests. The primary outcome was the excess burden of multiorgan abnormalities (two or more organs) relative to controls, with further adjustment for potential confounders. The C-MORE study is ongoing and is registered with ClinicalTrials.gov, NCT04510025.

    Findings: Of 2710 participants in Tier 2 of PHOSP-COVID, 531 were recruited across 13 UK-wide C-MORE sites. After exclusions, 259 C-MORE patients (mean age 57 years [SD 12]; 158 [61%] male and 101 [39%] female) who were discharged from hospital with PCR-confirmed or clinically diagnosed COVID-19 between March 1, 2020, and Nov 1, 2021, and 52 non-COVID-19 controls from the community (mean age 49 years [SD 14]; 30 [58%] male and 22 [42%] female) were included in the analysis. Patients were assessed at a median of 5·0 months (IQR 4·2–6·3) after hospital discharge. Compared with non-COVID-19 controls, patients were older, more likely to be living with obesity, and had more comorbidities. Multiorgan abnormalities on MRI were more frequent in patients than in controls (157 [61%] of 259 vs 14 [27%] of 52; p<0·0001) and independently associated with COVID-19 status (odds ratio [OR] 2·9 [95% CI 1·5–5·8]; padjusted=0·0023) after adjusting for relevant confounders. Compared with controls, patients were more likely to have MRI evidence of lung abnormalities (p=0·0001; parenchymal abnormalities), brain abnormalities (p<0·0001; more white matter hyperintensities and regional brain volume reduction), and kidney abnormalities (p=0·014; lower medullary T1 and loss of corticomedullary differentiation), whereas cardiac and liver MRI abnormalities were similar between patients and controls. Patients with multiorgan abnormalities were older (difference in mean age 7 years [95% CI 4–10]; mean age 59·8 years [SD 11·7] with multiorgan abnormalities vs 52·8 years [SD 11·9] without; p<0·0001), more likely to have three or more comorbidities (OR 2·47 [1·32–4·82]; padjusted=0·0059), and more likely to have had a more severe acute infection (acute CRP >5 mg/L; OR 3·55 [1·23–11·88]; padjusted=0·025) than those without multiorgan abnormalities. Presence of lung MRI abnormalities was associated with a two-fold higher risk of chest tightness, and multiorgan MRI abnormalities were associated with severe and very severe persistent physical and mental health impairment (PHOSP-COVID symptom clusters) after hospitalisation.

    Interpretation: After hospitalisation for COVID-19, people are at risk of multiorgan abnormalities in the medium term. Our findings emphasise the need for proactive multidisciplinary care pathways, with the potential for imaging to guide surveillance frequency and therapeutic stratification.
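As a rough plausibility check, the crude (unadjusted) odds ratio can be recomputed from the counts reported in the abstract (157 of 259 patients vs 14 of 52 controls with multiorgan abnormalities). The Wald-interval arithmetic below is the standard epidemiological formula, not a calculation taken from the paper:

```python
import math

# 2x2 table from the abstract: multiorgan abnormality present / absent
a, b = 157, 259 - 157   # COVID-19 patients: with / without abnormalities
c, d = 14, 52 - 14      # controls: with / without abnormalities

odds_ratio = (a * d) / (b * c)

# Wald 95% CI, computed on the log-odds-ratio scale
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"crude OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# crude OR = 4.18 (95% CI 2.16-8.10)
```

The crude OR (about 4·2) exceeds the reported adjusted OR of 2·9, consistent with part of the unadjusted association being explained by the patients being older and more comorbid than the controls.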

    Teaching XML in a web development context


    A multicomponent structured health behaviour intervention to improve physical activity in long-distance HGV drivers: the SHIFT cluster RCT

    Background: Long-distance heavy goods vehicle drivers are exposed to a multitude of risk factors associated with their occupation. The working environment of heavy goods vehicle drivers provides limited opportunities for a healthy lifestyle and, consequently, heavy goods vehicle drivers exhibit higher than nationally representative rates of obesity and obesity-related comorbidities, and are underserved in terms of health promotion initiatives.

    Objective: The aim of this trial was to test the effectiveness and cost-effectiveness of the multicomponent Structured Health Intervention For Truckers (SHIFT) programme, compared with usual care, at both 6 months and 16–18 months.

    Design: A two-arm cluster randomised controlled trial, including a cost-effectiveness analysis and process evaluation.

    Setting: Transport depots throughout the Midlands region of the UK.

    Participants: Heavy goods vehicle drivers.

    Intervention: The 6-month SHIFT programme included a group-based interactive 6-hour education session, health coach support and equipment provision [including a Fitbit® (Fitbit Inc., San Francisco, CA, US) and resistance bands/balls to facilitate a 'cab workout']. Clusters were randomised following baseline measurements to either the SHIFT arm or the control arm.

    Main outcome measures: Outcome measures were assessed at baseline, with follow-up assessments at both 6 months and 16–18 months. The primary outcome was device-measured physical activity, expressed as mean steps per day, at 6-month follow-up. Secondary outcomes included device-measured sitting, standing, stepping, physical activity and sleep time (on any day, workdays and non-workdays), along with adiposity, biochemical measures, diet, blood pressure, psychophysiological reactivity, cognitive function, functional fitness, mental well-being, musculoskeletal symptoms and work-related psychosocial variables. Cost-effectiveness and process evaluation data were also collected.

    Results: A total of 382 participants (mean ± standard deviation age: 48.4 ± 9.4 years; body mass index: 30.4 ± 5.1 kg/m2; 99% male) were recruited across 25 clusters. Participants were randomised (at the cluster level) to either the SHIFT arm (12 clusters, n = 183) or the control arm (13 clusters, n = 199). At 6 months, 209 (54.7%) participants provided primary outcome data. Significant differences in mean daily steps were found between arms, with participants in the SHIFT arm accumulating 1008 more steps per day than participants in the control arm (95% confidence interval 145 to 1871 steps; p = 0.022); this was largely driven by the maintenance of physical activity levels in the SHIFT arm and a decline in physical activity levels in the control arm. Favourable differences at 6 months were also seen in the SHIFT arm, relative to the control arm, in time spent sitting, standing and stepping, and time in moderate or vigorous activity. No differences between arms were observed at the 16–18 months' follow-up, nor in the other secondary outcomes at either follow-up. The process evaluation demonstrated that the intervention was well received by participants and reportedly had a positive impact on their health behaviours. The average total cost of delivering the SHIFT programme was £369.57 per driver, and resulting quality-adjusted life-years were similar across trial arms (SHIFT arm: 1.22, 95% confidence interval 1.19 to 1.25; control arm: 1.25, 95% confidence interval 1.22 to 1.27).

    Limitations: A higher (31.4%) than anticipated loss to follow-up was experienced at 6 months, with only 54.7% of participants providing valid primary outcome data at that point. The COVID-19 pandemic presents a major confounding factor, which limits our ability to draw firm conclusions regarding the sustainability of the SHIFT programme.

    Conclusion: The SHIFT programme had a degree of success in positively affecting physical activity levels and reducing sitting time in heavy goods vehicle drivers at 6 months; however, these differences were not maintained at 16–18 months.

    Future work: Further work involving stakeholder engagement is needed to refine the content of the programme, based on current findings, followed by the translation of the SHIFT programme into a scalable driver training resource.

    Trial registration: This trial is registered as ISRCTN10483894.

    Funding: This project was funded by the National Institute for Health and Care Research (NIHR) Public Health Research programme and will be published in full in Public Health Research; Vol. 10, No. 12. See the NIHR Journals Library website for further project information.

    The effectiveness of the Structured Health Intervention For Truckers (SHIFT): a cluster randomised controlled trial (RCT)

    Background: Long-distance heavy goods vehicle (HGV) drivers exhibit higher than nationally representative rates of obesity and obesity-related comorbidities, and are underserved in terms of health promotion initiatives. The purpose of this study was to evaluate the effectiveness of the multicomponent 'Structured Health Intervention For Truckers' (SHIFT), compared to usual care, at 6 and 16–18 months' follow-up.

    Methods: We conducted a two-arm cluster RCT in transport sites throughout the Midlands, UK. Outcome measures were assessed at baseline and at 6 and 16–18 months' follow-up. Clusters were randomised (1:1) following baseline measurements to either the SHIFT arm or the usual-practice control arm. The 6-month SHIFT programme included a group-based interactive 6-hour education and behaviour change session, health coach support and equipment provision (Fitbit® and resistance bands/balls to facilitate a 'cab workout'). The primary outcome was device-assessed physical activity (mean steps/day) at 6 months. Secondary outcomes included: device-assessed sitting, physical activity intensity and sleep; cardiometabolic health, diet, mental wellbeing and work-related psychosocial variables. Data were analysed using mixed-effect linear regression models on a complete-case population.

    Results: 382 HGV drivers (mean±SD age: 48.4±9.4 years, BMI: 30.4±5.1 kg/m2, 99% male) were recruited across 25 clusters (sites) and randomised into either the SHIFT (12 clusters, n=183) or control (13 clusters, n=199) arm. At 6 months, 209 (55%) participants provided primary outcome data. Significant differences in mean daily steps were found between groups, in favour of the SHIFT arm (adjusted mean difference: 1008 steps/day, 95% CI: 145 to 1871, p=0.022). Favourable differences were also seen in the SHIFT group, relative to the control group, in time spent sitting (−24 mins/day, 95% CI: −43 to −6) and moderate-to-vigorous physical activity (6 mins/day, 95% CI: 0.3 to 11). Differences were not maintained at 16–18 months, and no differences were observed between groups in the other secondary outcomes at either follow-up.

    Conclusions: The SHIFT programme led to a potentially clinically meaningful difference in daily steps between trial arms at 6 months. While the longer-term impact is unclear, the programme offers potential to be incorporated into driver training courses to promote activity in this at-risk, underserved and hard-to-reach essential occupational group.

    Trial registration: ISRCTN10483894 (date registered: 01/03/2017).
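Randomisation at the depot (cluster) level means outcomes of drivers at the same site are correlated, which is why the analysis used mixed-effect models with clusters as a grouping factor rather than ordinary regression. The standard design-effect formula, DEFF = 1 + (m − 1) × ICC, sketches the resulting variance inflation; the trial's figures of 382 participants in 25 clusters are used below, but the intracluster correlation (ICC) value is purely illustrative, not reported by the trial:

```python
# Design effect for a cluster randomised trial: variance inflation
# relative to individually randomised participants.
def design_effect(avg_cluster_size: float, icc: float) -> float:
    return 1 + (avg_cluster_size - 1) * icc

n_participants = 382
n_clusters = 25
m = n_participants / n_clusters      # average cluster size, ~15.3 drivers/site

deff = design_effect(m, icc=0.05)    # ICC = 0.05 is a hypothetical value
effective_n = n_participants / deff  # effective sample size after clustering

print(f"avg cluster size = {m:.1f}, DEFF = {deff:.2f}, effective n = {effective_n:.0f}")
# avg cluster size = 15.3, DEFF = 1.71, effective n = 223
```

Even a modest ICC shrinks the effective sample size substantially, which is one reason cluster trials of this design need more recruits than an individually randomised trial for the same power.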

    Bluetongue virus genetic and phenotypic diversity: Towards identifying the molecular determinants that influence virulence and transmission potential

    Bluetongue virus (BTV) is the prototype member of the genus Orbivirus in the family Reoviridae and is the aetiological agent of the arthropod-transmitted disease bluetongue (BT), which affects both ruminant and camelid species. The disease is of significant global importance because of its economic impact and effect on animal welfare. Bluetongue virus, a dsRNA virus, evolves through a process of quasispecies evolution that is driven by genetic drift and shift as well as intragenic recombination. Quasispecies evolution, coupled with founder effect and evolutionary selective pressures, has over time led to the establishment of genetically distinct strains of the virus in different epidemiological systems throughout the world. Bluetongue virus field strains may differ substantially from each other with regard to their phenotypic properties (i.e. virulence and/or transmission potential). The intrinsic molecular determinants that influence the phenotype of BTV have not yet been clearly characterized. It is currently unclear what contribution each of the viral genome segments makes in determining the phenotypic properties of the virus, and it is also unknown how genetic variability in the individual viral genes and their functional domains relates to differences in phenotype. To understand how genetic variation in particular viral genes could influence the phenotypic properties of the virus, a closer understanding of the BTV virion, its encoded proteins and the evolutionary mechanisms that shape the diversity of the virus is required. This review provides a synopsis of these issues and highlights some of the studies conducted on BTV and the closely related African horse sickness virus (AHSV) that have contributed to ongoing attempts to identify the molecular determinants that influence the virus' phenotype. Different strategies that can be used to generate BTV mutants in vitro, and methods through which the causality between particular genetic modifications and changes in phenotype may be determined, are also described. Finally, examples are highlighted where a clear understanding of the molecular determinants that influence the phenotype of the virus may have contributed to risk assessment and mitigation strategies during recent outbreaks of BT in Europe.

    Guidelines for the use and interpretation of assays for monitoring autophagy

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. 
    Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
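The distinction the guidelines draw between autophagosome numbers and autophagic flux can be illustrated with a toy steady-state model (an illustration of the reasoning above, not a model from the guidelines): if autophagosomes form at rate s and are cleared by lysosomal fusion at per-autophagosome rate c, the steady-state count is s/c while the flux through the pathway equals s. Two cells can therefore show identical autophagosome counts with very different flux:

```python
# Toy kinetic model: dN/dt = s - c * N, where N is the autophagosome count,
# s the formation (induction) rate, c the lysosomal clearance rate constant.
def steady_state(s: float, c: float) -> tuple[float, float]:
    """Return (autophagosome count, autophagic flux) at steady state."""
    count = s / c  # formation balances clearance at N = s/c
    flux = s       # at steady state, degradation rate equals formation rate
    return count, flux

# Cell A: doubled induction, intact clearance (genuinely increased autophagy)
count_a, flux_a = steady_state(s=2.0, c=1.0)

# Cell B: basal induction, clearance halved (e.g. a block in lysosomal fusion)
count_b, flux_b = steady_state(s=1.0, c=0.5)

print(count_a, count_b)  # identical steady-state counts: 2.0 and 2.0
print(flux_a, flux_b)    # very different flux: 2.0 vs 1.0
```

This is exactly why counting autophagosomes in a static image cannot distinguish increased induction from blocked degradation, and why flux assays (e.g. measurements with and without lysosomal inhibitors) are recommended.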

