Modifiable risk factors and overall cardiovascular mortality: Moderation of urbanization
Background: Modifiable risk factors are associated with cardiovascular mortality (CVM), a leading cause of death worldwide. However, the diverse nature of urbanization, and how it is measured, can modify this relationship. This study investigates the moderating role of urbanization in the relationship between the combined exposure (CE) to modifiable risk factors and CVM. Design and Methods: This is the first comprehensive study to consider different forms of urbanization in order to gauge its manifold impact. In addition to the existing original quantitative form and the traditional two categories of urbanization, a new form consisting of four levels of urbanization was introduced. The study used data from 129 countries, mainly retrieved from the WHO report Non-Communicable Diseases Country Profile 2014. Factor scores obtained through confirmatory factor analysis were used to compute the CE. An age- and income-adjusted regression model for CVM was tested as a baseline, with three bootstrap regression models developed for the three forms of urbanization. Results: The baseline relationship between CE and CVM was significantly moderated by the original quantitative form of urbanization. By contrast, the two traditional categories of urbanization could not capture this moderating impact. The four levels of urbanization, however, estimated the impact of urbanization objectively and indicated that the CE was most alarming in driving CVM in level 2 and level 3 urbanized countries, mainly low- and middle-income countries. Conclusion: Urbanization is a strong moderator that can be gauged effectively through four levels, whereas the sufficiency of the two traditional categories of urbanization is questionable.
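For readers unfamiliar with moderation analysis, the sketch below illustrates the kind of model the study describes: CVM regressed on CE, urbanization, and their interaction, with adjustment covariates and a simple nonparametric bootstrap. All variable names and data here are illustrative assumptions, not the study's actual dataset or code.

```python
# Minimal sketch of a moderated regression with bootstrap, assuming
# hypothetical variables: CE (combined exposure factor score), urban
# (% urban population), and age/income adjustments. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 129  # number of countries in the study
df = pd.DataFrame({
    "CE": rng.normal(size=n),
    "urban": rng.uniform(20, 95, n),
    "age": rng.uniform(25, 45, n),
    "income": rng.normal(size=n),
})
df["CVM"] = 300 + 40 * df["CE"] + 0.5 * df["urban"] \
            + 0.8 * df["CE"] * df["urban"] + rng.normal(0, 30, n)

# Moderation is carried by the CE:urban interaction term.
model = smf.ols("CVM ~ CE * urban + age + income", data=df).fit()

# Simple bootstrap of the interaction coefficient (1000 resamples).
boot = [
    smf.ols("CVM ~ CE * urban + age + income",
            data=df.sample(n, replace=True)).fit().params["CE:urban"]
    for _ in range(1000)
]
print(model.params["CE:urban"], np.percentile(boot, [2.5, 97.5]))
```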
Exploration of black boxes of supervised machine learning models: A demonstration on development of predictive heart risk score
Machine learning (ML) often provides applicable high-performance models to facilitate decision-makers in various fields. However, this high performance is achieved at the expense of interpretability, which has been criticized by practitioners and has become a significant hindrance to the application of these models. In highly sensitive decisions, therefore, black-box ML models are not recommended. We proposed a novel methodology that takes complex supervised ML models and transforms them into simple, interpretable, transparent statistical models. The methodology resembles stacked ensemble ML, in which the best ML models are used as base learners to compute relative feature weights. The index formed from these weights is then used as a single covariate in a simple logistic regression model to estimate the likelihood of an event. We tested this methodology on a primary dataset related to cardiovascular diseases (CVDs), the leading cause of death in recent times; early risk assessment can potentially reduce the burden of CVDs and their related mortality through accurate yet interpretable risk prediction models. We developed artificial neural network and support vector machine ML models and transformed them into a simple statistical model and heart risk scores. The simplified models were found to be transparent, reliable, valid, and interpretable, with predictions closely approximating those of the underlying ML models. The findings of this study suggest that complex supervised ML models can be efficiently transformed into simple statistical models that can also be validated.
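As a rough illustration of the two-stage idea, the hedged sketch below fits a neural network as the base learner, derives relative feature weights (here via permutation importance, one plausible weighting choice), collapses the features into a single index, and fits a one-covariate logistic regression on that index. The dataset and weighting scheme are assumptions for demonstration, not the paper's exact method.

```python
# Sketch of the two-stage transformation: complex model -> feature
# weights -> single index -> simple logistic model. Illustrative only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: complex "black box" model as the base learner.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

# Relative feature weights via permutation importance (one common choice;
# the paper's exact weighting scheme may differ).
imp = permutation_importance(mlp, X_te, y_te, n_repeats=10,
                             random_state=0).importances_mean
w = np.clip(imp, 0, None)
w = w / w.sum()  # normalize to relative weights

# Stage 2: collapse features into a single weighted index and fit an
# interpretable one-covariate logistic regression on it.
index_tr, index_te = X_tr @ w, X_te @ w
lr = LogisticRegression().fit(index_tr.reshape(-1, 1), y_tr)
print("held-out accuracy:", lr.score(index_te.reshape(-1, 1), y_te))
```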
Reproducibility and reuse of adaptive immune receptor repertoire data
High-throughput sequencing (HTS) of immunoglobulin (B-cell receptor, antibody) and T-cell receptor repertoires has increased dramatically since the technique was introduced in 2009 (1-3). This experimental approach explores the maturation of the adaptive immune system and its response to antigens, pathogens, and disease conditions in exquisite detail. It holds significant promise for diagnostic and therapy-guiding applications. New technology often spreads rapidly, sometimes more rapidly than the understanding of how to make the products of that technology reliable, reproducible, or usable by others. As complex technologies have developed, scientific communities have come together to adopt common standards, protocols, and policies for generating and sharing data sets, such as the MIAME protocols developed for microarray experiments. The Adaptive Immune Receptor Repertoire (AIRR) Community formed in 2015 to address similar issues for HTS data of immune repertoires. The purpose of this perspective is to provide an overview of the AIRR Community's founding principles and present the progress that the AIRR Community has made in developing standards of practice and data sharing protocols. Finally, and most important, we invite all interested parties to join this effort to facilitate sharing and use of these powerful data sets ([email protected]).
AIRR Community Standardized Representations for Annotated Immune Repertoires
Increased interest in the immune system's involvement in pathophysiological phenomena coupled with decreased DNA sequencing costs have led to an explosion of antibody and T cell receptor sequencing data collectively termed “adaptive immune receptor repertoire sequencing” (AIRR-seq or Rep-Seq). The AIRR Community has been actively working to standardize protocols, metadata, formats, APIs, and other guidelines to promote open and reproducible studies of the immune repertoire. In this paper, we describe the work of the AIRR Community's Data Representation Working Group to develop standardized data representations for storing and sharing annotated antibody and T cell receptor data. Our file format emphasizes ease-of-use, accessibility, scalability to large data sets, and a commitment to open and transparent science. It is composed of a tab-delimited format with a specific schema. Several popular repertoire analysis tools and data repositories already utilize this AIRR-seq data format. We hope that others will follow suit in the interest of promoting interoperable standards.
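As a small illustration of the tab-delimited representation, the sketch below builds and reads a toy repertoire table. The column names follow the public AIRR Rearrangement schema (sequence_id, v_call, j_call, junction_aa, productive), but the records themselves are fabricated for demonstration.

```python
# Toy example of the AIRR tab-delimited format: a fixed schema over
# plain TSV, so any standard table reader can load it. Records are
# fabricated; only the column names follow the AIRR schema.
import io
import pandas as pd

airr_tsv = io.StringIO(
    "sequence_id\tv_call\tj_call\tjunction_aa\tproductive\n"
    "seq1\tIGHV1-69*01\tIGHJ4*02\tCARDLGYW\tT\n"
    "seq2\tTRBV9*01\tTRBJ2-7*01\tCASSGQETQYF\tT\n"
)

rep = pd.read_csv(airr_tsv, sep="\t")
# Filter to productive rearrangements and list their CDR3 sequences.
print(rep[rep["productive"] == "T"]["junction_aa"].tolist())
```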
Basic science 232. Certolizumab pegol prevents pro-inflammatory alterations in endothelial cell function
Background: Cardiovascular disease is a major comorbidity of rheumatoid arthritis (RA) and a leading cause of death. Chronic systemic inflammation involving tumour necrosis factor alpha (TNF) could contribute to endothelial activation and atherogenesis. A number of anti-TNF therapies are in current use for the treatment of RA, including certolizumab pegol (CZP) (Cimzia®; UCB, Belgium). Anti-TNF therapy has been associated with reduced clinical cardiovascular disease risk and ameliorated vascular function in RA patients. However, the specific effects of TNF inhibitors on endothelial cell function are largely unknown. Our aim was to investigate the mechanisms underpinning CZP effects on TNF-activated human endothelial cells. Methods: Human aortic endothelial cells (HAoECs) were cultured in vitro and exposed to a) TNF alone, b) TNF plus CZP, or c) neither agent. Microarray analysis was used to examine the transcriptional profile of cells treated for 6 hrs, and quantitative polymerase chain reaction (qPCR) analysed gene expression at 1, 3, 6 and 24 hrs. NF-κB localization and IκB degradation were investigated using immunocytochemistry, high content analysis and western blotting. Flow cytometry was conducted to detect microparticle release from HAoECs. Results: Transcriptional profiling revealed that while TNF alone had strong effects on endothelial gene expression, TNF and CZP in combination produced a global gene expression pattern similar to untreated control. The two most highly up-regulated genes in response to TNF treatment were the adhesion molecules E-selectin and VCAM-1 (q < 0.2 compared to control; p > 0.05 compared to TNF alone). The NF-κB pathway was confirmed as a downstream target of TNF-induced HAoEC activation, via nuclear translocation of NF-κB and degradation of IκB, effects which were abolished by treatment with CZP. In addition, flow cytometry detected an increased production of endothelial microparticles in TNF-activated HAoECs, which was prevented by treatment with CZP. Conclusions: We have found at a cellular level that a clinically available TNF inhibitor, CZP, reduces adhesion molecule expression and prevents TNF-induced activation of the NF-κB pathway. Furthermore, CZP prevents the production of microparticles by activated endothelial cells. This could be central to the prevention of the inflammatory environments underlying these conditions, and measurement of microparticles has potential as a novel prognostic marker for future cardiovascular events in this patient group. Disclosure statement: Y.A. received a research grant from UCB. I.B. received a research grant from UCB. S.H. received a research grant from UCB. All other authors have declared no conflicts of interest.
Azithromycin in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial
Summary
Background
Azithromycin has been proposed as a treatment for COVID-19 on the basis of its immunomodulatory actions. We aimed to evaluate the safety and efficacy of azithromycin in patients admitted to hospital with COVID-19.
Methods
In this randomised, controlled, open-label, adaptive platform trial (Randomised Evaluation of COVID-19 Therapy [RECOVERY]), several possible treatments were compared with usual care in patients admitted to hospital with COVID-19 in the UK. The trial is underway at 176 hospitals in the UK. Eligible and consenting patients were randomly allocated to either usual standard of care alone or usual standard of care plus azithromycin 500 mg once per day by mouth or intravenously for 10 days or until discharge (or allocation to one of the other RECOVERY treatment groups). Patients were assigned via web-based simple (unstratified) randomisation with allocation concealment and were twice as likely to be randomly assigned to usual care than to any of the active treatment groups. Participants and local study staff were not masked to the allocated treatment, but all others involved in the trial were masked to the outcome data during the trial. The primary outcome was 28-day all-cause mortality, assessed in the intention-to-treat population. The trial is registered with ISRCTN, 50189673, and ClinicalTrials.gov, NCT04381936.
Findings
Between April 7 and Nov 27, 2020, of 16 442 patients enrolled in the RECOVERY trial, 9433 (57%) were eligible and 7763 were included in the assessment of azithromycin. The mean age of these study participants was 65·3 years (SD 15·7) and approximately a third were women (2944 [38%] of 7763). 2582 patients were randomly allocated to receive azithromycin and 5181 patients were randomly allocated to usual care alone. Overall, 561 (22%) patients allocated to azithromycin and 1162 (22%) patients allocated to usual care died within 28 days (rate ratio 0·97, 95% CI 0·87–1·07; p=0·50). No significant difference was seen in duration of hospital stay (median 10 days [IQR 5 to >28] vs 11 days [5 to >28]) or the proportion of patients discharged from hospital alive within 28 days (rate ratio 1·04, 95% CI 0·98–1·10; p=0·19). Among those not on invasive mechanical ventilation at baseline, no significant difference was seen in the proportion meeting the composite endpoint of invasive mechanical ventilation or death (risk ratio 0·95, 95% CI 0·87–1·03; p=0·24).
Interpretation
In patients admitted to hospital with COVID-19, azithromycin did not improve survival or other prespecified clinical outcomes. Azithromycin use in patients admitted to hospital with COVID-19 should be restricted to patients in whom there is a clear antimicrobial indication.
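As a quick arithmetic check on the headline result, the snippet below computes the crude 28-day mortality risk ratio from the reported counts. Note that the published rate ratio comes from the trial's adjusted analysis, so this crude calculation only approximates it.

```python
# Crude risk-ratio check from the reported counts; the paper's rate
# ratio is model-based, so this is only an approximation.
import math

deaths_azm, n_azm = 561, 2582   # azithromycin group
deaths_uc, n_uc = 1162, 5181    # usual-care group

rr = (deaths_azm / n_azm) / (deaths_uc / n_uc)
# Standard log-scale Wald interval for a risk ratio.
se = math.sqrt(1/deaths_azm - 1/n_azm + 1/deaths_uc - 1/n_uc)
lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~0.97 (0.89-1.06)
```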
Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries
Abstract
Background
Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres.
Methods
This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries.
Results
In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable for more than 90 per cent of patients except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. In phase 4, the top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia.
Conclusion
This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high-income and low–middle-income countries.
Semantic Web in the Age of Big Data: A Perspective
We are awash with “Big Data” to this very day because of the technological advancements made during the past decade. The notion of Big Data refers to datasets that are too large to be processed by conventional databases and management techniques (volume), so diverse that no single data model can capture all elements of the data (variety), and produced or gathered at an unprecedented rate (velocity). Because of this sheer volume, variety, and velocity, enterprises face data heterogeneity, diversity, and complexity challenges. However, the big data era also brings big opportunities: resolving these challenges could transform our traditional way of decision-making. Enterprises with the technical expertise to manage big data are now replacing guesswork and laborious legacy data-modeling-based decision processes with facts derived from big data.