135 research outputs found

    Genome based cell population heterogeneity promotes tumorigenicity: the evolutionary mechanism of cancer.

    Cancer progression represents an evolutionary process in which overall genome-level changes reflect system instability and serve as a driving force for evolving new systems. To illustrate this principle, it must be demonstrated that karyotypic heterogeneity (population diversity) directly contributes to tumorigenicity. Five well-characterized in vitro tumor progression models representing various types of cancer were selected for such an analysis. The tumorigenicity of each model has been linked to different molecular pathways, and there is no common molecular mechanism shared among them. According to our hypothesis that genome-level heterogeneity is a key to cancer evolution, we expected the common link of tumorigenicity between these diverse models to be elevated genome diversity. Spectral karyotyping (SKY) was used to compare the degree of karyotypic heterogeneity displayed in various sublines of these five models. Cell population diversity was determined by scoring the types and frequencies of clonal and non-clonal chromosome aberrations (CCAs and NCCAs). The tumorigenicity of these models was analyzed separately. As expected, the highest level of NCCAs was detected coupled with the strongest tumorigenicity among all models analyzed. The karyotypic heterogeneity of both benign hyperplastic lesions and premalignant dysplastic tissues was further analyzed to support this conclusion. This common link between elevated NCCAs and increased tumorigenicity suggests an evolutionary causative relationship between system instability, population diversity, and cancer evolution. This study reconciles the difference between evolutionary and molecular mechanisms of cancer and suggests that NCCAs can serve as a biomarker to monitor the probability of cancer progression.
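The CCA/NCCA scoring that the abstract describes can be sketched as follows; the clonality threshold of two cells and the example karyotypes are illustrative assumptions, not values from the study.

```python
from collections import Counter

def ncca_frequency(karyotypes, clonal_threshold=2):
    """Score chromosome aberrations as clonal (CCA) or non-clonal (NCCA).

    An aberration seen in at least `clonal_threshold` cells is treated as
    clonal; the threshold is an illustrative convention, not taken from the
    paper. Returns the fraction of cells carrying at least one non-clonal
    aberration.
    """
    counts = Counter(ab for cell in karyotypes for ab in set(cell))
    ncca_cells = sum(
        1 for cell in karyotypes
        if any(counts[ab] < clonal_threshold for ab in cell)
    )
    return ncca_cells / len(karyotypes)

# Hypothetical SKY calls for six cells: t(1;3) recurs (clonal), while
# del(5q), +8 and inv(9) are each seen once (non-clonal).
cells = [["t(1;3)"], ["t(1;3)", "del(5q)"], ["t(1;3)"], ["+8"], [], ["inv(9)"]]
print(ncca_frequency(cells))  # 3 of 6 cells carry a non-clonal aberration -> 0.5
```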

    World Health Organization cardiovascular disease risk charts: revised models to estimate risk in 21 global regions

    BACKGROUND: To help adapt cardiovascular disease risk prediction approaches to low-income and middle-income countries, WHO has convened an effort to develop, evaluate, and illustrate revised risk models. Here, we report the derivation, validation, and illustration of the revised WHO cardiovascular disease risk prediction charts that have been adapted to the circumstances of 21 global regions. METHODS: In this model revision initiative, we derived 10-year risk prediction models for fatal and non-fatal cardiovascular disease (ie, myocardial infarction and stroke) using individual participant data from the Emerging Risk Factors Collaboration. Models included information on age, smoking status, systolic blood pressure, history of diabetes, and total cholesterol. For derivation, we included participants aged 40-80 years without a known baseline history of cardiovascular disease, who were followed up until the first myocardial infarction, fatal coronary heart disease, or stroke event. We recalibrated models using age-specific and sex-specific incidences and risk factor values available from 21 global regions. For external validation, we analysed individual participant data from studies distinct from those used in model derivation. We illustrated models by analysing data on a further 123 743 individuals from surveys in 79 countries collected with the WHO STEPwise Approach to Surveillance. FINDINGS: Our risk model derivation involved 376 177 individuals from 85 cohorts, and 19 333 incident cardiovascular events recorded during 10 years of follow-up. The derived risk prediction models discriminated well in external validation cohorts (19 cohorts, 1 096 061 individuals, 25 950 cardiovascular disease events), with Harrell's C indices ranging from 0·685 (95% CI 0·629-0·741) to 0·833 (0·783-0·882). For a given risk factor profile, we found substantial variation across global regions in the estimated 10-year predicted risk. For example, estimated cardiovascular disease risk for a 60-year-old male smoker without diabetes and with systolic blood pressure of 140 mm Hg and total cholesterol of 5 mmol/L ranged from 11% in Andean Latin America to 30% in central Asia. When applied to data from 79 countries (mostly low-income and middle-income countries), the proportion of individuals aged 40-64 years estimated to be at greater than 20% risk ranged from less than 1% in Uganda to more than 16% in Egypt. INTERPRETATION: We have derived, calibrated, and validated new WHO risk prediction models to estimate cardiovascular disease risk in 21 Global Burden of Disease regions. The widespread use of these models could enhance the accuracy, practicability, and sustainability of efforts to reduce the burden of cardiovascular disease worldwide. FUNDING: World Health Organization, British Heart Foundation (BHF), BHF Cambridge Centre for Research Excellence, UK Medical Research Council, and National Institute for Health Research.
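A minimal sketch of the kind of Cox-type risk equation and regional recalibration the abstract describes, assuming a `1 - S0 ** exp(lp - lp_mean)` form; the coefficients, population means, and baseline survival below are hypothetical placeholders, not the published WHO model.

```python
import math

def ten_year_risk(x, beta, x_mean, s0):
    """10-year risk from a Cox-type model: 1 - S0 ** exp(lp - lp_mean).

    `beta`, `x_mean`, and the 10-year baseline survival `s0` are hypothetical
    placeholders; recalibrating to a region amounts to swapping in that
    region's incidence-derived s0 and risk factor means.
    """
    lp = sum(b * v for b, v in zip(beta, x))
    lp_mean = sum(b * v for b, v in zip(beta, x_mean))
    return 1.0 - s0 ** math.exp(lp - lp_mean)

# Illustrative profile: age 60, smoker (1), SBP 140 mm Hg, no diabetes (0),
# total cholesterol 5 mmol/L; the coefficients are made up for the sketch.
beta = [0.07, 0.6, 0.015, 0.7, 0.2]
x = [60, 1, 140, 0, 5.0]
x_mean = [55, 0.3, 130, 0.1, 5.2]
print(f"{ten_year_risk(x, beta, x_mean, s0=0.95):.1%}")
```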

    Sensitivity of the Advanced LIGO detectors at the beginning of gravitational wave astronomy

    The Laser Interferometer Gravitational Wave Observatory (LIGO) consists of two widely separated 4 km laser interferometers designed to detect gravitational waves from distant astrophysical sources in the frequency range from 10 Hz to 10 kHz. The first observation run of the Advanced LIGO detectors started in September 2015 and ended in January 2016. A strain sensitivity of better than 10⁻²³/√Hz was achieved around 100 Hz. Understanding both the fundamental and the technical noise sources was critical for increasing the astrophysical strain sensitivity. The average distance at which coalescing binary black hole systems with individual masses of 30 M⊙ could be detected above a signal-to-noise ratio (SNR) of 8 was 1.3 Gpc, and the range for binary neutron star inspirals was about 75 Mpc. With respect to the initial detectors, the observable volume of the Universe increased by a factor of 69 and 43, respectively. These improvements helped Advanced LIGO to detect the gravitational wave signal from the binary black hole coalescence known as GW150914.
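The quoted volume factors and ranges are consistent with observable volume scaling as the cube of the detection range (for sources distributed uniformly in volume), which can be checked directly:

```python
def implied_initial_range(advanced_range, volume_factor):
    """Observable volume scales as range cubed for a uniform source
    population, so a volume increase by `volume_factor` implies the
    initial-detector range was advanced_range / volume_factor ** (1/3)."""
    return advanced_range / volume_factor ** (1 / 3)

# Figures from the abstract: BBH range 1.3 Gpc (volume x69),
# BNS range 75 Mpc (volume x43).
print(f"Implied initial BBH range ~{implied_initial_range(1300, 69):.0f} Mpc")
print(f"Implied initial BNS range ~{implied_initial_range(75, 43):.1f} Mpc")
```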

    Prognostic model to predict postoperative acute kidney injury in patients undergoing major gastrointestinal surgery based on a national prospective observational cohort study.

    Background: Acute illness, existing co-morbidities and the surgical stress response can all contribute to postoperative acute kidney injury (AKI) in patients undergoing major gastrointestinal surgery. The aim of this study was to prospectively develop a pragmatic prognostic model to stratify patients according to risk of developing AKI after major gastrointestinal surgery. Methods: This prospective multicentre cohort study included consecutive adults undergoing elective or emergency gastrointestinal resection, liver resection or stoma reversal in 2-week blocks over a continuous 3-month period. The primary outcome was the rate of AKI within 7 days of surgery. Bootstrap stability was used to select clinically plausible risk factors into the model. Internal model validation was carried out by bootstrap validation. Results: A total of 4544 patients were included across 173 centres in the UK and Ireland. The overall rate of AKI was 14·2 per cent (646 of 4544) and the 30-day mortality rate was 1·8 per cent (84 of 4544). Stage 1 AKI was significantly associated with 30-day mortality (unadjusted odds ratio 7·61, 95 per cent c.i. 4·49 to 12·90; P < 0·001), with increasing odds of death with each AKI stage. Six variables were selected for inclusion in the prognostic model: age, sex, ASA grade, preoperative estimated glomerular filtration rate, planned open surgery and preoperative use of either an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. Internal validation demonstrated good model discrimination (c-statistic 0·65). Discussion: Following major gastrointestinal surgery, AKI occurred in one in seven patients. This preoperative prognostic model identified patients at high risk of postoperative AKI. Validation in an independent data set is required to ensure generalizability.
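Model discrimination here is summarized by a c-statistic; for a binary outcome such as 7-day AKI it reduces to the ROC AUC, which a short sketch can compute (the toy scores below are illustrative, not study data).

```python
def c_statistic(scores, outcomes):
    """Concordance (c-statistic) for a binary outcome: the probability that
    a randomly chosen case scores higher than a randomly chosen non-case,
    counting ties as half. Equivalent to the ROC AUC."""
    cases = [s for s, y in zip(scores, outcomes) if y]
    controls = [s for s, y in zip(scores, outcomes) if not y]
    pairs = len(cases) * len(controls)
    wins = sum((c > n) + 0.5 * (c == n) for c in cases for n in controls)
    return wins / pairs

# Toy predicted risks; in the study, internal validation repeated this kind
# of calculation on bootstrap resamples of the derivation data.
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.1]
outcomes = [1, 0, 1, 0, 0, 0]
print(c_statistic(scores, outcomes))  # 7 of 8 case-control pairs concordant -> 0.875
```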

    Global economic burden of unmet surgical need for appendicitis

    Background: There is a substantial gap in provision of adequate surgical care in many low- and middle-income countries. This study aimed to identify the economic burden of unmet surgical need for the common condition of appendicitis. Methods: Data on the incidence of appendicitis from 170 countries and two different approaches were used to estimate numbers of patients who do not receive surgery: as a fixed proportion of the total unmet surgical need per country (approach 1); and based on country income status (approach 2). Indirect costs with current levels of access and local quality, and those if quality were at the standards of high-income countries, were estimated. A human capital approach was applied, focusing on the economic burden resulting from premature death and absenteeism. Results: Excess mortality was 4185 per 100 000 cases of appendicitis using approach 1 and 3448 per 100 000 using approach 2. The economic burden of continuing current levels of access and local quality was US $92 492 million using approach 1 and $73 141 million using approach 2. The economic burden of not providing surgical care to the standards of high-income countries was $95 004 million using approach 1 and $75 666 million using approach 2. The largest share of these costs resulted from premature death (97.7 per cent) and lack of access (97.0 per cent), in contrast to lack of quality. Conclusion: For a comparatively non-complex emergency condition such as appendicitis, increasing access to care should be prioritized. Although improving quality of care should not be neglected, increasing provision of care at current standards could reduce societal costs substantially.
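A human capital calculation of the kind the abstract applies values lost output from premature death; a minimal sketch, with all inputs hypothetical rather than taken from the study:

```python
def premature_death_burden(excess_deaths, years_lost_per_death, annual_output):
    """Human capital approach: value the output lost to premature deaths.

    A fuller version would discount future years and add absenteeism costs;
    every input here is an illustrative placeholder, not a study figure.
    """
    return excess_deaths * years_lost_per_death * annual_output

# Hypothetical inputs: 10 000 excess deaths, 30 productive years lost each,
# US$5 000 annual output per person.
print(premature_death_burden(10_000, 30, 5_000))  # 1500000000, i.e. US$1.5 billion
```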

    Effects of Data Quality Vetoes on a Search for Compact Binary Coalescences in Advanced LIGO's First Observing Run

    The first observing run of Advanced LIGO spanned 4 months, from September 12, 2015 to January 19, 2016, during which gravitational waves were directly detected from two binary black hole systems, namely GW150914 and GW151226. Confident detection of gravitational waves requires an understanding of instrumental transients and artifacts that can reduce the sensitivity of a search. Studies of the quality of the detector data yield insights into the causes of instrumental artifacts, and data quality vetoes specific to a search are produced to mitigate the effects of problematic data. In this paper, the systematic removal of noisy data from analysis time is shown to improve the sensitivity of searches for compact binary coalescences. The output of the PyCBC pipeline, a Python-based code package used to search for gravitational wave signals from compact binary coalescences, is used as a metric for improvement. GW150914 was a loud enough signal that removing noisy data did not improve its significance. However, the removal of data with excess noise decreased the false alarm rate of GW151226 by more than two orders of magnitude, from 1 in 770 years to less than 1 in 186 000 years.
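The quoted false-alarm-rate change can be expressed as an improvement factor, confirming the "more than two orders of magnitude" claim:

```python
def far_improvement(far_before, far_after):
    """Ratio of false alarm rates (events per year) before and after data
    quality vetoes; a value above 100 is more than two orders of magnitude."""
    return far_before / far_after

# From the abstract: GW151226's false alarm rate went from 1 in 770 years
# to less than 1 in 186 000 years after removing noisy data.
factor = far_improvement(1 / 770, 1 / 186_000)
print(f"x{factor:.0f}")  # x242: more than two orders of magnitude
```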

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) than in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
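Odds ratios from logistic regression act on odds, not probabilities; a small sketch converts an adjusted OR into an implied absolute risk (the 10 per cent baseline mortality below is a hypothetical figure, not from the study):

```python
def risk_under_odds_ratio(baseline_risk, odds_ratio):
    """Convert a baseline event probability and an odds ratio into the
    probability implied for the comparison group (odds are multiplicative,
    probabilities are not)."""
    odds = baseline_risk / (1 - baseline_risk) * odds_ratio
    return odds / (1 + odds)

# Illustrative: if 30-day mortality without checklist use were 10%,
# an adjusted OR of 0.60 would correspond to roughly 6.3% with it.
print(f"{risk_under_odds_ratio(0.10, 0.60):.4f}")
```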

    AI is a viable alternative to high throughput screening: a 318-target study

    High throughput screening (HTS) is routinely used to identify bioactive small molecules. This requires physical compounds, which limits coverage of accessible chemical space. Computational approaches combined with vast on-demand chemical libraries can access far greater chemical space, provided that the predictive accuracy is sufficient to identify useful molecules. Through the largest and most diverse virtual HTS campaign reported to date, comprising 318 individual projects, we demonstrate that our AtomNet® convolutional neural network successfully finds novel hits across every major therapeutic area and protein class. We address historical limitations of computational screening by demonstrating success for target proteins without known binders, high-quality X-ray crystal structures, or manual cherry-picking of compounds. We show that the molecules selected by the AtomNet® model are novel drug-like scaffolds rather than minor modifications to known bioactive compounds. Our empirical results suggest that computational methods can substantially replace HTS as the first step of small-molecule drug discovery.
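Virtual screening campaigns like this are commonly judged by their hit rate and its enrichment over random screening; a minimal sketch with illustrative numbers (not results from the AtomNet study):

```python
def hit_rate(hits, tested):
    """Fraction of experimentally tested compounds confirmed active."""
    return hits / tested

def enrichment(model_hit_rate, random_hit_rate):
    """Fold improvement of a model-ranked selection over random screening;
    both rates below are hypothetical, chosen only to show the calculation."""
    return model_hit_rate / random_hit_rate

# Hypothetical: 9 confirmed actives among 100 model-selected compounds,
# versus a 0.05% background hit rate in an unranked library.
print(enrichment(hit_rate(9, 100), hit_rate(5, 10_000)))
```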
