270 research outputs found

    Nintedanib targets KIT D816V neoplastic cells derived from induced pluripotent stem cells of systemic mastocytosis

    The KIT D816V mutation is found in >80% of patients with systemic mastocytosis (SM) and is key to neoplastic mast cell (MC) expansion and accumulation in affected organs. Therefore, KIT D816V represents a prime therapeutic target for SM. Here, we generated a panel of patient-specific KIT D816V induced pluripotent stem cells (iPSCs) from patients with aggressive SM and mast cell leukemia to develop a patient-specific SM disease model for mechanistic and drug-discovery studies. KIT D816V iPSCs differentiated into neoplastic hematopoietic progenitor cells and MCs with patient-specific phenotypic features, thereby reflecting the heterogeneity of the disease. CRISPR/Cas9n-engineered KIT D816V human embryonic stem cells (ESCs), when differentiated into hematopoietic cells, recapitulated the phenotype observed for KIT D816V iPSC hematopoiesis. KIT D816V causes constitutive activation of the KIT tyrosine kinase receptor, and we exploited our iPSCs and ESCs to investigate new tyrosine kinase inhibitors targeting KIT D816V. Our study identified nintedanib, a US Food and Drug Administration-approved angiokinase inhibitor that targets vascular endothelial growth factor receptor, platelet-derived growth factor receptor, and fibroblast growth factor receptor, as a novel KIT D816V inhibitor. Nintedanib selectively reduced the viability of iPSC-derived KIT D816V hematopoietic progenitor cells and MCs in the nanomolar range. Nintedanib was also active against primary samples from patients with KIT D816V SM. Molecular docking studies show that nintedanib binds to the adenosine triphosphate binding pocket of inactive KIT D816V. Our results suggest nintedanib as a new drug candidate for KIT D816V-targeted therapy of advanced SM. (Peer reviewed)

    Linking microarray reporters with protein functions

    Background: The analysis of microarray experiments requires accurate and up-to-date functional annotation of the microarray reporters to optimize the interpretation of the biological processes involved. Pathway visualization tools connect gene expression data with existing biological pathways by using specific database identifiers that link reporters with elements in the pathways. Results: This paper proposes a novel method that aims to improve microarray reporter annotation by BLASTing the original reporter sequences against a species-specific EMBL subset that was derived from, and crosslinked back to, the highly curated UniProt database. The resulting alignments were filtered using high-quality alignment criteria and compared with the outcome of a more traditional approach, in which reporter sequences were BLASTed against EnsEMBL and the corresponding protein (UniProt) entry was then located for the high-quality hits. Combining the results of both methods yielded successful annotation of >58% of all reporter sequences with UniProt IDs on two commercial array platforms, increasing the fraction of Incyte reporters that could be coupled to Gene Ontology terms from 32.7% to 58.3% and to a local GenMAPP pathway from 9.6% to 16.7%. For Agilent, 35.3% of all reporters are now linked to GO nodes and 7.1% to local pathways. Conclusion: Our methods increased the annotation quality of microarray reporter sequences and allowed us to visualize more reporters using pathway visualization tools. Even in cases where the original reporter annotation carried the correct description, the new identifiers often allowed improved pathway and Gene Ontology linking. These methods are freely available at http://www.bigcat.unimaas.nl/public/publications/Gaj_Annotation/.
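The two-step idea described above, filtering BLAST alignments by quality criteria and then merging the reporter-to-UniProt mappings from the two routes, can be sketched as follows. The thresholds, reporter IDs and accessions are illustrative only, not the criteria or data used in the paper:

```python
from dataclasses import dataclass

@dataclass
class Hit:
    reporter: str      # microarray reporter ID (hypothetical)
    uniprot: str       # candidate UniProt accession (hypothetical)
    identity: float    # percent identity of the alignment
    aln_len: int       # alignment length in nucleotides

def filter_hits(hits, min_identity=98.0, min_len=50):
    """Keep only high-quality alignments (thresholds are illustrative)."""
    return [h for h in hits if h.identity >= min_identity and h.aln_len >= min_len]

def merge_annotations(method_a, method_b):
    """Union of reporter -> UniProt mappings from the two BLAST strategies."""
    merged = {}
    for h in method_a + method_b:
        merged.setdefault(h.reporter, set()).add(h.uniprot)
    return merged
```

Merging the two mappings as a union is what lets the combined approach annotate more reporters than either route alone.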

    Corporate philanthropy, political influence, and health policy

    Background: The Framework Convention on Tobacco Control (FCTC) provides a basis for nation states to limit the political effects of tobacco industry philanthropy, yet progress in this area is limited. This paper aims to integrate the findings of previous studies on tobacco industry philanthropy with a new analysis of British American Tobacco's (BAT) record of charitable giving, in order to develop a general model of corporate political philanthropy that can be used to facilitate implementation of the FCTC. Methods: Analysis of previously confidential industry documents, BAT social and stakeholder dialogue reports, and existing tobacco industry document studies on philanthropy. Results: The analysis identified six broad ways in which tobacco companies have used philanthropy politically: developing constituencies to build support for policy positions and generate third-party advocacy; weakening opposing political constituencies; facilitating access and building relationships with policymakers; creating direct leverage with policymakers by providing financial subsidies to specific projects; enhancing the donor's status as a source of credible information; and shaping the tobacco control agenda by shifting thinking on the importance of regulating the market environment for tobacco and on the relative risks of smoking for population health. Contemporary BAT social and stakeholder reports contain numerous examples of charitable donations that are likely designed to shape the tobacco control agenda, secure access, and build constituencies. Conclusions and Recommendations: Tobacco companies' political use of charitable donations underlines the need for tobacco industry philanthropy to be restricted via full implementation of Articles 5.3 and 13 of the FCTC. The model of tobacco industry philanthropy developed in this study can be used by public health advocates to press for implementation of the FCTC, and it provides a basis for analysing the political effects of charitable giving in other industry sectors that have an impact on public health, such as alcohol and food.

    Distribution of immunodeficiency fact files with XML – from Web to WAP

    BACKGROUND: Although biomedical information is growing rapidly, it is difficult to find and retrieve validated data, especially for rare hereditary diseases. There is an increasing need for services capable of integrating and validating information as well as providing it in a logically organized structure. An XML-based language enables the creation of open source databases for storage, maintenance and delivery across different platforms. METHODS: Here we present a new data model called the fact file and an XML-based specification, Inherited Disease Markup Language (IDML), that were developed to facilitate disease information integration, storage and exchange. The data model was applied to primary immunodeficiencies, but it can be used for any hereditary disease. Fact files integrate biomedical, genetic and clinical information related to hereditary diseases. RESULTS: IDML and fact files were used to build a comprehensive Web- and WAP-accessible knowledge base, the ImmunoDeficiency Resource (IDR), available at . A fact file is a user-oriented interface that serves as a starting point for exploring information on hereditary diseases. CONCLUSION: IDML enables the seamless integration and presentation of genetic and disease information resources on the Internet. IDML can be used to build information services for all kinds of inherited diseases. The open source specification and related programs are available at
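As a rough illustration of the fact-file idea, a small IDML-like fragment can be stored and queried with standard XML tooling. The element and attribute names below are hypothetical, chosen for the sketch rather than taken from the actual IDML specification:

```python
import xml.etree.ElementTree as ET

# Hypothetical fact-file fragment for a primary immunodeficiency;
# the real IDML element names and structure may differ.
idml = """\
<factfile disease="Wiskott-Aldrich syndrome">
  <gene symbol="WAS"/>
  <clinical inheritance="X-linked recessive">
    <feature>thrombocytopenia</feature>
    <feature>eczema</feature>
  </clinical>
</factfile>"""

root = ET.fromstring(idml)
gene = root.find("gene").get("symbol")          # gene symbol from the fact file
features = [f.text for f in root.iter("feature")]  # clinical features
```

Because the fact file is plain XML, the same source document can be transformed for different delivery targets (Web pages, WAP decks) without duplicating the curated content.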

    Jet energy measurement with the ATLAS detector in proton-proton collisions at √s = 7 TeV

    The jet energy scale and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of √s = 7 TeV, corresponding to an integrated luminosity of 38 pb⁻¹. Jets are reconstructed with the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pT ≥ 20 GeV and pseudorapidities |η| < 4.5. The jet energy systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams, by exploiting the transverse momentum balance between central and forward jets in events with dijet topologies, and by studying systematic variations in Monte Carlo simulations. The jet energy uncertainty is less than 2.5% in the central calorimeter region (|η| < 0.8) for jets with 60 ≤ pT < 800 GeV, and is at most 14% for pT < 30 GeV in the most forward region, 3.2 ≤ |η| < 4.5. The jet energy is validated for jet transverse momenta up to 1 TeV at the level of a few percent using several in situ techniques, comparing against a well-known reference such as the recoiling photon pT, the sum of the transverse momenta of tracks associated to the jet, or a system of low-pT jets recoiling against a high-pT jet. More sophisticated jet calibration schemes are presented based on calorimeter cell energy density weighting or hadronic properties of jets, aiming at improved jet energy resolution and reduced flavour dependence of the jet response. The systematic uncertainty of the jet energy determined from a combination of in situ techniques is consistent with the one derived from single hadron response measurements over a wide kinematic range. The nominal corrections and uncertainties are derived for isolated jets in an inclusive sample of high-pT jets. Special cases, such as event topologies with close-by jets or selections of samples with an enhanced content of jets originating from light quarks, heavy quarks or gluons, are also discussed and the corresponding uncertainties determined. © 2013 CERN for the benefit of the ATLAS collaboration
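Combining independent uncertainty components, such as those from the several in situ techniques mentioned above, into a total systematic uncertainty is conventionally done by summation in quadrature. A minimal sketch; the component names and values are invented for illustration and are not ATLAS numbers:

```python
import math

def combine_in_quadrature(components):
    """Total uncertainty from independent components, added in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Illustrative fractional uncertainty components (not ATLAS values):
components = {
    "single-hadron response": 0.015,
    "dijet eta-intercalibration": 0.010,
    "MC modelling variations": 0.012,
}
total = combine_in_quadrature(components.values())  # about 0.022 (2.2%)
```

Quadrature addition is appropriate only when the components are uncorrelated; correlated sources have to be combined with their covariance taken into account.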

    Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at √s = 7 TeV with the ATLAS detector

    The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb⁻¹. The b-jets are identified using either a lifetime-based method, in which secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method, in which the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bb̄-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets, and the angular variable χ in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bb̄-dijet cross-section; however, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta. (10 pages plus author list, 21 pages total; 8 figures, 1 table; final version published in the European Physical Journal)

    A Novel Side-Chain Orientation Dependent Potential Derived from Random-Walk Reference State for Protein Fold Selection and Structure Prediction

    An accurate potential function is essential for attacking protein folding and structure prediction problems. The key to developing efficient knowledge-based potential functions is to design reference states that appropriately counteract generic interactions. The reference states of many knowledge-based distance-dependent atomic potential functions were derived from non-interacting particles such as an ideal gas, which, however, ignores the inherent sequence connectivity and entropic elasticity of proteins. We developed a new pair-wise distance-dependent atomic statistical potential function (RW), using an ideal random-walk chain as the reference state, which was optimized on CASP models and then benchmarked on nine structural decoy sets. Second, we incorporated a new side-chain orientation-dependent energy term into RW (RWplus) and found that side-chain packing orientation specificity can further improve the decoy recognition ability of the statistical potential. RW and RWplus demonstrate a significantly better ability than the best-performing pair-wise distance-dependent atomic potential functions in both native and near-native model selection, with higher energy-RMSD and energy-TM-score correlations compared with other potentials of the same type on real-life structure assembly decoys. When benchmarked against a comprehensive list of publicly available potentials, RW and RWplus show performance comparable to state-of-the-art scoring functions, including those combining terms from multiple resources. These data demonstrate the usefulness of the random-walk chain as a reference state that correctly accounts for the sequence connectivity and entropic elasticity of proteins, and they point to its potential usefulness in structure recognition and protein folding simulations. The RW and RWplus potentials, as well as the newly generated I-TASSER decoys, are freely available at http://zhanglab.ccmb.med.umich.edu/RW
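The general recipe behind a distance-dependent statistical potential with a random-walk reference can be sketched as follows: observed pair-distance counts are converted into a potential of mean force, E(d) = -kT ln(N_obs(d)/N_exp(d)), with the expected counts drawn from the ideal-chain distribution P(r) ∝ r² exp(-3r²/(2Nb²)). The bin values, chain length and bond length below are illustrative and are not the actual parameters or data of RW:

```python
import math

def rw_reference(r, n_steps=10, bond=3.8):
    """Unnormalised ideal random-walk (Gaussian chain) distance distribution:
    P(r) ~ r^2 * exp(-3 r^2 / (2 N b^2)). Parameters are illustrative."""
    return r * r * math.exp(-3.0 * r * r / (2.0 * n_steps * bond * bond))

def potential(observed, distances, kT=1.0):
    """Potential of mean force E(d) = -kT ln(N_obs(d) / N_exp(d)),
    with N_exp taken from the random-walk reference; both histograms
    are renormalised to probabilities before taking the ratio."""
    expected = [rw_reference(r) for r in distances]
    z_obs, z_exp = sum(observed), sum(expected)
    return [-kT * math.log((o / z_obs) / (e / z_exp))
            for o, e in zip(observed, expected)]
```

Distances that are observed more often than the connectivity-aware reference predicts come out as favourable (negative energy), which is exactly the effect a non-interacting ideal-gas reference would misattribute.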

    Mitochondrial 2,4-dienoyl-CoA Reductase Deficiency in Mice Results in Severe Hypoglycemia with Stress Intolerance and Unimpaired Ketogenesis

    The mitochondrial β-oxidation system is one of the central metabolic pathways of energy metabolism in mammals. Enzyme defects in this pathway cause fatty acid oxidation disorders. To elucidate the role of 2,4-dienoyl-CoA reductase (DECR) as an auxiliary enzyme in the mitochondrial β-oxidation of unsaturated fatty acids, we created a DECR-deficient mouse line. In Decr−/− mice, the mitochondrial β-oxidation of unsaturated fatty acids with double bonds is expected to halt at the level of trans-2,cis/trans-4-dienoyl-CoA intermediates. In line with this expectation, fasted Decr−/− mice displayed increased serum acylcarnitines, especially decadienoylcarnitine, a product of the incomplete oxidation of linoleic acid (C18:2), urinary excretion of unsaturated dicarboxylic acids, and hepatic steatosis, wherein unsaturated fatty acids accumulate in liver triacylglycerols. Metabolically challenged Decr−/− mice turned on ketogenesis but unexpectedly developed hypoglycemia. Induced expression of peroxisomal β-oxidation and microsomal ω-oxidation enzymes reflects the increased lipid load, whereas reduced mRNA levels of PGC-1α and CREB, as well as of enzymes in the gluconeogenic pathway, can contribute to stress-induced hypoglycemia. Furthermore, the thermogenic response was perturbed, as demonstrated by intolerance to acute cold exposure. This study highlights the necessity of DECR and the breakdown of unsaturated fatty acids in the transition of intermediary metabolism from the fed to the fasted state.

    Can Survival Prediction Be Improved By Merging Gene Expression Data Sets?

    BACKGROUND: High-throughput gene expression profiling technologies generate a wealth of data and are increasingly used to characterize tumor biopsies for clinical trials. By applying machine learning algorithms to such clinically documented data sets, one hopes to improve tumor diagnosis and prognosis, as well as prediction of treatment response. However, the limited number of patients enrolled in a single trial limits the power of machine learning approaches due to over-fitting. One could partially overcome this limitation by merging data from different studies. Nevertheless, such data sets differ from each other with regard to technical biases, patient selection criteria and follow-up treatment. It is therefore not at all clear whether the advantage of increased sample size outweighs the disadvantage of the higher heterogeneity of merged data sets. Here, we present a systematic study to answer this question specifically for breast cancer data sets. We use survival prediction based on Cox regression as an assay to measure the added value of merged data sets. RESULTS: Using the time-dependent Receiver Operating Characteristic Area Under the Curve (ROC-AUC) and the hazard ratio as performance measures, we see overall no significant improvement or deterioration of survival prediction with merged data sets as compared to individual data sets. This was apparently due to the fact that a few genes with strong prognostic power were not available on all microarray platforms and thus were not retained in the merged data sets. Surprisingly, we found that the overall best performance was achieved with a single-gene predictor consisting of CYB5D1. CONCLUSIONS: Merging did not deteriorate performance on average despite (a) the diversity of microarray platforms used, (b) the heterogeneity of patient cohorts, (c) the heterogeneity of breast cancer as a disease, (d) substantial variation in time to death or relapse, and (e) the reduced number of genes in the merged data sets. Predictors derived from the merged data sets were more robust, consistent and reproducible across microarray platforms. Moreover, merging data sets from different studies helps to better understand the biases of individual studies and can lead to the identification of strong survival factors such as CYB5D1 expression.
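Survival-model performance of the kind compared here is often summarized by a concordance statistic. Below is a minimal, stdlib-only sketch of Harrell's C-index, a simplified relative of the time-dependent ROC-AUC used as a performance measure above; the patient data are invented for illustration:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: the fraction of usable patient pairs in which the
    model assigns the higher risk score to the patient who fails earlier.
    Ties in risk count as 0.5. Simplified sketch: no special handling of
    tied event times."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair (i, j) is usable if patient i had an observed event
            # strictly before patient j's (event or censoring) time.
            if events[i] and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / usable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect concordance, which is what makes it a convenient single number for comparing predictors trained on individual versus merged data sets.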