
    Prediction of human drug-induced liver injury (DILI) in relation to oral doses and blood concentrations

    Drug-induced liver injury (DILI) cannot be accurately predicted by animal models. In addition, currently available in vitro methods do not allow for the estimation of hepatotoxic doses or the determination of an acceptable daily intake (ADI). To overcome these limitations, an in vitro/in silico method was established that predicts the risk of human DILI in relation to oral doses and blood concentrations. This method can be used to estimate DILI risk if the maximal blood concentration (Cmax) of the test compound is known. Moreover, an ADI can be estimated even for compounds without information on blood concentrations. To systematically optimize the in vitro system, two novel test performance metrics were introduced: the toxicity separation index (TSI), which quantifies how well a test differentiates between hepatotoxic and non-hepatotoxic compounds, and the toxicity estimation index (TEI), which measures how well hepatotoxic blood concentrations in vivo can be estimated. In vitro test performance was optimized for a training set of 28 compounds, based on TSI and TEI, demonstrating that (1) concentrations where cytotoxicity first becomes evident in vitro (EC10) yielded better metrics than higher toxicity thresholds (EC50); (2) compound incubation for 48 h was better than 24 h, with no further improvement of TSI after 7 days of incubation; (3) metrics were moderately improved by adding gene expression to the test battery; and (4) evaluation of pharmacokinetic parameters demonstrated that total blood compound concentrations and the 95%-population-based percentile of Cmax were best suited to estimate human toxicity. With a support vector machine-based classifier, using EC10 and Cmax as variables, the cross-validated sensitivity, specificity and accuracy for hepatotoxicity prediction were 100, 88 and 93%, respectively. Concentrations in the culture medium allowed extrapolation to blood concentrations in vivo that are associated with a specific probability of hepatotoxicity, and the corresponding oral doses were obtained by reverse modeling. Application of this in vitro/in silico method to the rat hepatotoxicant pulegone resulted in an ADI similar to values previously established based on animal experiments. In conclusion, the proposed method links oral doses and blood concentrations of test compounds to the probability of hepatotoxicity.
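
    As an illustration of the classification step described above (a support vector machine with EC10 and Cmax as the two variables), the following is a minimal scikit-learn sketch. The compound values and labels are hypothetical placeholders, not the study's 28-compound training set, and the log transform and linear kernel are assumptions.

```python
# Minimal sketch of a two-feature hepatotoxicity classifier: each compound is
# represented by its in vitro EC10 and its in vivo Cmax (log-transformed here,
# an assumption), and an SVM separates hepatotoxic from non-hepatotoxic
# compounds. All values below are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Columns: EC10 (µM, in vitro cytotoxicity onset), Cmax (µM, in vivo)
X = np.log10([
    [12.0, 8.0],    # hypothetical hepatotoxic compounds: Cmax close to EC10
    [5.0, 6.5],
    [20.0, 4.0],
    [300.0, 0.2],   # hypothetical non-hepatotoxic: Cmax far below EC10
    [150.0, 0.05],
    [500.0, 0.5],
])
y = [1, 1, 1, 0, 0, 0]  # 1 = hepatotoxic, 0 = non-hepatotoxic

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=3).mean())
```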

    Integrating Computational Biology and Forward Genetics in Drosophila

    Genetic screens are powerful methods for the discovery of gene–phenotype associations. However, a systems biology approach to genetics must leverage the massive amount of "omics" data to enhance the power and speed of functional gene discovery in vivo. Thus far, few computational methods for gene function prediction have been rigorously tested for their performance on a genome-wide scale in vivo. In this work, we demonstrate that integrating genome-wide computational gene prioritization with large-scale genetic screening is a powerful tool for functional gene discovery. To discover genes involved in neural development in Drosophila, we extend our strategy for the prioritization of human candidate disease genes to functional prioritization in Drosophila. We then integrate this prioritization strategy with a large-scale genetic screen for interactors of the proneural transcription factor Atonal using genomic deficiencies and mutant and RNAi collections. Using the prioritized genes validated in our genetic screen, we describe a novel genetic interaction network for Atonal. Lastly, we prioritize the whole Drosophila genome and identify candidate gene associations for ten receptor-signaling pathways. This database of prioritized pathway candidates, a web application for functional prioritization in Drosophila (Endeavour-HighFly), and the Atonal network are publicly available resources. A systems genetics approach that combines the power of computational predictions with in vivo genetic screens strongly enhances the process of gene function and gene–gene association discovery.
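
    Prioritization tools in the Endeavour family fuse the per-data-source rankings of a candidate gene into one global score using an order statistic (after Stuart et al. 2003; Aerts et al. 2006). The sketch below is an illustrative reimplementation of that scoring step under those assumptions, not the code behind Endeavour-HighFly.

```python
# Illustrative rank-fusion score: convert each data source's rank of a gene
# into a rank ratio in (0, 1], then compute the probability (Q value) of
# drawing rank ratios at least this extreme from uniform order statistics.
# A small Q means the gene is consistently highly ranked across sources.
from math import factorial

def q_order_statistic(rank_ratios):
    """Q value for a set of per-source rank ratios (rank / #candidates)."""
    r = sorted(rank_ratios)            # r_1 <= ... <= r_N
    n = len(r)
    v = [1.0] + [0.0] * n              # V_0 = 1, then the recursion
    for k in range(1, n + 1):
        v[k] = sum((-1) ** (i - 1) * v[k - i] * r[n - k] ** i / factorial(i)
                   for i in range(1, k + 1))
    return factorial(n) * v[n]

# A gene ranked 3rd, 10th and 1st out of 200 candidates by three sources:
print(q_order_statistic([3 / 200, 10 / 200, 1 / 200]))   # small -> promising
# Sanity check: uninformative sources (all ratios 1.0) give Q = 1
assert abs(q_order_statistic([1.0, 1.0, 1.0]) - 1.0) < 1e-12
```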

    LLM3D: a log-linear modeling-based method to predict functional gene regulatory interactions from genome-wide expression data

    All cellular processes are regulated by condition-specific and time-dependent interactions between transcription factors and their target genes. While in simple organisms, e.g. bacteria and yeast, a large amount of experimental data is available to support functional transcription regulatory interactions, in mammalian systems the reconstruction of gene regulatory networks still depends heavily on the accurate prediction of transcription factor binding sites. Here, we present a new method, log-linear modeling of 3D contingency tables (LLM3D), to predict functional transcription factor binding sites. LLM3D combines gene expression data, gene ontology annotation and computationally predicted transcription factor binding sites in a single statistical analysis, and offers a methodological improvement over existing enrichment-based methods. We show that LLM3D identifies novel transcriptional regulators of the yeast metabolic cycle and predicts key regulators of mouse embryonic stem cell self-renewal more accurately than existing enrichment-based methods. Moreover, in a clinically relevant in vivo injury model of mammalian neurons, LLM3D identified peroxisome proliferator-activated receptor γ (PPARγ) as a neuron-intrinsic transcriptional regulator of regenerative axon growth. In conclusion, LLM3D provides a significant improvement over existing methods in predicting functional transcription regulatory interactions in the absence of experimental transcription factor binding data.
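
    To make the log-linear idea concrete, the sketch below fits a hypothetical 2×2×2 contingency table of gene counts (predicted binding site × GO category × differential expression) as a Poisson generalized linear model and tests the three-way interaction with a likelihood-ratio test. This is the textbook log-linear formulation under an assumed variable coding and made-up counts, not the published LLM3D implementation.

```python
# Log-linear test on a hypothetical 2x2x2 table of gene counts:
# tfbs = predicted binding site present, go = gene in a GO category,
# de = differentially expressed. The three-way interaction asks whether
# binding-site genes in that category show coordinated expression changes.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

cells = pd.DataFrame({
    "tfbs": [0, 0, 0, 0, 1, 1, 1, 1],
    "go":   [0, 0, 1, 1, 0, 0, 1, 1],
    "de":   [0, 1, 0, 1, 0, 1, 0, 1],
    "n":    [900, 80, 120, 15, 60, 10, 20, 25],   # made-up gene counts
})

poisson = sm.families.Poisson()
full = smf.glm("n ~ tfbs * go * de", data=cells, family=poisson).fit()
reduced = smf.glm("n ~ (tfbs + go + de) ** 2", data=cells,
                  family=poisson).fit()

# Likelihood-ratio test for the tfbs:go:de term (1 df in a 2x2x2 table)
lr = 2 * (full.llf - reduced.llf)
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=1):.3g}")
```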

    L2-norm multiple kernel learning and its application to biomedical data fusion

    Background: This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL), such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, in contrast to the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have advantages over sparse integration methods for thoroughly combining complementary information in heterogeneous data sources.

    Results: We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem and the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large-scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large-scale data set processing.

    Conclusions: This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid the "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the computational burden of MKL, this paper proposes several novel LSSVM-based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has performance comparable to conventional SVM MKL algorithms. Moreover, large-scale numerical experiments indicate that, when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL.

    Availability: The MATLAB code of the algorithms implemented in this paper can be downloaded from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html.
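
    The sketch below illustrates the two ingredients of this approach, an LSSVM base learner (one linear system per fit) and non-sparse kernel weights with unit L2 norm, using a simplified alternating scheme. The weight update is a heuristic stand-in, not the paper's semi-infinite programming solver, and the toy kernels and data are assumptions.

```python
# Simplified alternating sketch of L2-norm MKL with an LSSVM base learner.
# For fixed kernel weights theta, the LSSVM dual is a single linear system;
# for fixed dual coefficients alpha, the weights are refreshed from each
# kernel's contribution and renormalized to unit L2 norm (non-sparse).
import numpy as np

def lssvm_solve(K, y, gamma=1.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                 # bias b, dual coefficients alpha

def l2_mkl_lssvm(kernels, y, gamma=1.0, iters=20):
    theta = np.full(len(kernels), 1.0 / np.sqrt(len(kernels)))  # ||theta||_2 = 1
    for _ in range(iters):
        K = sum(t * Km for t, Km in zip(theta, kernels))
        b, alpha = lssvm_solve(K, y, gamma)
        s = np.array([alpha @ Km @ alpha for Km in kernels])    # kernel "fit"
        theta = s / np.linalg.norm(s)      # heuristic L2-normalized update
    return theta, alpha, b

# Toy example: two kernels on 1-D data with labels +/-1
rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = np.sign(x)
K1 = np.outer(x, x)                               # linear kernel
K2 = np.exp(-(x[:, None] - x[None, :]) ** 2)      # Gaussian (RBF) kernel
theta, alpha, b = l2_mkl_lssvm([K1, K2], y)
print("non-sparse kernel weights:", theta)
```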

    How do Regulatory T Cells Work?

    CD4+ T cells are commonly divided into regulatory T (Treg) cells and conventional T helper (Th) cells. Th cells control adaptive immunity against pathogens and cancer by activating other effector immune cells. Treg cells are defined as CD4+ T cells in charge of suppressing potentially deleterious activities of Th cells. This review briefly summarizes current knowledge in the Treg field and defines some key questions that remain to be answered. Suggested functions for Treg cells include: prevention of autoimmune diseases by maintaining self-tolerance; suppression of allergy, asthma and pathogen-induced immunopathology; feto-maternal tolerance; and oral tolerance. Identification of Treg cells remains problematic, because accumulating evidence suggests that all the presently used Treg markers (CD25, CTLA-4, GITR, LAG-3, CD127 and Foxp3) are general T-cell activation markers rather than truly Treg-specific. Treg-cell activation is antigen-specific, which implies that the suppressive activities of Treg cells are antigen-dependent. It has been proposed that Treg cells are self-reactive, but extensive TCR repertoire analysis suggests that self-reactivity may be the exception rather than the rule. The classification of Treg cells as a separate lineage remains controversial because the ability to suppress is not an exclusive Treg property. Suppressive activities attributed to Treg cells may in reality, at least in some experimental settings, be exerted by conventional Th cell subsets, such as Th1, Th2, Th17 and T follicular helper (Tfh) cells. Recent reports have also demonstrated that Foxp3+ Treg cells may differentiate in vivo into conventional effector Th cells, with or without concomitant downregulation of Foxp3.

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    Recreational use of nitrous oxide: a growing concern for Europe

    The purpose of this report is to examine the current situation, risks and responses relating to the recreational use of nitrous oxide in Europe. To support this, the report also provides a state-of-the-art review of the chemistry, pharmacology and toxicology of the gas. It is intended for policymakers and practitioners.

    Preclinical and clinical safety studies on DNA vaccines.

    DNA vaccines are based on the transfer of genetic material, encoding an antigen, to the cells of the vaccine recipient. Despite high expectations for DNA vaccines as a result of promising preclinical data, their clinical utility remains unproven. However, much data on the safety of DNA vaccines has been gathered in preclinical and clinical studies. Here we review current knowledge about the safety of DNA vaccines. Safety concerns relate to genetic, immunologic, toxic, and environmental effects, and we provide an overview of the findings obtained so far. We conclude that the potential risks of DNA vaccines are minimal; however, safety issues may differ case by case and should be treated accordingly.