11 research outputs found

    Application of transfer learning to predict drug-induced human in vivo gene expression changes using rat in vitro and in vivo data

    The liver is the primary site for the metabolism and detoxification of many compounds, including pharmaceuticals. Consequently, it is also the primary location for many adverse reactions. As the liver is not readily accessible for sampling in humans, rodent or cell line models are often used to evaluate the potential toxic effects of a novel compound or candidate drug. However, relating the results of animal and in vitro studies to relevant clinical outcomes for the human in vivo situation still proves challenging. In this study, we incorporate principles of transfer learning within a deep artificial neural network, allowing us to leverage the relative abundance of rat in vitro and in vivo exposure data from the Open TG-GATEs data set to train a model that predicts the expected pattern of human in vivo gene expression following an exposure, given measured human in vitro gene expression. We show that domain adaptation has been successfully achieved, with the rat and human in vitro data no longer being separable in the common latent space generated by the network. The network produces physiologically plausible predictions of the human in vivo gene expression pattern following exposure to a previously unseen compound. Moreover, we show that integrating the human in vitro data into the training of the domain adaptation network significantly improves the temporal accuracy of the predicted rat in vivo gene expression pattern following exposure to a previously unseen compound. In this way, we demonstrate the improvements in prediction accuracy that can be achieved by combining data from distinct domains.
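    The abstract above describes a shared latent space in which rat and human in vitro profiles become inseparable, plus a mapping from that space to in vivo expression. The architectural details are not given here, so the following is only a minimal, illustrative sketch of one standard way to set up such domain adaptation, assuming PyTorch and a DANN-style gradient-reversal adversary; all layer sizes, class names (GradReverse, DomainAdaptationNet) and the toy data are hypothetical, not taken from the paper.

```python
# Minimal, illustrative domain-adaptation sketch (assumptions, not the paper's model).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdaptationNet(nn.Module):
    def __init__(self, n_genes=1000, latent=64):
        super().__init__()
        # Shared encoder maps rat/human in vitro profiles into a common latent space.
        self.encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        # Decoder predicts the in vivo expression pattern from the latent code.
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_genes))
        # Adversary tries to tell rat from human latent codes; the reversed
        # gradient pushes the encoder to make the two domains inseparable.
        self.domain_clf = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                        nn.Linear(32, 2))

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        in_vivo_pred = self.decoder(z)
        domain_logits = self.domain_clf(GradReverse.apply(z, lambd))
        return in_vivo_pred, domain_logits

# Toy usage: one training step on random stand-ins for expression data.
net = DomainAdaptationNet()
x = torch.randn(8, 1000)               # batch of in vitro profiles
y = torch.randn(8, 1000)               # matching in vivo targets
d = torch.randint(0, 2, (8,))          # 0 = rat, 1 = human
pred, dom = net(x)
loss = nn.functional.mse_loss(pred, y) + nn.functional.cross_entropy(dom, d)
loss.backward()
```

    Jointly minimising the reconstruction loss and the reversed-gradient domain loss encourages the encoder to produce domain-invariant codes, which mirrors the inseparability property described in the abstract.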

    diXa: a data infrastructure for chemical safety assessment

    Motivation: The field of toxicogenomics (the application of '-omics' technologies to risk assessment of compound toxicities) has expanded in the last decade, partly driven by new legislation aimed at reducing animal testing in chemical risk assessment, but mainly as a result of a paradigm change in toxicology towards the use and integration of genome-wide data. Many research groups worldwide have generated large amounts of such toxicogenomics data. However, there is no centralized repository for archiving these data and making them, together with associated analysis tools, easily available. Results: The Data Infrastructure for Chemical Safety Assessment (diXa) is a robust and sustainable infrastructure for storing toxicogenomics data. A central data warehouse is connected to a portal with links to chemical information and molecular and phenotype data. diXa is publicly available through a user-friendly web interface. New data can be readily deposited into diXa using guidelines and templates available online. Analysis descriptions and tools for interrogating the data are available via the diXa portal. Availability and implementation: http://www.dixa-fp7.eu Contact: [email protected]; [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.

    Report of the First ONTOX Stakeholder Network Meeting: Digging Under the Surface of ONTOX Together With the Stakeholders

    The first Stakeholder Network Meeting of the EU Horizon 2020-funded ONTOX project was held on 13–14 March 2023 in Brussels, Belgium. The discussion centred on identifying specific challenges, barriers and drivers in relation to the implementation of non-animal new approach methodologies (NAMs) and probabilistic risk assessment (PRA), in order to help address the issues and rank them according to their associated level of difficulty. ONTOX aims to advance the assessment of chemical risk to humans, without the use of animal testing, by developing non-animal NAMs and PRA in line with 21st century toxicity testing principles. Stakeholder groups (regulatory authorities, companies, academia, non-governmental organisations) were identified and invited to participate in a meeting and a survey, through which their current positions on the implementation of NAMs and PRA were ascertained and specific challenges and drivers were highlighted. The survey analysis revealed areas of agreement and disagreement among stakeholders on topics such as capacity building, sustainability, regulatory acceptance, validation of adverse outcome pathways, acceptance of artificial intelligence (AI) in risk assessment, and guaranteeing consumer safety. The stakeholder network meeting resulted in the identification of barriers, drivers and specific challenges that need to be addressed. Breakout groups discussed topics such as hazard versus risk assessment, future reliance on AI and machine learning, regulatory requirements for industry, and the sustainability of the ONTOX Hub platform. The outputs from these discussions provided insights for overcoming barriers and leveraging drivers to implement NAMs and PRA. It was concluded that there is a continued need for stakeholder engagement, including the organisation of a ‘hackathon’ to tackle challenges, to ensure the successful implementation of NAMs and PRA in chemical risk assessment.

    Toxicogenomics and Toxicoinformatics: Supporting Systems Biology in the Big Data Era

    Within Toxicology, Toxicogenomics stands out as a unique research field aimed at investigating the molecular alterations induced by chemical exposure. Toxicogenomics comprises a wide range of technologies developed to measure and quantify the '-omes (transcriptome, (epi)genome, proteome and metabolome), offering a human-based approach in contrast to traditional animal-based toxicity testing. With the growing acceptance of and continuous improvements in high-throughput technologies, we have observed a rapid increase in the generation of 'omics outputs. As a result, Toxicogenomics has entered a new, challenging era, facing the characteristic 4 Vs of Big Data: volume, velocity, variety and veracity. This chapter addresses these challenges by focusing on computational methods and Toxicoinformatics in the scope of Big 'omics Data. First, we provide an overview of current technologies and the steps involved in the storage, pre-processing and integration of high-throughput datasets, describing databases, standard pipelines and routinely used tools. We show how data mining, pattern recognition and mechanistic/pathway analyses contribute to elucidating mechanisms of adverse effects and to building knowledge in Systems Toxicology. Finally, we present recent progress in tackling current computational and biological limitations. Throughout the chapter, we also provide relevant examples of successful applications of Toxicoinformatics in predicting toxicity in the Big Data era.
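    As an illustration of the routine pre-processing and pattern-recognition steps the chapter surveys, the sketch below log-transforms a toy expression matrix, standardises it per gene, reduces it with PCA and clusters the exposure profiles. The libraries (NumPy, scikit-learn), parameter choices and random data are assumptions for demonstration only, not tools prescribed by the chapter.

```python
# Illustrative toxicoinformatics sketch: pre-processing followed by simple
# pattern recognition (PCA + clustering) on a random stand-in expression matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20, size=(60, 500)).astype(float)  # samples x genes

# Pre-processing: log-transform and per-gene standardisation.
log_expr = np.log2(counts + 1.0)
scaled = StandardScaler().fit_transform(log_expr)

# Pattern recognition: reduce dimensionality, then group exposure profiles.
pcs = PCA(n_components=10).fit_transform(scaled)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)

print("cluster sizes:", np.bincount(labels))
```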

    diXa: a Data Infrastructure for Chemical Safety Assessment

    Motivation: The field of toxicogenomics (the application of '-omics' technologies to risk assessment of compound toxicities) has expanded in the last decade, partly driven by new legislation aimed at reducing animal testing in chemical risk assessment, but mainly as a result of a paradigm change in toxicology towards the use and integration of genome-wide data. Many research groups worldwide have generated large amounts of such toxicogenomics data. However, there is no centralized repository for archiving these data and making them, together with associated analysis tools, easily available. Results: The Data Infrastructure for Chemical Safety Assessment (diXa) is a robust and sustainable infrastructure for storing toxicogenomics data. A central data warehouse is connected to a portal with links to chemical information and molecular and phenotype data. diXa is publicly available through a user-friendly web interface. New data can be readily deposited into diXa using guidelines and templates available online. Analysis descriptions and tools for interrogating the data are available via the diXa portal. JRC.I.5-Systems Toxicology