    Transcriptomics in Toxicogenomics, Part I: Experimental Design, Technologies, Publicly Available Data, and Regulatory Aspects

    The starting point of successful hazard assessment is the generation of unbiased and trustworthy data. Conventional toxicity testing deals with extensive observations of phenotypic endpoints in vivo and in complementary in vitro models. The increasing development of novel materials and chemical compounds dictates the need for a better understanding of the molecular changes occurring in exposed biological systems. Transcriptomics enables the exploration of organisms’ responses to environmental, chemical, and physical agents by observing the molecular alterations in more detail. Toxicogenomics (TGx) integrates classical toxicology with omics assays, thus allowing the characterization of the mechanism of action (MOA) of chemical compounds, novel small molecules, and engineered nanomaterials (ENMs). A lack of standardization in data generation and analysis currently hampers the full exploitation of toxicogenomics-based evidence in risk assessment. To fill this gap, TGx methods need to take into account appropriate experimental design and possible pitfalls in the transcriptomic analyses, as well as data generation and sharing that adhere to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. In this review, we summarize the recent advancements in the design and analysis of DNA microarray, RNA sequencing (RNA-Seq), and single-cell RNA-Seq (scRNA-Seq) data. We provide guidelines on exposure time, dose, and complex endpoint selection, sample quality considerations, and sample randomization. Furthermore, we summarize publicly available data resources and highlight applications of TGx data to understand and predict chemical toxicity potential. Additionally, we discuss the efforts to implement TGx into regulatory decision making to promote alternative methods for risk assessment and to support the 3R (reduction, refinement, and replacement) concept. This review is the first part of a three-article series on Transcriptomics in Toxicogenomics. These initial considerations on Experimental Design, Technologies, Publicly Available Data, and Regulatory Aspects are the starting point for further rigorous and reliable data preprocessing and modeling, described in the second and third parts of the review series.
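
    As an illustration of the sample-randomization guideline mentioned in this abstract, the short Python sketch below randomizes a hypothetical dose-by-time exposure design across processing batches. The dose groups, time points, replicate count, and batch size are assumptions chosen only for the example, not values taken from the review; block randomization or other balancing schemes could be layered on top of this simple shuffle.

        # Illustrative sketch (not from the reviewed paper): randomizing the samples of a
        # full-factorial dose-by-time exposure design across processing batches so that
        # experimental conditions are not systematically confounded with batch effects.
        import itertools
        import random

        doses = ["vehicle", "low", "mid", "high"]        # hypothetical dose groups
        timepoints = ["6h", "24h", "72h"]                # hypothetical exposure times
        replicates = 3
        batch_size = 12                                  # samples processed per batch

        # Enumerate all samples in the full-factorial design.
        samples = [f"{d}_{t}_rep{r}"
                   for d, t, r in itertools.product(doses, timepoints,
                                                    range(1, replicates + 1))]

        random.seed(42)                                  # fixed seed for a reproducible layout
        random.shuffle(samples)                          # random order avoids systematic confounding

        # Split the shuffled samples into processing batches.
        batches = [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]
        for i, batch in enumerate(batches, start=1):
            print(f"batch {i}: {batch}")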

    Transcriptomics in Toxicogenomics, Part II: Preprocessing and Differential Expression Analysis for High Quality Data

    Preprocessing of transcriptomics data plays a pivotal role in the development of toxicogenomics-driven tools for chemical toxicity assessment. The generation and exploitation of large volumes of molecular profiles, following an appropriate experimental design, allows the employment of toxicogenomics (TGx) approaches for a thorough characterisation of the mechanism of action (MOA) of different compounds. To date, a plethora of data preprocessing methodologies have been suggested. However, in most cases, building the optimal analytical workflow is not straightforward. A careful selection of the right tools must be carried out, since it will affect the downstream analyses and modelling approaches. Transcriptomics data preprocessing spans multiple steps, such as quality control, filtering, normalization, and batch effect detection and correction. Currently, there is a lack of standard guidelines for data preprocessing in the TGx field. Defining the optimal tools and procedures to be employed in transcriptomics data preprocessing will lead to the generation of homogeneous and unbiased data, allowing the development of more reliable, robust and accurate predictive models. In this review, we outline methods for the preprocessing of three main transcriptomic technologies: microarray, bulk RNA-Sequencing (RNA-Seq), and single-cell RNA-Sequencing (scRNA-Seq). Moreover, we discuss the most common methods for the identification of differentially expressed genes and for functional enrichment analysis. This review is the second part of a three-article series on Transcriptomics in Toxicogenomics.
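
    To make the preprocessing steps named in this abstract concrete, here is a minimal, self-contained Python sketch of low-count filtering, library-size (counts-per-million) normalization, and log transformation for a bulk RNA-Seq count matrix. The simulated counts and the filtering thresholds are arbitrary assumptions for illustration; this is not a workflow prescribed by the review.

        # Minimal sketch on simulated data: three common preprocessing steps for a
        # bulk RNA-Seq count matrix (genes x samples).
        import numpy as np

        rng = np.random.default_rng(0)
        counts = rng.negative_binomial(n=5, p=0.3, size=(2000, 12))   # simulated counts

        # 1. Filtering: keep genes with at least 10 counts in at least 3 samples.
        keep = (counts >= 10).sum(axis=1) >= 3
        filtered = counts[keep]

        # 2. Normalization: scale each sample to counts per million (CPM).
        library_sizes = filtered.sum(axis=0)
        cpm = filtered / library_sizes * 1e6

        # 3. Variance stabilization: log2(CPM + 1) is a simple, common choice.
        log_expr = np.log2(cpm + 1)

        print(f"{filtered.shape[0]} of {counts.shape[0]} genes kept; "
              f"matrix ready for batch-effect checks and DE analysis: {log_expr.shape}")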

    Transcriptomics in Toxicogenomics, Part III: Data Modelling for Risk Assessment

    Transcriptomics data are relevant to address a number of challenges in Toxicogenomics (TGx). After careful planning of exposure conditions and data preprocessing, TGx data can be used in predictive toxicology, where more advanced modelling techniques are applied. The large volume of molecular profiles produced by omics-based technologies allows the development and application of artificial intelligence (AI) methods in TGx. Indeed, publicly available omics datasets are constantly increasing, together with a plethora of methods made available to facilitate their analysis and interpretation and the generation of accurate and stable predictive models. In this review, we present the state of the art of data modelling applied to transcriptomics data in TGx. We show how benchmark dose (BMD) analysis can be applied to TGx data. We review read-across and adverse outcome pathway (AOP) modelling methodologies. We discuss how network-based approaches can be successfully employed to clarify the mechanism of action (MOA) or specific biomarkers of exposure. We also describe the main AI methodologies applied to TGx data to create predictive classification and regression models, and we address current challenges. Finally, we present a short description of deep learning (DL) and data integration methodologies applied in these contexts. Modelling of TGx data represents a valuable tool for more accurate chemical safety assessment. This review is the third part of a three-article series on Transcriptomics in Toxicogenomics.
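
    As a toy illustration of the benchmark dose (BMD) concept mentioned in this abstract, the Python sketch below fits a Hill dose-response model to simulated expression values of a single gene and derives a BMD for an assumed 10% benchmark response. The model choice, the data, and the benchmark response are assumptions for the example; dedicated tools such as BMDExpress implement the full procedure with model selection and confidence intervals.

        # Hedged illustration of BMD estimation on one gene's simulated dose-response data.
        import numpy as np
        from scipy.optimize import curve_fit

        def hill(dose, baseline, top, k, n):
            """Hill dose-response curve: baseline plus a saturating effect of size `top`."""
            return baseline + top * dose**n / (k**n + dose**n)

        # Simulated expression of one gene at increasing doses (arbitrary units).
        doses = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
        expression = np.array([5.0, 5.1, 5.3, 6.0, 7.2, 7.9, 8.1])

        # Fit the Hill model to the observed responses (non-negative parameters).
        params, _ = curve_fit(hill, doses, expression,
                              p0=[5.0, 3.0, 1.0, 1.0], bounds=(0, np.inf))
        baseline, top, k, n = params

        # BMD: dose at which the response reaches 10% of the maximal modelled effect.
        bmr = 0.10
        bmd = k * (bmr / (1.0 - bmr)) ** (1.0 / n)
        print(f"Fitted Hill parameters: {np.round(params, 2)}, "
              f"BMD at 10% of max effect: {bmd:.2f}")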

    Entropic Ranks: A Methodology for Enhanced, Threshold-Free, Information-Rich Data Partition and Interpretation

    Background: Here, we propose a threshold-free selection method for the identification of differentially expressed features based on robust, non-parametric statistics, ensuring independence from the statistical distribution properties and broad applicability. Such methods can adapt to different initial data distributions, contrary to statistical techniques based on fixed thresholds. This work aims to propose a methodology that automates and standardizes the statistical selection through the utilization of established measures such as entropy, already used in information retrieval from large biomedical datasets, thus departing from classical fixed-threshold methods that rely on arbitrary p-value and fold-change cutoffs as selection criteria and whose efficacy also depends on the degree of conformity to parametric distributions. Methods: Our work extends the rank product (RP) methodology with a neutral selection method of high information-extraction capacity. We introduce the calculation of the entropy of the RP distribution to isolate the features of interest by their contribution to its information content. The goal is a methodology for the threshold-free identification of differentially expressed features that are highly informative about the phenomenon under study. Conclusions: Applying the proposed method to microarray (transcriptomic and DNA methylation) and RNA-Seq count data of varying sizes and noise levels, we observe robust convergence of the different parameterizations to stable cutoff points. Functional analysis through BioInfoMiner and EnrichR was used to evaluate the information potency of the resulting feature lists. Overall, the derived functional terms provide a systemic description highly compatible with the results of traditional statistical hypothesis testing techniques. The methodology behaves consistently across different data types. The feature lists are compact and rich in information, indicating phenotypic aspects specific to the tissue and biological phenomenon investigated. Selection by information content measures efficiently addresses problems emerging from arbitrary thresholding, thus facilitating the full automation of the analysis.
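
    A minimal Python sketch of the two ingredients named in this abstract follows: the rank product (RP) statistic computed across replicate comparisons, and the Shannon entropy of the resulting RP distribution. This is not the authors' implementation; the actual Entropic Ranks selection criterion is more elaborate, and the simulated data, bin count, and final selection step here are assumptions for illustration only.

        # Simplified sketch: rank product across replicates plus the Shannon entropy
        # of the RP distribution (histogram estimate).
        import numpy as np

        rng = np.random.default_rng(1)
        n_genes, n_replicates = 1000, 4
        # Simulated log fold-changes; the first 50 genes carry a real up-regulation signal.
        logfc = rng.normal(0.0, 1.0, size=(n_genes, n_replicates))
        logfc[:50] += 2.0

        # Rank product: for each gene, the geometric mean of its per-replicate ranks
        # (rank 1 = strongest up-regulation). Small RP values indicate consistent signal.
        ranks = np.argsort(np.argsort(-logfc, axis=0), axis=0) + 1
        rank_product = np.exp(np.log(ranks).mean(axis=1))

        # Shannon entropy of the RP distribution, estimated from a 50-bin histogram.
        bin_counts, _ = np.histogram(rank_product, bins=50)
        probs = bin_counts / bin_counts.sum()
        probs = probs[probs > 0]
        entropy = -(probs * np.log2(probs)).sum()

        top = np.argsort(rank_product)[:20]     # genes with the smallest rank products
        print(f"RP distribution entropy: {entropy:.2f} bits; top candidates: {top}")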

    Can an InChI for Nano Address the Need for a Simplified Representation of Complex Nanomaterials across Experimental and Nanoinformatics Studies?

    Chemoinformatics has developed efficient ways of representing the chemical structures of small molecules as simple, machine-readable text strings: the simplified molecular-input line-entry system (SMILES) and the IUPAC International Chemical Identifier (InChI). In particular, InChIs have been extended to encode formalized representations of mixtures and reactions, and work is ongoing to represent polymers and other macromolecules in this way. The next frontier is encoding the multi-component structures of nanomaterials (NMs) in a machine-readable format to enable linking of datasets for nanoinformatics and regulatory applications. A workshop organized by the H2020 research infrastructure NanoCommons and the nanoinformatics project NanoSolveIT analyzed the issues involved in developing an InChI for NMs (NInChI). The layers needed to capture NM structures include, but are not limited to: core composition (possibly multi-layered); surface topography; surface coatings or functionalization; doping with other chemicals; and representation of impurities. NM distributions (size, shape, composition, surface properties, etc.), the types of chemical linkages connecting surface functionalization and coating molecules to the core, and the various crystallographic forms exhibited by NMs also need to be considered. Six case studies were conducted to elucidate the requirements for an unambiguous description of NMs. The suggested NInChI layers are intended to stimulate further analysis that will lead to the first version of a "nano" extension to the InChI standard.
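
    To make the idea of a layered, machine-readable NM descriptor more tangible, here is a hypothetical Python data-structure sketch covering the kinds of layers listed in this abstract. The field names and the serialized string layout are invented for illustration and do not correspond to any agreed NInChI syntax, which did not yet exist at the time of the workshop.

        # Purely illustrative data structure for layered nanomaterial descriptions.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class NanomaterialRecord:
            core_composition: List[str]            # innermost to outermost core layers
            surface_coatings: List[str] = field(default_factory=list)
            dopants: List[str] = field(default_factory=list)
            impurities: List[str] = field(default_factory=list)
            mean_size_nm: Optional[float] = None   # one summary of the size distribution
            shape: Optional[str] = None
            crystal_form: Optional[str] = None

            def to_layers(self) -> str:
                """Serialize the populated layers into a slash-separated, InChI-like string."""
                parts = [f"c:{'.'.join(self.core_composition)}"]
                if self.surface_coatings:
                    parts.append(f"s:{'.'.join(self.surface_coatings)}")
                if self.dopants:
                    parts.append(f"d:{'.'.join(self.dopants)}")
                if self.impurities:
                    parts.append(f"i:{'.'.join(self.impurities)}")
                if self.mean_size_nm is not None:
                    parts.append(f"sz:{self.mean_size_nm}nm")
                if self.shape:
                    parts.append(f"sh:{self.shape}")
                if self.crystal_form:
                    parts.append(f"x:{self.crystal_form}")
                return "NInChI=" + "/".join(parts)

        # Example: a hypothetical silica-coated iron oxide particle with a PEG surface layer.
        particle = NanomaterialRecord(core_composition=["Fe3O4", "SiO2"],
                                      surface_coatings=["PEG"],
                                      mean_size_nm=25.0, shape="spherical",
                                      crystal_form="magnetite")
        print(particle.to_layers())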
