    Supervised Methods for Biomarker Detection from Microarray Experiments

    Biomarkers are valuable indicators of the state of a biological system. Microarray technology has been extensively used to identify biomarkers and to build computational predictive models for disease prognosis, drug sensitivity, and toxicity evaluation. Activation biomarkers can be used to understand the underlying signaling cascades, mechanisms of action, and biological cross-talk. Biomarker detection from microarray data requires several considerations, both from the biological and the computational points of view. In this chapter, we describe the main methodologies used in biomarker discovery and predictive modeling, and we address some of the related challenges. Moreover, we discuss biomarker validation and give some insights into multi-omics strategies for biomarker detection.
    Non peer reviewed
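
    As an illustration of the supervised workflow this chapter describes, the sketch below pairs univariate feature selection with a cross-validated classifier, a common pattern for biomarker detection from expression matrices. It is a minimal sketch on simulated data, assuming scikit-learn; the probe counts, selection test, and parameters are illustrative choices, not the chapter's prescribed settings.

        # Minimal sketch: supervised biomarker selection + predictive model.
        # Simulated data; scikit-learn assumed. Parameters are illustrative.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import Pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 2000))   # 60 samples x 2000 probes (toy data)
        y = rng.integers(0, 2, size=60)   # binary phenotype, e.g. disease vs control

        # Keeping feature selection inside the pipeline ensures candidate
        # biomarkers are re-selected within each CV fold, avoiding leakage.
        model = Pipeline([
            ("select", SelectKBest(f_classif, k=20)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        scores = cross_val_score(model, X, y, cv=5)
        print("CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

    Probes that are selected consistently across folds are the candidates one would then carry forward into biomarker validation.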

    Microarray Data Preprocessing: From Experimental Design to Differential Analysis

    DNA microarray data preprocessing is of utmost importance in the analytical path that starts from the experimental design and leads to a reliable biological interpretation. Indeed, once all relevant aspects of the experimental plan have been considered, the subsequent steps, from data quality check to differential analysis, will lead to robust, trustworthy results. In this chapter, all the relevant aspects and considerations regarding microarray preprocessing are discussed. Preprocessing steps are organized in an orderly manner, from experimental design to quality check and batch effect removal, including the most common visualization methods. Furthermore, we discuss data representation and differential testing methods, with a focus on the most common microarray technologies, such as gene expression and DNA methylation.
    Peer reviewed
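
    To make the normalization and batch-check steps concrete, here is a minimal sketch, assuming numpy and scikit-learn, of log2 transformation, quantile normalization, and a PCA-based batch inspection on simulated arrays; the dimensions and the choice of PCA for visualization are illustrative.

        # Minimal sketch: log2 transform, quantile normalization, PCA batch check.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        intensities = rng.lognormal(mean=6, sigma=1, size=(5000, 12))  # probes x arrays
        log_expr = np.log2(intensities + 1.0)

        # Quantile normalization: give every array the same distribution by
        # replacing each value with the mean of its rank across arrays.
        ranks = np.argsort(np.argsort(log_expr, axis=0), axis=0)
        mean_quantiles = np.sort(log_expr, axis=0).mean(axis=1)
        normalized = mean_quantiles[ranks]

        # If samples separate by processing batch in the first components,
        # a dedicated batch-correction step is warranted before testing.
        pcs = PCA(n_components=2).fit_transform(normalized.T)
        print(pcs[:3])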

    Characterization of ENM Dynamic Dose-Dependent MOA in Lung with Respect to Immune Cells Infiltration

    The molecular effects of exposures to engineered nanomaterials (ENMs) are still largely unknown. In classical inhalation toxicology, the cell composition of bronchoalveolar lavage (BAL) is a toxicity indicator at the lung tissue level that can aid in interpreting pulmonary histological changes. Toxicogenomic approaches help characterize the mechanism of action (MOA) of ENMs by investigating differentially expressed genes (DEGs). However, dissecting which molecular mechanisms and events are directly induced by the exposure is not straightforward. It is now generally accepted that direct effects follow a monotonic dose-dependent pattern. Here, we applied an integrated modeling approach to study the MOA of four ENMs by retrieving the DEGs that also show a dynamic dose-dependent profile (dddtMOA). We further combined the information of the dddtMOA with the dose dependency of four immune cell populations derived from BAL counts. The dddtMOA analysis highlighted the specific adaptation pattern to each ENM. Furthermore, it revealed the distinct effect of the ENM physicochemical properties on the induced immune response. Finally, we report three genes that are dose-dependent in all the exposures and correlate with immune deregulation in the lung. The characterization of dddtMOA for ENM exposures, for both apical endpoints and molecular responses, can further promote toxicogenomic approaches in a regulatory context.
    Peer reviewed
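
    The core filtering idea, keeping only genes whose expression tracks dose monotonically, can be sketched as below. This is a simplified stand-in for the integrated dddtMOA modelling, assuming scipy; the Spearman test and thresholds are illustrative, not the paper's actual procedure.

        # Minimal sketch: flag genes with a monotonic dose-dependent profile.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(2)
        doses = np.repeat([0.0, 1.0, 2.5, 5.0, 10.0], 3)   # 5 doses x 3 replicates
        expr = rng.normal(size=(1000, doses.size))         # genes x samples
        expr[0] += 0.4 * doses                             # plant one responsive gene

        dose_dependent = []
        for g in range(expr.shape[0]):
            rho, p = spearmanr(doses, expr[g])
            # A strong monotonic trend with dose suggests a direct effect.
            if p < 0.01 and abs(rho) > 0.7:
                dose_dependent.append(g)
        print(dose_dependent)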

    Nextcast: A software suite to analyse and model toxicogenomics data

    The recent advancements in toxicogenomics have led to the availability of large omics data sets, representing the starting point for studying the exposure mechanism of action and identifying candidate biomarkers for toxicity prediction. The current lack of standard methods in data generation and analysis hampers the full exploitation of toxicogenomics-based evidence in regulatory risk assessment. Moreover, the pipelines for the preprocessing and downstream analyses of toxicogenomic data sets can be quite challenging to implement. Over the years, we have developed a number of software packages to address specific questions related to multiple steps of toxicogenomics data analysis and modelling. In this review, we present the Nextcast software collection and discuss how its individual tools can be combined into efficient pipelines to answer specific biological questions. The Nextcast components are of great support to the scientific community for analysing and interpreting large data sets for the toxicity evaluation of compounds in an unbiased, straightforward, and reliable manner. The Nextcast software suite is available at https://github.com/fhaive/nextcast.
    Peer reviewed
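
    As a flavour of how individual tools can be chained into a pipeline, the sketch below composes three analysis steps into one callable flow. The step functions are hypothetical placeholders, not Nextcast's actual interfaces; only the composition pattern is the point.

        # Minimal sketch: composing analysis steps into a reusable pipeline.
        # Step names are hypothetical; they stand in for individual tools.
        from functools import reduce
        import numpy as np

        def preprocess(data):
            return np.log2(data + 1.0)        # normalization placeholder

        def select_degs(data):
            return data[np.abs(data - data.mean()).max(axis=1) > 2.0]  # toy DEG filter

        def summarize(data):
            return {"n_degs": data.shape[0]}  # downstream modelling placeholder

        def run_pipeline(data, steps):
            # Each step consumes the previous step's output, so tools can be
            # swapped or reordered to answer a different biological question.
            return reduce(lambda d, step: step(d), steps, data)

        raw = np.random.default_rng(3).lognormal(size=(500, 6))
        print(run_pipeline(raw, [preprocess, select_degs, summarize]))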

    Transcriptomics in Toxicogenomics, Part II: Preprocessing and Differential Expression Analysis for High Quality Data

    Preprocessing of transcriptomics data plays a pivotal role in the development of toxicogenomics-driven tools for chemical toxicity assessment. The generation and exploitation of large volumes of molecular profiles, following an appropriate experimental design, allows the employment of toxicogenomics (TGx) approaches for a thorough characterisation of the mechanism of action (MOA) of different compounds. To date, a plethora of data preprocessing methodologies have been suggested. However, in most cases, building the optimal analytical workflow is not straightforward. A careful selection of the right tools must be carried out, since it will affect the downstream analyses and modelling approaches. Transcriptomics data preprocessing spans multiple steps, such as quality check, filtering, normalization, and batch effect detection and correction. Currently, there is a lack of standard guidelines for data preprocessing in the TGx field. Defining the optimal tools and procedures to be employed in transcriptomics data preprocessing will lead to the generation of homogeneous and unbiased data, allowing the development of more reliable, robust, and accurate predictive models. In this review, we outline methods for the preprocessing of three main transcriptomic technologies: microarray, bulk RNA-Sequencing (RNA-Seq), and single-cell RNA-Sequencing (scRNA-Seq). Moreover, we discuss the most common methods for identifying differentially expressed genes and for performing functional enrichment analysis. This review is the second part of a three-article series on Transcriptomics in Toxicogenomics.
    Peer reviewed
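
    For the differential expression step, a minimal sketch, assuming scipy and simulated data, is a per-gene t-test followed by Benjamini-Hochberg correction; the thresholds are illustrative.

        # Minimal sketch: per-gene t-tests with Benjamini-Hochberg FDR control.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(4)
        control = rng.normal(size=(2000, 5))   # genes x control samples
        exposed = rng.normal(size=(2000, 5))
        exposed[:50] += 1.5                    # plant 50 true DEGs

        _, pvals = ttest_ind(exposed, control, axis=1)

        # Benjamini-Hochberg: reject the k smallest p-values, where k is the
        # largest index at which the sorted p-value sits under the step line.
        order = np.argsort(pvals)
        n = len(pvals)
        passed = pvals[order] <= 0.05 * np.arange(1, n + 1) / n
        k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
        degs = order[:k]
        print("DEGs found:", len(degs))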

    Transcriptomics in Toxicogenomics, Part III: Data Modelling for Risk Assessment

    Transcriptomics data are relevant to address a number of challenges in Toxicogenomics (TGx). After careful planning of exposure conditions and data preprocessing, TGx data can be used in predictive toxicology, where more advanced modelling techniques are applied. The large volume of molecular profiles produced by omics-based technologies allows the development and application of artificial intelligence (AI) methods in TGx. Indeed, publicly available omics datasets are constantly increasing, together with a plethora of different methods made available to facilitate their analysis and interpretation and the generation of accurate and stable predictive models. In this review, we present the state of the art of data modelling applied to transcriptomics data in TGx. We show how benchmark dose (BMD) analysis can be applied to TGx data. We review read-across and adverse outcome pathway (AOP) modelling methodologies. We discuss how network-based approaches can be successfully employed to clarify the mechanism of action (MOA) or specific biomarkers of exposure. We also describe the main AI methodologies applied to TGx data to create predictive classification and regression models, and we address current challenges. Finally, we present a short description of deep learning (DL) and data integration methodologies applied in these contexts. Modelling of TGx data represents a valuable tool for more accurate chemical safety assessment. This review is the third part of a three-article series on Transcriptomics in Toxicogenomics.
    Peer reviewed
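
    To ground the BMD idea, the sketch below fits a Hill-type dose-response curve to one gene and solves for the dose producing a 10% benchmark response. It is a minimal sketch assuming scipy; the model form, toy data, and benchmark response are illustrative choices rather than a prescribed protocol.

        # Minimal sketch: benchmark dose (BMD) estimation for a single gene.
        import numpy as np
        from scipy.optimize import curve_fit

        def hill(dose, bottom, top, ec50, slope):
            return bottom + (top - bottom) * dose**slope / (ec50**slope + dose**slope)

        doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
        response = np.array([1.0, 1.1, 1.4, 1.9, 2.4, 2.6])  # toy expression values

        params, _ = curve_fit(hill, doses, response, p0=[1.0, 2.6, 2.0, 1.0],
                              bounds=([0, 0, 1e-6, 0.1], [10, 10, 100, 10]))
        bottom, top, ec50, slope = params

        # BMD: dose where the fitted curve reaches 10% of its dynamic range.
        bmr = bottom + 0.10 * (top - bottom)
        bmd = ec50 * ((bmr - bottom) / (top - bmr)) ** (1.0 / slope)
        print("BMD estimate:", bmd)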

    Transcriptomics in Toxicogenomics, Part I: Experimental Design, Technologies, Publicly Available Data, and Regulatory Aspects

    The starting point of successful hazard assessment is the generation of unbiased and trustworthy data. Conventional toxicity testing relies on extensive observations of phenotypic endpoints in vivo and complementary in vitro models. The increasing development of novel materials and chemical compounds dictates the need for a better understanding of the molecular changes occurring in exposed biological systems. Transcriptomics enables the exploration of organisms’ responses to environmental, chemical, and physical agents by observing the molecular alterations in greater detail. Toxicogenomics (TGx) integrates classical toxicology with omics assays, thus allowing the characterization of the mechanism of action (MOA) of chemical compounds, novel small molecules, and engineered nanomaterials (ENMs). A lack of standardization in data generation and analysis currently hampers the full exploitation of toxicogenomics-based evidence in risk assessment. To fill this gap, TGx methods need to take into account appropriate experimental design and possible pitfalls in the transcriptomic analyses, as well as data generation and sharing that adhere to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. In this review, we summarize the recent advancements in the design and analysis of DNA microarray, RNA sequencing (RNA-Seq), and single-cell RNA-Seq (scRNA-Seq) data. We provide guidelines on exposure time, dose, and complex endpoint selection, sample quality considerations, and sample randomization. Furthermore, we summarize publicly available data resources and highlight applications of TGx data to understand and predict chemical toxicity potential. Additionally, we discuss the efforts to implement TGx into regulatory decision making to promote alternative methods for risk assessment and to support the 3R (reduction, refinement, and replacement) concept. This review is the first part of a three-article series on Transcriptomics in Toxicogenomics. These initial considerations on experimental design, technologies, publicly available data, and regulatory aspects are the starting point for the rigorous and reliable data preprocessing and modelling described in the second and third parts of the review series.
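
    One of the design safeguards mentioned above, sample randomization, can be sketched as follows: shuffling samples before assigning them to processing batches avoids confounding treatment groups with batch. This is a minimal sketch; the group sizes and batch count are illustrative, and in practice stratified assignment (balancing groups within each batch) offers stronger protection than plain shuffling.

        # Minimal sketch: randomizing samples across processing batches.
        import random

        samples = [f"ctrl_{i}" for i in range(6)] + [f"exposed_{i}" for i in range(6)]
        random.seed(42)
        random.shuffle(samples)

        n_batches = 3
        batches = {b: samples[b::n_batches] for b in range(n_batches)}
        for b, members in sorted(batches.items()):
            print(f"batch {b}: {members}")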

    Computationally prioritized drugs inhibit SARS-CoV-2 infection and syncytia formation

    The pharmacological arsenal against the COVID-19 pandemic is largely based on generic anti-inflammatory strategies or poorly scalable solutions. Moreover, as the ongoing vaccination campaign is rolling out more slowly than hoped, affordable and effective therapeutics are needed. To this end, there is increasing attention toward computational methods for drug repositioning and de novo drug design. Here, multiple data-driven computational approaches are systematically integrated to perform a virtual screening and prioritize candidate drugs for the treatment of COVID-19. From the list of prioritized drugs, a subset of representative candidates to test in human cells is selected. Two compounds, 7-hydroxystaurosporine and bafetinib, show synergistic antiviral effects in vitro and strongly inhibit viral-induced syncytia formation. Moreover, since existing drug repositioning methods provide limited usable information for de novo drug design, the relevant chemical substructures of the identified drugs are extracted to provide a chemical vocabulary that may help to design new effective drugs.
    Peer reviewed
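
    The integration of multiple computational screens can be illustrated with a simple consensus ranking. The sketch below uses a Borda-style rank aggregation over hypothetical per-method scores; both the aggregation method and the numbers are illustrative assumptions, not the paper's exact prioritization procedure.

        # Minimal sketch: consensus drug prioritization by rank aggregation.
        import numpy as np

        drugs = ["drugA", "drugB", "drugC", "drugD"]    # hypothetical names
        # Rows: scoring methods (e.g. signature reversal, network proximity);
        # columns: drugs. Higher score = stronger predicted antiviral effect.
        scores = np.array([
            [0.9, 0.2, 0.5, 0.7],
            [0.6, 0.1, 0.8, 0.9],
            [0.8, 0.3, 0.4, 0.6],
        ])

        # Borda count: rank drugs within each method, then sum the ranks;
        # drugs ranked highly by several independent screens rise to the top.
        ranks = scores.argsort(axis=1).argsort(axis=1)  # 0 = worst per method
        consensus = ranks.sum(axis=0)
        for drug, score in sorted(zip(drugs, consensus), key=lambda t: -t[1]):
            print(drug, score)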