141 research outputs found

    DNA methylation changes from primary cultures through senescence-bypass in Syrian hamster fetal cells initially exposed to benzo[a]pyrene

    Current chemical testing strategies are limited in their ability to detect non-genotoxic carcinogens (NGTxC). Epigenetic anomalies develop during carcinogenesis regardless of whether the molecular initiating event is genotoxic (GTxC) or non-genotoxic; therefore, epigenetic markers may be harnessed to develop new approach methodologies that improve the detection of both types of carcinogens. This study used Syrian hamster fetal cells to establish the chronology of carcinogen-induced DNA methylation changes from primary cells through senescence-bypass, an essential carcinogenic step. Cells exposed to solvent control for 7 days were compared to naïve primary cultures, to cells exposed for 7 days to benzo[a]pyrene, and to cells at the subsequent transformation stages: normal colonies, morphologically transformed colonies, senescence, senescence-bypass, and sustained proliferation in vitro. DNA methylation changes identified by reduced representation bisulphite sequencing were minimal at day 7. Profound DNA methylation changes arose during cellular senescence, and some of these early differentially methylated regions (DMRs) were preserved through the final sustained proliferation stage. A set of these DMRs (e.g., Pou4f1, Aifm3, B3galnt2, Bhlhe22, Gja8, Klf17, and L1l) was validated by pyrosequencing, and their reproducibility was confirmed across multiple clones obtained from a different laboratory. These DNA methylation changes could serve as biomarkers to enhance objectivity and mechanistic understanding of cell transformation and could be used to predict senescence-bypass and chemical carcinogenicity.
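
    To make the sequencing readout concrete, the sketch below shows a minimal per-CpG differential methylation test on bisulfite counts, assuming a simple two-group comparison; real RRBS analyses (e.g., with methylKit or DSS) add coverage filtering, smoothing, and multiple-testing correction before aggregating sites into DMRs.

```python
# Minimal sketch of a per-CpG differential methylation call from
# bisulfite sequencing counts (two-group comparison, one site).
from scipy.stats import fisher_exact

def cpg_methylation_test(meth_exp, unmeth_exp, meth_ctl, unmeth_ctl):
    """Return methylation difference and Fisher p-value for one CpG."""
    frac_exp = meth_exp / (meth_exp + unmeth_exp)
    frac_ctl = meth_ctl / (meth_ctl + unmeth_ctl)
    _, p = fisher_exact([[meth_exp, unmeth_exp],
                         [meth_ctl, unmeth_ctl]])
    return frac_exp - frac_ctl, p

# Example: one CpG covered 30x in each group (80% vs 30% methylated)
diff, p = cpg_methylation_test(24, 6, 9, 21)
```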

    Testing the theory of immune selection in cancers that break the rules of transplantation

    Modifications of cancer cells that are likely to reduce their immunogenicity, including loss or down-regulation of MHC molecules, are now well documented and have become the main support for the concept of immune surveillance. The evidence that these modifications in fact result from selection by the immune system is less clear, since they may instead result from the reorganized metabolism associated with proliferation or from cell de-differentiation. Here, we (a) survey old and new transplantation experiments that test the possibility of selection and (b) survey how transmissible tumours of dogs and Tasmanian devils provide naturally evolved tests of immune surveillance.

    Incorporating New Technologies Into Toxicity Testing and Risk Assessment: Moving From 21st Century Vision to a Data-Driven Framework

    Based on existing data and previous work, a series of studies is proposed as a pragmatic early step in transforming toxicity testing. These studies were assembled into a data-driven framework that invokes successive tiers of testing with margin of exposure (MOE) as the primary metric. The first tier of the framework integrates data from high-throughput in vitro assays, in vitro-to-in vivo extrapolation (IVIVE) pharmacokinetic modeling, and exposure modeling. The in vitro assays are used to separate chemicals based on their relative selectivity in interacting with biological targets and to identify the concentrations at which these interactions occur. The IVIVE modeling converts in vitro concentrations into external doses for calculation of the point of departure (POD) and comparison to human exposure estimates to yield a MOE. The second tier involves short-term in vivo studies, expanded pharmacokinetic evaluations, and refined human exposure estimates. The results from the second-tier studies provide more accurate estimates of the POD and the MOE. The third tier contains the traditional animal studies currently used to assess chemical safety. In each tier, the POD for selective chemicals is based primarily on endpoints associated with a proposed mode of action, whereas the POD for nonselective chemicals is based on potential biological perturbation. Based on the MOE, a significant percentage of chemicals evaluated in the first two tiers could be eliminated from further testing. The framework provides a risk-based and animal-sparing approach to evaluate chemical safety, drawing broadly from previous experience but incorporating technological advances to increase efficiency.
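
    As a concrete illustration of the tier-1 arithmetic, the sketch below converts an in vitro bioactive concentration into an oral equivalent dose by reverse dosimetry and compares it to an exposure estimate; the linear steady-state pharmacokinetic assumption and all numbers are illustrative, not values from the proposed framework.

```python
# Minimal sketch of the tier-1 margin-of-exposure calculation, assuming
# linear pharmacokinetics: an in vitro bioactive concentration (uM) is
# converted to an oral equivalent dose via the steady-state plasma
# concentration (Css) predicted for a 1 mg/kg/day intake.

def oral_equivalent_dose(ac50_uM, css_uM_per_mg_kg_day):
    """Reverse dosimetry: dose (mg/kg/day) whose Css equals the AC50."""
    return ac50_uM / css_uM_per_mg_kg_day

def margin_of_exposure(pod_mg_kg_day, exposure_mg_kg_day):
    """MOE = point of departure / estimated human exposure."""
    return pod_mg_kg_day / exposure_mg_kg_day

# Hypothetical chemical: AC50 = 3 uM, Css = 1.5 uM per mg/kg/day,
# upper-bound exposure estimate = 1e-4 mg/kg/day.
pod = oral_equivalent_dose(3.0, 1.5)   # 2.0 mg/kg/day
moe = margin_of_exposure(pod, 1e-4)    # 20000 -> candidate to deprioritize
print(pod, moe)
```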

    Cross-Platform Comparison of Microarray-Based Multiple-Class Prediction

    High-throughput microarray technology has been widely applied in biological and medical decision-making research during the past decade. However, the diversity of platforms has made it a challenge to re-use and/or integrate datasets generated in different experiments or labs for constructing array-based diagnostic models. Using large toxicogenomics datasets generated on both the Affymetrix and Agilent microarray platforms, we carried out a benchmark evaluation of cross-platform consistency in multiple-class prediction using three widely used machine learning algorithms. After an initial assessment of model performance on the different platforms, we evaluated whether predictive signature features selected on one platform could be used directly to train a model on the other platform, and whether predictive models trained using data from one platform could predict datasets profiled on the other platform with comparable performance. Our results established that it is possible to successfully apply multiple-class prediction models across different commercial microarray platforms, offering important benefits such as accelerating the translation of biomarkers identified with microarrays to clinically validated assays. However, this investigation focused on a technical platform comparison and represents only a first step in exploring cross-platform consistency. Further studies are needed to confirm the feasibility of microarray-based cross-platform prediction, especially using independent datasets.
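
    The sketch below shows one way such a cross-platform experiment might be wired up: restrict both expression matrices to their shared genes, standardize each platform separately, train on one platform, and predict on the other. The data layout, scaler, and classifier choice are assumptions for illustration, not the algorithms benchmarked in the study.

```python
# Minimal sketch of cross-platform class prediction, assuming expression
# matrices (samples x genes) indexed by shared gene symbols; a real study
# would also handle probe-to-gene mapping and batch effects.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def cross_platform_predict(train_expr, train_labels, test_expr):
    """Train on one platform's samples, predict the other platform's."""
    genes = train_expr.columns.intersection(test_expr.columns)
    # Standardize each platform separately so the classifier sees
    # comparable per-gene scales despite platform-specific intensities.
    X_train = StandardScaler().fit_transform(train_expr[genes])
    X_test = StandardScaler().fit_transform(test_expr[genes])
    model = LinearSVC().fit(X_train, train_labels)
    return model.predict(X_test)
```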

    Should We Abandon the t-Test in the Analysis of Gene Expression Microarray Data: A Comparison of Variance Modeling Strategies

    High-throughput post-genomic studies are now routinely conducted in biological and biomedical research. The main statistical approach to select genes differentially expressed between two groups is to apply a t-test, which has been the subject of criticism in the literature. Numerous alternatives have been developed based on different and innovative variance modeling strategies. However, a critical issue is that selecting a different test usually leads to a different gene list. In this context, and given the current tendency to apply the t-test, identifying the most efficient approach in practice remains crucial. To help answer this question, we conducted a comparison of eight tests representative of variance modeling strategies in gene expression data: Welch's t-test, ANOVA [1], Wilcoxon's test, SAM [2], RVM [3], limma [4], VarMixt [5] and SMVar [6]. Our comparison process relies on four steps (gene list analysis, simulations, spike-in data and re-sampling) to formulate comprehensive and robust conclusions about test performance in terms of statistical power, false-positive rate, execution time and ease of use. Our results raise concerns about the ability of some methods to control the expected number of false positives at a desirable level. Moreover, two tests (limma and VarMixt) show significant improvement compared to the t-test, particularly for small sample sizes. In addition, limma presents several practical advantages, so we advocate its application to analyze gene expression data.
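
    To make the contrast with the plain t-test concrete, here is a sketch of the empirical-Bayes variance moderation underlying limma's moderated t-statistic: each gene's sample variance is shrunk toward a common prior, which stabilizes inference when sample sizes are small. The prior parameters are passed in here for simplicity; limma itself estimates them from the full data set.

```python
# Minimal sketch of limma-style variance moderation for one gene:
# the gene-wise variance is shrunk toward a prior variance s0_sq with
# d0 prior degrees of freedom, and the test gains those extra df.
import numpy as np
from scipy import stats

def moderated_t(x1, x2, s0_sq, d0):
    """Moderated two-sample t-statistic and p-value for one gene."""
    n1, n2 = len(x1), len(x2)
    df = n1 + n2 - 2
    s_sq = ((n1 - 1) * np.var(x1, ddof=1)
            + (n2 - 1) * np.var(x2, ddof=1)) / df       # pooled variance
    s_tilde_sq = (d0 * s0_sq + df * s_sq) / (d0 + df)   # shrunken variance
    t = (np.mean(x1) - np.mean(x2)) / np.sqrt(s_tilde_sq * (1/n1 + 1/n2))
    p = 2 * stats.t.sf(abs(t), d0 + df)                 # prior df added
    return t, p
```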

    Framework for the quality assurance of 'omics technologies considering GLP requirements

    ‘Omics technologies are gaining importance in supporting regulatory toxicity studies. Prerequisites for performing ‘omics studies in line with GLP principles were discussed at the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) workshop "Applying ‘omics technologies in Chemical Risk Assessment". A GLP environment comprises a standard operating procedure system, proper pre-planning and documentation, and inspections by independent quality assurance staff. To prevent uncontrolled data changes, the raw data captured in the respective ‘omics data recording systems have to be specifically defined. Further requirements include transparent and reproducible data processing steps, and safe data storage and archiving procedures. The software for data recording and processing should be validated, and data changes should be traceable or disabled. GLP-compliant quality assurance of ‘omics technologies appears feasible for many GLP requirements. However, challenges include (i) defining, storing, and archiving the raw data; (ii) transparent descriptions of data processing steps; (iii) software validation; and (iv) ensuring complete reproducibility of final results with respect to the raw data. Nevertheless, ‘omics studies can be supported by quality measures (e.g., GLP principles) to ensure quality control, reproducibility and traceability of experiments. This enables regulators to use ‘omics data in a fit-for-purpose context, which enhances their applicability for risk assessment.
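
    One small, concrete measure in this spirit is to fix the raw data at acquisition time with checksums, so that any later modification is detectable. The sketch below is an illustrative assumption about how that might be scripted, not a procedure prescribed by the workshop; the file layout and manifest format are invented for the example.

```python
# Minimal sketch of raw-data traceability: record a SHA-256 checksum for
# every raw 'omics data file so later changes can be detected at audit.
import datetime
import hashlib
import json
import pathlib

def write_manifest(raw_dir, manifest_path="raw_data_manifest.json"):
    """Write a timestamped manifest of checksums for all files in raw_dir."""
    entries = {
        str(f): hashlib.sha256(f.read_bytes()).hexdigest()
        for f in sorted(pathlib.Path(raw_dir).rglob("*")) if f.is_file()
    }
    manifest = {"created": datetime.datetime.now().isoformat(),
                "files": entries}
    pathlib.Path(manifest_path).write_text(json.dumps(manifest, indent=2))
```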

    Exploring the use of internal and external controls for assessing microarray technical performance

    Background: The maturing of gene expression microarray technology and interest in the use of microarray-based assays for clinical and diagnostic applications call for quantitative measures of quality. This manuscript presents a retrospective study characterizing several approaches to assess the technical performance of microarray data measured on the Affymetrix GeneChip platform, including whole-array metrics and information from a standard mixture of external spike-in and endogenous internal controls. Spike-in controls were found to carry the same information about technical performance as whole-array metrics and endogenous "housekeeping" genes. These results support the use of spike-in controls as general tools for performance assessment across time, experimenters, and array batches, suggesting that they have potential for comparison of microarray data generated across species using different technologies.
    Results: A layered PCA modeling methodology that uses data from a number of classes of controls (spike-in hybridization, spike-in polyA+, internal RNA degradation, endogenous or "housekeeping" genes) was used for the assessment of microarray data quality. The controls provide information on multiple stages of the experimental protocol (e.g., hybridization, RNA amplification). External spike-in, hybridization, and RNA labeling controls provide information related to both assay and hybridization performance, whereas internal endogenous controls provide quality information on the biological sample. We find that the variance of the data generated from the external and internal controls carries critical information about technical performance; the PCA dissection of this variance is consistent with whole-array quality assessment based on a number of quality assurance/quality control (QA/QC) metrics.
    Conclusions: These results provide support for the use of both external and internal RNA control data to assess the technical quality of microarray experiments. The observed consistency amongst the information carried by internal and external controls and whole-array quality measures offers promise for rationally designed control standards for routine performance monitoring of multiplexed measurement platforms.
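
    The layered PCA idea can be illustrated with a short sketch: run PCA separately on each class of control probes and compare how much between-array variance each control layer captures. The input layout and names are assumptions for illustration; the study's actual modeling is more elaborate.

```python
# Minimal sketch of layered PCA on control probes: for each control
# class (e.g. spike-in, polyA+, endogenous), run PCA across arrays and
# report the variance captured by the leading components.
import numpy as np
from sklearn.decomposition import PCA

def control_class_pca(expr, control_probes, n_components=2):
    """expr: arrays x probes matrix; control_probes: class -> column indices."""
    layers = {}
    for cls, probes in control_probes.items():
        X = expr[:, probes]
        X = X - X.mean(axis=0)   # center each probe across arrays
        pca = PCA(n_components=n_components).fit(X)
        layers[cls] = pca.explained_variance_ratio_
    return layers
```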

    Evaluation of two commercial global miRNA expression profiling platforms for detection of less abundant miRNAs

    Background: microRNAs (miRNAs) are short, endogenous transcripts that negatively regulate the expression of specific mRNA targets. miRNAs are found both in tissues and in body fluids such as plasma. A major prospect for the use of miRNAs in the clinical setting is as diagnostic plasma markers for neoplasia. While miRNAs are abundant in tissues, they are often scarce in plasma; for quantification of miRNA in plasma it is therefore important to use a platform with high sensitivity and linear performance in the low concentration range. This motivated us to evaluate the performance of three commonly used commercial miRNA quantification platforms: GeneChip miRNA 2.0 Array, miRCURY Ready-to-Use PCR, Human panel I+II V1.M, and TaqMan Human MicroRNA Array v3.0.
    Results: Using synthetic miRNA samples and plasma RNA samples spiked with different ratios of 174 synthetic miRNAs, we assessed the performance characteristics: reproducibility, recovery, specificity, sensitivity, and linearity. We found that while the qRT-PCR-based platforms were sufficiently sensitive to reproducibly detect miRNAs at the abundance levels found in human plasma, the array-based platform was not. At high miRNA levels, both qRT-PCR-based platforms performed well in terms of specificity, reproducibility, and recovery. At low miRNA levels, as in plasma, the miRCURY platform showed better sensitivity and linearity than the TaqMan platform.
    Conclusion: For profiling clinical samples with low miRNA abundance, such as plasma samples, the miRCURY platform, with its better sensitivity and linearity, would probably be superior.
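
    As an illustration of the linearity assessment used to compare the platforms, the sketch below regresses observed signal on expected log2 spike-in concentration for a single miRNA dilution series; a slope near 1 and a high R² indicate linear response over that range. All numbers and the Cq-to-signal conversion are illustrative.

```python
# Minimal sketch of a spike-in linearity check: fit observed log2 signal
# against expected log2 concentration for one synthetic miRNA.
import numpy as np
from scipy import stats

def linearity(expected_log2_conc, observed_log2_signal):
    """Slope and R^2 of observed vs expected log2 values for one miRNA."""
    res = stats.linregress(expected_log2_conc, observed_log2_signal)
    return res.slope, res.rvalue ** 2

# Example: a 2-fold dilution series at plasma-like low abundance;
# qPCR signal expressed as (40 - Cq) so it rises with concentration.
expected = np.log2([1, 2, 4, 8, 16])
observed = np.array([20.1, 21.0, 22.2, 22.9, 24.1])
slope, r2 = linearity(expected, observed)
```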