112 research outputs found

    Navigating through the minefield of read-across frameworks: A commentary perspective

    Read-across is a popular data gap filling technique used within analogue and category approaches for both regulatory and product stewardship purposes. In recent years there have been many efforts focused on the challenges involved in read-across development, its scientific justification and documentation for both chemical hazard and risk assessment purposes. Here, we summarise a selection of the read-across frameworks published in technical guidance documents or in the literature, and review their respective similarities and differences. There was a great deal of consensus between the different frameworks in terms of the general steps outlined and the similarity contexts considered, although the terminology, decision context (chemical hazard and/or risk assessment purposes) and scope varied. A harmonised hybrid framework is proposed to help reconcile the common guiding principles and steps of the read-across process, which should be helpful in expanding the scope and decision context of the existing frameworks. This harmonised framework is also intended to illustrate where generalised and systematic read-across approaches taking into consideration new approach methodology (NAM) information can be applied.

    AMBIT RESTful web services: an implementation of the OpenTox application programming interface

    The AMBIT web services package is one of several existing independent implementations of the OpenTox Application Programming Interface and is built according to the principles of the Representational State Transfer (REST) architecture. The Open Source Predictive Toxicology Framework, developed by the partners in the EC FP7 OpenTox project, aims at providing unified access to toxicity data and predictive models, as well as validation procedures. This is achieved by i) an information model, based on a common OWL-DL ontology; ii) links to related ontologies; iii) data and algorithms, available through a standardized REST web services interface, where every compound, data set or predictive method has a unique web address, used to retrieve its Resource Description Framework (RDF) representation, or initiate the associated calculations.
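    The REST pattern described above — every compound, data set or model is a resource with its own web address, and clients request an RDF representation via content negotiation — can be sketched as follows. The base URL and paths here are illustrative placeholders, not the actual AMBIT endpoints.

    ```python
    # Sketch of an OpenTox-style REST addressing scheme (hypothetical host).
    from urllib.parse import urljoin

    BASE = "https://ambit.example.org/"  # placeholder service root, not a real endpoint

    def resource_url(kind: str, identifier: str) -> str:
        """Build the address of a single REST resource (compound, dataset, model...)."""
        return urljoin(BASE, f"{kind}/{identifier}")

    def rdf_request_headers() -> dict:
        """Ask the service for an RDF serialization via HTTP content negotiation."""
        return {"Accept": "application/rdf+xml"}

    url = resource_url("compound", "42")
    print(url)  # https://ambit.example.org/compound/42
    ```

    A client would issue a GET to that address with the Accept header to retrieve the RDF representation, or POST to an algorithm resource to initiate a calculation.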

    A Mechanistic Framework for Integrating Chemical Structure and High-Throughput Screening Results to Improve Toxicity Predictions

    Adverse Outcome Pathways (AOPs) establish a connection between a molecular initiating event (MIE) and an adverse outcome. Detailed understanding of the MIE provides the ideal data for determining chemical properties required to elicit the MIE. This study utilized high-throughput screening data from the ToxCast program, coupled with chemical structural information, to generate chemical clusters using three similarity methods pertaining to nine MIEs within an AOP network for hepatic steatosis. Three case studies demonstrate the utility of the mechanistic information held by the MIE for integrating biological and chemical data. Evaluation of the chemical clusters activating the glucocorticoid receptor identified activity differences in chemicals within a cluster. Comparison of the estrogen receptor results with previous work showed that bioactivity data and structural alerts can be combined to improve predictions in a customizable way where bioactivity data are limited. The aryl hydrocarbon receptor (AHR) highlighted that while structural data can be used to offset limited data for new screening efforts, not all ToxCast targets have sufficient data to define robust chemical clusters. In this context, an alternative to additional receptor assays is proposed where assays for proximal key events downstream of AHR activation could be used to enhance confidence in active calls. These case studies illustrate how the AOP framework can support an iterative process whereby in vitro toxicity testing and chemical structure can be combined to improve toxicity predictions. In vitro assays can inform the development of structural alerts linking chemical structure to toxicity. Consequently, structurally related chemical groups can facilitate identification of assays that would be informative for a specific MIE. 
Together, these activities form a virtuous cycle where the mechanistic basis for the in vitro results and the breadth of the structural alerts continually improve over time to better predict activity of chemicals for which limited toxicity data exist.
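    The generic idea behind grouping chemicals by structural similarity can be sketched with Tanimoto similarity over binary fingerprints. This is an illustrative toy, not the study's method: the fingerprints below are invented bit sets, and a simple greedy pass stands in for the three similarity methods used in the paper.

    ```python
    # Toy sketch: cluster chemicals whose fingerprint Tanimoto similarity
    # exceeds a threshold. Fingerprints are placeholder bit sets.

    def tanimoto(a: set, b: set) -> float:
        """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
        if not a and not b:
            return 0.0
        return len(a & b) / len(a | b)

    def cluster(fps: dict, threshold: float = 0.5):
        """Greedy single-pass clustering: join the first cluster whose seed
        is similar enough, otherwise start a new cluster."""
        clusters = []  # list of (seed_name, [member_names])
        for name, fp in fps.items():
            for seed, members in clusters:
                if tanimoto(fps[seed], fp) >= threshold:
                    members.append(name)
                    break
            else:
                clusters.append((name, [name]))
        return clusters

    fps = {"A": {1, 2, 3, 4}, "B": {1, 2, 3, 5}, "C": {7, 8, 9}}
    print(cluster(fps))  # [('A', ['A', 'B']), ('C', ['C'])]
    ```

    In practice the fingerprints would come from real structural descriptors and the threshold would be tuned per target; the point is only that structurally related chemicals fall into shared clusters that can then be linked to MIE activity.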

    A scheme to evaluate structural alerts to predict toxicity – Assessing confidence by characterising uncertainties

    Structure-activity relationships (SARs) in toxicology have enabled the formation of structural rules which, when coded as structural alerts, are an essential tool in in silico toxicology. Whilst other in silico methods have approaches for their evaluation, there is no formal process to assess the confidence that may be associated with a structural alert. This investigation proposes twelve criteria to assess the uncertainty associated with structural alerts, allowing for an assessment of confidence. The criteria are based around the stated purpose, description of the chemistry, toxicology and mechanism, performance and coverage, as well as corroborating and supporting evidence of the alert. Alerts can be given a confidence assessment and score, enabling the identification of areas where more information may be beneficial. The scheme to evaluate structural alerts was placed in the context of various use cases for industrial and regulatory applications. The analysis of alerts, and consideration of the evaluation scheme, identifies the different characteristics an alert may have, such as being highly specific or generic. These characteristics may determine when an alert can be used for specific purposes such as identification of analogues for read-across or hazard identification.
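    The kind of criterion-based confidence scoring described above can be sketched as rating each criterion's uncertainty and aggregating into an overall score. The criterion names, ratings and weighting below are invented for illustration; they are not the twelve criteria of the published scheme.

    ```python
    # Hedged sketch of per-criterion uncertainty ratings rolled up into a
    # confidence score. Criteria and weights are hypothetical examples.

    RATINGS = {"low uncertainty": 2, "moderate": 1, "high uncertainty": 0}

    def confidence_score(assessments: dict) -> tuple:
        """Sum per-criterion ratings and express confidence as a fraction of the maximum."""
        total = sum(RATINGS[v] for v in assessments.values())
        maximum = 2 * len(assessments)
        return total, round(total / maximum, 2)

    alert = {
        "stated purpose documented": "low uncertainty",   # hypothetical criterion
        "chemistry description": "moderate",              # hypothetical criterion
        "mechanistic rationale": "low uncertainty",       # hypothetical criterion
        "performance/coverage data": "high uncertainty",  # hypothetical criterion
    }
    print(confidence_score(alert))  # (5, 0.62)
    ```

    A low overall fraction, or a "high uncertainty" rating on an individual criterion, flags exactly the areas where gathering more information would be most beneficial.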

    Toward Good Read-Across Practice (GRAP) guidance

    Grouping of substances and utilizing read-across of data within those groups represents an important data gap filling technique for chemical safety assessments. Categories/analogue groups are typically developed based on structural similarity and, increasingly often, also on mechanistic (biological) similarity. While read-across can play a key role in complying with legislation such as the European REACH regulation, the lack of consensus regarding the extent and type of evidence necessary to support it often hampers its successful application and acceptance by regulatory authorities. Despite a potentially broad user community, expertise is still concentrated across a handful of organizations and individuals. In order to facilitate the effective use of read-across, this document aims to summarize the state of the art, summarize insights learned from reviewing ECHA published decisions regarding the relative successes/pitfalls surrounding read-across under REACH, and compile the relevant activities and guidance documents. Special emphasis is given to the available existing tools and approaches, an analysis of ECHA's published final decisions associated with all levels of compliance checks and testing proposals, the consideration and expression of uncertainty, the use of biological support data and the impact of the ECHA Read-Across Assessment Framework (RAAF) published in 2015.

    CheS-Mapper - Chemical Space Mapping and Visualization in 3D

    Analyzing chemical datasets is a challenging task for scientific researchers in the field of chemoinformatics. It is important, yet difficult, to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. In that respect, visualization tools can help to better comprehend the underlying correlations. Our recently developed 3D molecular viewer CheS-Mapper (Chemical Space Mapper) divides large datasets into clusters of similar compounds and consequently arranges them in 3D space, such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments and quantitative chemical descriptors. These features can be highlighted within CheS-Mapper, helping the chemist to better understand patterns and regularities and relate the observations to established scientific knowledge. Finally, the tool can also be used to select and export specific subsets of a given dataset for further analysis.
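    The idea that the user indirectly determines similarity by choosing which features to employ can be sketched as a distance computed only over the selected descriptors. The descriptor names and values below are made up for illustration and do not come from CheS-Mapper itself.

    ```python
    # Toy sketch: feature selection drives the similarity measure.
    # Descriptor names/values are hypothetical.
    import math

    def distance(c1: dict, c2: dict, selected: list) -> float:
        """Euclidean distance restricted to the user-selected features."""
        return math.sqrt(sum((c1[f] - c2[f]) ** 2 for f in selected))

    a = {"logP": 2.1, "mw": 180.0, "tpsa": 40.0}
    b = {"logP": 2.5, "mw": 150.0, "tpsa": 60.0}

    print(distance(a, b, ["logP"]))          # only logP contributes
    print(distance(a, b, ["logP", "tpsa"]))  # adding tpsa changes the geometry
    ```

    Changing the selected feature set changes which compounds end up close together, which is exactly how the viewer lets the chemist steer the spatial arrangement.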

    Computational toxicology using the OpenTox application programming interface and Bioclipse

    BACKGROUND: Toxicity is a complex phenomenon involving the potential adverse effect on a range of biological functions. Predicting toxicity involves using a combination of experimental data (endpoints) and computational methods to generate a set of predictive models. Such models rely strongly on being able to integrate information from many sources. The required integration of biological and chemical information sources requires, however, a common language to express our knowledge ontologically, and interoperating services to build reliable predictive toxicology applications. FINDINGS: This article describes progress in extending the integrative bio- and cheminformatics platform Bioclipse to interoperate with OpenTox, a semantic web framework which supports open data exchange and toxicology model building. The Bioclipse workbench environment enables functionality from OpenTox web services and easy access to OpenTox resources for evaluating toxicity properties of query molecules. Relevant cases and interfaces based on ten neurotoxins are described to demonstrate the capabilities provided to the user. The integration takes advantage of semantic web technologies, thereby providing an open and simplifying communication standard. Additionally, the use of ontologies ensures proper interoperation and reliable integration of toxicity information from both experimental and computational sources. CONCLUSIONS: A novel computational toxicity assessment platform was generated from integration of two open science platforms related to toxicology: Bioclipse, that combines a rich scriptable and graphical workbench environment for integration of diverse sets of information sources, and OpenTox, a platform for interoperable toxicology data and computational services. The combination provides improved reliability and operability for handling large data sets by the use of the Open Standards from the OpenTox Application Programming Interface. 
This enables simultaneous access to a variety of distributed predictive toxicology databases, and algorithm and model resources, with the Bioclipse workbench handling the technical layers.

    GRADE Guidelines 30: the GRADE approach to assessing the certainty of modeled evidence—An overview in the context of health decision-making

    Objectives: The objective of the study is to present the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) conceptual approach to the assessment of certainty of evidence from modeling studies (i.e., certainty associated with model outputs). / Study Design and Setting: Expert consultations and an international multidisciplinary workshop informed development of a conceptual approach to assessing the certainty of evidence from models within the context of systematic reviews, health technology assessments, and health care decisions. The discussions also clarified selected concepts and terminology used in the GRADE approach and by the modeling community. Feedback from experts in a broad range of modeling and health care disciplines addressed the content validity of the approach. / Results: Workshop participants agreed that the domains determining the certainty of evidence previously identified in the GRADE approach (risk of bias, indirectness, inconsistency, imprecision, reporting bias, magnitude of an effect, dose–response relation, and the direction of residual confounding) also apply when assessing the certainty of evidence from models. The assessment depends on the nature of model inputs and the model itself and on whether one is evaluating evidence from a single model or multiple models. We propose a framework for selecting the best available evidence from models: 1) developing, de novo, a model specific to the situation of interest, 2) identifying an existing model, the outputs of which provide the highest certainty evidence for the situation of interest, either “off-the-shelf” or after adaptation, and 3) using outputs from multiple models. We also present a summary of preferred terminology to facilitate communication among modeling and health care disciplines. 
/ Conclusion: This conceptual GRADE approach provides a framework for using evidence from models in health decision-making and the assessment of certainty of evidence from a model or models. The GRADE Working Group and the modeling community are currently developing the detailed methods and related guidance for assessing specific domains determining the certainty of evidence from models across health care–related disciplines (e.g., therapeutic decision-making, toxicology, environmental health, and health economics).

    A genomic biomarker signature can predict skin sensitizers using a cell-based in vitro alternative to animal tests

    BACKGROUND: Allergic contact dermatitis is an inflammatory skin disease that affects a significant proportion of the population. This disease is caused by an adverse immune response towards chemical haptens, and leads to a substantial economic burden for society. Current tests of sensitizing chemicals rely on animal experimentation. New legislation on the registration and use of chemicals within the pharmaceutical and cosmetic industries has stimulated significant research efforts to develop alternative, human cell-based assays for the prediction of sensitization. The aim is to replace animal experiments with in vitro tests displaying a higher predictive power. RESULTS: We have developed a novel cell-based assay for the prediction of sensitizing chemicals. By analyzing the transcriptome of the human cell line MUTZ-3 after 24 h stimulation, using 20 different sensitizing chemicals, 20 non-sensitizing chemicals and vehicle controls, we have identified a biomarker signature of 200 genes with potent discriminatory ability. Using a Support Vector Machine for supervised classification, the prediction performance of the assay revealed an area under the ROC curve of 0.98. In addition, categorizing the chemicals according to the LLNA assay, this gene signature could also predict sensitizing potency. The identified markers are involved in biological pathways with immunologically relevant functions, which can shed light on the process of human sensitization. CONCLUSIONS: A gene signature predicting sensitization, using a human cell line in vitro, has been identified. This simple and robust cell-based assay has the potential to completely replace or drastically reduce the utilization of test systems based on experimental animals. Being based on human biology, the assay is proposed to be more accurate for predicting sensitization in humans than the traditional animal-based tests.
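    The evaluation step reported above (an area under the ROC curve of 0.98) can be illustrated with a minimal pure-Python AUC computation. The gene-expression data and the SVM itself are omitted; the labels and scores below are toy numbers, not the study's data.

    ```python
    # Minimal sketch of ROC AUC via the rank-sum (Mann-Whitney) formulation:
    # the probability that a randomly chosen positive outscores a randomly
    # chosen negative. Scores below are hypothetical classifier outputs.

    def roc_auc(labels, scores):
        """Compute AUC from binary labels (1/0) and real-valued scores."""
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    labels = [1, 1, 1, 0, 0, 0]              # 1 = sensitizer, 0 = non-sensitizer
    scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]  # hypothetical SVM decision values
    print(roc_auc(labels, scores))
    ```

    In the published work the scores would come from the SVM trained on the 200-gene signature; the AUC summarizes discriminatory ability independently of any single decision threshold.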

    Collaborative development of predictive toxicology applications

    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework including the approach to data access, schema and management, use of controlled vocabularies and ontologies, architecture, web service and communications protocols, and selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to progress system interoperability goals.