21 research outputs found

    Development and application of consensus in silico models for advancing high-throughput toxicological predictions

    Computational toxicology models have been successfully implemented to prioritize and screen chemicals. There are numerous in silico (quantitative) structure–activity relationship ([Q]SAR) models for the prediction of a range of human-relevant toxicological endpoints, but for a given endpoint and chemical, not all predictions are identical due to differences in training sets, algorithms, and methodology. This poses an issue for high-throughput screening of a large chemical inventory: several models are needed to cover diverse chemistries, but they will then generate conflicting predictions. To address this challenge, we developed a consensus modeling strategy that combines predictions from different existing in silico (Q)SAR models into a single predictive value while also expanding chemical space coverage. This study developed consensus models for nine toxicological endpoints relating to estrogen receptor (ER) and androgen receptor (AR) interactions (i.e., binding, agonism, and antagonism) and genotoxicity (i.e., bacterial mutation, in vitro chromosomal aberration, and in vivo micronucleus). Consensus models were created by combining different (Q)SAR models using various weighting schemes. Because this is a multi-objective optimization problem with no single best consensus model, Pareto fronts were determined for each endpoint to identify the consensus models that simultaneously optimize the multiple decision criteria. Accordingly, this work presents, for each endpoint, a set of solutions containing the optimal combinations across the trade-offs, and the results demonstrate that the consensus models improved both predictive power and chemical space coverage. These solutions were further analyzed to find trends between the best consensus models and their components. Here, we demonstrate the development of a flexible and adaptable approach for in silico consensus modeling and its application across nine toxicological endpoints related to ER activity, AR activity, and genotoxicity. These consensus models were developed to be integrated into a larger multi-tier framework based on new approach methodologies (NAMs) to prioritize chemicals for further investigation and support the transition to a non-animal approach to risk assessment in Canada.
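
    The abstract above describes weighted combination of (Q)SAR outputs followed by a Pareto screen over candidate weighting schemes. The sketch below illustrates that general idea only, under stated assumptions (binary 1/0 predictions, NaN for chemicals outside a model's applicability domain, accuracy and coverage as the two objectives); the function names, weights, and dominance test are illustrative and not taken from the study.

        # Hedged sketch: weighted consensus of binary (Q)SAR predictions and a
        # Pareto screen over candidate weighting schemes (accuracy vs. coverage).
        import numpy as np

        def consensus_predict(predictions, weights, threshold=0.5):
            """predictions: {model: array of 1/0/np.nan}; nan marks chemicals outside that model's domain."""
            models = list(predictions)
            P = np.array([predictions[m] for m in models], dtype=float)   # models x chemicals
            w = np.array([weights[m] for m in models], dtype=float)[:, None]
            covered = ~np.isnan(P)                                        # which models cover each chemical
            wsum = (w * covered).sum(axis=0)
            score = np.nansum(w * P, axis=0) / np.where(wsum == 0, np.nan, wsum)
            labels = np.full(score.shape, np.nan)                         # nan = no model covers the chemical
            ok = ~np.isnan(score)
            labels[ok] = (score[ok] >= threshold).astype(float)
            return labels

        def pareto_front(points):
            """points: list of (accuracy, coverage) per weighting scheme; return non-dominated indices."""
            return [i for i, (a, c) in enumerate(points)
                    if not any(a2 >= a and c2 >= c and (a2 > a or c2 > c)
                               for j, (a2, c2) in enumerate(points) if j != i)]

        # Example: two hypothetical models covering partly different chemicals
        preds = {"modelA": [1, 0, np.nan, 1], "modelB": [1, 1, 0, np.nan]}
        print(consensus_predict(preds, {"modelA": 0.6, "modelB": 0.4}))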

    Emerging technologies for food and drug safety

    Emerging technologies are playing a major role in the generation of new approaches to assess the safety of both foods and drugs. However, the integration of emerging technologies into the regulatory decision-making process requires rigorous assessment and consensus amongst international partners and research communities. To that end, the Global Coalition for Regulatory Science Research (GCRSR), in partnership with the Brazilian Health Surveillance Agency (ANVISA), hosted the seventh Global Summit on Regulatory Science (GSRS17) in Brasilia, Brazil on September 18–20, 2017 to discuss the role of new approaches in regulatory science, with a specific emphasis on applications in food and medical product safety. The global regulatory landscape concerning the application of new technologies was assessed in several countries worldwide. Challenges and issues were discussed in the context of developing an international consensus on objective criteria for the development, application, and review of emerging technologies. The need for advanced approaches to allow for faster, less expensive, and more predictive methodologies was elaborated, and the strengths and weaknesses of each new approach were discussed. Finally, the need for standards and reproducible approaches was reviewed to enhance the application of emerging technologies to improve food and drug safety. The overarching goal of GSRS17 was to provide a venue where regulators and researchers meet to develop collaborations addressing the most pressing scientific challenges and to facilitate the adoption of novel technical innovations that advance the field of regulatory science.

    DataSheet1_Novel machine learning models to predict endocrine disruption activity for high-throughput chemical screening.docx

    An area of ongoing concern in toxicology and chemical risk assessment is endocrine disrupting chemicals (EDCs). However, thousands of legacy chemicals lack the toxicity testing required to assess their respective EDC potential, and this is where computational toxicology can play a crucial role. The US (United States) Environmental Protection Agency (EPA) has run two programs, the Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) and the Collaborative Modeling Project for Androgen Receptor Activity (CoMPARA), which aim to predict estrogen and androgen activity, respectively. The US EPA solicited research groups from around the world to provide endocrine receptor activity qualitative or quantitative structure–activity relationship ([Q]SAR) models and then combined them to create consensus models for different toxicity endpoints. Random Forest (RF) models were developed to cover a broader range of substances with high predictive capabilities, using large datasets from CERAPP and CoMPARA for estrogen and androgen activity, respectively. By utilizing simple descriptors from open-source software and large training datasets, RF models were created to expand the domain of applicability for predicting endocrine disrupting activity and to help in the screening and prioritization of extensive chemical inventories. In addition, the RFs were trained to predict activity conservatively, meaning the models are more likely to make false-positive predictions in order to minimize the number of false negatives. This work presents twelve binary and multi-class RF models to predict binding, agonism, and antagonism for the estrogen and androgen receptors. The RF models were found to have high predictive capabilities compared to other in silico models, with some models reaching balanced accuracies of 93% while having coverage of 89%. These models are intended to be incorporated into evolving priority-setting workflows and integrated strategies to support the screening and selection of chemicals for further testing and assessment by identifying potential endocrine-disrupting substances.
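
    As a hedged illustration of the conservative-prediction idea described above (not the authors' code), the sketch below biases a scikit-learn random forest toward false positives via class weights and a lowered probability threshold; the descriptor matrix, labels, class weights, and the 0.3 threshold are placeholders.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import balanced_accuracy_score
        from sklearn.model_selection import train_test_split

        X = np.random.rand(500, 20)               # placeholder molecular descriptors
        y = np.random.randint(0, 2, 500)          # placeholder activity labels (1 = active)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        rf = RandomForestClassifier(
            n_estimators=500,
            class_weight={0: 1, 1: 5},            # penalize missed actives more heavily
            random_state=0,
        )
        rf.fit(X_tr, y_tr)

        # Lowering the decision threshold below 0.5 trades false positives for
        # fewer false negatives, i.e., a more conservative screen.
        y_pred = (rf.predict_proba(X_te)[:, 1] >= 0.3).astype(int)
        print("balanced accuracy:", balanced_accuracy_score(y_te, y_pred))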

    Table1_Development and application of consensus in silico models for advancing high-throughput toxicological predictions.DOCX


    Comprehensive interpretation of in vitro micronucleus test results for 292 chemicals: from hazard identification to risk assessment application.

    Funder: Canada Research Chairs; doi: http://dx.doi.org/10.13039/501100001804
    Risk assessments are increasingly reliant on information from in vitro assays. The in vitro micronucleus test (MNvit) is a genotoxicity test that detects chromosomal abnormalities, including chromosome breakage (clastogenicity) and/or whole chromosome loss (aneugenicity). In this study, MNvit datasets for 292 chemicals, generated by the US EPA's ToxCast program, were evaluated using a decision tree-based pipeline for hazard identification. Chemicals were tested at 19 concentrations (n = 1) up to 200 µM, in the presence and absence of Aroclor 1254-induced rat liver S9. To identify clastogenic chemicals, %MN values at each concentration were compared to a distribution of batch-specific solvent controls; this was followed by cytotoxicity assessment and benchmark concentration (BMC) analyses. The approach classified 157 substances as positives, 25 as negatives, and 110 as inconclusive. Using the approach described in Bryce et al. (Environ Mol Mutagen 52:280–286, 2011), we identified 15 (5%) aneugens. In vitro to in vivo extrapolation (IVIVE) was employed to convert BMCs into administered equivalent doses (AEDs). Where possible, AEDs were compared to points of departure (PODs) for traditional genotoxicity endpoints; AEDs were generally lower than PODs based on in vivo endpoints. To facilitate interpretation of in vitro MN assay concentration-response data for risk assessment, exposure estimates were used to calculate bioactivity exposure ratio (BER) values. Fifty clastogens and two aneugens had AEDs that approached their exposure estimates (i.e., BER < 100); these chemicals might be considered priorities for additional testing. This work provides a framework for the use of high-throughput in vitro genotoxicity testing for priority setting and chemical risk assessment.
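
    The core arithmetic in the pipeline above reduces to reverse dosimetry followed by a ratio. The sketch below shows that arithmetic with hypothetical numbers; the Css scaling factor, BMC, and exposure estimate are placeholders, and the study itself used formal IVIVE modelling rather than this one-line conversion.

        def aed_from_bmc(bmc_um, css_um):
            """Reverse dosimetry: AED (mg/kg-bw/day) = BMC / Css, where Css is the
            steady-state plasma concentration (uM) predicted for a 1 mg/kg-bw/day dose."""
            return bmc_um / css_um

        def bioactivity_exposure_ratio(aed, exposure):
            """BER = administered equivalent dose / exposure estimate (same units)."""
            return aed / exposure

        aed = aed_from_bmc(bmc_um=12.0, css_um=1.5)            # hypothetical chemical
        ber = bioactivity_exposure_ratio(aed, exposure=0.1)    # exposure in mg/kg-bw/day
        if ber < 100:
            print(f"BER = {ber:.1f} -> candidate priority for further testing")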

    Development of an Evidence-Based Risk Assessment Framework

    Assessment of potential human health risks associated with environmental and other agents requires careful evaluation of all available and relevant evidence for the agent of interest, including both data-rich and data-poor agents. With the advent of new approach methodologies in toxicological risk assessment, guidance on integrating evidence from multiple evidence streams is needed to ensure that all available data are given due consideration in both qualitative and quantitative risk assessment. The present report summarizes the discussions among academic, government, and private sector participants from North America and Europe in an international workshop convened to explore the development of an evidence-based risk assessment framework, taking into account all available evidence in an appropriate manner in order to arrive at the best possible characterization of potential human health risks and associated uncertainty. Although consensus among workshop participants was not a specific goal, there was general agreement on the key considerations involved in evidence-based risk assessment incorporating 21st century science into human health risk assessment. These considerations have been embodied in an overarching prototype framework for evidence integration that will be explored in more depth in a follow-up meeting.

    Increasing Scientific Confidence in Adverse Outcome Pathways: Application of Tailored Bradford-Hill Considerations for Evaluating Weight of Evidence

    Systematic consideration of scientific support is a critical element in developing and, ultimately, using adverse outcome pathways (AOPs) for various regulatory applications. Though weight of evidence (WoE) analysis has been proposed as a basis for assessment of the maturity and level of confidence in an AOP, methodologies and tools are still being formalized. The Organization for Economic Co-operation and Development (OECD) Users' Handbook Supplement to the Guidance Document for Developing and Assessing AOPs (OECD, 2014a; hereafter referred to as the OECD AOP Handbook) provides tailored Bradford-Hill (BH) considerations for systematic assessment of confidence in a given AOP. These considerations include 1) biological plausibility and 2) empirical support (dose-response, temporality, and incidence) for key event relationships (KERs), and 3) essentiality of key events (KEs). Here, we test the application of these tailored BH considerations and the guidance outlined in the OECD AOP Handbook using a number of case examples, to gain experience in more transparently documenting the rationales for the levels of confidence assigned to KEs and KERs and to promote consistency in evaluation within and across AOPs. The major lessons learned from this experience are documented and, taken together with the case examples, should contribute to a better common understanding of the nature and form of documentation required to increase confidence in the application of AOPs for specific uses. Based on the tailored BH considerations and defining questions, a prototype quantitative model for assessing the WoE of an AOP using tools of multi-criteria decision analysis (MCDA) is described. The applicability of the approach is also demonstrated using the case example of aromatase inhibition leading to reproductive dysfunction in fish. Following the acquisition of additional experience in the development and assessment of AOPs, further refinement of the parameterization of the model through expert elicitation is recommended. Overall, the application of quantitative WoE approaches holds promise to enhance the rigor, transparency, and reproducibility of AOP WoE determinations and may play an important role in delineating areas where research would have the greatest impact on improving overall confidence in the AOP.
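
    A minimal weighted-sum sketch of the MCDA-style scoring idea mentioned above: the criteria follow the tailored Bradford-Hill considerations named in the abstract, but the weights and 0-1 scores are placeholders, not values elicited in the study.

        criteria_weights = {
            "biological_plausibility": 0.40,
            "empirical_support": 0.35,   # dose-response, temporality, incidence for KERs
            "essentiality": 0.25,        # essentiality of key events
        }

        def woe_score(assessment):
            """assessment: criterion -> score in [0, 1], e.g., low=0.25, moderate=0.5, high=1.0."""
            return sum(criteria_weights[c] * assessment[c] for c in criteria_weights)

        example = {"biological_plausibility": 1.0, "empirical_support": 0.5, "essentiality": 0.75}
        print(f"overall WoE confidence score: {woe_score(example):.2f}")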