
    Alternative methods for regulatory toxicology – a state-of-the-art review

    This state-of-the-art review is based on the final report of a project carried out by the European Commission’s Joint Research Centre (JRC) for the European Chemicals Agency (ECHA). The aim of the project was to review the state of the science of non-standard methods that are available for assessing the toxicological and ecotoxicological properties of chemicals. Non-standard methods refer to alternatives to animal experiments, such as in vitro tests and computational models, as well as animal methods that are not covered by current regulatory guidelines. This report therefore reviews the current scientific status of non-standard methods for a range of human health and ecotoxicological endpoints, and provides a commentary on the mechanistic basis and regulatory applicability of these methods. For completeness, and to provide context, currently accepted (standard) methods are also summarised. In particular, the following human health endpoints are covered: a) skin irritation and corrosion; b) serious eye damage and eye irritation; c) skin sensitisation; d) acute systemic toxicity; e) repeat dose toxicity; f) genotoxicity and mutagenicity; g) carcinogenicity; h) reproductive toxicity (including effects on development and fertility); i) endocrine disruption relevant to human health; and j) toxicokinetics. In relation to ecotoxicological endpoints, the report focuses on non-standard methods for acute and chronic fish toxicity. While specific reference is made to the information needs of REACH, the Biocidal Products Regulation and the Classification, Labelling and Packaging Regulation, this review is also expected to be informative in relation to the possible use of alternative and non-standard methods in other sectors, such as cosmetics and plant protection products. JRC.I.5-Systems Toxicology.

    The Use of Computational Methods in the Toxicological Assessment of Chemicals in Food: Current Status and Future Prospects

    A wide range of chemicals are intentionally added to, or unintentionally found in, food products, often in very small amounts. Depending on the situation, the experimental data needed to complete a dietary risk assessment, which is the scientific basis for protecting human health, may not be available or obtainable, for reasons of cost, time and animal welfare. For example, toxicity data are often lacking for the metabolites and degradation products of pesticide active ingredients. There is therefore an interest in the development and application of efficient and effective non-animal methods for assessing chemical toxicity, including Quantitative Structure-Activity Relationship (QSAR) models and related computational methods. This report gives an overview of how computational methods are currently used in the field of food safety by national regulatory bodies, international advisory organisations and the food industry. On the basis of an international survey, a comprehensive literature review and a detailed QSAR analysis, a range of recommendations are made with the long-term aim of promoting the judicious use of suitable QSAR methods. The current status of QSAR methods is reviewed not only for toxicological endpoints relevant to dietary risk assessment, but also for Absorption, Distribution, Metabolism and Excretion (ADME) properties, which are often important in discriminating between the toxicological profiles of parent compounds and their reaction products. By referring to the concept of the Threshold of Toxicological Concern (TTC), the risk assessment context in which QSAR methods can be expected to be used is also discussed. This Joint Research Centre (JRC) Reference Report provides a summary and update of the findings obtained in a study carried out by the JRC under the terms of a contract awarded by the European Food Safety Authority (EFSA). JRC.DG.I.6-Systems Toxicology.
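    The TTC screen mentioned in this abstract reduces, in its simplest form, to comparing an estimated daily intake with a class-specific exposure threshold. A minimal sketch of that decision logic is below; the threshold values are the commonly cited Munro-derived figures per Cramer class and are included here for illustration only (confirm against current EFSA/WHO guidance before any real use).

    ```python
    # Illustrative TTC-style screen, not a regulatory tool.
    # Thresholds: commonly cited Munro-derived values in ug/person/day,
    # keyed by Cramer structural class (assumed figures, verify before use).
    TTC_UG_PER_DAY = {
        "cramer_I": 1800.0,   # low structural concern
        "cramer_II": 540.0,   # intermediate concern
        "cramer_III": 90.0,   # high structural concern
    }

    def ttc_screen(cramer_class, intake_ug_per_day):
        """Compare an estimated dietary intake with the TTC for its class."""
        threshold = TTC_UG_PER_DAY[cramer_class]
        if intake_ug_per_day <= threshold:
            return "below TTC: low priority for further testing"
        return "above TTC: compound-specific toxicity data needed"
    ```

    For example, an estimated intake of 50 ug/day of a Cramer Class III substance falls below its 90 ug/day threshold, so the screen would deprioritise it; the same intake of data-poor impurities above the threshold would trigger a request for compound-specific data.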

    Machine Learning Toxicity Prediction: Latest Advances by Toxicity End Point

    Machine learning (ML) models to predict the toxicity of small molecules have garnered great attention and have become widely used in recent years. Computational toxicity prediction is particularly advantageous in the early stages of drug discovery in order to filter out molecules with a high probability of failing in clinical trials. This has been helped by the increase in the number of large toxicology databases available. However, as a relatively new area of application, a greater understanding of the scope and applicability of ML methods is still necessary. There are various kinds of toxic end points that have been predicted in silico. Acute oral toxicity, hepatotoxicity, cardiotoxicity, mutagenicity, and the 12 Tox21 data end points are among the most commonly investigated. Machine learning methods exhibit different performances on different data sets due to dissimilar complexity, class distributions, or chemical space covered, which makes it hard to compare the performance of algorithms over different toxic end points. The general pipeline to predict toxicity using ML has already been analyzed in various reviews. In this contribution, we focus on the recent progress in the area and the outstanding challenges, providing a detailed description of the state-of-the-art models implemented for each toxic end point. The type of molecular representation, the algorithm, and the evaluation metric used in each research work are explained and analyzed. A detailed description of end points that are usually predicted, their clinical relevance, the available databases, and the challenges they bring to the field are also highlighted.
    Author affiliations: Claudio Norberto Cavasotto (Universidad Austral, Facultad de Ciencias Biomédicas, Instituto de Investigaciones en Medicina Traslacional; CONICET, Argentina); Valeria Scardino (Universidad Austral, Argentina).
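    The class-distribution problem this abstract highlights is why toxicity papers usually report balanced accuracy rather than plain accuracy. A small self-contained sketch of the metric, assuming a binary toxic/non-toxic label:

    ```python
    def balanced_accuracy(y_true, y_pred):
        """Mean of sensitivity and specificity for a binary toxicity label.

        On imbalanced toxicology data sets, plain accuracy rewards a model
        that always predicts the majority (usually non-toxic) class;
        balanced accuracy does not.
        """
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        return 0.5 * (sensitivity + specificity)

    # A model that calls everything non-toxic on a 9:1 imbalanced set
    # scores 0.9 on plain accuracy but only 0.5 here:
    y_true = [0] * 9 + [1]
    y_pred = [0] * 10
    print(balanced_accuracy(y_true, y_pred))  # 0.5
    ```

    This is the same quantity scikit-learn exposes as `balanced_accuracy_score`; the hand-rolled version just makes the confusion-matrix arithmetic explicit.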

    EURL ECVAM Status Report on the Development, Validation and Regulatory Acceptance of Alternative Methods and Approaches (2015)

    The EURL ECVAM status report provides an update on the progress made in the development, validation and regulatory acceptance of alternative methods and approaches, and their dissemination, since the last report published in June 2014. It reports on ongoing research and development activities, validation studies, peer reviews, recommendations, strategies, regulatory/international acceptance of alternative methods and approaches, and dissemination activities. R&D activities within large European or international consortia continued in toxicity areas where 3Rs solutions are more difficult to find due to the underlying complexity of the area. In toxicity areas where promising non-animal approaches have been developed, on the other hand, their validation and regulatory acceptance/international adoption progressed. Particular emphasis was given to the best and most intelligent combination and integration of these different non-animal approaches to ultimately obtain the required information without resorting to animal testing. JRC.I.5-Systems Toxicology.

    Quantitative toxicity prediction using topology based multi-task deep neural networks

    The understanding of toxicity is of paramount importance to human health and environmental protection. Quantitative toxicity analysis has become a new standard in the field. This work introduces element specific persistent homology (ESPH), an algebraic topology approach, for quantitative toxicity prediction. ESPH retains crucial chemical information during the topological abstraction of geometric complexity and provides a representation of small molecules that cannot be obtained by any other method. To investigate the representability and predictive power of ESPH for small molecules, ancillary descriptors have also been developed based on physical models. Topological and physical descriptors are paired with advanced machine learning algorithms, such as deep neural networks (DNN), random forests (RF) and gradient boosting decision trees (GBDT), to facilitate their applications to quantitative toxicity predictions. A topology based multi-task strategy is proposed to take advantage of the availability of large data sets while dealing with small data sets. Four benchmark toxicity data sets that involve quantitative measurements are used to validate the proposed approaches. Extensive numerical studies indicate that the proposed topological learning methods are able to outperform the state-of-the-art methods in the literature for quantitative toxicity analysis. Our online server for computing element-specific topological descriptors (ESTDs) is available at http://weilab.math.msu.edu/TopTox/. (arXiv admin note: substantial text overlap with arXiv:1703.1095)
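    The element-specific construction in ESPH is beyond a short sketch, but the 0-dimensional piece of persistent homology that underlies such descriptors is easy to show. In a Vietoris-Rips filtration of a point cloud (e.g. atom coordinates), every connected component is born at filtration value 0 and dies when an edge of the Euclidean minimum spanning tree merges it into another, so the finite bar lengths are exactly the MST edge weights. A minimal illustration via Kruskal's algorithm with union-find:

    ```python
    import math
    from itertools import combinations

    def h0_persistence(points):
        """Death times of the finite 0-dimensional bars for a point cloud.

        Components merge exactly at the Euclidean MST edge lengths, so the
        finite bar deaths are the MST edge weights (one infinite bar remains
        for the whole cloud). Illustrative only: all O(n^2) pairs are built.
        """
        n = len(points)
        edges = sorted(
            (math.dist(points[i], points[j]), i, j)
            for i, j in combinations(range(n), 2)
        )
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        deaths = []
        for w, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                deaths.append(w)  # two components merge: one bar dies at w
        return deaths  # n - 1 finite deaths for n points
    ```

    For three collinear points at x = 0, 1 and 5, the two finite bars die at 1.0 and 4.0; descriptor pipelines then turn such barcodes into fixed-length feature vectors (e.g. binned bar counts) for the DNN/RF/GBDT models mentioned above.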

    EURL ECVAM Status Report on the Development, Validation and Regulatory Acceptance of Alternative Methods and Approaches (2013-April 2014)

    The EURL ECVAM status report provides an update on the progress made in the development, validation and regulatory acceptance of alternative methods and approaches since the last report published in April 2013. It reports on ongoing research and development activities, validation studies, peer reviews, recommendations, strategies and international acceptance of alternative methods and approaches. R&D activities are ongoing for the complex endpoints where the toxicological processes and the mechanistic understanding have not yet been sufficiently elucidated and for which 3Rs solutions are more difficult to find. On the other hand, good progress in validation and regulatory acceptance is being made in areas where non-animal alternative methods have been developed and validated and where the focus lies in an intelligent combination/integration of the various non-animal approaches. JRC.I.5-Systems Toxicology.

    CATMoS: Collaborative Acute Toxicity Modeling Suite.

    BACKGROUND: Humans are exposed to tens of thousands of chemical substances that need to be assessed for their potential toxicity. Acute systemic toxicity testing serves as the basis for regulatory hazard classification, labeling, and risk management. However, it is cost- and time-prohibitive to evaluate all new and existing chemicals using traditional rodent acute toxicity tests. In silico models built using existing data facilitate rapid acute toxicity predictions without using animals. OBJECTIVES: The U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Acute Toxicity Workgroup organized an international collaboration to develop in silico models for predicting acute oral toxicity based on five different end points: lethal dose 50 (LD50) values, the four U.S. Environmental Protection Agency hazard categories, the five Globally Harmonized System of Classification and Labeling hazard categories, very toxic chemicals (LD50 ≤ 50 mg/kg), and nontoxic chemicals (LD50 > 2,000 mg/kg). METHODS: An acute oral toxicity data inventory for 11,992 chemicals was compiled, split into training and evaluation sets, and made available to 35 participating international research groups that submitted a total of 139 predictive models. Predictions that fell within the applicability domains of the submitted models were evaluated using external validation sets. These were then combined into consensus models to leverage the strengths of the individual approaches. RESULTS: The resulting consensus predictions form the Collaborative Acute Toxicity Modeling Suite (CATMoS). CATMoS demonstrated high performance in terms of accuracy and robustness when compared with in vivo results. DISCUSSION: CATMoS is being evaluated by regulatory agencies for its utility and applicability as a potential replacement for in vivo rat acute oral toxicity studies.
CATMoS predictions for more than 800,000 chemicals have been made available via the National Toxicology Program's Integrated Chemical Environment tools and data sets (ice.ntp.niehs.nih.gov). The models are also implemented in a free, standalone, open-source tool, OPERA, which allows predictions of new and untested chemicals to be made. https://doi.org/10.1289/EHP8495
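    Two of the five CATMoS end points are categorical translations of the same LD50 value. The GHS mapping is a simple threshold lookup; the cut-offs below follow the published GHS acute oral scheme, but any regulatory use should check the current revision of the GHS text.

    ```python
    def ghs_acute_oral_category(ld50_mg_per_kg):
        """Map a rat oral LD50 (mg/kg body weight) to a GHS category 1-5.

        Cut-offs as published in the GHS acute oral toxicity scheme:
        Category 1: <= 5, 2: <= 50, 3: <= 300, 4: <= 2000, 5: <= 5000.
        Substances above 5000 mg/kg are not classified (returns None).
        """
        for category, upper in ((1, 5), (2, 50), (3, 300), (4, 2000), (5, 5000)):
            if ld50_mg_per_kg <= upper:
                return category
        return None
    ```

    Under this mapping an LD50 of 120 mg/kg falls in GHS Category 3, while the abstract's "very toxic" cut-off (LD50 ≤ 50 mg/kg) corresponds to Categories 1-2 and the "nontoxic" cut-off (LD50 > 2,000 mg/kg) to Category 5 or unclassified.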

    Acute Toxicity Testing Without Animals: More Scientific and Less of a Gamble

    In this report, we argue specifically that acute toxicity data should not be sought from animal tests. The underlying premise of such tests on rats and mice is that the results can be effectively extrapolated to humans. In fact, after nearly 80 years of use of these tests, the predictivity of rodent data for human acute toxic effects has been disputed but never proven.