
    Chemical Similarity and Threshold of Toxicological Concern (TTC) Approaches: Report of an ECB Workshop held in Ispra, November 2005

    There are many national, regional and international programmes – either regulatory or voluntary – to assess the hazards or risks of chemical substances to humans and the environment. The first step in making a hazard assessment of a chemical is to ensure that there is adequate information on each of the endpoints. If adequate information is not available, additional data are needed to complete the dataset for the substance. For reasons of resources and animal welfare, it is important to limit the number of tests that have to be conducted, where this is scientifically justifiable. One approach is to consider closely related chemicals as a group, or chemical category, rather than as individual chemicals. In a category approach, data for chemicals and endpoints that have already been tested are used to estimate the hazard for untested chemicals and endpoints. Categories of chemicals are selected on the basis of similarities in biological activity associated with a common underlying mechanism of action. A homologous series of chemicals exhibiting a coherent trend in biological activity can be rationalised on the basis of a constant change in structure; this type of grouping is relatively straightforward. The challenge lies in identifying the relevant structural and physicochemical characteristics that enable more sophisticated groupings to be made on the basis of similarity in biological activity and hence purported mechanism of action. Linking two chemicals together and rationalising their similarity with reference to one or more endpoints has largely been carried out on an ad hoc basis. Even with larger groups, the process remains ad hoc and based on expert judgement, and there is still very little guidance on tools and approaches for grouping chemicals systematically. In November 2005, the ECB Workshop on Chemical Similarity and Threshold of Toxicological Concern (TTC) Approaches was convened to identify the approaches that currently exist to encode similarity and how these can be used to facilitate the grouping of chemicals. This report aims to capture the main themes that were discussed. In particular, it outlines a number of different approaches that can facilitate the formation of chemical groupings in terms of the context under consideration and the likely information that would be required. Grouping methods were divided into four classes – knowledge-based, analogue-based, unsupervised, and supervised – and a flowchart was constructed to capture a possible workflow highlighting where and how these approaches might best be applied.
    JRC.I.3 – Toxicology and Chemical Substances
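
    The workshop report does not prescribe a single similarity metric, but the most common computational way to encode structural similarity is a fingerprint-based Tanimoto coefficient. A minimal sketch of that idea using the open-source RDKit toolkit, offered as illustration only (the SMILES pair and the 0.7 grouping cutoff are assumptions, not the report's prescription):

        from rdkit import Chem, DataStructs
        from rdkit.Chem import AllChem

        def tanimoto(smiles_a, smiles_b, radius=2, n_bits=2048):
            """Tanimoto similarity between Morgan fingerprints of two structures."""
            fp_a = AllChem.GetMorganFingerprintAsBitVect(
                Chem.MolFromSmiles(smiles_a), radius, nBits=n_bits)
            fp_b = AllChem.GetMorganFingerprintAsBitVect(
                Chem.MolFromSmiles(smiles_b), radius, nBits=n_bits)
            return DataStructs.TanimotoSimilarity(fp_a, fp_b)

        # Illustrative homologues: 1-butanol vs. 1-pentanol
        sim = tanimoto("CCCCO", "CCCCCO")
        print(f"Tanimoto = {sim:.2f}")  # candidates for one category above a chosen cutoff, e.g. 0.7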

    A Similarity Based Approach for Chemical Category Classification

    This report describes the main outcomes of an IHCP Exploratory Research Project carried out during 2005 by the European Chemicals Bureau (Computational Toxicology Action). The original aim of the project was to develop a computational method to facilitate the classification of chemicals into similarity-based chemical categories, which would be useful both for building (Q)SAR models (research application) and for defining chemical category proposals (regulatory application).
    JRC.I – Institute for Health and Consumer Protection (Ispra)
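
    The abstract does not spell out the algorithm; one standard way to form similarity-based categories computationally is Butina clustering over fingerprint distances. A minimal sketch, assuming RDKit, an illustrative five-compound set, and an arbitrary 0.4 distance cutoff:

        from rdkit import Chem, DataStructs
        from rdkit.Chem import AllChem
        from rdkit.ML.Cluster import Butina

        smiles = ["CCO", "CCCO", "CCCCO", "c1ccccc1O", "c1ccccc1N"]  # illustrative set
        fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
               for s in smiles]

        # Flat lower-triangle distance matrix (1 - Tanimoto), the layout Butina expects
        dists = []
        for i in range(1, len(fps)):
            sims = DataStructs.BulkTanimotoSimilarity(fps[i], fps[:i])
            dists.extend(1.0 - s for s in sims)

        # Each resulting cluster is a candidate chemical category
        clusters = Butina.ClusterData(dists, len(fps), 0.4, isDistData=True)
        print(clusters)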

    Predicting drug metabolism: experiment and/or computation?

    Drug metabolism can produce metabolites with physicochemical and pharmacological properties that differ substantially from those of the parent drug, and consequently has important implications for both drug safety and efficacy. To reduce the risk of costly clinical-stage attrition due to the metabolic characteristics of drug candidates, there is a need for efficient and reliable ways to predict drug metabolism in vitro, in silico and in vivo. In this Perspective, we provide an overview of the state of the art of experimental and computational approaches for investigating drug metabolism. We highlight the scope and limitations of these methods, and indicate strategies to harvest the synergies that result from combining measurement and prediction of drug metabolism.
    This is the accepted manuscript of a paper published in Nature Reviews Drug Discovery (Kirchmair J, Göller AH, Lang D, Kunze J, Testa B, Wilson ID, Glen RC, Schneider G, Nature Reviews Drug Discovery, 2015, 14, 387–404, doi:10.1038/nrd4581). The final version is available at http://dx.doi.org/10.1038/nrd4581
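
    The Perspective surveys whole families of methods; the simplest in silico layer it covers is rule-based prediction of metabolically labile sites. A minimal, hypothetical sketch of that idea, flagging substructures often associated with CYP-mediated metabolism via SMARTS patterns (the rules and query molecule below are illustrative, not a validated rule set):

        from rdkit import Chem

        # Hand-picked, illustrative "soft spot" rules; real systems use
        # curated rule bases or trained models.
        SOFT_SPOTS = {
            "N-demethylation": "[NX3;!$(N=O)]-[CH3]",
            "O-demethylation": "[OX2]-[CH3]",
            "benzylic oxidation": "c-[CH2]-[#6]",
        }

        def flag_soft_spots(smiles):
            """Map rule name -> matching atom index tuples for one molecule."""
            mol = Chem.MolFromSmiles(smiles)
            return {name: mol.GetSubstructMatches(Chem.MolFromSmarts(patt))
                    for name, patt in SOFT_SPOTS.items()
                    if mol.HasSubstructMatch(Chem.MolFromSmarts(patt))}

        print(flag_soft_spots("COc1ccccc1CCN(C)C"))  # hypothetical query molecule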

    SpheraCosmolife: a new tool for the risk assessment of cosmetic products.

    A new, freely available software tool for the risk assessment of cosmetic products has been designed that takes account of the regulatory framework for cosmetics. The software allows an overall toxicological evaluation of cosmetic ingredients without the need for additional testing and, depending on the product type, applies defined exposure scenarios to derive the risk to consumers. It takes regulatory thresholds into account and uses either experimental values, if available, or predictions. Based on the experimental or predicted no observed adverse effect level (NOAEL), the software can define a point of departure (POD), which is used to calculate the margin of safety (MoS) of the query chemicals. The software also provides other toxicological properties, such as mutagenicity, skin sensitization, and the threshold of toxicological concern (TTC), to give an overall evaluation of the potential chemical hazard. Predictions are calculated using in silico models implemented within the VEGA software. The full list of ingredients of a cosmetic product can be processed at the same time, at the effective concentrations in the product as given by the user. SpheraCosmolife is designed as a support tool for safety assessors of cosmetic products and can be used to prioritize cosmetic ingredients or formulations according to their potential risk to consumers. The major novelty of the tool is that it wraps a series of models (some of them new) into a single, user-friendly software system.
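
    The abstract does not reproduce SpheraCosmolife's internal arithmetic, but the margin of safety it reports follows the standard scheme used in cosmetic safety assessment: a systemic exposure dose (SED) is derived from the exposure scenario and compared with the POD. A minimal sketch with illustrative numbers; only the SCCS default 60 kg body weight and the conventional MoS >= 100 acceptance threshold are fixed conventions:

        def systemic_exposure_dose(product_mg_per_day, ingredient_fraction,
                                   dermal_absorption, body_weight_kg=60.0):
            """SED in mg/kg bw/day from a daily-use dermal exposure scenario."""
            return product_mg_per_day * ingredient_fraction * dermal_absorption / body_weight_kg

        def margin_of_safety(pod_mg_per_kg_day, sed_mg_per_kg_day):
            """MoS = POD (e.g., NOAEL) / SED."""
            return pod_mg_per_kg_day / sed_mg_per_kg_day

        # Illustrative scenario: 7700 mg/day of a body lotion containing 1% of an
        # ingredient with 50% dermal absorption, NOAEL = 100 mg/kg bw/day
        sed = systemic_exposure_dose(7700.0, 0.01, 0.5)
        print(f"SED = {sed:.3f} mg/kg bw/day, MoS = {margin_of_safety(100.0, sed):.0f}")
        # MoS >= 100 is the conventional acceptance threshold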

    Skin Doctor: Machine learning models for skin sensitization prediction that provide estimates and indicators of prediction reliability

    The ability to predict the skin sensitization potential of small organic molecules is of high importance to the development and safe application of cosmetics, drugs and pesticides. One of the most widely accepted methods for assessing this hazard is the local lymph node assay (LLNA). The goal of this work was to develop in silico models for the prediction of the skin sensitization potential of small molecules that go beyond the state of the art, with larger LLNA data sets and, most importantly, a robust and intuitive definition of the applicability domain, paired with additional indicators of the reliability of predictions. We explored a large variety of molecular descriptors and fingerprints in combination with random forest and support vector machine classifiers. The most suitable models were tested on holdout data, on which they yielded competitive performance (Matthews correlation coefficients up to 0.52; accuracies up to 0.76; areas under the receiver operating characteristic curve up to 0.83). The most favorable models are available via a public web service that, in addition to predictions, provides assessments of the applicability domain and indicators of the reliability of the individual predictions.
    Keywords: skin sensitization potential; prediction; in silico models; machine learning; local lymph node assay (LLNA); cosmetics; drugs; pesticides; chemical space; applicability domain
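
    The abstract names the model families but not the code; a minimal sketch of the core pipeline it describes (Morgan fingerprints plus a random forest classifier, scored by the Matthews correlation coefficient), with toy SMILES and labels standing in for the LLNA data:

        import numpy as np
        from rdkit import Chem, DataStructs
        from rdkit.Chem import AllChem
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import matthews_corrcoef

        def featurize(smiles_list, radius=2, n_bits=2048):
            """Stack Morgan fingerprints into a feature matrix."""
            X = np.zeros((len(smiles_list), n_bits), dtype=np.int8)
            for i, smi in enumerate(smiles_list):
                fp = AllChem.GetMorganFingerprintAsBitVect(
                    Chem.MolFromSmiles(smi), radius, nBits=n_bits)
                arr = np.zeros((n_bits,), dtype=np.int8)
                DataStructs.ConvertToNumpyArray(fp, arr)
                X[i] = arr
            return X

        # Toy stand-ins for LLNA-labelled training and holdout compounds
        train_smiles = ["CCO", "CC(=O)Cl", "c1ccccc1N", "CCCCCC", "O=C=O", "CC(=O)OCC"]
        train_labels = [0, 1, 1, 0, 0, 1]  # 1 = sensitizer (illustrative labels)
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        clf.fit(featurize(train_smiles), train_labels)

        test_smiles, test_labels = ["CCCO", "ClC(=O)CC"], [0, 1]
        print(matthews_corrcoef(test_labels, clf.predict(featurize(test_smiles))))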

    The Use of Computational Methods in the Grouping and Assessment of Chemicals - Preliminary Investigations

    This document presents a perspective on how computational approaches could potentially be used in the grouping and assessment of chemicals, especially in the application of read-across and the development of chemical categories. The perspective is based on experience gained by the authors during 2006 and 2007, when the Joint Research Centre's European Chemicals Bureau was directly involved in the drafting of technical guidance on the applicability of computational methods under REACH. Some of the experience gained and ideas developed resulted from a number of research-based case studies conducted in-house during 2006 and the first half of 2007. The case studies were performed to explore the possible applications of computational methods in the assessment of chemicals and to contribute to the development of technical guidance. Not all of the methods explored and ideas developed are explicitly included in the final guidance documentation for REACH. Many of the methods are novel, still being refined and assessed by the scientific community, and as yet untried in the regulatory context. The authors therefore hope that the perspective and case studies compiled in this document, whilst not intended to serve as guidance, will nevertheless provide input to further research efforts aimed at developing computational methods and at exploring their potential applicability in the regulatory assessment of chemicals.
    JRC.I.3 – Toxicology and Chemical Substances
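
    Of the workflows the report explores, the one that lends itself most directly to a short sketch is quantitative read-across, where an untested chemical's endpoint value is estimated from its nearest tested analogues. A minimal sketch using a similarity-weighted mean over the k most similar analogues; the scheme, data, and units are illustrative assumptions, not the report's prescription:

        from rdkit import Chem, DataStructs
        from rdkit.Chem import AllChem

        def fp(smiles):
            return AllChem.GetMorganFingerprintAsBitVect(
                Chem.MolFromSmiles(smiles), 2, nBits=2048)

        def read_across(target_smiles, analogues, k=3):
            """Similarity-weighted estimate from the k nearest tested analogues.

            `analogues` maps SMILES -> measured endpoint value (illustrative units).
            """
            target = fp(target_smiles)
            sims = [(DataStructs.TanimotoSimilarity(target, fp(s)), v)
                    for s, v in analogues.items()]
            top = sorted(sims, reverse=True)[:k]
            return sum(s * v for s, v in top) / sum(s for s, _ in top)

        # Hypothetical category of alcohols with a measured endpoint value
        measured = {"CCO": 1.2, "CCCO": 1.6, "CCCCO": 2.1, "CCCCCCO": 3.0}
        print(read_across("CCCCCO", measured))  # untested member of the series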

    Tuning hERG Out: Antitarget QSAR Models for Drug Development

    Several non-cardiovascular drugs have been withdrawn from the market because they inhibit hERG K+ channels, which can potentially lead to severe heart arrhythmia and death. As hERG safety testing is a mandatory FDA-required procedure, there is considerable interest in developing predictive computational tools to identify and filter out potential hERG blockers early in the drug discovery process. In this study, we aimed to generate predictive and well-characterized quantitative structure–activity relationship (QSAR) models for hERG blockage using the largest publicly available dataset of 11,958 compounds from the ChEMBL database. The models were developed and validated according to OECD guidelines using four types of descriptors and four different machine-learning techniques. The classification accuracies discriminating blockers from non-blockers were as high as 0.83–0.93 on the external set. Model interpretation revealed several SAR rules, which can guide the structural optimization of some hERG blockers into non-blockers. We also applied the generated models to screen the World Drug Index (WDI) database and identify putative hERG blockers and non-blockers among currently marketed drugs. The developed models can reliably identify blockers and non-blockers, which could be useful for the scientific community. A freely accessible web server has been developed that allows users to identify putative hERG blockers and non-blockers in chemical libraries of their interest (http://labmol.farmacia.ufg.br/predherg).
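
    The study's own models are served from the linked web server; to make the screening step concrete, a minimal sketch of applying a trained probabilistic classifier to a library and filtering out putative blockers. Random toy features stand in for real fingerprints, and the 0.5 probability cutoff is an illustrative assumption:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X_train = rng.integers(0, 2, size=(200, 1024))   # toy fingerprint matrix
        y_train = rng.integers(0, 2, size=200)           # 1 = hERG blocker (toy labels)
        model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

        def keep_non_blockers(X_library, names, p_cutoff=0.5):
            """Return (name, P(blocker)) pairs for compounds predicted non-blocking."""
            p_block = model.predict_proba(X_library)[:, 1]
            return [(n, p) for n, p in zip(names, p_block) if p < p_cutoff]

        X_lib = rng.integers(0, 2, size=(5, 1024))
        print(keep_non_blockers(X_lib, ["cmpd_%d" % i for i in range(5)]))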

    Alternative methods for regulatory toxicology – a state-of-the-art review

    This state-of-the-art review is based on the final report of a project carried out by the European Commission's Joint Research Centre (JRC) for the European Chemicals Agency (ECHA). The aim of the project was to review the state of the science of non-standard methods that are available for assessing the toxicological and ecotoxicological properties of chemicals. Non-standard methods refer to alternatives to animal experiments, such as in vitro tests and computational models, as well as animal methods that are not covered by current regulatory guidelines. The report therefore reviews the current scientific status of non-standard methods for a range of human health and ecotoxicological endpoints, and provides a commentary on the mechanistic basis and regulatory applicability of these methods. For completeness, and to provide context, currently accepted (standard) methods are also summarised. In particular, the following human health endpoints are covered: a) skin irritation and corrosion; b) serious eye damage and eye irritation; c) skin sensitisation; d) acute systemic toxicity; e) repeat dose toxicity; f) genotoxicity and mutagenicity; g) carcinogenicity; h) reproductive toxicity (including effects on development and fertility); i) endocrine disruption relevant to human health; and j) toxicokinetics. In relation to ecotoxicological endpoints, the report focuses on non-standard methods for acute and chronic fish toxicity. While specific reference is made to the information needs of REACH, the Biocidal Products Regulation and the Classification, Labelling and Packaging Regulation, the review is also expected to be informative in relation to the possible use of alternative and non-standard methods in other sectors, such as cosmetics and plant protection products.
    JRC.I.5 – Systems Toxicology

    e-Sweet: A Machine-Learning Based Platform for the Prediction of Sweetener and Its Relative Sweetness

    Artificial sweeteners (AS) elicit a strong sweet sensation with low or zero calories and are widely used to replace nutritive sugar in the food and beverage industry. However, the safety of current AS remains controversial, so it is imperative to develop safer and more potent AS. Because experimental screening of AS is costly and laborious, in silico sweetener/sweetness prediction offers a good avenue for identifying potential sweetener candidates before experiment. In this work, we curated the largest dataset of 530 sweeteners and 850 non-sweeteners, and collected the second-largest dataset of 352 sweeteners with relative sweetness (RS) values from the literature. Using these experimental datasets, we adopted five machine-learning methods and conformation-independent molecular fingerprints to derive classification and regression models for the prediction of sweetener status and RS, respectively, via a consensus strategy. Our best classification model achieves 95% confidence intervals on the test set of 0.91 ± 0.01 for accuracy, 0.90 ± 0.01 for precision, 0.94 ± 0.01 for specificity, 0.86 ± 0.01 for sensitivity, 0.88 ± 0.01 for F1-score, and 0.90 ± 0.01 for NER (non-error rate), outperforming the model of Rojas et al. (NER = 0.85) in terms of NER. Our best regression model gives 95% confidence intervals of 0.77 ± 0.01 for R2 (test set) and 0.03 ± 0.01 for ΔR2 [defined as |R2(test set) − R2(cross-validation)|], which is also better than other works based on conformation-independent 2D descriptors (e.g., 2D Dragon) according to R2 (test set) and ΔR2. Our models are obtained by averaging over nineteen data-splitting schemes and fully comply with the guidelines of the Organisation for Economic Co-operation and Development (OECD), which were not completely followed by previous relevant works, all of which rely on a single random data split for the cross-validation and test sets. Finally, we developed a user-friendly platform, "e-Sweet", for the automatic prediction of sweetener status and the corresponding RS. To the best of our knowledge, it is the first free platform that enables experimental food scientists to exploit current machine-learning methods to boost the discovery of more AS with low or zero calorie content.
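
    The exact splitting protocol is given in the paper rather than the abstract; a minimal sketch of the reporting style it describes, a t-based 95% confidence interval for accuracy over nineteen random train/test splits, with sklearn's make_classification standing in for the sweetener dataset:

        import numpy as np
        from scipy import stats
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=600, n_features=128, random_state=0)  # toy data

        accs = []
        for seed in range(19):  # nineteen data-splitting schemes, as in the paper
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.2, random_state=seed, stratify=y)
            clf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X_tr, y_tr)
            accs.append(accuracy_score(y_te, clf.predict(X_te)))

        mean = np.mean(accs)
        half = stats.t.ppf(0.975, len(accs) - 1) * stats.sem(accs)
        print(f"accuracy = {mean:.2f} ± {half:.2f} (95% CI over {len(accs)} splits)")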