
    Integrating in silico models and read-across methods for predicting toxicity of chemicals: A step-wise strategy

    Abstract: In silico methods and models are increasingly used for predicting properties of chemicals for hazard identification and hazard characterisation in the absence of experimental toxicity data. Many in silico models are available and can be used individually or in an integrated fashion. Whilst such models offer major benefits to toxicologists, risk assessors and the global scientific community, the lack of a consistent framework for the integration of in silico results can lead to uncertainty and even contradictions across models and users, even for the same chemicals. In this context, a range of methods for integrating in silico results have been proposed on a statistical or case-specific basis. Read-across constitutes another strategy for deriving reference points or points of departure for hazard characterisation of untested chemicals from the available experimental data for structurally similar compounds, mostly using expert judgment. Recently, a number of software systems have been developed to support experts in this task by providing a formalised and structured procedure. Such a procedure could also facilitate further integration of the results generated from in silico models and read-across. This article discusses a framework on weight of evidence published by EFSA to identify a stepwise approach for the systematic integration of results or values obtained from these "non-testing methods". Key criteria and best practices for selecting and evaluating individual in silico models are also described, together with the means of combining the results, taking into account any limitations, and identifying strategies that are likely to provide consistent results.
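    The read-across idea described above, predicting a reference point for an untested chemical from structurally similar tested analogues, can be sketched as a similarity-weighted average. The similarity cutoff, analogue similarities, and NOAEL-style values below are illustrative assumptions, not data from the article.

```python
# Illustrative read-across sketch: the reference point of an untested
# chemical is estimated from tested analogues, weighted by structural
# similarity. All numbers are hypothetical placeholders.

def read_across(similarities, toxicities, min_similarity=0.7):
    """Similarity-weighted average over analogues above a cutoff.

    similarities : per-analogue similarity to the target, in [0, 1]
    toxicities   : per-analogue reference point (e.g. a NOAEL-like value)
    """
    pairs = [(s, t) for s, t in zip(similarities, toxicities)
             if s >= min_similarity]
    if not pairs:
        raise ValueError("no analogue passes the similarity cutoff")
    total = sum(s for s, _ in pairs)
    return sum(s * t for s, t in pairs) / total

# Three hypothetical analogues; the third is too dissimilar to count.
sims = [0.92, 0.85, 0.55]
values = [10.0, 20.0, 100.0]   # made-up reference points
print(round(read_across(sims, values), 2))
```

    In a real assessment the similarity measure, the cutoff, and the weighting scheme would all be justified by expert judgment rather than fixed defaults.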

    A Computational Pipeline for the Development of Multi-Marker Bio-Signature Panels and Ensemble Classifiers

    BACKGROUND: Biomarker panels derived separately from genomic and proteomic data, and with a variety of computational methods, have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of the individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question: can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble?

    RESULTS: The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data) and proceeds through quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to ten individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all five ensembles, but typically at the cost of decreased specificity.

    CONCLUSION: Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.
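    The two aggregation rules named above can be sketched as follows. The probabilities, the decision cutoffs, and the single-vote rule are hypothetical illustrations, not parameters taken from the study.

```python
# Minimal sketch of two ways to aggregate per-classifier probabilities
# of a positive class (here: acute rejection). All numbers are made up.

def average_probability(probs, threshold=0.5):
    """Positive if the mean probability across classifiers >= threshold."""
    return sum(probs) / len(probs) >= threshold

def vote_threshold(probs, prob_cutoff=0.5, votes_needed=1):
    """Positive if at least `votes_needed` classifiers individually
    exceed `prob_cutoff` (a sensitivity-leaning rule when votes_needed
    is small)."""
    votes = sum(1 for p in probs if p >= prob_cutoff)
    return votes >= votes_needed

probs = [0.8, 0.3, 0.4]  # hypothetical outputs of three classifiers
print(average_probability(probs))  # mean is 0.5 -> True
print(vote_threshold(probs))       # one classifier above 0.5 -> True
```

    A single-vote rule flags a case whenever any one classifier is confident, which illustrates why such a rule tends to raise sensitivity while lowering specificity.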

    Advances and Challenges in Computational Target Prediction

    Target deconvolution is a vital initial step in preclinical drug development to determine research focus and strategy. In this respect, computational target prediction is used to identify the most probable targets of an orphan ligand or the most similar targets to a protein under investigation. Applications range from fundamental analysis of the mode of action, through polypharmacology and adverse-effect prediction, to drug repositioning. Here, we provide a review of published ligand- and target-based as well as hybrid approaches for computational target prediction, together with current limitations and future directions.

    Visual analytics in cheminformatics: user-supervised descriptor selection for QSAR methods

    The design of QSAR/QSPR models is a challenging problem, in which the selection of the most relevant descriptors constitutes a key step of the process. Several feature selection methods that address this step concentrate on statistical associations among descriptors and target properties, whereas chemical knowledge is left out of the analysis. For this reason, the interpretability and generality of the QSAR/QSPR models obtained by these feature selection methods are drastically affected. Therefore, an approach for integrating domain experts' knowledge in the selection process is needed to increase confidence in the final set of descriptors.
    Fil: Martínez, María Jimena. Universidad Nacional del Sur. Departamento de Ciencias e Ingeniería de la Computación. Laboratorio de Investigación y Desarrollo en Computación Científica; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina.
    Fil: Ponzoni, Ignacio. Universidad Nacional del Sur. Departamento de Ciencias e Ingeniería de la Computación. Laboratorio de Investigación y Desarrollo en Computación Científica; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina.
    Fil: Diaz, Monica Fatima. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Planta Piloto de Ingeniería Química. Universidad Nacional del Sur. Planta Piloto de Ingeniería Química; Argentina.
    Fil: Vazquez, Gustavo Esteban. Universidad Católica del Uruguay. Facultad de Ingeniería y Tecnologías; Uruguay. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina.
    Fil: Soto, Axel Juan. Dalhousie University. Faculty of Computer Science; Canadá. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina.
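    One way a statistical filter might be combined with expert curation, as the abstract argues for, is to rank descriptors by association with the target and then restrict the ranking to an expert-approved whitelist. The descriptor names (`logP`, `TPSA`, `nRings`) and all data below are hypothetical.

```python
# Sketch: statistical descriptor ranking intersected with an expert
# whitelist. Descriptor names and values are made-up illustrations.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def select_descriptors(data, target, approved, k=2):
    """Top-k descriptors by |correlation|, restricted to the whitelist."""
    ranked = sorted(
        ((abs(pearson(vals, target)), name)
         for name, vals in data.items() if name in approved),
        reverse=True)
    return [name for _, name in ranked[:k]]

data = {                       # hypothetical descriptor values
    "logP":   [1.0, 2.0, 3.0, 4.0],
    "TPSA":   [40.0, 35.0, 20.0, 10.0],
    "nRings": [1.0, 1.0, 2.0, 2.0],
}
target = [0.1, 0.3, 0.6, 0.9]  # hypothetical activity
print(select_descriptors(data, target, approved={"logP", "TPSA"}))
```

    In a visual-analytics setting the whitelist would come from interactive inspection by a chemist rather than a hard-coded set.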

    Topological regression as an interpretable and efficient tool for quantitative structure-activity relationship modeling

    Quantitative structure-activity relationship (QSAR) modeling is a powerful tool for drug discovery, yet the lack of interpretability of commonly used QSAR models hinders their application in molecular design. We propose a similarity-based regression framework, topological regression (TR), that offers a statistically grounded, computationally fast, and interpretable technique to predict drug responses. We compare the predictive performance of TR on 530 ChEMBL human target activity datasets against that of deep-learning-based QSAR models. Our results suggest that our sparse TR model can achieve performance equal to, if not better than, that of the deep-learning-based QSAR models, and provides better intuitive interpretation by extracting an approximate isometry between the chemical space of the drugs and their activity space.
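    A generic similarity-based regression in the spirit described above (not the authors' exact TR formulation) can be sketched with bit-set fingerprints and Tanimoto similarity; the fingerprints and activity values are hypothetical.

```python
# Generic similarity-based regression sketch: Tanimoto similarity over
# fingerprint bit sets defines the chemical space, and a query's
# activity is a similarity-weighted average of training activities.
# All fingerprints and activities are hypothetical.

def tanimoto(a, b):
    """Tanimoto similarity of two fingerprint bit sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def predict_activity(query, training):
    """training: list of (fingerprint set, activity) pairs."""
    weights = [(tanimoto(query, fp), act) for fp, act in training]
    total = sum(w for w, _ in weights)
    if total == 0:
        raise ValueError("query shares no bits with any training compound")
    return sum(w * act for w, act in weights) / total

training = [({1, 2, 3, 4}, 6.5),   # hypothetical pIC50-like values
            ({1, 2, 5},    5.0),
            ({7, 8},       4.0)]
query = {1, 2, 3}
print(round(predict_activity(query, training), 3))
```

    The appeal of such similarity-based schemes is interpretability: each prediction decomposes into named training compounds and their weights, unlike an opaque deep model.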

    Automatic machine learning: methods, systems, challenges

    This open access book presents the first comprehensive overview of general methods in Automatic Machine Learning (AutoML), collects descriptions of existing systems based on these methods, and discusses the first international challenge of AutoML systems. The book serves as a point of entry into this quickly developing field for researchers and advanced students alike, as well as a reference for practitioners aiming to use AutoML in their work. The recent success of commercial ML applications and the rapid growth of the field have created a high demand for off-the-shelf ML methods that can be used easily and without expert knowledge. Many of the recent machine learning successes crucially rely on human experts, who select appropriate ML architectures (deep learning architectures or more traditional ML workflows) and their hyperparameters; the field of AutoML instead targets a progressive automation of machine learning, based on principles from optimization and machine learning itself.
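    The core AutoML loop, automated search over hyperparameter configurations scored by a validation objective, can be sketched as follows. The search space and the quadratic surrogate objective are hypothetical stand-ins; a real system would train and validate an actual model at each trial.

```python
# Minimal AutoML flavour: random search over a hyperparameter space.
# The objective is a stand-in quadratic with a made-up optimum at
# lr=0.1, depth=4, so the example stays self-contained.
import random

def validation_score(config):
    """Stand-in for a real train-and-validate step (higher is better)."""
    return -((config["lr"] - 0.1) ** 2 + (config["depth"] - 4) ** 2)

def random_search(space, n_trials=200, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {"lr": rng.uniform(*space["lr"]),
               "depth": rng.randint(*space["depth"])}
        score = validation_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg

space = {"lr": (0.001, 1.0), "depth": (1, 10)}
best = random_search(space)
print(best["depth"], round(best["lr"], 2))
```

    Production AutoML systems replace the random sampler with model-based optimizers (e.g. Bayesian optimization) and the surrogate objective with cross-validated model training, but the loop structure is the same.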

    Simplified neural networks algorithms for function approximation and regression boosting on discrete input spaces

    Function approximation capabilities of feedforward neural networks have been widely investigated over the past couple of decades, and considerable work has been carried out to prove the 'universal approximation property' of these networks. Most work on applying neural networks to function approximation has concentrated on problems where the input variables are continuous. However, there are many real-world problems in which the input variables take only discrete values, or in which a significant number of the input variables are discrete. Most of the learning algorithms proposed so far do not distinguish between the different features of continuous and discrete input spaces and treat them in more or less the same way. For this reason, the corresponding learning algorithms become unnecessarily complex and time-consuming, especially when dealing with inputs consisting mainly of discrete variables. More recently, it has been shown that by focusing on the special features of discrete input spaces, simpler and more robust algorithms can be developed. The main objective of this work is to address the function approximation capabilities of artificial neural networks, with particular emphasis on the development, implementation, testing and analysis of new learning algorithms for the simplified neural network approximation scheme for functions defined on discrete input spaces. By developing the corresponding learning algorithms and testing them on different benchmark data sets, it is shown that, compared with conventional multilayer neural networks for approximating functions on discrete input spaces, the proposed simplified neural network architecture and algorithms can achieve similar or better approximation accuracy with a much simpler architecture and fewer parameters. This is particularly the case for high-dimensional, low-sample problems.
    To investigate the wider implications of simplified neural networks, their application has been extended to the regression boosting framework. By developing, implementing and testing with empirical data, it has been shown that these simplified neural-network-based algorithms also perform well in other neural-network-based ensembles.
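    As a point of reference for the kind of problem the thesis studies, a conventional one-hidden-layer network can be trained by plain gradient descent to fit a function defined only on discrete grid points. This is a generic sketch of the baseline setting, not the simplified architecture proposed in the thesis.

```python
# Toy baseline: a one-hidden-layer network fitted by per-sample gradient
# descent to a function defined on a discrete input space (XOR on
# {0,1} x {0,1}). Generic sketch, not the thesis's simplified scheme.
import math
import random

def train_net(samples, hidden=8, lr=0.5, epochs=3000, seed=1):
    rng = random.Random(seed)
    # input->hidden weights/biases, hidden->output weights/bias
    w = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    b = [0.0] * hidden
    v = [rng.uniform(-1, 1) for _ in range(hidden)]
    c = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for (x1, x2), t in samples:
            h = [sig(w[j][0] * x1 + w[j][1] * x2 + b[j]) for j in range(hidden)]
            y = sum(v[j] * h[j] for j in range(hidden)) + c
            err = y - t                      # gradient of 0.5 * err**2
            for j in range(hidden):
                grad_h = err * v[j] * h[j] * (1 - h[j])  # uses pre-update v[j]
                v[j] -= lr * err * h[j]
                w[j][0] -= lr * grad_h * x1
                w[j][1] -= lr * grad_h * x2
                b[j] -= lr * grad_h
            c -= lr * err
    def predict(x1, x2):
        h = [sig(w[j][0] * x1 + w[j][1] * x2 + b[j]) for j in range(hidden)]
        return sum(v[j] * h[j] for j in range(hidden)) + c
    return predict

samples = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
f = train_net(samples)
print([round(f(x1, x2), 1) for (x1, x2), _ in samples])
```

    The thesis's point is that when inputs are known to be discrete, this generic machinery can be replaced by a simpler architecture with fewer parameters at similar accuracy.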