
    ARTIFICIAL NEURAL NETWORKS: FUNCTIONING AND APPLICATIONS IN PHARMACEUTICAL INDUSTRY

    Artificial Neural Network (ANN) technology is a family of computer algorithms designed to simulate neurological processing: they process information and produce outcomes that resemble human thinking in learning, decision making, and problem solving. The distinctive strength of ANNs is their ability to deliver useful results even from incomplete or historical data, without the need for a structured experimental design, through modeling and pattern recognition. An ANN absorbs data through repetition under a suitable learning model, much as humans do, without explicit programming. It operates through processing elements connected to user-given inputs, which are transformed by a function and delivered as output. Moreover, the present output of an ANN is the combined effect of data collected from previous inputs and the current responsiveness of the system. Technically, an ANN is typically a supervised network trained with the backpropagation learning rule. Owing to its strong predictive ability, ANN can be applied in many scientific disciplines that require multivariate data analysis. In pharmaceutical processing, this flexible tool is used to model various non-linear relationships. It is also applied to the refinement of pre-formulation parameters for predicting physicochemical properties of drug substances, as well as in pharmaceutical research, medicinal chemistry, QSAR studies, and pharmaceutical instrumental engineering. Its multi-objective concurrent optimization is adopted in drug discovery, protein structure prediction, and rational data analysis.
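    The supervised, backpropagation-trained network the abstract describes can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the layer sizes, learning rate, and XOR data (a classic non-linear relationship a single neuron cannot capture) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: XOR, a simple non-linear input-output relationship.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 processing elements; weights start small and random.
W1 = rng.normal(0, 1, (2, 4))
W2 = rng.normal(0, 1, (4, 1))

def forward(X):
    h = sigmoid(X @ W1)    # hidden activations
    out = sigmoid(h @ W2)  # network output
    return h, out

lr = 1.0
losses = []
for _ in range(5000):
    h, out = forward(X)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation: push the output error back through each layer
    # and adjust the connection weights by repetition.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(losses[0], losses[-1])  # mean squared error falls with training
```

    Repetition with the same learning rule is all that is needed: the error shrinks without the mapping ever being explicitly programmed.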

    Prior Knowledge for Predictive Modeling: The Case of Acute Aquatic Toxicity

    Early assessment of the potential impact of chemicals on health and the environment requires toxicological properties of the molecules. Predictive modeling is often used to estimate property values in silico from pre-existing experimental data, which are often scarce and uncertain. One way to advance the predictive modeling procedure is to use knowledge that already exists in the field. Scientific publications contain a vast amount of knowledge, but the manual work required to process the enormous volume of information gathered in scientific articles can hinder its utilization. This work explores the opportunity of semiautomated knowledge extraction from scientific papers and investigates several potential ways of using it for predictive modeling. The knowledge extraction and predictive modeling are applied to the field of acute aquatic toxicity, an important parameter in the safety assessment of chemicals. The extensive amount of diverse information in the field makes acute aquatic toxicity an attractive area for investigating the use of knowledge for predictive modeling. The work demonstrates that the knowledge collection and classification procedure can be useful in hybrid modeling studies for model and predictor selection, addressing data gaps, and evaluating model performance.

    Transformative Machine Learning

    The key to success in machine learning (ML) is the use of effective data representations. Traditionally, data representations were hand-crafted. Recently it has been demonstrated that, given sufficient data, deep neural networks can learn effective implicit representations from simple input representations. However, for most scientific problems the use of deep learning is not appropriate, because the amount of available data is limited and/or the output models must be explainable. Nevertheless, many scientific problems do have significant amounts of data available on related tasks, which makes them amenable to multi-task learning, i.e., learning many related problems simultaneously. Here we propose a novel and general representation learning approach for multi-task learning that works successfully with small amounts of data. The fundamental new idea is to transform an intrinsic input data representation (i.e., hand-crafted features) into an extrinsic representation based on what a pre-trained set of models predicts about the examples. This transformation has the dual advantages of producing significantly more accurate predictions and providing explainable models. To demonstrate the utility of this transformative learning approach, we have applied it to three real-world scientific problems: drug design (quantitative structure-activity relationship learning), predicting human gene expression (across different tissue types and drug treatments), and meta-learning for machine learning (predicting which machine learning methods work best for a given problem). In all three problems, transformative machine learning significantly outperforms the best intrinsic representation.
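    The core transformation can be sketched in a few lines: each example's new (extrinsic) feature vector is simply what a set of pre-trained related-task models predicts for it. Everything here is an illustrative assumption, not the paper's method: the synthetic related tasks, the least-squares "pre-trained models", and the dimensions are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d, n_tasks = 200, 5, 8
X = rng.normal(size=(n, d))  # intrinsic (hand-crafted) feature matrix

# "Pre-train" one simple linear model per related task via least squares.
models = []
for _ in range(n_tasks):
    w_true = rng.normal(size=d)
    y_task = X @ w_true + rng.normal(scale=0.1, size=n)  # related-task labels
    w_hat, *_ = np.linalg.lstsq(X, y_task, rcond=None)
    models.append(w_hat)

def transform(X):
    # Extrinsic representation: one feature per pre-trained model's prediction.
    return np.column_stack([X @ w_hat for w_hat in models])

X_ext = transform(X)
print(X_ext.shape)  # (200, 8)
```

    A downstream learner trained on X_ext sees the examples through the lens of the related tasks, which is what makes the resulting models both more accurate on small data and easier to interpret: each feature is "what model k predicts".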

    The Use of Computational Methods in the Grouping and Assessment of Chemicals - Preliminary Investigations

    This document presents a perspective on how computational approaches could potentially be used in the grouping and assessment of chemicals, and especially in the application of read-across and the development of chemical categories. The perspective is based on experience gained by the authors during 2006 and 2007, when the Joint Research Centre's European Chemicals Bureau was directly involved in the drafting of technical guidance on the applicability of computational methods under REACH. Some of the experience gained and ideas developed resulted from a number of research-based case studies conducted in-house during 2006 and the first half of 2007. The case studies were performed to explore the possible applications of computational methods in the assessment of chemicals and to contribute to the development of technical guidance. Not all of the methods explored and ideas developed are explicitly included in the final guidance documentation for REACH. Many of the methods are novel, and are still being refined and assessed by the scientific community. At present, many of the methods have not been tried and tested in the regulatory context. The authors therefore hope that the perspective and case studies compiled in this document, whilst not intended to serve as guidance, will nevertheless provide an input to further research efforts aimed at developing computational methods and at exploring their potential applicability in the regulatory assessment of chemicals.

    Hybrid Computational Toxicology Models for Regulatory Risk Assessment

    Computational toxicology is the development of quantitative structure-activity relationship (QSAR) models that relate a quantitative measure of chemical structure to a biological effect. In silico QSAR tools are widely accepted as a faster alternative to time-consuming clinical and animal testing methods for regulatory risk assessment of xenobiotics used in consumer products. However, different QSAR tools often make contrasting predictions for a new xenobiotic and may also vary in their predictive ability for different classes of xenobiotics. This makes their use challenging, especially in regulatory applications, where transparency and interpretation of predictions play a crucial role in the development of safety assessment decisions. Recent efforts in computational toxicology involve the use of in vitro data, which enables better insight into the mode of action of xenobiotics and identification of potential mechanism(s) of toxicity. To ensure that in silico models are robust and reliable before they can be used for regulatory applications, the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) initiative and the Organisation for Economic Co-operation and Development (OECD) have established legislative guidelines for their validation. This dissertation addresses the limitations in the use of current QSAR tools for regulatory risk assessment within REACH/OECD guidelines. The first contribution is an ensemble model that combines the predictions from four QSAR tools to improve the quality of predictions. The model presents a novel mechanism to select a desired trade-off between false positive and false negative predictions. The second contribution is the introduction of quantitative biological activity relationship (QBAR) models that use mechanistically relevant in vitro data as biological descriptors for the development of computational toxicology models. Two novel applications demonstrate that QBAR models can sufficiently predict carcinogenicity where QSAR model predictions may fail. The third contribution is the development of two novel methods that explore the synergistic use of structural and biological similarity data for carcinogenicity prediction. Two applications demonstrate the feasibility of the proposed methods within REACH/OECD guidelines. These contributions lay the foundation for the development of novel mechanism-based in silico tools for mechanistically complex toxic endpoints, to successfully advance the field of computational toxicology.
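    The ensemble idea with a tunable false-positive/false-negative trade-off can be illustrated with a simple threshold on the averaged tool scores. This is a hedged sketch only: the dissertation's actual combination mechanism is more sophisticated, and the tool scores and thresholds below are invented for illustration.

```python
def ensemble_classify(scores, threshold):
    """Average the tools' toxicity scores; flag toxic if above threshold.

    Lowering the threshold yields fewer false negatives (more protective
    for regulators) at the cost of more false positives, and vice versa.
    """
    mean_score = sum(scores) / len(scores)
    return mean_score >= threshold

# Four hypothetical QSAR tools scoring one xenobiotic in [0, 1].
tool_scores = [0.62, 0.48, 0.71, 0.55]  # mean = 0.59

conservative = ensemble_classify(tool_scores, threshold=0.4)  # favors catching toxins
permissive = ensemble_classify(tool_scores, threshold=0.7)    # favors avoiding false alarms
print(conservative, permissive)  # True False
```

    Sweeping the threshold traces out the achievable trade-off curve, so a regulator can pick the operating point that matches the cost of each error type.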