
    The Evaluation of Molecular Similarity and Molecular Diversity Methods Using Biological Activity Data

    This paper reviews the techniques available for quantifying the effectiveness of molecular similarity and molecular diversity methods, focusing in particular on similarity searching and on compound-selection procedures. The evaluation criteria considered are based on biological activity data, both qualitative and quantitative, with rather different criteria needed depending on the type of data available.
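
For the qualitative (active/inactive) case, one common effectiveness criterion is the recall of known actives in the top of a similarity ranking. A minimal sketch with hypothetical compound identifiers (not data from the paper):

```python
def recall_at_k(ranked_ids, active_ids, k):
    """Fraction of the known actives retrieved in the top k of a ranking."""
    top = set(ranked_ids[:k])
    return len(top & set(active_ids)) / len(active_ids)

# hypothetical ranking produced by a similarity search
ranking = ["m3", "m1", "m7", "m2", "m5", "m4", "m6"]
actives = {"m1", "m2", "m4"}
print(recall_at_k(ranking, actives, 4))  # 2 of 3 actives in the top 4
```

Quantitative activity data would instead call for rank-correlation-style criteria, as the review discusses.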

    Understanding Conditional Modes of Action in Chemical-Induced Toxicity Using Rule Models

    It is estimated that 115 million animals are used in experimental testing each year; shifting effort toward alternative methods for toxicity assessment is therefore essential. However, slow regulatory acceptance of new approaches is governed by knowledge gaps in toxicity modes of action. In this thesis, I describe these challenges and the use of in vitro screening as an alternative to animal testing. I also discuss common data-based methods for deriving hypotheses about toxicity modes of action, and their limitations in capturing multiple biological perturbations. I applied novel data-based workflows, using rule models, to prioritize in vitro assays predictive of toxicity and to detect significant polypharmacology profiles. I explain how constraints were applied to rule-based models to support meaningful mechanistic interpretation for two toxicity endpoints: rat hepatotoxicity and acute toxicity. I compared the assays selected by rules for predicting hepatotoxicity with the endpoints used in commercial in vitro models. An overlap was observed, including cytochrome activity, mitochondrial toxicity and immunological responses; however, nuclear receptor activity, identified in rules, is not currently covered in commercial setups. I also demonstrate that endocrine disruption endpoints extrapolate better to in vivo toxicity when a set of specific conditions is met, such as physicochemical properties associated with good bioavailability. Next, I examined synergistic interactions between conditions in rules describing acute toxicity, gaining novel insights into how specific stressors potentiate the perturbations caused by known key events such as acetylcholinesterase inhibition and neuro-signalling disruption. I show that examining polypharmacology profiles is particularly important at low bioactive potencies.
    Further, the overall predictive performance of rules describing acute toxicity was tested against a benchmark random forest model in a conformal prediction framework. Irrespective of the data type used in training, the models were prone to bias with respect to compound promiscuity, whereby highly promiscuous compounds were more likely to be predicted as toxic. Overall, the studies conducted in this thesis provide novel insights into the molecular mechanisms of hepatotoxicity and acute toxicity, and into the roles of chemical properties and polypharmacology. This knowledge can be used to improve the utility and design of alternative methods for toxicity assessment and hence accelerate their regulatory acceptance.
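
The conformal prediction framework mentioned above attaches a validity guarantee to each class prediction via p-values computed from calibration-set nonconformity scores. A minimal sketch of that p-value computation, with made-up scores (the thesis's actual models and data are not reproduced here):

```python
def conformal_p_value(calibration_scores, test_score):
    """Conformal p-value: the fraction of calibration nonconformity scores
    at least as large as the test score, counting the test point itself."""
    n_ge = sum(1 for s in calibration_scores if s >= test_score)
    return (n_ge + 1) / (len(calibration_scores) + 1)

# hypothetical nonconformity scores for the "toxic" class
calib = [0.9, 0.7, 0.4, 0.3, 0.2]
p = conformal_p_value(calib, 0.35)
# "toxic" is included in the prediction set at significance level 0.2 if p > 0.2
print(p)
```

At a chosen significance level, the framework guarantees the error rate of the resulting prediction sets, regardless of which underlying model (rules or random forest) supplies the scores.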

    Repositioning drugs for rare immune diseases: Hopes and challenges for precision medicine

    Human primary immunodeficiency diseases (PIDs) are a large group of rare diseases characterized by great genetic and phenotypic heterogeneity. A large subset of PIDs is genetically defined, which is crucial for understanding the molecular basis of disease and for the development of precision medicine. Discovery and development of new therapies for rare diseases have long been neglected because of the length and cost of the processes involved. Interest has increased thanks to stimulatory regulatory and supportive reimbursement environments that enable viable business models. Advances in the biomedical and computational sciences enable rational, designed approaches to identifying novel indications for already approved drugs, allowing faster delivery of new medicines. Drug repositioning is based either on clinical analogies between diseases or on an understanding of the molecular mode of drug action and the mechanisms of the disease. All of these are the basis for the development of precision medicine.

    Substructural Analysis Using Evolutionary Computing Techniques

    Substructural analysis (SSA) was one of the very first machine learning techniques applied to chemoinformatics in the area of virtual screening. Given a set of compounds, typically described by fragment occurrence data such as 2D fingerprints, SSA computes a weight for each fragment that reflects its contribution to the activity (or inactivity) of the compounds containing that fragment. The overall probability of activity for a compound is then computed by summing, or otherwise combining, the weights of the fragments present in that compound. A variety of weighting schemes, each based on a specific equation, are available for this purpose. This thesis seeks to improve the effectiveness of SSA using two evolutionary computation methods: the genetic algorithm (GA) and genetic programming (GP). Building on previous studies, ten published SSA weighting schemes were analysed and compared in a simulated virtual screening experiment. The analysis showed the most effective weighting scheme to be the R4 equation, one of the document-based weighting schemes. A second experiment investigated a GA-based weighting scheme for SSA in comparison with the R4 scheme. The GA is simple in concept, focusing purely on generating suitable weights, and effective in operation; the findings show that the GA-based SSA is superior to the R4-based SSA in both active compound retrieval rate and predictive performance. A third experiment investigated a GP-based SSA. Rigorous experiments showed the GP to be superior to the existing SSA weighting schemes; in general, however, the GP-based SSA was less effective than the GA-based SSA.
    A final experiment explored the feasibility of data fusion with both the GA and the GP: a method that produces a final ranking from multiple ranking lists according to one of several fusion rules. The results indicate that data fusion is a good way to boost GA- and GP-based SSA searching, with the RKP rule the most effective fusion rule.
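
The weight-then-sum scheme at the heart of SSA can be sketched as follows. The log-odds weight used here is a generic illustration only, not the R4 equation or the GA/GP-derived weights studied in the thesis, and the fingerprints are invented:

```python
import math

def fragment_weights(fingerprints, labels, eps=0.5):
    """One simple SSA-style log-odds weight per fragment: how much more often
    the fragment occurs in actives than in inactives (eps smooths zero counts).
    Illustrative scheme only, not the thesis's R4 equation."""
    n_act = sum(labels)
    n_inact = len(labels) - n_act
    frags = set().union(*fingerprints)
    w = {}
    for f in frags:
        act = sum(1 for fp, y in zip(fingerprints, labels) if y and f in fp)
        inact = sum(1 for fp, y in zip(fingerprints, labels) if not y and f in fp)
        w[f] = math.log(((act + eps) / (n_act + eps)) /
                        ((inact + eps) / (n_inact + eps)))
    return w

def score(fp, w):
    """Sum the weights of the fragments present in a compound."""
    return sum(w.get(f, 0.0) for f in fp)

# four toy compounds as fragment sets; 1 = active, 0 = inactive
fps = [{"A", "B"}, {"A", "C"}, {"B", "C"}, {"C"}]
ys = [1, 1, 0, 0]
w = fragment_weights(fps, ys)
ranked = sorted(range(len(fps)), key=lambda i: score(fps[i], w), reverse=True)
```

A GA or GP, as in the thesis, would replace the closed-form weight with weights evolved to maximize retrieval of actives in a ranking like `ranked`.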

    Molecular Similarity and Xenobiotic Metabolism

    MetaPrint2D, a new software tool implementing a data-mining approach for predicting sites of xenobiotic metabolism, has been developed. The algorithm is based on a statistical analysis of the occurrences of atom-centred circular fingerprints in both substrates and metabolites. This approach has undergone extensive evaluation and been shown to be of comparable accuracy to current best-in-class tools, but is able to make much faster predictions, for the first time enabling chemists to explore the effects of structural modifications on a compound's metabolism in a highly responsive and interactive manner. MetaPrint2D is able to assign a confidence score to the predictions it generates, based on the availability of relevant data and the degree to which a compound is modelled by the algorithm. In the course of the evaluation of MetaPrint2D, a novel metric for assessing the performance of site-of-metabolism predictions has been introduced. This overcomes the bias, introduced by molecule size and the number of sites of metabolism, inherent to the most commonly reported metrics for evaluating site-of-metabolism predictions. The data-mining approach has been augmented by a set of reaction-type definitions to produce MetaPrint2D-React, enabling prediction of the types of transformation a compound is likely to undergo and the metabolites that are formed. This approach has been evaluated against both historical data and metabolic schemes reported in a number of recently published studies. Results suggest that the ability of the method to predict metabolic transformations is highly dependent on the relevance of the training-set data to the query compounds. MetaPrint2D has been released as an open-source software library, and both MetaPrint2D and MetaPrint2D-React are available for chemists to use through the Unilever Centre for Molecular Science Informatics website.
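
The counting idea behind this kind of occurrence-based site prediction can be sketched as follows. This is a rough illustration of the principle, not the released MetaPrint2D algorithm or data format, and the environment names and counts are invented:

```python
def site_likelihood(substrate_counts, reaction_counts):
    """For each atom-environment fingerprint, the fraction of its occurrences
    in substrates that coincide with an observed site of metabolism."""
    return {env: reaction_counts.get(env, 0) / n
            for env, n in substrate_counts.items() if n > 0}

# hypothetical occurrence counts mined from a metabolite database
substrate_counts = {"envA": 40, "envB": 10, "envC": 25}
reaction_counts = {"envA": 30, "envB": 1}
ratios = site_likelihood(substrate_counts, reaction_counts)

# rank the atoms of a query molecule by the ratio of their environments
query_atoms = {0: "envA", 1: "envB", 2: "envC"}
ranked = sorted(query_atoms,
                key=lambda a: ratios.get(query_atoms[a], 0.0), reverse=True)
```

The size of `substrate_counts[env]` also hints at how a data-availability-based confidence score could be attached: rare environments give poorly supported ratios.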

    In Silico Target Prediction by Training Naive Bayesian Models on Chemogenomics Databases

    Submitted to the faculty of the Chemical Informatics Graduate Program in partial fulfillment of the requirements for the degree Master of Science in the School of Informatics, Indiana University, December 2005.
    The completion of the Human Genome Project is seen as a gateway to the discovery of novel drug targets (Jacoby, Schuffenhauer, & Floersheim, 2003). How much of this information is actually translated into knowledge, e.g., the discovery of novel drug targets, is yet to be seen. The traditional route of drug discovery has been from target to compound. Conventional research techniques focus on studying animal and cellular models, followed by the development of a chemical concept. Modern approaches that have evolved through progress in molecular biology and genomics start with molecular targets, which usually originate from the discovery of a new gene. Subsequent target validation to establish suitability as a drug target is followed by high-throughput screening assays to identify new active chemical entities (Hofbauer, 1997). In contrast, chemogenomics takes the opposite approach to drug discovery (Jacoby, Schuffenhauer, & Floersheim, 2003). It puts chemical entities to the forefront as probes to study their effects on biological targets, and then links these effects to the genetic pathways of those targets (Figure 1a). The goal of chemogenomics is to rapidly identify new drug molecules and drug targets by establishing chemical and biological connections. Just as classical genetic experiments are classified into forward and reverse, experimental chemogenomics methods can be distinguished as forward or reverse depending on the direction of the investigative process, i.e., from phenotype to target or from target to phenotype, respectively (Jacoby, Schuffenhauer, & Floersheim, 2003). The identification and characterization of protein targets are critical bottlenecks in forward chemogenomics experiments.
    Currently, methods such as affinity matrix purification (Taunton, Hassig, & Schreiber, 1996) and phage display (Sche, McKenzie, White, & Austin, 1999) are used to determine targets for compounds, but none of the techniques used for target identification after the initial screening is efficient. In silico methods can provide complementary and efficient ways to predict targets by using chemogenomics databases to obtain information about the chemical structures and target activities of compounds. Annotated chemogenomics databases integrate the chemical and biological domains and can provide a powerful tool to predict and validate new targets for compounds with unknown effects (Figure 1b). A chemogenomics database contains both the chemical properties and the biological activities associated with a compound. The MDL Drug Data Report (MDDR) (Molecular Design Ltd., San Leandro, California) is one of the best-known and most widely used databases containing chemical structures and corresponding biological activities of drug-like compounds. The relevance and quality of the information that can be derived from these databases depend on their annotation schemes as well as on the methods used to mine the data. In recent years chemists and biologists have used such databases to carry out similarity searches and to look up biological activities for compounds similar to the probe molecules of a given assay. With the emergence of new chemogenomics databases that follow a well-structured and consistent annotation scheme, new automated target prediction methods are possible that can give insights into the biological world based on structural similarity between compounds. The usefulness of such databases lies not only in predicting targets, but also in establishing the genetic connections of the targets discovered as a consequence of the prediction.
    The ability to perform automated target prediction relies heavily on a synergy of very recent technologies, which include: i) highly structured and consistently annotated chemogenomics databases, many of which have surfaced very recently, such as WOMBAT (Sunset Molecular Discovery LLC, Santa Fe, New Mexico), KinaseChemBioBase (Jubilant Biosys Ltd., Bangalore, India) and StARLITe (Inpharmatica Ltd., London, UK); ii) chemical descriptors (Xue & Bajorath, 2000) that capture the structure-activity relationships of molecules, together with computational techniques (Kitchen, Stahura, & Bajorath, 2004) specifically tailored to extract information from those descriptors; and iii) data pipelining environments that are fast, integrate multiple computational steps, and support large datasets. A combination of all these technologies may be employed to bridge the gap between the chemical and biological domains, which remains a challenge in the pharmaceutical industry.
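
The kind of model described in the title (a naive Bayesian classifier trained on an annotated chemogenomics database to predict targets) can be sketched as follows. The compounds, features and target names are invented, only features present in a compound are scored (a common simplification for sparse binary fingerprints), and this is not the thesis's actual implementation:

```python
import math
from collections import defaultdict

def train_nb(samples):
    """samples: list of (fingerprint_set, target_label) pairs.
    Returns Laplace-smoothed per-target feature log-likelihoods."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    features = set()
    for fp, t in samples:
        totals[t] += 1
        features |= fp
        for f in fp:
            counts[t][f] += 1
    model = {t: {f: math.log((counts[t][f] + 1) / (n + 2)) for f in features}
             for t, n in totals.items()}
    return model, totals, features

def predict(fp, model, totals, features):
    """Score each target by its log prior plus the summed log-likelihoods
    of the query's known features; return the best-scoring target."""
    n_all = sum(totals.values())
    scores = {t: math.log(totals[t] / n_all) +
                 sum(loglik[f] for f in fp & features)
              for t, loglik in model.items()}
    return max(scores, key=scores.get)

# toy annotated database: fingerprint features per compound, with its target
data = [({"f1", "f2"}, "kinase"), ({"f1"}, "kinase"),
        ({"f3"}, "GPCR"), ({"f3", "f4"}, "GPCR")]
model, totals, feats = train_nb(data)
print(predict({"f1", "f2"}, model, totals, feats))
```

In practice the fingerprints would come from a descriptor package and the annotations from a database such as the MDDR, with one model (or one set of likelihoods) per annotated activity class.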

    TI2BioP — Topological Indices to BioPolymers. A Graphical–Numerical Approach for Bioinformatics

    We developed a new graphical–numerical method called TI2BioP (Topological Indices to BioPolymers) to estimate topological indices (TIs) from two-dimensional (2D) graphical approaches for the natural biopolymers DNA, RNA and proteins. The methodology mainly turns long biopolymeric sequences into 2D artificial graphs, such as Cartesian and four-color maps, but also reads other 2D graphs from the thermodynamic folding of DNA/RNA strings inferred by other programs. The topology of such 2D graphs is encoded by node or adjacency matrices for the calculation of spectral moments as TIs. These numerical indices were used to build alignment-free models for the functional classification of biosequences and to calculate alignment-free distances for phylogenetic purposes. The performance of the method was evaluated on highly diverse gene/protein classes, which represent a challenge for current bioinformatics algorithms. TI2BioP generally outperformed classical bioinformatics algorithms in the functional classification of bacteriocins, ribonucleases III (RNases III), the genomic internal transcribed spacer II (ITS2) and the adenylation domains (A-domains) of nonribosomal peptide synthetases (NRPS), allowing the detection of new members of these target gene/protein classes. TI2BioP's classification performance was contrasted with, and supported by, predictions from sensitive alignment-based algorithms and by experimental outcomes. The new ITS2 sequence isolated from Petrakia sp. was used in our graphical–numerical approach to estimate alignment-free distances for phylogenetic inference. Although TI2BioP was developed for bioinformatics applications, it can be extended to predict interesting features of biopolymers other than DNA and protein sequences. TI2BioP version 2.0 is freely available from http://ti2biop.sourceforge.net/
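
Under one common definition, the k-th spectral moment of a graph is the trace of the k-th power of its adjacency matrix, i.e. the number of closed walks of length k. A minimal sketch of that index calculation on a toy 4-node graph (building the graphs from biopolymer sequences, as TI2BioP does, is not reproduced here):

```python
def spectral_moments(adj, k_max):
    """Spectral moments as topological indices: the k-th moment is tr(A^k),
    the number of closed walks of length k in the graph."""
    n = len(adj)
    power = [row[:] for row in adj]  # starts at A^1
    moments = []
    for _ in range(k_max):
        moments.append(sum(power[i][i] for i in range(n)))
        # multiply: power = power @ adj
        power = [[sum(power[i][j] * adj[j][l] for j in range(n))
                  for l in range(n)] for i in range(n)]
    return moments

# adjacency matrix of a tiny 4-node path graph standing in for a 2D sequence map
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(spectral_moments(A, 4))  # [0, 6, 0, 14]
```

A vector of such moments per sequence graph is what an alignment-free classifier or distance calculation can then consume.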