
    Ligand-based virtual screening using binary kernel discrimination

    This paper discusses the use of a machine-learning technique called binary kernel discrimination (BKD) for virtual screening in drug- and pesticide-discovery programmes. BKD is compared with several other ligand-based tools for virtual screening in databases of 2D structures represented by fragment bit-strings, and is shown to provide an effective, and reasonably efficient, way of prioritising compounds for biological screening.
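    The BKD scoring step can be illustrated with a minimal sketch. Assuming binary fragment bit-strings and the binomial kernel commonly used with BKD, a query compound is scored by the ratio of summed kernel values over known actives and known inactives; the smoothing parameter and the toy fingerprints below are illustrative, not taken from the paper.

```python
import numpy as np

def bkd_kernel(x, y, lam=0.9):
    """Binomial kernel on binary fingerprints: lam**(matching bits) * (1-lam)**(differing bits)."""
    d = int(np.sum(x != y))                      # Hamming distance
    return lam ** (len(x) - d) * (1.0 - lam) ** d

def bkd_score(query, actives, inactives, lam=0.9):
    """BKD score: summed kernel similarity to actives divided by that to inactives."""
    num = sum(bkd_kernel(query, a, lam) for a in actives)
    den = sum(bkd_kernel(query, i, lam) for i in inactives)
    return num / den if den > 0 else float("inf")

# toy 8-bit fragment fingerprints
actives = [np.array([1, 1, 0, 1, 0, 0, 1, 0]), np.array([1, 1, 0, 1, 1, 0, 1, 0])]
inactives = [np.array([0, 0, 1, 0, 1, 1, 0, 1])]
query = np.array([1, 1, 0, 1, 0, 0, 1, 1])
print(bkd_score(query, actives, inactives))      # high ratio -> prioritise for screening
```

    Compounds with the highest scores would then be passed forward for biological screening.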

    Development and Interpretation of Machine Learning Models for Drug Discovery

    In drug discovery, domain experts from fields such as medicinal chemistry, biology, and computer science collaborate to develop novel pharmaceutical agents. Computational models developed in this process must be correct and reliable, but at the same time interpretable: their findings have to be accessible to experts from fields other than computer science, so that they can be validated and improved with domain knowledge. Only then can interdisciplinary teams communicate their scientific results both precisely and intuitively. This work is concerned with the development and interpretation of machine learning models for drug discovery. To this end, it describes the design and application of computational models for specialized use cases, such as compound profiling and hit expansion. Novel insights into machine learning for ligand-based virtual screening are presented, and limitations in the modeling of compound potency values are highlighted. It is shown that compound activity can be predicted from high-dimensional target profiles, without the presence of molecular structures. Moreover, support vector regression for potency prediction is carefully analyzed, and a systematic misprediction of highly potent ligands is discovered. A further key aspect is the interpretation and chemically accessible representation of the models, so this thesis focuses especially on methods to better understand and communicate modeling results. To this end, two interactive visualizations for the assessment of naive Bayes and support vector machine models on molecular fingerprints are presented. These visual representations of virtual screening models are designed to provide an intuitive chemical interpretation of the results.
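    One way such fingerprint-based Bayesian models can be opened up for chemical interpretation is to inspect per-bit contributions and map them back onto substructures. The sketch below is not the thesis' visualization code; it assumes scikit-learn, a synthetic fingerprint matrix, and a Bernoulli naive Bayes model, and merely extracts the log-odds contribution of each fingerprint bit towards the active class.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# toy data: rows are compounds, columns are fingerprint bits; y is the activity label
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 16))
y = rng.integers(0, 2, size=100)

model = BernoulliNB(alpha=1.0).fit(X, y)

# log-odds contribution of a set bit towards class 1 ("active"); in an interactive
# visualization these weights could be coloured onto the atoms that set each bit
log_p = model.feature_log_prob_                  # shape (n_classes, n_bits)
bit_contributions = log_p[1] - log_p[0]
print("most activity-like bits:", np.argsort(bit_contributions)[::-1][:5])
```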

    Exploring the potential of 3D Zernike descriptors and SVM for protein–protein interface prediction

    Background The correct determination of protein–protein interaction interfaces is important for understanding disease mechanisms and for rational drug design. To date, several computational methods for the prediction of protein interfaces have been developed, but the interface prediction problem is still not fully understood. Experimental evidence suggests that the location of binding sites is imprinted in the protein structure, but there are major differences among the interfaces of the various protein types: the characterising properties can vary considerably depending on the interaction type and function. The selection of an optimal set of features characterising the protein interface and the development of an effective method to represent and capture the complex protein recognition patterns are of paramount importance for this task. Results In this work we investigate the potential of a novel local surface descriptor based on 3D Zernike moments for the interface prediction task. Descriptors invariant to roto-translations are extracted from circular patches of the protein surface enriched with physico-chemical properties from the HQI8 amino acid index set, and are used as samples for a binary classification problem. Support Vector Machines are used as the classifier to distinguish interface local surface patches from non-interface ones. The proposed method was validated on 16 classes of proteins extracted from the Protein–Protein Docking Benchmark 5.0 and compared to other state-of-the-art protein interface predictors (SPPIDER, PrISE and NPS-HomPPI). Conclusions The 3D Zernike descriptors are able to capture the similarity among patterns of physico-chemical and biochemical properties mapped on the protein surface arising from the various spatial arrangements of the underlying residues, and their usage can easily be extended to other sets of amino acid properties. The results suggest that the choice of a proper set of features characterising the protein interface is crucial for the interface prediction task, and that optimality strongly depends on the class of proteins whose interface we want to characterise. We postulate that different protein classes should be treated separately and that it is necessary to identify an optimal set of features for each protein class.
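    The classification step can be sketched as follows. The random vectors below merely stand in for per-patch descriptors (3D Zernike moments enriched with HQI8 physico-chemical features); scikit-learn, the descriptor dimensionality and the hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# synthetic stand-ins for surface-patch descriptor vectors and interface labels
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 121))        # illustrative 121-dimensional patch descriptors
y = rng.integers(0, 2, size=200)       # 1 = interface patch, 0 = non-interface patch

# RBF-kernel SVM with feature scaling and class balancing for the binary patch task
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced"))
print(cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean())
```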

    Target prediction utilising negative bioactivity data covering large chemical space.

    BACKGROUND: In silico analyses are increasingly being used to support mode-of-action investigations; however, many such approaches do not utilise the large amounts of inactive data held in chemogenomic repositories. This work is concerned with the integration of such bioactivity data into the target prediction of orphan compounds, producing probabilities of activity and inactivity for a range of targets. To this end, a novel human bioactivity data set was constructed through the assimilation of over 195 million bioactivity data points deposited in the ChEMBL and PubChem repositories, and the subsequent application of a sphere-exclusion selection algorithm to oversample presumed inactive compounds. RESULTS: A Bernoulli Naïve Bayes algorithm was trained using the data and evaluated using fivefold cross-validation, achieving a mean recall and precision of 67.7 and 63.8 % for active compounds and 99.6 and 99.7 % for inactive compounds, respectively. We show that the performance of the models is considerably influenced by the underlying intraclass training similarity, the size of a given class of compounds, and the degree of additional oversampling. The method was also validated using compounds extracted from WOMBAT, producing average precision-recall AUC and BEDROC scores of 0.56 and 0.85, respectively. Inactive data points used for this test are based on presumed inactivity, giving an approximate indication of the true extrapolative ability of the models. A distance-based applicability domain analysis was also conducted, indicating that an average Tanimoto coefficient distance of 0.3 or greater between a test compound and the training set can be used to give a global measure of confidence in model predictions. A final comparison to a method trained solely on active data from ChEMBL yielded precision-recall AUC and BEDROC scores of 0.45 and 0.76. CONCLUSIONS: The inclusion of inactive data for model training produces models with superior AUC and improved early recognition capabilities, although the results of internal and external validation show differing performance across the breadth of models. The realised target prediction protocol is available at https://github.com/lhm30/PIDGIN.
    Graphical abstract: The inclusion of large-scale negative training data for in silico target prediction improves the precision-recall AUC and BEDROC scores of the target models.
    The authors thank Krishna C. Bulusu for proofreading the manuscript. LHM would like to thank BBSRC and AstraZeneca for their funding. GD thanks EPSRC and Eli Lilly for funding. This is the final version of the article. It first appeared from Springer via http://dx.doi.org/10.1186/s13321-015-0098-
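    The distance-based applicability domain check can be sketched as follows, assuming binary fingerprints and the mean Tanimoto distance criterion described above; the 0.3 threshold comes from the abstract, while the fingerprints and the reading that larger distances mean lower confidence are illustrative.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient between two binary fingerprints."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def mean_tanimoto_distance(query, training_set):
    """Average Tanimoto distance (1 - Tc) from a query fingerprint to the training compounds."""
    return float(np.mean([1.0 - tanimoto(query, t) for t in training_set]))

# toy usage: flag queries that sit far from the training data
train = [np.random.default_rng(i).integers(0, 2, 64) for i in range(10)]
query = np.random.default_rng(99).integers(0, 2, 64)
dist = mean_tanimoto_distance(query, train)
print(dist, "far from training data" if dist >= 0.3 else "close to training data")
```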

    In Silico Target Prediction by Training Naive Bayesian Models on Chemogenomics Databases

    Submitted to the faculty of the Chemical Informatics Graduate Program in partial fulfillment of the requirements for the degree Master of Science in the School of Informatics, Indiana University, December 2005.
    The completion of the Human Genome Project is seen as a gateway to the discovery of novel drug targets (Jacoby, Schuffenhauer, & Floersheim, 2003). How much of this information is actually translated into knowledge, e.g., the discovery of novel drug targets, is yet to be seen. The traditional route of drug discovery has been from target to compound. Conventional research techniques are focused on studying animal and cellular models, which is followed by the development of a chemical concept. Modern approaches that have evolved as a result of progress in molecular biology and genomics start out with molecular targets, which usually originate from the discovery of a new gene. Subsequent target validation to establish suitability as a drug target is followed by high-throughput screening assays in order to identify new active chemical entities (Hofbauer, 1997). In contrast, chemogenomics takes the opposite approach to drug discovery (Jacoby, Schuffenhauer, & Floersheim, 2003). It puts to the forefront chemical entities as probes to study their effects on biological targets and then links these effects to the genetic pathways of these targets (Figure 1a). The goal of chemogenomics is to rapidly identify new drug molecules and drug targets by establishing chemical and biological connections. Just as classical genetic experiments are classified into forward and reverse, experimental chemogenomics methods can be distinguished as forward and reverse depending on the direction of the investigative process, i.e. from phenotype to target or from target to phenotype, respectively (Jacoby, Schuffenhauer, & Floersheim, 2003). The identification and characterization of protein targets are critical bottlenecks in forward chemogenomics experiments. Currently, methods such as affinity matrix purification (Taunton, Hassig, & Schreiber, 1996) and phage display (Sche, McKenzie, White, & Austin, 1999) are used to determine targets for compounds. None of the current techniques used for target identification after the initial screening is efficient. In silico methods can provide complementary and efficient ways to predict targets by using chemogenomics databases to obtain information about the chemical structures and target activities of compounds. Annotated chemogenomics databases integrate the chemical and biological domains and can provide a powerful tool to predict and validate new targets for compounds with unknown effects (Figure 1b). A chemogenomics database contains both the chemical properties and the biological activities associated with a compound. The MDL Drug Data Report (MDDR) (Molecular Design Ltd., San Leandro, California) is one of the best known and most widely used databases containing chemical structures and corresponding biological activities of drug-like compounds. The relevance and quality of the information that can be derived from these databases depend on their annotation schemes as well as on the methods used for mining the data. In recent years chemists and biologists have used such databases to carry out similarity searches and look up biological activities for compounds that are similar to the probe molecules for a given assay.
    With the emergence of new chemogenomics databases that follow a well-structured and consistent annotation scheme, new automated target prediction methods are possible that can give insights into the biological world based on structural similarity between compounds. The usefulness of such databases lies not only in predicting targets, but also in establishing the genetic connections of the targets discovered as a consequence of the prediction. The ability to perform automated target prediction relies heavily on a synergy of very recent technologies, which include: i) highly structured and consistently annotated chemogenomics databases, many of which have surfaced very recently, such as WOMBAT (Sunset Molecular Discovery LLC, Santa Fe, New Mexico), KinaseChemBioBase (Jubilant Biosys Ltd., Bangalore, India) and StARLITe (Inpharmatica Ltd., London, UK); ii) chemical descriptors (Xue & Bajorath, 2000) that capture the structure-activity relationship of the molecules, as well as computational techniques (Kitchen, Stahura, & Bajorath, 2004) that are specifically tailored to extract information from these descriptors; and iii) data-pipelining environments that are fast, integrate multiple computational steps, and support large datasets. A combination of all these technologies may be employed to bridge the gap between the chemical and biological domains, which remains a challenge in the pharmaceutical industry. A minimal similarity-search sketch is given after this paragraph.
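    The sketch below illustrates the similarity-search idea behind such annotated databases: rank database targets by the fingerprint similarity of their annotated compounds to a query. It assumes RDKit Morgan fingerprints and a two-entry toy database with illustrative annotations; it is not the MDDR-based workflow described in the thesis.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

# tiny annotated "chemogenomics database": SMILES paired with a target annotation (illustrative)
database = [
    ("CC(=O)Oc1ccccc1C(=O)O", "COX-1"),                        # aspirin
    ("CN1C=NC2=C1C(=O)N(C)C(=O)N2C", "adenosine receptor"),    # caffeine
]

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)

def predict_targets(query_smiles, db, top_n=1):
    """Rank database target annotations by Tanimoto similarity of their compounds to the query."""
    qfp = fingerprint(query_smiles)
    scored = [(DataStructs.TanimotoSimilarity(qfp, fingerprint(s)), target) for s, target in db]
    return sorted(scored, reverse=True)[:top_n]

print(predict_targets("CC(=O)Oc1ccccc1C(=O)OC", database))     # a close aspirin analogue
```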

    In silico target prediction for elucidating the mode of action of herbicides including prospective validation.

    The rapid emergence of pesticide resistance has given rise to a demand for herbicides with a new mode of action (MoA). In the agrochemical sector, with the availability of experimental high-throughput screening (HTS) data, it is now possible to utilize in silico target prediction methods in the early discovery phase to suggest the MoA of a compound via data mining of bioactivity data. While this approach has been established in the pharmaceutical context, in the agrochemical area it poses rather different challenges, as we have found in this work, partially due to different chemistry, but even more so due to different (usually smaller) amounts of data and different ways of conducting HTS. With the aim of applying computational methods to facilitate herbicide target identification, 48,000 bioactivity data points against 16 herbicide targets were processed to train Laplacian-modified Naïve Bayesian (NB) classification models. The herbicide target prediction model ("HerbiMod") is an ensemble of 16 binary classification models which were evaluated on internal, external and prospective validation sets. In addition to the experimental inactives, 10,000 random agrochemical inactives were included in the training process, which was shown to improve the overall balanced accuracy of our models by up to 40%. For all models, a balanced accuracy of ≥80% was achieved in five-fold cross-validation. Ranking of target predictions was addressed by means of z-scores, which improved predictivity over using raw scores alone. An external test set of 247 compounds from ChEMBL and a prospective test set of 394 compounds from BASF SE, tested against five well-studied herbicide targets (ACC, ALS, HPPD, PDS and PROTOX), were used for further validation. Only 4% of the compounds in the external test set lay within the applicability domain, so extrapolation (and correct prediction) was impossible, which on the one hand was surprising, and on the other hand illustrates the value of using applicability domains in the first place. However, a balanced accuracy better than 60% was achieved on the prospective test set, where all the compounds fell within the applicability domain, which underlines the possibility of using target prediction also in the area of agrochemicals.
    BASF SE, Unilever, European Research Council (Starting Grant ERC-2013-StG-336159 MIXTURE)
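    The z-score ranking of raw model scores mentioned above can be sketched as follows, assuming each target model has a reference distribution of raw scores (the numbers below are invented, and only three of the targets named in the abstract are used) against which a new score is standardised before targets are ranked.

```python
import numpy as np

def zscore_rank(raw_scores, reference_scores):
    """Standardise each raw per-target score against that model's reference score
    distribution, then rank targets by z-score instead of by raw score."""
    ranked = []
    for target, raw in raw_scores.items():
        mu = np.mean(reference_scores[target])
        sigma = np.std(reference_scores[target]) or 1.0    # guard against zero spread
        ranked.append(((raw - mu) / sigma, target))
    return sorted(ranked, reverse=True)

# toy usage: raw Bayesian scores of one compound against three hypothetical target models
raw = {"ALS": 12.0, "HPPD": 9.5, "PDS": 15.0}
reference = {"ALS": [2.0, 3.5, 4.0], "HPPD": [8.0, 9.0, 10.0], "PDS": [14.0, 15.5, 16.0]}
print(zscore_rank(raw, reference))
```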

    Molecular Similarity and Xenobiotic Metabolism

    MetaPrint2D, a new software tool implementing a data-mining approach for predicting sites of xenobiotic metabolism, has been developed. The algorithm is based on a statistical analysis of the occurrences of atom-centred circular fingerprints in both substrates and metabolites. This approach has undergone extensive evaluation and been shown to be of comparable accuracy to current best-in-class tools, but is able to make much faster predictions, for the first time enabling chemists to explore the effects of structural modifications on a compound's metabolism in a highly responsive and interactive manner. MetaPrint2D is able to assign a confidence score to the predictions it generates, based on the availability of relevant data and the degree to which a compound is modelled by the algorithm. In the course of the evaluation of MetaPrint2D, a novel metric for assessing the performance of site-of-metabolism predictions has been introduced. This overcomes the bias introduced by molecule size and the number of sites of metabolism inherent to the most commonly reported metrics used to evaluate site-of-metabolism predictions. The data-mining approach to site-of-metabolism prediction has been augmented by a set of reaction type definitions to produce MetaPrint2D-React, enabling prediction of the types of transformations a compound is likely to undergo and the metabolites that are formed. This approach has been evaluated against both historical data and metabolic schemes reported in a number of recently published studies. Results suggest that the ability of this method to predict metabolic transformations is highly dependent on the relevance of the training set data to the query compounds. MetaPrint2D has been released as an open-source software library, and both MetaPrint2D and MetaPrint2D-React are available for chemists to use through the Unilever Centre for Molecular Science Informatics website.
    Boehringer-Ingelheim
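    The occurrence-ratio idea behind this data-mining approach can be sketched as below, assuming precomputed counts of atom-centred environment fingerprints in a reaction-centre (metabolite) database and in a substrate database; the environment keys, counts and normalisation are illustrative and may differ from the released implementation.

```python
from collections import Counter

# illustrative occurrence counts of atom-centred circular fingerprints
reaction_centre_counts = Counter({"env_A": 120, "env_B": 5, "env_C": 40})
substrate_counts = Counter({"env_A": 200, "env_B": 400, "env_C": 50})

def occurrence_ratio(env):
    """Ratio of reaction-centre to substrate occurrences for one atom environment."""
    seen = substrate_counts.get(env, 0)
    return reaction_centre_counts.get(env, 0) / seen if seen else 0.0

def rank_sites(atom_environments):
    """Score a molecule's atoms (keyed by their environment fingerprints) as likely
    sites of metabolism, normalised so the top-ranked atom scores 1.0."""
    scores = {atom: occurrence_ratio(env) for atom, env in atom_environments.items()}
    top = max(scores.values()) or 1.0
    return {atom: round(s / top, 3) for atom, s in scores.items()}

print(rank_sites({"C3": "env_A", "N7": "env_B", "O1": "env_C"}))
```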

    Identification of compounds with a suitable property profile from chemical structure databases

    The adoption of digital databases of chemical compounds creates the need for computational tools to find, within them, molecules with suitable properties. The problem is of particular interest in the pharmaceutical industry, where replacing time- and resource-intensive experiments with computations allows considerable savings. Although the limited power of current computational methods means the entire drug design process cannot be moved into computers in the near future, the situation is different for large data collections. A computational method that works within a known statistical error margin, discarding some suitable compounds and counting some as active by mistake, still concentrates the data set appreciably with respect to compounds of interest. Computational tools can therefore be applied successfully in the simpler and less demanding stages of drug development, such as finding lead compounds or drug candidates. This activity is known as virtual screening, and from this broad and rapidly developing field the present thesis selects several directions and evaluates their capability and performance in the course of several projects. The results of the thesis include computational models for estimating the anti-HIV-protease activity and cytotoxicity of certain types of compounds; a new virtual screening method; prospective ligands for HIV protease and reverse transcriptase; and a dataset pre-processed with pharmacokinetic filters, a convenient starting point for further work.
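    The pharmacokinetic pre-filtering of the dataset mentioned above can be illustrated with a simple property filter; the sketch assumes RDKit and uses a Lipinski rule-of-five style cut-off purely as a stand-in, since the thesis summary does not specify the exact filters used.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles):
    """Simple Lipinski-style pharmacokinetic pre-filter for a screening library."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

library = ["CC(=O)Oc1ccccc1C(=O)O", "CCCCCCCCCCCCCCCCCCCC"]   # aspirin, icosane
print([s for s in library if passes_rule_of_five(s)])          # icosane fails on logP
```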