61 research outputs found

    In silico ADME in drug design – enhancing the impact

    Each year the pharmaceutical industry makes thousands of compounds, many of which do not meet the desired efficacy or the pharmacokinetic properties describing absorption, distribution, metabolism and excretion (ADME) behavior. Parameters such as lipophilicity, solubility and metabolic stability can be measured in high-throughput in vitro assays. However, a compound needs to be synthesized in order to be tested. In silico models for these endpoints exist, although of varying quality. Such models can be used before synthesis and, together with a potency estimate, influence the decision to make a compound. In practice, it appears that often only one or two predicted properties are considered prior to synthesis, usually including a prediction of lipophilicity. While it is important to use all available information when deciding which compound to make, combining multiple predictions unambiguously is challenging. This work investigates the possibility of combining in silico ADME predictions to define, with sufficient confidence, the minimum potency required for a specified human dose. Using a set of drug discovery compounds, in silico predictions were used to compare the relative ranking based on the minimum-potency calculation with the outcomes of lead-compound selection. The approach was also tested on a set of marketed drugs, and the influence of the input parameters was investigated.
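Such a minimum-potency calculation can be illustrated with a back-of-envelope sketch: predicted clearance, oral bioavailability and unbound fraction are converted into the average unbound exposure achievable at a given dose, and any compound whose potency (e.g., XC50) is weaker than that exposure fails the dose requirement. The function below is a generic illustration with assumed parameter names and default values, not the paper's actual model:

```python
def min_required_potency_nM(dose_mg, f_oral, cl_ml_min_kg, fu,
                            tau_h=24.0, mw_g_mol=400.0, body_wt_kg=70.0):
    """Average unbound steady-state concentration (nM) achievable at a
    given oral dose; a compound must be at least this potent for the
    dose to suffice. All inputs could come from in silico models
    (illustrative defaults, not values from the paper)."""
    cl_l_h = cl_ml_min_kg * 60.0 / 1000.0 * body_wt_kg   # plasma clearance, L/h
    dose_umol = dose_mg / mw_g_mol * 1000.0              # dose in micromoles
    c_avg_uM = dose_umol * f_oral / (cl_l_h * tau_h)     # average total conc, uM
    return c_avg_uM * fu * 1000.0                        # unbound conc, nM
```

For example, a 100 mg once-daily dose with 50% bioavailability, moderate clearance and 10% unbound fraction supports a potency requirement in the tens of nanomolar.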

    Deconvoluting kinase inhibitor induced cardiotoxicity

    Many drugs designed to inhibit kinases have their clinical utility limited by cardiotoxicity-related label warnings or prescribing restrictions. While this liability is widely recognized, designing safer kinase inhibitors (KIs) requires knowledge of the causative kinase(s). Efforts to unravel these kinases have encountered pharmacology of nearly prohibitive complexity: at therapeutically relevant concentrations, KIs show promiscuity distributed across the kinome. Here, to overcome this complexity, 65 KIs with known kinome-scale polypharmacology profiles were assessed for effects on cardiomyocyte (CM) beating. Changes in human iPSC-CM beat rate and amplitude were measured using label-free cellular impedance. Correlations between beat effects and kinase inhibition profiles were mined by computational analysis (Matthews correlation coefficient) to identify associated kinases. Thirty kinases met the criteria of (1) pharmacological inhibition correlated with CM beat changes, (2) expression in both human iPSC-derived cardiomyocytes and adult heart tissue, and (3) effects on CM beating following single-gene knockdown. A subset of these 30 kinases was selected for mechanistic follow-up. Examples of kinases regulating processes spanning the excitation–contraction cascade were identified, including calcium flux (RPS6KA3, IKBKE) and action potential duration (MAP4K2). Finally, a simple model was created to predict functional cardiotoxicity, whereby inactivity at three sentinel kinases (RPS6KB1, FAK, STK35) showed exceptional accuracy in vitro and translated to clinical KI safety data. For drug discovery, identifying causative kinases and introducing a predictive model should transform the ability to design safer KI medicines. For cardiovascular biology, the discovery of kinases previously unrecognized as influencing the heart should stimulate investigation of underappreciated signaling pathways.
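The Matthews correlation coefficient used to mine the inhibition vs. beat-effect associations is computed from a per-kinase confusion matrix; a minimal implementation (with illustrative counts, not the study's data) looks like this:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts:
    +1 = perfect agreement, 0 = chance, -1 = perfect disagreement."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Ranking all profiled kinases then amounts to building one confusion matrix per kinase (inhibited vs. not, beat effect vs. none, across the 65 compounds) and sorting by this score.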

    Automatic Filtering and Substantiation of Drug Safety Signals

    Drug safety issues pose serious health threats to the population and constitute a major cause of mortality worldwide. Given the prominent implications for both public health and the pharmaceutical industry, it is of great importance to unravel the molecular mechanisms by which an adverse drug reaction can be elicited. These mechanisms can be investigated by placing the pharmaco-epidemiologically detected adverse drug reaction in an information-rich context and by exploiting all currently available biomedical knowledge to substantiate it. We present a computational framework for the biological annotation of potential adverse drug reactions. First, the proposed framework investigates prior evidence of the drug–event association in the biomedical literature (signal filtering). Then, it seeks to provide a biological explanation (signal substantiation) by exploring mechanistic connections that might explain why a drug produces a specific adverse reaction. These mechanistic connections include the activity of the drug, related compounds and drug metabolites on protein targets, the association of protein targets with clinical events, and the annotation of proteins (both protein targets and proteins associated with clinical events) to biological pathways. Hence, the workflows for signal filtering and substantiation integrate modules for literature and database mining, in silico drug–target profiling, and analyses based on gene–disease networks and biological pathways. Application examples of these workflows, carried out on selected cases of drug safety signals, are discussed. The methodology and workflows presented offer a novel approach to exploring the molecular mechanisms underlying adverse drug reactions.
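The drug → target → event chain at the heart of signal substantiation can be illustrated with a toy lookup. The drug name and mappings below are hypothetical placeholders; the actual framework populates such links by mining literature, target-profiling and pathway databases:

```python
# Toy knowledge graph (hypothetical drug name; the HTR2B-valvulopathy
# target-event link itself is a well-known pharmacology example).
drug_targets = {"drugX": {"HTR2B", "DRD2"}}
event_proteins = {"valvulopathy": {"HTR2B"}}

def substantiate(drug, event):
    """Return protein targets that mechanistically connect a drug to a
    clinical event; an empty list means the signal is not substantiated
    at the direct drug-target-event level."""
    shared = drug_targets.get(drug, set()) & event_proteins.get(event, set())
    return sorted(shared)
```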

    In silico toxicology protocols

    The present publication surveys several applications of in silico (i.e., computational) toxicology approaches across different industries and institutions. It highlights the need to develop standardized protocols when conducting toxicity-related predictions. This contribution articulates the information needed for protocols to support in silico predictions for major toxicological endpoints of concern (e.g., genetic toxicity, carcinogenicity, acute toxicity, reproductive toxicity, developmental toxicity) across several industries and regulatory bodies. Such novel in silico toxicology (IST) protocols, when fully developed and implemented, will ensure that in silico toxicological assessments are performed and evaluated in a consistent, reproducible, and well-documented manner across industries and regulatory bodies, supporting wider uptake and acceptance of the approaches. The development of IST protocols is an initiative of an international consortium formed to reflect the state of the art in in silico toxicology for hazard identification and characterization. A general outline for the development of such protocols is included; it is based on in silico predictions and/or available experimental data for a defined series of relevant toxicological effects or mechanisms. The publication presents a novel approach for determining the reliability of in silico predictions alongside experimental data. In addition, we discuss how to determine the level of confidence in the assessment based on the relevance and reliability of the information.

    Improving Drug Discovery Decision Making using Machine Learning and Graph Theory in QSAR Modeling

    During the last decade non-linear machine-learning methods have gained popularity among QSAR modelers. The machine-learning algorithms generate highly accurate models at the cost of increased model complexity, where simple interpretations, valid in the entire model domain, are rare. This thesis focuses on maximizing the amount of knowledge extracted from predictive QSAR models and data. This has been achieved through the development of a descriptor importance measure, a method for automated local optimization of compounds, and a method for automated extraction of substructural alerts. Furthermore, different QSAR modeling strategies have been evaluated with respect to predictivity, risks and information content. To test hypotheses and theories, large-scale simulations of known relations between activities and descriptors have been conducted. With the simulations it has been possible to study the properties of methods, risks, implementations and errors in a controlled manner, since the correct answer has been known. Simulation studies have been used in the development of the generally applicable descriptor importance measure and in the analysis of QSAR modeling strategies. The use of simulations is widespread in many areas, but not that common in the computational chemistry community. The descriptor importance measure developed can be applied to any machine-learning method, and validations using both real and simulated data show that it is very accurate for non-linear methods. An automated method for local optimization of compounds was developed to partly replace the manual searches made to optimize compounds. The local optimization of compounds makes use of the information in available data and deterministically enumerates new compounds in a space spanned close to the compound of interest. This can be used as a starting point for further compound optimization and aids the chemist in finding new compounds. Another approach to guide chemists in the process of optimizing compounds is through substructural warnings. A fast method for significant substructure extraction has been developed that extracts significant substructures from data with respect to the activity of the compound. The method is at least on par with existing methods in terms of accuracy but is significantly less time consuming. Non-linear machine-learning methods have opened up new possibilities for QSAR modeling, changing the way chemical data can be handled by model algorithms. Therefore, the properties of Local and Global QSAR modeling strategies have been studied. The results show that Local models come with high risks and are less accurate compared to Global models. In summary, this thesis shows that Global QSAR modeling strategies should be applied, preferably using methods able to handle non-linear relationships. The developed methods can be interpreted easily and an extensive amount of information can be retrieved. For the methods to become easily available to a broader group of users, packaging with an open-source chemical platform is needed.
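The thesis's descriptor importance measure is not reproduced here, but a well-known model-agnostic measure in the same spirit is permutation importance: shuffle one descriptor at a time and record how much the model's error grows. A minimal sketch (an illustration, not the thesis's exact method):

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic descriptor importance: the average increase in
    mean squared error when one descriptor column is shuffled.
    Works with any predict(rows) -> predictions callable."""
    rng = random.Random(seed)

    def mse(pred):
        return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

    base = mse(predict(X))
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the descriptor-activity link
            Xp = [row[:j] + [c] + row[j + 1:] for row, c in zip(X, col)]
            drops.append(mse(predict(Xp)) - base)
        importances.append(sum(drops) / n_repeats)
    return importances
```

A descriptor the model ignores gets an importance of exactly zero, while shuffling a descriptor the model relies on degrades predictions measurably.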

    How to Predict the pK(a) of Any Compound in Any Solvent

    Acid-base properties of molecules in nonaqueous solvents are of critical importance for almost all areas of chemistry. Despite this very high relevance, our knowledge is still mostly limited to the pK(a) of rather few compounds in the most common solvents, and a simple yet truly general computational procedure to predict the pK(a) of any compound in any solvent is still missing. In this contribution, we describe such a procedure. Our method requires only the experimental pK(a) of a reference compound in water and a few standard quantum-chemical calculations. The method is tested by computing the proton solvation energy in 39 solvents and by comparing the pK(a) of 142 simple compounds in 12 solvents. Our computations indicate that the method for computing the proton solvation energy is robust with respect to the detailed computational setup and the construction of the solvation model. The unscaled pK(a)'s computed using an implicit solvation model, on the other hand, differ significantly from the experimental data. These differences are partly associated with the poor quality of the experimental data and the well-known shortcomings of implicit solvation models. General linear scaling relationships to correct this error are suggested for protic and aprotic media. Using these relationships, the deviations between experiment and computation drop to a level comparable to that observed in water, which highlights the efficiency of our method.
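The core of such a scheme is a proton-exchange thermodynamic cycle against a reference acid of known pK(a), followed by an empirical linear scaling. The sketch below shows the arithmetic only; the exchange free energy and scaling coefficients would come from quantum-chemical calculations and regression against experiment, and the function names are our own:

```python
import math

R = 8.31446e-3   # gas constant, kJ/(mol*K)
T = 298.15       # temperature, K

def pka_from_reference(pka_ref, dG_exchange_kJ):
    """pKa of an acid HA from the computed free energy (kJ/mol) of the
    proton-exchange reaction HA + Ref- -> A- + RefH in the solvent,
    anchored to the experimental pKa of the reference acid."""
    return pka_ref + dG_exchange_kJ / (R * T * math.log(10))

def scale(pka_raw, slope, intercept):
    """Empirical linear scaling correcting systematic implicit-solvent
    errors; slope/intercept are fitted per solvent class."""
    return slope * pka_raw + intercept
```

At 298 K, RT ln 10 is about 5.71 kJ/mol, so every 5.71 kJ/mol of exchange free energy shifts the predicted pK(a) by one unit.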

    Predicting Aromatic Amine Mutagenicity with Confidence: A Case Study Using Conformal Prediction

    The occurrence of mutagenicity in primary aromatic amines has been investigated using conformal prediction. The results show that it is possible to develop models with mathematically proven validity using conformal prediction, and that the existence of uncertain prediction classes, such as both (both classes assigned to a compound) and empty (no class assigned to a compound), provides the user with additional information on how to use, further develop, and possibly improve future models. The study also indicates that using different sets of fingerprints results in models whose discriminative ability varies with the chosen level of acceptable error.
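The both and empty prediction sets arise naturally from conformal p-values: a class is included in the prediction set whenever its p-value exceeds the chosen error level. A minimal inductive conformal classifier, with invented calibration scores, makes this concrete:

```python
def p_value(cal_scores, score):
    """Conformal p-value: fraction of calibration nonconformity scores
    at least as large as the test compound's score (smoothed by +1)."""
    return (sum(1 for s in cal_scores if s >= score) + 1) / (len(cal_scores) + 1)

def predict_set(cal_scores_by_class, test_scores, eps=0.2):
    """All classes whose p-value exceeds the error level eps; the
    result can be a single class, 'both', or 'empty'."""
    return {c for c, s in test_scores.items()
            if p_value(cal_scores_by_class[c], s) > eps}
```

Lowering eps makes the sets larger (more 'both' outcomes, safer but less informative); raising it makes them smaller, and compounds that conform to neither class receive the empty set.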

    Rolling Cargo Management Using a Deep Reinforcement Learning Approach

    Loading and unloading rolling cargo on roll-on/roll-off (RoRo) vessels are important and highly recurrent operations in maritime logistics. In this paper, we apply state-of-the-art deep reinforcement learning algorithms to automate these operations in a complex, realistic environment. The objective is to teach an autonomous tug master to manage rolling cargo and perform loading and unloading operations while avoiding collisions with static and dynamic obstacles along the way. The artificial intelligence agent, representing the tug master, is trained and evaluated in a challenging environment built on the Unity3D machine-learning framework ML-Agents, using proximal policy optimization (PPO). The agent is equipped with sensors for obstacle detection and receives real-time feedback from the environment through its reward function, allowing it to dynamically adapt its policies and navigation strategy. The performance evaluation shows that, with appropriate hyperparameters, the agents can successfully learn all required operations, including lane following, obstacle avoidance, and rolling cargo placement. This study also demonstrates the potential of intelligent autonomous systems to improve the performance and service quality of maritime transport.
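The paper's reward function is not given here, so the following is a purely hypothetical example of the kind of reward shaping such an agent might use: progress toward the drop-off slot is rewarded, collisions and elapsed time are penalized, and successful cargo placement yields a terminal bonus:

```python
def step_reward(dist_to_goal, prev_dist, collided, cargo_placed,
                time_penalty=0.001):
    """Hypothetical per-step reward for the tug-master agent (not the
    paper's actual function). Terminal events dominate; otherwise the
    agent is paid for progress toward the goal minus a small time cost."""
    if collided:
        return -1.0          # collision ends the episode with a penalty
    if cargo_placed:
        return 1.0           # successful placement ends it with a bonus
    return (prev_dist - dist_to_goal) - time_penalty
```

Dense progress terms like this typically speed up early learning, while the terminal signals encode the actual task objectives (safety and placement).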

    Machine Learning Strategies When Transitioning between Biological Assays

    Machine learning is widely used in drug development to predict activity in biological assays based on chemical structure. However, the process of transitioning from one experimental setup to another for the same biological endpoint has not been extensively studied. In a retrospective study, we here explore different strategies for combining data from the old and new assays when training conformal prediction models, using data from hERG and NaV assays. We suggest continuously monitoring the validity and efficiency of the models as more data is accumulated from the new assay, and selecting a modeling strategy based on these metrics. To maximize the utility of data from the old assay, we propose a strategy that augments the proper training set of an inductive conformal predictor with data from the old assay while keeping only data from the new assay in the calibration set, which results in valid (well-calibrated) models with improved efficiency compared to other strategies. We study the results for varying sizes of the new and old assays, allowing discussion of different practical scenarios. We also conclude that our proposed assay transition strategy is more beneficial, and the value of data from the new assay higher, for the harder case of regression compared to classification problems.
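The proposed augmentation strategy amounts to a particular train/calibration split for the inductive conformal predictor. A schematic sketch (with cal_fraction as an assumed tunable, not a value from the paper):

```python
def assay_transition_split(old_data, new_data, cal_fraction=0.3):
    """Split data for an inductive conformal predictor when an assay
    changes: the proper training set is augmented with all old-assay
    data, but the calibration set holds only new-assay measurements,
    so the p-values are calibrated against the new experimental setup."""
    n_cal = max(1, round(len(new_data) * cal_fraction))
    calibration = new_data[:n_cal]
    proper_train = new_data[n_cal:] + old_data
    return proper_train, calibration
```

Because validity in conformal prediction depends only on the calibration set being exchangeable with future test data, the old-assay data can boost the underlying model without breaking the coverage guarantee for the new assay.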