16 research outputs found

    In Silico and In Vivo Pharmacological Studies of Clozapine and a D-Amino Acid Oxidase Inhibitor for Cognitive Enhancement

    Objective: D-amino acid oxidase inhibitors (DAAOIs) are of particular interest in cognition research, and atypical antipsychotics are known DAAO inhibitors. The present study examined the binding affinity of atypical antipsychotics toward the DAAO protein by molecular docking; the selected antipsychotic was then evaluated for cognition-enhancing activity in a scopolamine-induced amnesia model.
    Methods: The crystal structure of DAAO was obtained from the Protein Data Bank, energy minimization was performed with the CHARMM program, active-site prediction was carried out using a Ramachandran plot, and docking was performed with the AutoDock 4.2 tool. For the in vivo study, mice were divided into three groups: Group I, vehicle (saline); Group II, saline + scopolamine (1 mg/kg, intraperitoneal [i.p.]); and Group III, clozapine (20 mg/kg, i.p.) + scopolamine (1 mg/kg, i.p.).
    Results: The AutoDock analysis showed the strongest binding affinity, -5.22, for brexpiprazole and the weakest (positive) binding affinity, +1, for iloperidone. Clozapine, with a binding energy of -2.87, was selected for the in vivo cognition study, in which clozapine (20 mg/kg, i.p.) attenuated the scopolamine-induced impairment of spatial memory.
    Conclusion: The results suggest that clozapine produces cognitive enhancement through both DAAOI and antipsychotic action. Clozapine's cognition-improving potential favors its use in reducing the toxic effects of scopolamine.
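
    The study's selection step reduces to sorting candidate ligands by their docked binding energy (more negative means tighter predicted binding). A minimal sketch using the three energies reported in the abstract; the comparison logic is illustrative, not the authors' code:

```python
# Rank candidate antipsychotics by AutoDock binding energy (kcal/mol).
# The three values come from the abstract above; more negative = tighter
# predicted binding to DAAO.
binding_energies = {
    "brexpiprazole": -5.22,
    "clozapine": -2.87,
    "iloperidone": +1.0,
}

def rank_by_affinity(scores):
    """Return ligand names sorted from strongest (most negative) to weakest."""
    return sorted(scores, key=scores.get)

ranked = rank_by_affinity(binding_energies)
print(ranked)  # brexpiprazole first, iloperidone last
```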

    Computational prediction and analysis of macromolecular interactions

    Protein interactions regulate gene expression, cell signaling, catalysis, and many other functions across all of molecular biology. We must understand them quantitatively, and experimental methods have provided the data that form the basis of our current understanding; they remain our most accurate tools. However, their low efficiency and high cost leave room for predictive, computational approaches that can provide faster and more detailed answers to biological problems. A rigid-body simulation can quickly and effectively calculate the predicted interaction energy between two molecular structures in proximity. The fast-Fourier-transform-based mapping algorithm FTMap predicts small-molecule binding "hot spots" on a protein's surface and can provide likely orientations of specific ligands of interest that may occupy those hot spots. This process now allows unique ligands to be used by the algorithm while permitting additional small-molecule cofactors to remain in their bound conformation. By keeping the cofactors bound, FTMap can reduce false positives in which the algorithm identifies a true, but incorrect, ligand pocket where the known cofactor already binds. A related algorithm, ClusPro, can evaluate interaction energies for billions of docked conformations of macromolecular structures. The work reported in this thesis can predict protein-polysaccharide interactions, and the software now contains a publicly available feature for predicting protein-heparin interactions. In addition, a new approach for determining regions of predicted activity on a protein's surface allows prediction of a protein-protein interface. This new tool can also identify the interface in encounter complexes formed during protein association, more closely resembling the biological nature of the interaction than the former binary treatment of calculated bound and unbound states.

    Computational Modeling and Design of Protein–Protein Interactions

    Protein–protein interactions dictate biological functions, including ones essential to living organisms such as immune response and transcriptional regulation. To fundamentally understand these biological processes, we must understand the underlying interactions at the atomic scale. However, interactions are so abundant that traditional structure determination methods cannot manage a comprehensive study. Alternatively, computational methods can provide structural models at high throughput, overcoming the challenge posed by the sheer breadth of interactions, albeit at the cost of accuracy. Thus, it is necessary to improve modeling techniques if these approaches are to be used to rigorously study protein–protein interactions. In this dissertation, I describe my advances to protein–protein interaction modeling (docking) methods in Rosetta. My advances are based on challenges encountered in a blind docking competition, including modeling camelid antibodies, modeling flexible protein regions, and modeling solvated interfaces. First, I detail improvements to RosettaAntibody and Rosetta SnugDock, including making the underlying code more robust and easier to use, enabling new loop modeling methods, developing an automatically updating database, and implementing scientific benchmarks. These improvements permitted me to conduct the largest-to-date study of antibody CDR-H3 loop flexibility, which showed that traditional, small-scale studies missed emergent properties. Then, I pivot from antibodies to the modeling of disordered protein regions. I contributed advances to the FloppyTail protocol, including enabling the modeling of multiple disordered regions within a single protein and pioneering an ensemble-based analysis of resultant models. I modeled Hfq proteins across six species of bacteria and demonstrated experimentally validated prediction of interactions between disordered and ordered protein regions. My simulations provided a hypothetical mechanism for Hfq function. Finally, I designed crystallographic protein–protein interactions with the goal of improving protein crystal resolution. To approach this exceptional challenge, I first demonstrated that, under homogeneous conditions, Rosetta scores can correlate with crystal resolution. Next, I computationally designed and experimentally characterized sixteen variants of a model protein. Only five crystallized, with one providing an improvement in resolution, showing that improvement through computational design is challenging, but possible. In sum, my work advanced our understanding of, and our ability to model and design, several challenging protein–protein interactions.
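
    The crystallographic design work rests on the observation that, under homogeneous conditions, computed scores can correlate with crystal resolution. A minimal sketch of such a correlation check; the score/resolution pairs below are invented stand-ins, not the dissertation's data:

```python
import numpy as np

# Hypothetical data: Rosetta-like scores (lower = better) paired with
# crystal resolutions in Angstroms (lower = better).
scores      = np.array([-120.0, -115.0, -108.0, -102.0, -95.0])
resolutions = np.array([1.4, 1.6, 1.9, 2.3, 2.6])

# A positive Pearson coefficient means better (lower) scores go with
# better (lower) resolution in this toy data.
r = np.corrcoef(scores, resolutions)[0, 1]
print(round(float(r), 2))
```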

    Algorithmic and Technical Improvements for Next Generation Drug Design Software Tools

    The pharmaceutical industry is actively looking for new ways of boosting the efficiency and effectiveness of its R&D programmes. The extensive use of computational modeling tools in the drug discovery pipeline (DDP) is having a positive impact on research performance, since in silico experiments are usually faster and cheaper than their real counterparts. The lead identification step is a very sensitive point in the DDP. In this context, virtual high-throughput screening (VHTS) techniques work as a filtering mechanism that benefits the following stages by reducing the number of compounds to be tested experimentally. Unfortunately, the simplifications applied in VHTS docking software make it prone to generate false positives and negatives. These errors spread across the remaining DDP stages and have a negative impact in terms of financial and time costs. In the Electronic and Atomic Protein Modelling group (Barcelona Supercomputing Center, Life Sciences department), we have developed the Protein Energy Landscape Exploration (PELE) software. PELE has proven to be a good alternative for exploring the conformational space of proteins and performing ligand-protein docking simulations. In this thesis we discuss how to turn PELE into a faster and more efficient tool by improving its technical and algorithmic features, so that it can eventually be used in VHTS protocols. We have also addressed the difficulties of analyzing the extensive data associated with massive simulation production. First, we rewrote the software in C++ using modern software engineering techniques. As a consequence, our code base is now well organized and tested, and PELE has become easier to modify, understand, and extend, as well as more robust and reliable. Rewriting the code helped us overcome some previous technical limitations, such as restrictions on the size of the systems. It has also allowed us to extend PELE with new solvent models, force fields, and types of biomolecules, and made it possible to adapt the code to take advantage of new parallel architectures and accelerators, obtaining promising speedup results. Second, we improved the way PELE handles protein flexibility by implementing an internal coordinate Normal Mode Analysis (icNMA) method. This method produces more energetically favorable perturbations than the previous Anisotropic Network Model (ANM)-based strategy, which has allowed us to eliminate the now-unneeded relaxation phase of PELE. As a consequence, the overall computational performance of the sampling is significantly improved (~5-7x). The new internal-coordinates-based methodology captures backbone flexibility better than the old method and agrees more closely with molecular dynamics than the ANM-based approach.
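
    The ANM baseline that icNMA improves upon can be sketched compactly: build a Hessian from pairwise contact springs between beads, then diagonalize it to separate rigid-body from internal modes. A toy example with four invented "C-alpha" beads; this is the textbook ANM construction, not PELE's implementation:

```python
import numpy as np

# Four beads at (roughly) the vertices of a regular tetrahedron, edge ~3.8 A.
coords = np.array([[0.0, 0.0,   0.0],
                   [3.8, 0.0,   0.0],
                   [1.9, 3.29,  0.0],
                   [1.9, 1.097, 3.103]])
n, cutoff, gamma = len(coords), 8.0, 1.0

# Assemble the 3n x 3n ANM Hessian from pairwise contact springs:
# off-diagonal 3x3 block is -gamma * (d d^T) / |d|^2 for pairs within cutoff.
H = np.zeros((3 * n, 3 * n))
for i in range(n):
    for j in range(i + 1, n):
        d = coords[j] - coords[i]
        r2 = d @ d
        if r2 <= cutoff ** 2:
            block = -gamma * np.outer(d, d) / r2
            H[3*i:3*i+3, 3*j:3*j+3] = block
            H[3*j:3*j+3, 3*i:3*i+3] = block
            H[3*i:3*i+3, 3*i:3*i+3] -= block
            H[3*j:3*j+3, 3*j:3*j+3] -= block

evals = np.linalg.eigvalsh(H)   # ascending order
# Six (near-)zero eigenvalues are rigid-body translations/rotations;
# the remaining modes describe internal flexibility.
n_rigid = int(sum(e < 1e-8 for e in evals))
print(n_rigid)
```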

    Constrained optimization applied to multiscale integrative modeling

    Multiscale integrative modeling stands at the intersection of experimental and computational techniques to predict the atomistic structures of important macromolecules. In the integrative modeling process, experimental information is often combined with energy potentials and macromolecular substructures to derive realistic structural models. This heterogeneous information is typically assembled into a global objective function that quantifies the quality of the structural models and is minimized through optimization. To balance the contributions of the terms concurring to the global function, weight constants are assigned to each term through a computationally demanding process. To alleviate this common issue, we propose switching from the traditional paradigm of a single unconstrained global objective function to a constrained optimization scheme. The work presented in this thesis describes the applications and methods associated with the development of a general constrained optimization protocol for multiscale integrative modeling. The initial implementation concerned the prediction of symmetric macromolecular assemblies through the incorporation of a recent, efficient constrained optimizer nicknamed mViE (memetic Viability Evolution) into our integrative modeling protocol POWER (Parallel Optimization Workbench to Enhance Resolution). We tested this new approach through rigorous comparisons against other state-of-the-art integrative modeling methods on a benchmark set of solved symmetric macromolecular assemblies, validating the robustness of the constrained optimization method by obtaining native-like structural models. This constrained optimization protocol was then applied to predict the structure of the elusive human Huntingtin protein. Because little structural information was available when the project was initiated, we integrated information from secondary structure prediction and low-resolution experiments, in the form of cryo-electron microscopy maps and crosslinking mass spectrometry data, to derive a structural model of Huntingtin. The resulting structure was used to derive dynamic information about the Huntingtin protein. At a finer level of resolution, the constrained optimization protocol was then applied to dock small molecules into the binding sites of protein targets. We converted the classical molecular docking problem from unconstrained single-objective optimization to a constrained one by extracting local and global constraints from pre-computed energy grids. The new approach was tested and validated on standard ligand-receptor benchmark sets widely used by the molecular docking community and showed results comparable to state-of-the-art molecular docking programs. Altogether, the work presented in this thesis proposes improvements in multiscale integrative modeling that are reflected both in the quality of the models returned by the new constrained optimization protocol and in the simpler treatment of the uncorrelated terms concurring to the global scoring scheme used to estimate model quality.
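
    The shift from an unconstrained objective to a constrained one can be illustrated with a toy problem: minimize a fictitious interaction energy while keeping the ligand center inside a spherical binding site. The sketch below uses simple projected gradient descent as a stand-in for the mViE evolutionary optimizer; all functions, positions, and radii are assumptions for demonstration:

```python
import numpy as np

# Fictitious setup: the unconstrained energy minimum lies at (4, 0, 0),
# OUTSIDE a spherical "binding site" of radius 2 centered at (1, 0, 0),
# so the constraint must become active at the solution.
site_center = np.array([1.0, 0.0, 0.0])
site_radius = 2.0
target = np.array([4.0, 0.0, 0.0])

def project(x):
    """Project x onto the feasible ball |x - site_center| <= site_radius."""
    d = x - site_center
    norm = np.linalg.norm(d)
    if norm <= site_radius:
        return x
    return site_center + site_radius * d / norm

x = np.zeros(3)
for _ in range(200):
    grad = 2.0 * (x - target)      # gradient of the toy energy |x - target|^2
    x = project(x - 0.05 * grad)   # gradient step, then enforce the constraint

print(np.round(x, 3))              # settles on the site boundary at (3, 0, 0)
```

    The constrained minimizer lands on the feasible point closest to the unconstrained minimum, exactly the behavior the thesis exploits: the constraint replaces a hand-tuned penalty weight in the objective.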

    Computational studies of biomolecules

    In modern drug discovery, lead discovery describes the overall process from hit discovery to lead optimisation, with the goal of identifying drug candidates. This can be greatly facilitated by computer-aided (in silico) techniques, which reduce experimentation costs along the drug discovery pipeline. The relevant techniques include molecular modelling to obtain structural information, molecular dynamics (covered in Chapter 2), activity or property prediction by means of quantitative structure-activity/property models (QSAR/QSPR), for which machine learning techniques are introduced (covered in Chapter 1), and quantum chemistry, used to explain chemical structure, properties, and reactivity. This thesis is divided into five parts. Chapter 1 starts with an outline of the early stages of drug discovery, introducing the use of virtual screening for hit and lead identification. Such approaches may roughly be divided into structure-based (docking, by far the most common) and ligand-based, each leading to a set of promising compounds for further evaluation. The use of machine learning techniques, encountered frequently throughout the thesis, is then introduced, followed by a brief review of the "no free lunch" theorem, which states that no learning algorithm can perform optimally on all problems; this implies that validation of predictive accuracy across multiple models is required for optimal model selection. As the dimensionality of the feature space increases, the issue referred to as "the curse of dimensionality" becomes a challenge. The closing sections focus on supervised classification with Random Forests. Computer-based analyses are an integral part of drug discovery. Chapter 2 begins with a discussion of molecular docking, including strategies for incorporating protein flexibility at global and local levels, with a specific focus on an automated docking program, AutoDock, which uses a Lamarckian genetic algorithm and an empirical binding free energy function. The second part of the chapter gives a brief introduction to molecular dynamics. Chapter 3 describes the construction of a dataset of known binding sites with co-crystallised ligands, used to extract features characterising the structural and chemical properties of the binding pocket. A machine learning algorithm was adopted to create a three-way predictive model capable of assigning each case to one of three classes (regular, orthosteric, and allosteric) for in silico selection of allosteric sites, with a feature selection algorithm (Gini importance) used to rationalise the choice of descriptors most influential in classifying the binding pockets. In Chapter 4, we use structure-based virtual screening, focusing on docking a fluorescent sensor to a non-canonical DNA quadruplex structure. The preferred binding poses, binding site, and interactions are scored, followed by application of an ONIOM model to re-score the binding poses of some DNA-ligand complexes, focusing only on the best pose (with the lowest binding energy) from AutoDock. Docking against a pre-generated conformational ensemble from MD, used to account for receptor flexibility, is termed a "relaxed complex" scheme. Chapter 5 concerns the BLUF domain photocycle, focusing on the conformational preference of critical residues in the flavin binding site after a charge redistribution has been introduced. This work provides another activation model to address controversial features of the BLUF domain.
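
    The three-way pocket classification of Chapter 3 pairs a Random Forest with Gini-based feature importances. A sketch with synthetic stand-in descriptors; the features, labels, and class rule below are invented for illustration and do not reproduce the thesis's feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 4))   # 4 fake pocket descriptors (stand-ins)

# Synthetic three-class labels 0/1/2, playing the role of
# "regular" / "orthosteric" / "allosteric" pockets.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))                      # training accuracy
print(clf.feature_importances_.round(2))    # Gini importances per descriptor
```

    In the real setting one would report cross-validated accuracy rather than training accuracy, and read the Gini importances to rationalise which pocket descriptors drive the classification, as the thesis does.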

    Statistical approaches to the study of protein folding and energetics

    The determination of protein structure and the exploration of protein folding landscapes are two of the key problems in computational biology. Addressing these challenges requires both a protein model that accurately captures the physics of interest and an efficient sampling algorithm. The first part of this thesis documents the continued development of CRANKITE, a coarse-grained protein model, and the exploration of its energy landscape using nested sampling, a Bayesian sampling algorithm. We extend CRANKITE and optimize its parameters using a maximum likelihood approach. The efficiency of our procedure, which uses the contrastive divergence approximation, allows a large training set to be used, producing a model that is transferable to proteins not included in the training set. We develop an empirical Bayes model for the prediction of protein β-contacts, which are required inputs for CRANKITE. Our approach couples the constraints and prior knowledge associated with β-contacts to a maximum-entropy-based statistic that predicts evolutionarily related contacts. Nested sampling (NS) is a Bayesian algorithm shown to be efficient at sampling systems that exhibit a first-order phase transition. In this work we parallelize the algorithm and, for the first time, apply it to a biophysical system: small globular proteins modelled using CRANKITE. We generate energy landscape charts, which give a large-scale visualization of the protein folding landscape, and we compare the efficiency of NS to an alternative sampling technique, parallel tempering, when calculating the heat capacity of a short peptide. In the final part of the thesis we adapt the NS algorithm for use within a molecular dynamics framework and demonstrate its application by calculating the thermodynamics of all-atom models of a small peptide, comparing results to the standard replica exchange approach. This adaptation will allow NS to be used with more realistic force fields in the future.
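
    The core nested-sampling loop is short: repeatedly discard the highest-energy live point and replace it with a fresh prior sample constrained below that energy, so the live set contracts toward the energy minimum. A toy 1-D version with a harmonic "energy" standing in for CRANKITE's model:

```python
import numpy as np

# Toy nested sampling on E(x) = x^2 with a uniform prior on [-5, 5].
rng = np.random.default_rng(1)

K = 100                                  # number of live points
live = rng.uniform(-5, 5, size=K)
E = live**2

for _ in range(500):
    worst = np.argmax(E)                 # highest-energy live point
    E_limit = E[worst]
    # Replace it with a prior sample satisfying the hard constraint E < E_limit
    # (simple rejection sampling; real implementations use smarter moves).
    while True:
        x = rng.uniform(-5, 5)
        if x**2 < E_limit:
            break
    live[worst], E[worst] = x, x**2

# After ~5 e-folds of volume compression the live points have collapsed
# toward the minimum at x = 0.
print(round(float(np.max(np.abs(live))), 3))
```

    The sequence of discarded (E_limit, volume) pairs is what allows NS to reconstruct the density of states and hence heat capacities, the quantity compared against parallel tempering in the thesis.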