8 research outputs found

    Numeric simulation and knowledge-oriented approach for the discovery of new therapeutic molecules

    No full text
    Therapeutic innovation has traditionally benefited from the combination of experimental screening and molecular modelling. In practice, however, the latter is often limited by the shortage of structural and biological data. Today, the situation has completely changed with the high-throughput sequencing of the human genome and the advances made in determining the three-dimensional structures of proteins, which give access to an enormous amount of data that can be used to search for new treatments for a large number of diseases. In this respect, computational approaches to high-throughput virtual screening (HTVS) offer an alternative or a complement to experimental methods, saving both time and money in the discovery of new treatments. However, most of these approaches suffer from the same limitations. One is the cost and computing time required to estimate the binding of every molecule in a large library to a target, which is considerable in a high-throughput context; the accuracy of the results obtained is another obvious challenge in the field. The need to manage a large amount of heterogeneous data is also particularly crucial. To overcome the current limitations of HTVS and optimize the first stages of the drug discovery process, I set up an innovative methodology with two advantages: it manages a large mass of heterogeneous data and extracts knowledge from it, and it distributes the necessary calculations over a computing grid containing several thousand processors. The whole methodology is integrated into a multiple-step virtual screening funnel. The aim is to take the available knowledge about the problem into account, in the form of constraints, in order to optimize the accuracy of the results and the costs in time and money at the various stages of high-throughput virtual screening.
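A multiple-step virtual screening funnel of the kind described in this abstract can be pictured as a chain of filters of increasing computational cost, each pruning the compound list before the next, more expensive stage runs. The sketch below is a minimal illustration of that idea only; the stage names, descriptors, and thresholds are invented, not the thesis's actual protocol:

```python
# Sketch of a multiple-step virtual screening funnel: cheap filters run
# first on the whole library; expensive ones only see the survivors.
# All stage functions, descriptors, and cutoffs are illustrative placeholders.

def property_filter(compound):
    # Cheap 1D filter, e.g. a molecular-weight cutoff.
    return compound["mw"] <= 500

def shape_filter(compound):
    # Stand-in for a fast geometrical (shape) match against the target site.
    return compound["shape_score"] >= 0.7

def docking_filter(compound):
    # Stand-in for an expensive docking score (more negative = better).
    return compound["dock_score"] <= -8.0

def screening_funnel(library, stages):
    """Apply each stage in order, keeping only compounds that pass."""
    survivors = list(library)
    for stage in stages:
        survivors = [c for c in survivors if stage(c)]
    return survivors

library = [
    {"name": "cpd1", "mw": 420, "shape_score": 0.9, "dock_score": -9.1},
    {"name": "cpd2", "mw": 610, "shape_score": 0.8, "dock_score": -9.5},
    {"name": "cpd3", "mw": 350, "shape_score": 0.4, "dock_score": -7.0},
    {"name": "cpd4", "mw": 480, "shape_score": 0.8, "dock_score": -6.5},
]
hits = screening_funnel(library, [property_filter, shape_filter, docking_filter])
print([c["name"] for c in hits])  # only cpd1 passes all three stages
```

On a computing grid, each stage of such a funnel parallelizes naturally, since every compound is evaluated independently of the others.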


    A KDD Approach for Designing Filtering Strategies to Improve Virtual Screening

    No full text
    Virtual screening has become an essential step in the early drug discovery process. Generally speaking, it consists in using computational techniques to select compounds from chemical libraries in order to identify drug-like molecules acting on a biological target of therapeutic interest. In the present study we consider virtual screening as a particular form of the KDD (Knowledge Discovery from Databases) approach. The knowledge to be discovered concerns the way a compound can be considered a consistent ligand for a given target. The data from which this knowledge is to be discovered derive from diverse sources, such as chemical, structural, and biological data on ligands and their cognate targets. More precisely, we aim to extract filters from chemical libraries and protein-ligand interactions. In this context, the three basic steps of a KDD process have to be implemented. First, a model-driven data integration step is applied to the relevant heterogeneous data found in public databases, which facilitates the subsequent extraction of various datasets for mining; in particular, for specific ligand descriptors, it allows a multiple-instance problem to be transformed into a single-instance one. Second, mining algorithms are applied to the datasets, and finally the most accurate knowledge units are proposed as new filters. We report here this KDD approach and the experimental results obtained with a set of ligands of the hormone receptor LXR.
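The mining step of such a KDD-style filter design can be sketched very simply: from a small labeled dataset, derive a single-descriptor threshold that best separates known ligands from non-ligands, then propose that rule as a filter. The descriptor values and labels below are invented for illustration and do not come from the LXR dataset used in the paper:

```python
# Sketch of the mining step of a KDD-style filter design: choose the
# cutoff on one descriptor that maximizes the accuracy of the rule
# "value <= cutoff  =>  ligand". All data here are invented.

def best_threshold(values, labels):
    """Return the cutoff with the highest classification accuracy."""
    best_cut, best_acc = None, -1.0
    for cut in sorted(set(values)):
        preds = [v <= cut for v in values]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut, best_acc

# Hypothetical logP-like descriptor for six compounds; True = known ligand.
values = [2.1, 3.0, 3.4, 4.8, 5.5, 6.2]
labels = [True, True, True, False, False, False]
cut, acc = best_threshold(values, labels)
print(cut, acc)  # 3.4 separates the two classes perfectly
```

A real KDD pipeline would of course mine richer rule sets over many descriptors, but the principle is the same: knowledge units extracted from the data become filters for the screening funnel.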

    Comparison of Three Preprocessing Filters Efficiency in Virtual Screening: Identification of New Putative LXRβ Regulators As a Test Case

    No full text
    In silico screening methodologies are widely recognized as efficient approaches in the early steps of drug discovery. In the virtual high-throughput screening (VHTS) context, however, where hit compounds are sought among millions of candidates, three-dimensional comparison techniques and knowledge discovery from databases should find novel drug leads more efficiently than computationally expensive molecular docking. The present study therefore aims at developing a filtering methodology to efficiently eliminate unsuitable compounds in the VHTS process. Several filters are evaluated in this paper. The first two are structure-based and rely on either geometrical docking or pharmacophore depiction. The third filter is ligand-based and uses knowledge-based and fingerprint similarity techniques. These filtering methods were tested with the Liver X Receptor (LXR) as a target of therapeutic interest, as LXR is a key regulator in maintaining cholesterol homeostasis. The results show that the three filters considered are complementary, so that their combination should generate consistent lists of potential hits.
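Combining the verdicts of complementary pre-filters like the three above reduces to simple set operations: each filter accepts a subset of the library, and the combination keeps compounds accepted by every filter or, more leniently, by a majority. The compound IDs and filter outputs below are invented for illustration:

```python
# Sketch of combining complementary filter verdicts. Each set holds the
# (hypothetical) compound IDs accepted by one filter.
docking_pass = {"c1", "c2", "c3", "c5"}
pharmacophore_pass = {"c1", "c4", "c5"}
similarity_pass = {"c1", "c3", "c4", "c5"}

verdicts = [docking_pass, pharmacophore_pass, similarity_pass]

# Strict consensus: accepted by all three filters.
consensus_all = set.intersection(*verdicts)

# Lenient consensus: accepted by at least two of the three filters.
majority = {c for c in set.union(*verdicts)
            if sum(c in v for v in verdicts) >= 2}

print(sorted(consensus_all))  # ['c1', 'c5']
print(sorted(majority))       # ['c1', 'c3', 'c4', 'c5']
```

The strictness of the combination is a tunable trade-off: full intersection favors precision of the hit list, majority voting favors recall.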

    Multiple-step virtual screening using VSM-G: overview and validation of fast geometrical matching enrichment.

    No full text
    Numerous methods are available for use as part of a virtual screening strategy but, as yet, no single method can guarantee both a level of confidence comparable to experimental screening and a level of computing efficiency that could drastically cut the costs of early-phase drug discovery campaigns. Here, we present VSM-G (virtual screening manager for computational grids), a virtual screening platform that combines several structure-based drug design tools. VSM-G aims to be as user-friendly as possible while retaining enough flexibility to accommodate other in silico techniques as they are developed. To illustrate the concepts behind VSM-G, we present a proof-of-concept study of a fast geometrical matching method based on spherical harmonic expansions of molecular surfaces. This technique is implemented in VSM-G as the first module of a multiple-step sequence tailored for high-throughput experiments. We show that, using this protocol, notable enrichment of the input molecular database can be achieved against a specific target, here the liver X nuclear receptor. The benefits, limitations, and applicability of the VSM-G approach are discussed, and possible improvements of both the geometrical matching technique and its implementation within VSM-G are suggested.
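The speed of geometrical matching as a first funnel stage comes from reducing each molecular surface to a short vector of shape coefficients, so that comparing a compound to the binding-site surface is a cheap vector distance rather than a docking run. The sketch below assumes such descriptors have already been precomputed; the descriptor values are invented and the distance metric is a generic stand-in, not the actual VSM-G scoring function:

```python
# Sketch of fast shape matching with precomputed surface descriptors:
# rank a library by descriptor distance to the target binding site.
# All numbers are invented for illustration.
import math

def sh_distance(a, b):
    """Euclidean distance between two shape-descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

site = [1.0, 0.6, 0.3, 0.1]        # hypothetical binding-site descriptor

library = {
    "cpdA": [1.0, 0.5, 0.3, 0.1],  # close shape match
    "cpdB": [0.2, 0.9, 0.8, 0.6],  # poor match
    "cpdC": [0.9, 0.6, 0.2, 0.2],  # moderate match
}

ranked = sorted(library, key=lambda name: sh_distance(library[name], site))
print(ranked)  # best shape match first: ['cpdA', 'cpdC', 'cpdB']
```

Only the top-ranked fraction of the library would then be passed on to the more expensive downstream stages of the funnel.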

    Adenosine analogs bearing phosphate isosteres as human MDO1 ligands

    No full text
    The human O-acetyl-ADP-ribose deacetylase MDO1 is a mono-ADP-ribosylhydrolase involved in the reversal of post-translational modifications. Until now, MDO1 has been poorly characterized, partly because no ligands are known besides adenosine nucleotides. Here, we synthesized thirteen compounds that retain the adenosine moiety and bear bioisosteric replacements of the phosphate at the ribose 5′-oxygen. These compounds contain either a squaryldiamide or an amide group as the bioisosteric replacement and/or as a linker, to which a variety of substituents were attached, such as phenyl, benzyl, pyridyl, carboxyl, hydroxy, and tetrazolyl groups. Biochemical evaluation showed that two compounds, one from each series, inhibited MDO1-mediated ADP-ribosyl hydrolysis at high concentrations.