
    Swimming into peptidomimetic chemical space using pepMMsMIMIC

    pepMMsMIMIC is a novel web-oriented peptidomimetic compound virtual screening tool based on a multi-conformer, three-dimensional (3D) similarity search strategy. Key to the development of pepMMsMIMIC has been the creation of a library of 17 million conformers calculated from 3.9 million commercially available chemicals collected in the MMsINC database. Using as input the 3D structure of a peptide bound to a protein, pepMMsMIMIC suggests which chemical structures are able to mimic the protein–protein recognition of this natural peptide using both pharmacophore and shape similarity techniques. We hope that the accessibility of pepMMsMIMIC (freely available at http://mms.dsfarm.unipd.it/pepMMsMIMIC) will encourage medicinal chemists to de-peptidize protein–protein recognition processes of biological interest, thus increasing the potential of in silico peptidomimetic compound screening of known small molecules to expedite drug development.
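The abstract above combines pharmacophore and shape similarity into one ranking. A minimal sketch of how such a dual score might be blended is shown below; the feature sets, volumes, and the 50/50 weighting are illustrative assumptions, not values from pepMMsMIMIC or the MMsINC library.

```python
# Sketch: blending pharmacophore-feature Tanimoto with a shape Tanimoto.
# All inputs are hypothetical stand-ins for real 3D descriptors.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto coefficient between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def shape_tanimoto(overlap: float, vol_a: float, vol_b: float) -> float:
    """Shape Tanimoto from an overlap volume and the two self-volumes."""
    return overlap / (vol_a + vol_b - overlap)

def combined_score(pharm_sim: float, shape_sim: float, w: float = 0.5) -> float:
    """Weighted blend of the two similarities (w is a free design choice)."""
    return w * pharm_sim + (1.0 - w) * shape_sim

# Hypothetical peptide query vs. a candidate small molecule
peptide_features = {"HBD", "HBA", "aromatic", "positive"}
candidate_features = {"HBD", "HBA", "aromatic"}

p = tanimoto(peptide_features, candidate_features)           # 3/4 = 0.75
s = shape_tanimoto(overlap=180.0, vol_a=250.0, vol_b=230.0)  # 180/300 = 0.6
print(round(combined_score(p, s), 3))  # → 0.675
```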

    Scope of 3D shape-based approaches in predicting the macromolecular targets of structurally complex small molecules including natural products and macrocyclic ligands

    A plethora of similarity-based, network-based, machine learning, docking and hybrid approaches for predicting the macromolecular targets of small molecules are available today and recognized as valuable tools for providing guidance in early drug discovery. With the increasing maturity of target prediction methods, researchers have started to explore ways to expand their scope to more challenging molecules such as structurally complex natural products and macrocyclic small molecules. In this work, we systematically explore the capacity of an alignment-based approach to identify the targets of structurally complex small molecules (including large and flexible natural products and macrocyclic compounds) based on the similarity of their 3D molecular shape to noncomplex molecules (i.e., more conventional, “drug-like”, synthetic compounds). For this analysis, query sets of 10 representative, structurally complex molecules were compiled for each of the 28 pharmaceutically relevant proteins. Subsequently, ROCS, a leading shape-based screening engine, was utilized to generate rank-ordered lists of the potential targets of the 28 × 10 queries according to the similarity of their 3D molecular shapes with those of compounds from a knowledge base of 272 640 noncomplex small molecules active on a total of 3642 different proteins. Four of the scores implemented in ROCS were explored for target ranking, with the TanimotoCombo score consistently outperforming all others. The score successfully recovered the targets of 30% and 41% of the 280 queries among the top-5 and top-20 positions, respectively. For 24 out of the 28 investigated targets (86%), the method correctly assigned the first rank (out of 3642) to the target of interest for at least one of the 10 queries. The shape-based target prediction approach showed remarkable robustness, with good success rates obtained even for compounds that are clearly distinct from any of the ligands present in the knowledge base. 
However, complex natural products and macrocyclic compounds proved to be challenging even with this approach, although cases of complete failure were recorded only for a small number of targets.
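The top-5 and top-20 recovery rates reported above can be computed from rank-ordered target lists in a few lines. The sketch below uses toy rankings, not ROCS/TanimotoCombo output; the function and variable names are assumptions for illustration.

```python
# Sketch: top-k target-recovery rate over a set of queries, each with a
# rank-ordered candidate-target list (toy data, not ROCS output).

def topk_recovery(ranked_targets: dict, true_target: dict, k: int) -> float:
    """Fraction of queries whose true target appears in the top k ranks."""
    hits = sum(1 for q, ranking in ranked_targets.items()
               if true_target[q] in ranking[:k])
    return hits / len(ranked_targets)

ranked = {
    "query1": ["T3", "T1", "T7"],
    "query2": ["T2", "T5", "T4"],
    "query3": ["T9", "T8", "T2"],
}
truth = {"query1": "T1", "query2": "T2", "query3": "T6"}

print(topk_recovery(ranked, truth, k=1))  # only query2's target is ranked first
print(topk_recovery(ranked, truth, k=2))  # query1's target is recovered at rank 2
```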

    In silico Strategies to Support Fragment-to-Lead Optimization in Drug Discovery.

    Fragment-based drug (or lead) discovery (FBDD or FBLD) has developed over the last two decades into a successful key technology in the pharmaceutical industry for early-stage drug discovery and development. The FBDD strategy consists of screening low molecular weight compounds against macromolecular targets (usually proteins) of clinical relevance. These small molecular fragments can bind at one or more sites on the target and act as starting points for the development of lead compounds. During fragment development, attractive features that can translate into compounds with favorable physical, pharmacokinetic, and toxicity (ADMET: absorption, distribution, metabolism, excretion, and toxicity) properties can be integrated. Structure-enabled fragment screening campaigns use a combination of screening by a range of biophysical techniques, such as differential scanning fluorimetry, surface plasmon resonance, and thermophoresis, followed by structural characterization of fragment binding using NMR or X-ray crystallography. Structural characterization is also used in subsequent analysis for growing fragments of selected screening hits. The latest iteration of the FBDD workflow employs a high-throughput methodology of massively parallel screening by X-ray crystallography of individually soaked fragments. In this review we outline FBDD strategies and explore a variety of in silico approaches to support fragment-to-lead optimization by growing, linking, or merging. These fragment expansion strategies include hot spot analysis, druggability prediction, SAR (structure-activity relationships) by catalog methods, application of machine learning/deep learning models for virtual screening, and several de novo design methods for proposing synthesizable new compounds. Finally, we highlight recent case studies in fragment-based drug discovery where in silico methods have successfully contributed to the development of lead compounds.
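Fragment libraries of the kind screened in FBDD are commonly pre-filtered with simple property cutoffs such as the "rule of three" (roughly MW < 300, cLogP ≤ 3, H-bond donors ≤ 3). The sketch below applies a subset of those cutoffs to made-up property values; a real workflow would compute the properties with a cheminformatics toolkit.

```python
# Sketch: a rule-of-three-style filter over a toy fragment library.
# Property values are invented for illustration.

def passes_rule_of_three(props: dict) -> bool:
    """Subset of the rule-of-three cutoffs commonly used for fragments."""
    return (props["mw"] < 300.0
            and props["clogp"] <= 3.0
            and props["hbd"] <= 3)

library = [
    {"name": "frag_A", "mw": 187.2, "clogp": 1.4, "hbd": 1},
    {"name": "frag_B", "mw": 342.4, "clogp": 2.9, "hbd": 2},  # too heavy
    {"name": "frag_C", "mw": 231.1, "clogp": 3.8, "hbd": 0},  # too lipophilic
]

hits = [f["name"] for f in library if passes_rule_of_three(f)]
print(hits)  # → ['frag_A']
```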

    Decision-Making Amplification Under Uncertainty: An Exploratory Study of Behavioral Similarity and Intelligent Decision Support Systems

    Intelligent decision systems have the potential to support and greatly amplify human decision-making across a number of industries and domains. However, despite the rapid improvement in the underlying capabilities of these “intelligent” systems, increasing their acceptance as decision aids in industry has remained a formidable challenge. If intelligent systems are to be successful, and their full impact on decision-making performance realized, a greater understanding of the factors that influence recommendation acceptance from intelligent machines is needed. Through an empirical experiment in the financial services industry, this study investigated the effects of perceived behavioral similarity (similarity state) on the dependent variables of recommendation acceptance, decision performance and decision efficiency under varying conditions of uncertainty (volatility state). It is hypothesized in this study that behavioral similarity as a design element will positively influence the acceptance rate of machine recommendations by human users. The level of uncertainty in the decision context is expected to moderate this relationship. In addition, an increase in recommendation acceptance should positively influence both decision performance and decision efficiency. The quantitative exploration of behavioral similarity as a design element revealed a number of key findings. Most importantly, behavioral similarity was found to positively influence the acceptance rate of machine recommendations. However, uncertainty did not moderate the level of recommendation acceptance as expected. The experiment also revealed that behavioral similarity positively influenced decision performance during periods of elevated uncertainty. This relationship was moderated based on the level of uncertainty in the decision context. The investigation of decision efficiency also revealed a statistically significant result. 
However, the results for decision efficiency were in the opposite direction of the hypothesized relationship. Interestingly, decisions made with the behaviorally similar decision aid were less efficient, based on the length of time taken to make a decision, compared to decisions made with the low-similarity decision aid. The results for decision efficiency were stable across both levels of uncertainty in the decision context.

    DeepGraphMol, a multi-objective, computational strategy for generating molecules with desirable properties: a graph convolution and reinforcement learning approach

    We address the problem of generating novel molecules with desired interaction properties as a multi-objective optimization problem. Interaction binding models are learned from binding data using graph convolution networks (GCNs). Since the experimentally obtained property scores are recognised as having potentially gross errors, we adopted a robust loss for the model. Combinations of these terms, including drug likeness and synthetic accessibility, are then optimized using reinforcement learning based on a graph convolution policy approach. Some of the molecules generated, while legitimate chemically, can have excellent drug-likeness scores but appear unusual. We provide an example based on the binding potency of small molecules to dopamine transporters. We extend our method successfully to use a multi-objective reward function, in this case for generating novel molecules that bind with dopamine transporters but not with those for norepinephrine. Our method should be generally applicable to the generation in silico of molecules with desirable properties.
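A multi-objective reward of the kind described above can be sketched as a weighted sum: reward predicted on-target (dopamine transporter) binding, penalize predicted off-target (norepinephrine transporter) binding, and add drug-likeness and synthetic-accessibility terms. The scores, weights, and rescaling below are illustrative assumptions, not the paper's actual reward; the probabilities stand in for GCN model outputs.

```python
# Sketch: a selectivity-aware multi-objective reward (all values hypothetical).

def reward(p_dat: float, p_net: float, qed: float, sa: float,
           w_bind: float = 1.0, w_off: float = 1.0,
           w_qed: float = 0.5, w_sa: float = 0.5) -> float:
    """Higher is better. sa is a synthetic-accessibility score in [1, 10]
    (lower = easier to make), so it is rescaled and subtracted."""
    return (w_bind * p_dat          # reward predicted DAT binding
            - w_off * p_net         # penalize predicted NET binding
            + w_qed * qed           # reward drug-likeness
            - w_sa * (sa - 1.0) / 9.0)  # penalize hard-to-synthesize molecules

# A selective binder (high DAT, low NET) outscores a non-selective one
print(round(reward(p_dat=0.9, p_net=0.1, qed=0.7, sa=3.0), 3))
print(round(reward(p_dat=0.9, p_net=0.8, qed=0.7, sa=3.0), 3))
```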

    Recent advances in in silico target fishing

    In silico target fishing, whose aim is to identify possible protein targets for a query molecule, is an emerging approach used in drug discovery due to its wide variety of applications. This strategy allows the clarification of the mechanism of action and biological activities of compounds whose target is still unknown. Moreover, target fishing can be employed for the identification of off-targets of drug candidates, thus recognizing and preventing their possible adverse effects. For these reasons, target fishing has increasingly become a key approach for polypharmacology, drug repurposing, and the identification of new drug targets. While experimental target fishing can be lengthy and difficult to implement, due to the plethora of interactions that may occur between a single small molecule and different protein targets, an in silico approach can be quicker, less expensive, more efficient for specific protein structures, and thus easier to employ. Moreover, the possibility of using it in combination with docking and virtual screening studies, as well as the increasing number of web-based tools that have been developed recently, makes target fishing a more appealing method for drug discovery. It is especially worth underlining the increasing implementation of machine learning in this field, both as a main target fishing approach and as a further development of already applied strategies. This review reports on the main in silico target fishing strategies, belonging to both ligand-based and receptor-based approaches, developed and applied in recent years, with particular attention to the different web tools freely accessible by the scientific community for performing target fishing studies.
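The simplest ligand-based flavor of target fishing scores each candidate target by the similarity of the query to that target's known ligands and returns a ranked list. Below is a minimal sketch of that idea; the "fingerprints" are toy bit sets and the function names are assumptions, not any published tool's API.

```python
# Sketch: ligand-based target fishing via maximum Tanimoto similarity
# between a query fingerprint and each target's known-ligand fingerprints.
# Fingerprints here are toy sets of bit indices.

def tanimoto(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def fish_targets(query: frozenset, knowledge_base: dict) -> list:
    """Rank targets by the max similarity of the query to their ligands."""
    scores = {target: max(tanimoto(query, lig) for lig in ligands)
              for target, ligands in knowledge_base.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

kb = {
    "target_A": [frozenset({1, 2, 3, 4}), frozenset({2, 3, 5})],
    "target_B": [frozenset({6, 7, 8})],
}
query = frozenset({1, 2, 3})

for target, score in fish_targets(query, kb):
    print(target, round(score, 3))  # target_A ranks first (similarity 0.75)
```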

    Salford postgraduate annual research conference (SPARC) 2012 proceedings

    These proceedings bring together a selection of papers from the 2012 Salford Postgraduate Annual Research Conference (SPARC). They reflect the breadth and diversity of research interests showcased at the conference, at which over 130 researchers from Salford, the North West, and other UK universities presented their work. Twenty-one papers are collated here from the humanities, arts, social sciences, health, engineering, environment and life sciences, built environment, and business.

    The Application of Spectral Clustering in Drug Discovery

    The application of clustering algorithms to chemical datasets is well established and has been reviewed extensively. Recently, a number of ‘modern’ clustering algorithms have been reported in other fields. One example is spectral clustering, which has yielded promising results in areas such as protein library analysis. The term spectral clustering is used to describe any clustering algorithm that utilises the eigenpairs of a matrix as the basis for partitioning a dataset. This thesis describes the development and optimisation of a non-overlapping spectral clustering method that is based upon a study by Brewer. The initial version of the spectral clustering algorithm was closely related to Brewer’s method and used a full matrix diagonalisation procedure to identify the eigenpairs of an input matrix. This spectral clustering method was compared to the k-means and Ward’s algorithms, producing encouraging results; for example, when coupled with extended connectivity fingerprints, this method outperformed the other clustering algorithms according to the QCI measure. Although the spectral clustering algorithm showed promising results, its operational costs restricted its application to small datasets. Hence, the method was optimised in successive studies. Firstly, the effect of matrix sparsity on spectral clustering was examined and showed that spectral clustering with sparse input matrices can lead to an improvement in the results. Despite this improvement, the costs of spectral clustering remained prohibitive, so the full matrix diagonalisation procedure was replaced with the Lanczos algorithm, which has lower associated costs, as suggested by Brewer. This method led to a significant decrease in computational costs when identifying a small number of clusters; however, a number of issues remained, leading to the adoption of an SVD-based eigendecomposition method. 
The SVD-based algorithm was shown to be highly efficient, accurate, and scalable through a number of studies.
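The spectral clustering recipe described above (similarity matrix → Laplacian eigenpairs → partition) can be sketched compactly. Here numpy's dense eigensolver stands in for the full-matrix diagonalisation, Lanczos, and SVD variants compared in the thesis, the nearest-seed assignment stands in for a proper k-means step, and the 2D points are toy data rather than chemical fingerprints.

```python
# Sketch: spectral clustering via the symmetric normalized Laplacian.
import numpy as np

def spectral_clusters(points: np.ndarray, k: int, sigma: float = 1.0):
    # Gaussian similarity matrix W
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(1)))
    L = np.eye(len(points)) - D_inv_sqrt @ W @ D_inv_sqrt
    # Embed each point using the k eigenvectors with smallest eigenvalues
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :k]
    # Assign each row to the nearest of k seed rows (stand-in for k-means)
    seeds = emb[np.linspace(0, len(points) - 1, k, dtype=int)]
    dist = ((emb[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    return dist.argmin(1)

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = spectral_clusters(pts, k=2)
print(labels)  # the two spatial groups receive different labels
```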
