    Computational protein design with backbone plasticity

    The computational algorithms used in the design of artificial proteins have become increasingly sophisticated in recent years, producing a series of remarkable successes. The most dramatic of these is the de novo design of artificial enzymes. The majority of these designs have reused naturally occurring protein structures as “scaffolds” onto which novel functionality can be grafted without having to redesign the backbone structure. Incorporating backbone flexibility into protein design is a much more computationally challenging problem due to the greatly increased search space, but it promises to remove the limitations of reusing natural protein scaffolds. In this review, we outline the principles of computational protein design methods and discuss recent efforts to consider backbone plasticity in the design process.

    A gradient-directed Monte Carlo approach to molecular design

    The recently developed linear combination of atomic potentials (LCAP) approach [M. Wang et al., J. Am. Chem. Soc., 128, 3228 (2006)] allows continuous optimization in discrete chemical space and thus is quite useful in the design of molecules for targeted properties. To address further challenges arising from the rugged, continuous property surfaces in the LCAP approach, we develop a gradient-directed Monte Carlo (GDMC) strategy as an augmentation to the original LCAP optimization method. The GDMC method retains the power of exploring molecular space by utilizing local gradient information computed from the LCAP approach to jump between discrete molecular structures. It also allows random Monte Carlo moves to overcome barriers between local optima on property surfaces. The combined GDMC and LCAP approach is demonstrated here for optimizing nonlinear optical (NLO) properties in a class of donor-acceptor substituted benzene and porphyrin frameworks. Specifically, one molecule with four nitrogen atoms in the porphyrin ring was found to have a larger first hyperpolarizability than structures with the conventional porphyrin motif. Comment: 26 pages, 10 figures
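    The GDMC idea described in this abstract, i.e. using gradients from a continuous relaxation to pick discrete moves, with occasional random Metropolis moves to escape local optima, can be illustrated with a toy sketch. Everything below (the function name `gdmc_maximize`, the binary encoding, the toy property `prop`, and all parameter values) is a hypothetical illustration, not the paper's actual LCAP implementation:

    ```python
    import math
    import random

    def gdmc_maximize(f, n, steps=200, p_random=0.3, temp=0.5, seed=0):
        """Toy gradient-directed Monte Carlo over binary vectors.

        f must accept a list of floats (a continuous relaxation of the
        discrete state) so finite-difference gradients can be computed.
        """
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        best = list(x)
        for _ in range(steps):
            if rng.random() < p_random:
                # random Monte Carlo move to hop over barriers
                i = rng.randrange(n)
            else:
                # gradient-directed move: flip the coordinate whose
                # relaxed gradient predicts the largest gain in f
                base = f([float(v) for v in x])
                gains = []
                h = 1e-4
                for j in range(n):
                    c = [float(v) for v in x]
                    c[j] += h
                    g = (f(c) - base) / h
                    # flipping 0->1 helps if g > 0; 1->0 helps if g < 0
                    gains.append(g if x[j] == 0 else -g)
                i = max(range(n), key=lambda j: gains[j])
            cand = list(x)
            cand[i] = 1 - cand[i]
            delta = f([float(v) for v in cand]) - f([float(v) for v in x])
            # Metropolis acceptance: always take uphill moves, sometimes downhill
            if delta > 0 or rng.random() < math.exp(delta / temp):
                x = cand
                if f([float(v) for v in x]) > f([float(v) for v in best]):
                    best = list(x)
        return best

    # toy "property surface": peak at one particular substitution pattern
    target = [1, 0, 1, 0, 1, 0, 1, 0]
    prop = lambda c: -sum((ci - ti) ** 2 for ci, ti in zip(c, target))
    print(gdmc_maximize(prop, 8))
    ```

    The key design point mirrored from the abstract is the split between the two move types: gradient-directed flips exploit local slope information from the continuous relaxation, while the random moves keep the chain from getting stuck in local optima of the rugged property surface.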

    Practical Bayesian Optimization of Machine Learning Algorithms

    Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a "black art" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs, and convolutional neural networks.
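    The core loop described here, fit a GP to past evaluations, then use its posterior to choose the next hyperparameter to try, can be sketched minimally. This is a generic illustration, not the paper's implementation: the kernel, the expected-improvement acquisition, the toy `loss` function, and all names and constants below are assumptions for the sake of the example.

    ```python
    import math
    import numpy as np

    def rbf(a, b, ls=0.3):
        # squared-exponential (RBF) covariance between two point sets
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

    def gp_posterior(X, y, Xs, noise=1e-5):
        # GP posterior mean and std at candidate points Xs given data (X, y)
        K = rbf(X, X) + noise * np.eye(len(X))
        Ks = rbf(X, Xs)
        Kinv = np.linalg.inv(K)
        mu = Ks.T @ Kinv @ y
        var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
        return mu, np.sqrt(np.maximum(var, 1e-12))

    def expected_improvement(mu, sigma, best):
        # EI acquisition for minimization: expected amount by which a
        # candidate beats the best observed value under the GP posterior
        z = (best - mu) / sigma
        cdf = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
        pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
        return (best - mu) * cdf + sigma * pdf

    def bayes_opt(f, n_iter=15):
        X = np.array([0.1, 0.9])        # two initial evaluations
        y = np.array([f(x) for x in X])
        grid = np.linspace(0, 1, 200)   # candidate hyperparameter values
        for _ in range(n_iter):
            mu, sigma = gp_posterior(X, y, grid)
            x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
            X = np.append(X, x_next)
            y = np.append(y, f(x_next))
        return X[np.argmin(y)], y.min()

    # toy "validation loss" standing in for a real training run
    loss = lambda x: (x - 0.3) ** 2 + 0.05 * math.sin(20 * x)
    x_best, y_best = bayes_opt(loss)
    ```

    The point this makes concrete is the one the abstract emphasizes: because the GP posterior is tractable, each expensive evaluation (here a cheap toy function, in practice a full training run) informs the choice of the next point, rather than being sampled blindly as in grid or random search.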