
    Deciphering the folding kinetics of transmembrane helical proteins

    Nearly a quarter of genomic sequences and almost half of all receptors that are likely targets for drug design are integral membrane proteins. Understanding the detailed mechanisms of membrane-protein folding is a key, largely unsolved problem in structural biology. Here, we introduce a general model and use computer simulations to study the equilibrium properties and the folding kinetics of a C_{\alpha}-based two-helix-bundle fragment (comprising 66 amino acids) of Bacteriorhodopsin. Various intermediates are identified and their free energies are calculated, together with the free energy barriers between them. In 40% of the folding trajectories, the folding time is considerably increased by the presence of non-obligatory intermediates acting as traps. In all cases, a substantial portion of the helices is rapidly formed. This initial stage is followed by a long period of consolidation of the helices, accompanied by their correct packing within the membrane. Our results provide a framework for understanding the variety of folding pathways of helical transmembrane proteins.
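    The abstract reports computing the free energies of folding intermediates from equilibrium simulations. A minimal sketch of the standard relation behind such estimates, F_i = -kT ln(p_i), applied to a toy occupancy histogram (the state names and counts below are purely illustrative, not the paper's data):

```python
import math

def free_energies(counts, kT=1.0):
    """Free energy of each state (in units of kT) from equilibrium
    occupancy counts, via F_i = -kT * ln(p_i)."""
    total = sum(counts.values())
    return {state: -kT * math.log(c / total) for state, c in counts.items()}

# Toy occupancy histogram over three hypothetical states:
# unfolded (U), intermediate (I), folded (F).
counts = {"U": 100, "I": 400, "F": 500}
F = free_energies(counts)

# Free energy gap between the intermediate and the folded state.
dF = F["I"] - F["F"]
```

    More populated states come out lower in free energy, so the folded state is the global minimum in this toy example; barrier heights would additionally require counting transition-region configurations.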

    Bayesian nonparametric multivariate convex regression

    In many applications, such as economics, operations research and reinforcement learning, one often needs to estimate a multivariate regression function f subject to a convexity constraint. For example, in sequential decision processes the value of a state under optimal subsequent decisions may be known to be convex or concave. We propose a new Bayesian nonparametric multivariate approach based on characterizing the unknown regression function as the max of a random collection of unknown hyperplanes. This specification induces a prior with large support in a Kullback-Leibler sense on the space of convex functions, while also leading to strong posterior consistency. Although we assume that f is defined over R^p, we show that this model has a convergence rate of log(n)^{-1} n^{-1/(d+2)} under the empirical L2 norm when f actually maps a d-dimensional linear subspace to R. We design an efficient reversible jump MCMC algorithm for posterior computation and demonstrate the method through an application to value function approximation.
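    The core representation in this abstract, a function written as the max of a collection of hyperplanes, is convex by construction. A minimal sketch with hypothetical, hand-picked hyperplane parameters (in the paper these would be random and inferred by MCMC):

```python
import numpy as np

def max_hyperplanes(x, alphas, betas):
    """Evaluate f(x) = max_k (alpha_k + beta_k . x): the pointwise max
    of affine functions, which is always convex in x."""
    x = np.asarray(x, dtype=float)
    vals = alphas + betas @ x   # one affine value per hyperplane
    return vals.max()

# Three hypothetical hyperplanes in R^2 (illustrative values only).
alphas = np.array([0.0, -1.0, 0.5])
betas = np.array([[1.0, 0.0],
                  [2.0, 1.0],
                  [-1.0, 0.5]])

f0 = max_hyperplanes([0.0, 0.0], alphas, betas)  # max(0.0, -1.0, 0.5) = 0.5
```

    A quick sanity check of convexity is the midpoint inequality f((x+y)/2) <= (f(x)+f(y))/2, which holds for any choice of alphas and betas.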

    On Similarities between Inference in Game Theory and Machine Learning

    In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in doing so is to establish an equivalent vocabulary between the two domains, so as to facilitate developments at the intersection of both fields, and, as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play), we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
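    For readers unfamiliar with the baseline the abstract builds on, here is a minimal sketch of standard fictitious play with a smoothed (logit) best response in a 2x2 coordination game. The payoff matrix, temperature, and pseudo-counts are illustrative assumptions, not taken from the paper; with these payoffs, action 1 is risk-dominant, and standard fictitious play started from uniform beliefs drifts toward it, the behaviour the paper's moderated variant is designed to improve on:

```python
import math
import random

# Payoff matrix for a symmetric 2x2 coordination game (row player's
# payoffs); illustrative values: (0, 0) is payoff-dominant, (1, 1)
# is risk-dominant.
PAYOFF = [[9.0, 0.0],
          [8.0, 7.0]]

def smooth_best_response(belief, temperature=0.5):
    """Logit (smoothed) best response to a belief over the opponent's
    actions: a softmax over expected payoffs."""
    eu = [sum(PAYOFF[a][b] * belief[b] for b in range(2)) for a in range(2)]
    m = max(e / temperature for e in eu)
    w = [math.exp(e / temperature - m) for e in eu]
    z = sum(w)
    return [wi / z for wi in w]

def fictitious_play(steps=200, seed=0):
    """Both players track empirical counts of the opponent's actions
    and play a smoothed best response to the resulting belief."""
    rng = random.Random(seed)
    counts = [[1, 1], [1, 1]]  # pseudo-counts of the opponent's actions
    for _ in range(steps):
        actions = []
        for p in range(2):
            total = sum(counts[p])
            belief = [c / total for c in counts[p]]
            probs = smooth_best_response(belief)
            actions.append(0 if rng.random() < probs[0] else 1)
        counts[0][actions[1]] += 1
        counts[1][actions[0]] += 1
    return counts

final_counts = fictitious_play()
```

    The moderated variant described in the abstract replaces the point belief (the empirical average) with an integral over a posterior distribution of opponent strategies; this sketch only shows the standard baseline.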

    Optimization Techniques for Energy Minimization Problem in a Marked Point Process Application to Forestry

    We use marked point processes to detect an unknown number of trees in high-resolution aerial images. This approach turns out to be an energy minimization problem, where the energy contains a prior term that takes into account the geometrical properties of the objects, and a data term that matches these objects to the image. This stochastic process is simulated via a Reversible Jump Markov Chain Monte Carlo procedure, which embeds a Simulated Annealing scheme to extract the best configuration of objects. In this paper, we compare different cooling schedules of the Simulated Annealing algorithm that can achieve good minimization within a short time. We also study some adaptive proposal kernels.
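    The cooling schedule the abstract compares is the plug-in component of a standard simulated annealing loop. A minimal sketch with two common schedules (geometric and logarithmic) on a toy 1-D energy; the energy function, step size, and schedule parameters are illustrative assumptions, not the paper's marked-point-process model:

```python
import math
import random

def geometric_schedule(T0=1.0, alpha=0.99):
    """Classic geometric cooling: T_k = T0 * alpha^k (fast in practice)."""
    def temp(k):
        return T0 * alpha ** k
    return temp

def logarithmic_schedule(T0=1.0):
    """Slow logarithmic cooling, T_k = T0 / log(k + 2): theoretically
    convergent but usually too slow in practice."""
    def temp(k):
        return T0 / math.log(k + 2)
    return temp

def simulated_annealing(energy, neighbor, x0, schedule, steps=5000, seed=0):
    """Generic simulated annealing loop with a pluggable cooling schedule."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for k in range(steps):
        T = schedule(k)
        y = neighbor(x, rng)
        ey = energy(y)
        # Metropolis acceptance: always accept downhill moves,
        # accept uphill moves with probability exp(-dE / T).
        if ey <= e or rng.random() < math.exp(-(ey - e) / T):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy 1-D energy with many local minima (illustrative only).
energy = lambda x: (x - 2.0) ** 2 + math.sin(8.0 * x)
neighbor = lambda x, rng: x + rng.gauss(0.0, 0.3)

x_star, e_star = simulated_annealing(energy, neighbor, 0.0, geometric_schedule())
```

    Swapping `geometric_schedule()` for `logarithmic_schedule()` changes only the temperature sequence, which is exactly the kind of comparison the paper carries out (at much larger scale, inside an RJMCMC sampler over object configurations).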