Enhanced Sampling in the Well-Tempered Ensemble
We introduce the well-tempered ensemble (WTE) which is the biased ensemble
sampled by well-tempered metadynamics when the energy is used as collective
variable. WTE can be designed so as to have approximately the same average
energy as the canonical ensemble but much larger fluctuations. These two
properties lead to an extremely fast exploration of phase space. An even
greater efficiency is obtained when WTE is combined with parallel tempering.
Unbiased Boltzmann averages are computed on the fly by a recently developed
reweighting method [M. Bonomi et al. J. Comput. Chem. 30, 1615 (2009)]. We
apply WTE and its parallel tempering variant to the 2d Ising model and to a
Go-model of HIV protease, demonstrating in these two representative cases that
convergence is accelerated by orders of magnitude.
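For concreteness, the sketch below restates the standard well-tempered metadynamics relations with the potential energy E as the collective variable; the notation (bias factor gamma = (T + DeltaT)/T, Gaussian height w, width sigma, deposition stride tau) follows common metadynamics conventions and is not taken verbatim from the paper.
```latex
% Well-tempered metadynamics bias with the potential energy E as the CV;
% the Gaussian height is damped by the bias already deposited at E(t).
V(E, t+\tau) \;=\; V(E, t) \;+\; w\,
  e^{-V(E(t),\,t)/(k_{B}\Delta T)}\,
  \exp\!\left[-\,\frac{\bigl(E - E(t)\bigr)^{2}}{2\sigma^{2}}\right]

% At convergence the biased (well-tempered ensemble) energy distribution is
% a tempered version of the canonical one, so the mean energy is roughly
% preserved while the energy fluctuations grow by approximately the factor
% gamma = (T + \Delta T)/T:
P_{\mathrm{WTE}}(E) \;\propto\; \bigl[P(E)\bigr]^{1/\gamma}
```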
Unfolding Hidden Barriers by Active Enhanced Sampling
Collective variable (CV) or order parameter based enhanced sampling
algorithms have achieved great success due to their ability to efficiently
explore the rough potential energy landscapes of complex systems. However, the
degeneracy of microscopic configurations, originating from the orthogonal space
perpendicular to the CVs, is likely to shadow "hidden barriers" and greatly
reduce the efficiency of CV-based sampling. Here we demonstrate that systematic
machine learning of CVs, through enhanced sampling, can iteratively lift such
degeneracies on the fly. We introduce an active learning scheme that consists
of a parametric CV learner based on a deep neural network and a CV-based enhanced
sampler. Our active enhanced sampling (AES) algorithm is capable of identifying
the least informative regions based on a historical sample, forming a positive
feedback loop between the CV learner and sampler. This approach is able to
globally preserve kinetic characteristics by incrementally enhancing both
sample completeness and CV quality.
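To make the feedback loop concrete, here is a minimal, self-contained toy sketch of the scheme described above, not the authors' implementation: a PCA-based linear fit stands in for the deep-neural-network CV learner, a histogram over the learned CV identifies the least informative (rarely visited) regions, and a Metropolis sampler on a toy 2D double well plays the role of the CV-based enhanced sampler.
```python
# Toy sketch of an active enhanced sampling loop (illustrative only, not the
# AES implementation): sample -> learn a CV -> bias rarely visited CV regions
# -> resample, closing the feedback loop between CV learner and sampler.
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0

def potential(x):
    """Toy 2D double well with minima at x[0] = -1 and x[0] = +1."""
    return (x[0] ** 2 - 1.0) ** 2 + 0.5 * x[1] ** 2

def learn_cv(samples):
    """'CV learner': leading PCA direction of all samples so far
    (a linear stand-in for the deep neural network)."""
    centered = samples - samples.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def bias_from_histogram(cv_values, edges):
    """'Least informative regions': rarely visited CV bins receive a more
    negative (attractive) bias so the next round explores them."""
    counts, _ = np.histogram(cv_values, bins=edges)
    prob = (counts + 1) / (counts.sum() + len(counts))   # smoothed histogram
    return kT * np.log(prob)                             # lower energy where prob is small

def biased_mc(x0, cv_dir, bias, edges, n_steps=20000, step=0.25):
    """Metropolis sampling of potential(x) + bias(CV(x))."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    energy = lambda y: potential(y) + np.interp(y @ cv_dir, centers, bias)
    x = np.array(x0, dtype=float)
    e, samples = energy(x), []
    for _ in range(n_steps):
        trial = x + rng.normal(scale=step, size=2)
        e_trial = energy(trial)
        if e_trial <= e or rng.random() < np.exp(-(e_trial - e) / kT):
            x, e = trial, e_trial
        samples.append(x.copy())
    return np.array(samples)

# Active loop: each iteration refines the CV on the accumulated samples and
# biases the sampler toward the regions the history covers least.
edges = np.linspace(-2.0, 2.0, 21)
samples = biased_mc([1.0, 0.0], np.array([1.0, 0.0]), np.zeros(20), edges)
for iteration in range(3):
    cv_dir = learn_cv(samples)
    bias = bias_from_histogram(samples @ cv_dir, edges)
    samples = np.vstack([samples, biased_mc(samples[-1], cv_dir, bias, edges)])
    print(iteration, "CV direction:", np.round(cv_dir, 2),
          "fraction in left basin:", round(float(np.mean(samples[:, 0] < 0)), 2))
```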
Reweighted Autoencoded Variational Bayes for Enhanced Sampling (RAVE)
Here we propose the Reweighted Autoencoded Variational Bayes for Enhanced
Sampling (RAVE) method, a new iterative scheme that uses the deep learning
framework of variational autoencoders to enhance sampling in molecular
simulations. RAVE involves iterations between molecular simulations and deep
learning in order to produce an increasingly accurate probability distribution
along a low-dimensional latent space that captures the key features of the
molecular simulation trajectory. Using the Kullback-Leibler divergence between
this latent space distribution and the distribution of various trial reaction
coordinates sampled from the molecular simulation, RAVE determines an optimum,
yet nonetheless physically interpretable, reaction coordinate and optimum
probability distribution. Both then directly serve as the biasing protocol for
a new biased simulation, which is once again fed into the deep learning module
with appropriate weights accounting for the bias; the procedure continues
until estimates of the desired thermodynamic observables have converged. Unlike
recent methods using deep learning for enhanced sampling purposes, RAVE stands
out in that (a) it naturally produces a physically interpretable reaction
coordinate, (b) it does not rely on existing enhanced sampling protocols to
enhance the fluctuations along the latent space identified via deep learning,
and (c) it provides a simple way to filter out spurious solutions learned
by the deep learning procedure. The usefulness and reliability of RAVE are
demonstrated by applying it to model potentials of increasing complexity,
including computation of the binding free energy profile for a hydrophobic
ligand-substrate system in explicit water with a dissociation time of more than
three minutes, in at least twenty times less computer time than that needed for
umbrella sampling or metadynamics.
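The screening step that yields the physically interpretable reaction coordinate can be written down compactly. The snippet below is an illustrative sketch of that step only (not the authors' code): it histograms a set of trial coordinates and selects the one whose distribution has the smallest Kullback-Leibler divergence from the latent-space distribution; `latent` and the `chi_*` trial coordinates are synthetic placeholders for the VAE output and candidate order parameters from a real trajectory.
```python
# Illustrative sketch of the reaction-coordinate screening step described
# above (not the authors' code): pick the trial coordinate whose distribution
# is closest, in Kullback-Leibler divergence, to the latent-space distribution
# learned by the variational autoencoder.
import numpy as np

def kl_divergence(p_samples, q_samples, bins=50, eps=1e-10):
    """KL(P || Q) between two 1D sample sets, estimated on a common grid."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(p_samples, bins=edges, density=True)
    q, _ = np.histogram(q_samples, bins=edges, density=True)
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def select_reaction_coordinate(latent_samples, trial_coordinates):
    """Return the trial coordinate whose sampled distribution best matches
    the latent-space distribution (minimum KL divergence), plus all scores."""
    scores = {name: kl_divergence(latent_samples, values)
              for name, values in trial_coordinates.items()}
    return min(scores, key=scores.get), scores

# Synthetic demonstration: two candidate order parameters, one of which
# (chi_1) tracks the latent variable far more closely than the other.
rng = np.random.default_rng(1)
latent = np.concatenate([rng.normal(-1, 0.3, 5000), rng.normal(1, 0.3, 5000)])
trials = {
    "chi_1": latent + rng.normal(0, 0.1, latent.size),   # faithful candidate
    "chi_2": rng.normal(0, 1.0, latent.size),            # uninformative candidate
}
best, scores = select_reaction_coordinate(latent, trials)
print("selected RC:", best, "| KL scores:",
      {k: round(v, 3) for k, v in scores.items()})
```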
Transferable neural networks for enhanced sampling of protein dynamics
Variational auto-encoder frameworks have demonstrated success in reducing
complex nonlinear dynamics in molecular simulation to a single nonlinear
embedding. In this work, we illustrate how this nonlinear latent embedding can
be used as a collective variable for enhanced sampling, and present a simple
modification that allows us to rapidly perform sampling in multiple related
systems. We first demonstrate that our method is able to describe the effects of
force field changes in capped alanine dipeptide after learning a model using
AMBER99. We further provide a simple extension to variational dynamics encoders
that allows the model to be trained in a more efficient manner on larger
systems by encoding the outputs of a linear transformation using time-structure
based independent component analysis (tICA). Using this technique, we show how
such a model trained for one protein, the WW domain, can efficiently be
transferred to perform enhanced sampling on a related mutant protein, the GTT
mutation. This method shows promise for its ability to rapidly sample related
systems using a single transferable collective variable and is generally
applicable to sets of related simulations, enabling us to probe the effects of
variation in increasingly large systems of biophysical interest.
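The tICA pre-transform mentioned above follows the standard construction. The sketch below is a schematic of that step (not the paper's code): it estimates instantaneous and time-lagged covariances of a feature trajectory, solves the associated generalized eigenvalue problem, and projects onto the slowest components; in the transferable scheme these projections, rather than the raw features, would be the inputs to the variational encoder. The `features` array is a synthetic placeholder.
```python
# Schematic of a standard tICA pre-transform: compute the leading
# time-structure based independent components of a feature trajectory;
# these components would then be fed to the variational encoder in place
# of the raw features.
import numpy as np
from scipy.linalg import eigh

def tica_transform(features, lag=10, n_components=2):
    """Project an (n_frames, n_features) trajectory onto its slowest linear components."""
    x = features - features.mean(axis=0)
    x0, xt = x[:-lag], x[lag:]
    c0 = (x0.T @ x0 + xt.T @ xt) / (2 * len(x0))      # instantaneous covariance
    ct = (x0.T @ xt + xt.T @ x0) / (2 * len(x0))      # symmetrized lagged covariance
    c0 += 1e-8 * np.eye(c0.shape[0])                  # regularize for stability
    eigvals, eigvecs = eigh(ct, c0)                   # generalized eigenvalue problem
    order = np.argsort(eigvals)[::-1][:n_components]  # slowest components first
    return x @ eigvecs[:, order], eigvals[order]

# Synthetic demonstration: a slow coordinate hidden among fast noise.
rng = np.random.default_rng(2)
slow = np.cumsum(rng.normal(size=5000)) * 0.01        # slowly varying signal
features = np.column_stack([slow + rng.normal(0, 0.05, 5000),
                            rng.normal(0, 1.0, (5000, 4))])
tics, leading_eigvals = tica_transform(features, lag=50, n_components=2)
print("tIC shape:", tics.shape, "| leading eigenvalues:", np.round(leading_eigvals, 3))
# `tics` (rather than `features`) would be the encoder input in the
# transferable variational-dynamics-encoder scheme described above.
```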
Enhanced sampling of multidimensional free-energy landscapes using adaptive biasing forces
We propose an adaptive biasing algorithm aimed at enhancing the sampling of
multimodal measures by Langevin dynamics. The underlying idea consists in
generalizing the standard adaptive biasing force method commonly used in
conjunction with molecular dynamics to handle in a more effective fashion
multidimensional reaction coordinates. The proposed approach is anticipated to
be particularly useful for reaction coordinates, the components of which are
weakly coupled, as illuminated in a mathematical analysis of the long-time
convergence of the algorithm. The strengths as well as the intrinsic limitations
of the method are discussed and illustrated in two realistic test cases.
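For reference, the mechanism being generalized, the standard adaptive biasing force construction along a reaction coordinate xi, can be summarized as below; the notation is conventional rather than taken from this paper, and geometric correction terms in the instantaneous force are omitted.
```latex
% Free energy along the reaction coordinate xi and its gradient (the
% negative of the conditional mean force; geometric/Jacobian corrections
% to the instantaneous force are omitted here for brevity):
A(\xi) \;=\; -\,k_{B}T \,\ln \int e^{-\beta U(x)}\,
             \delta\bigl(\xi(x) - \xi\bigr)\, dx ,
\qquad
\nabla_{\xi} A(\xi) \;=\; -\,\bigl\langle F_{\xi} \bigr\rangle_{\xi} .

% ABF applies, as a bias, the running estimate of this free-energy gradient,
% progressively cancelling the mean force so that motion along xi becomes
% nearly diffusive on a flat effective landscape:
F^{\mathrm{ABF}}_{t}(\xi) \;=\; \widehat{\nabla_{\xi} A}_{t}(\xi)
\;\xrightarrow[\;t \to \infty\;]{}\; \nabla_{\xi} A(\xi) .
```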
