Local density of states on a vibrational quantum dot out of equilibrium
We calculate the nonequilibrium local density of states on a vibrational
quantum dot coupled to two electrodes at T=0 using a numerically exact
diagrammatic Monte Carlo method. Our focus is on the interplay between the
electron-phonon interaction strength and the bias voltage. We find that the
spectral density exhibits a significant voltage dependence if the voltage
window includes one or more phonon sidebands. A comparison with
well-established approximate approaches indicates that this effect could be
attributed to the nonequilibrium distribution of the phonons. Moreover, we
discuss the long transient dynamics caused by the electron-phonon coupling.
Comment: 9 pages, 11 figures
Predicting Relative Binding Affinity Using Nonequilibrium QM/MM Simulations
Calculating binding free energies with quantum-mechanical (QM) methods is notoriously time-consuming. In this work, we studied whether such calculations can be accelerated by using nonequilibrium (NE) molecular dynamics simulations employing Jarzynski's equality. We studied the binding of nine cyclic carboxylate ligands to the octa-acid deep-cavity host from the SAMPL4 challenge with the reference potential approach. The binding free energies were first calculated at the molecular mechanics (MM) level with free energy perturbation using the generalized Amber force field with restrained electrostatic potential charges for the host and the ligands. Then the free energy corrections for going from the MM Hamiltonian to a hybrid QM/MM Hamiltonian were estimated by averaging over many short NE molecular dynamics simulations. In the QM/MM calculations, the ligand was described at the semiempirical PM6-DH+ level. We show that this approach yields MM → QM/MM free energy corrections that agree with those from other approaches within statistical uncertainties. The desired precision can be obtained by running a proper number of independent NE simulations. For the systems studied in this work, a total simulation length of 20 ps was appropriate for most of the ligands, and 36−324 simulations were necessary in order to reach a precision of 0.3 kJ/mol.
Calculation of absolute free energy of binding for theophylline and its analogs to RNA aptamer using nonequilibrium work values
The massively parallel computation of absolute binding free energy with a
well-equilibrated system (MP-CAFEE) has been developed [H. Fujitani, Y. Tanida,
M. Ito, G. Jayachandran, C. D. Snow, M. R. Shirts, E. J. Sorin, and V. S.
Pande, J. Chem. Phys., 084108 (2005)]. As an application, we
perform the binding affinity calculations of six theophylline-related ligands
with an RNA aptamer. Our method is designed to exploit many compute
nodes to accelerate the simulations, so a parallel computing system was also
developed. To further reduce the computational cost, adequate non-uniform
intervals of the coupling constant λ, which connects the two equilibrium
states (bound and unbound), are determined. The absolute binding energies thus
obtained show an effectively linear relation between the computed and
experimental values. Compared with two other methods, thermodynamic
integration (TI) and molecular mechanics Poisson-Boltzmann surface area
(MM-PBSA), as reported by Gouda et al. [H. Gouda, I. D. Kuntz, D. A. Case, and
P. A. Kollman, Biopolymers, 16 (2003)], the predictive accuracy of the
relative values is comparable to that of TI: the correlation coefficients (R)
obtained are 0.99 (this work), 0.97 (TI), and 0.78 (MM-PBSA). For the absolute
binding energies, meanwhile, a constant shift of -7 kcal/mol relative to the
experimental values is evident. To explain this discrepancy, several possible
causes are investigated.
Comment: 23 pages including 6 figures
Multidimensional integration through Markovian sampling under steered function morphing: a physical guise from statistical mechanics
We present a computational strategy for the evaluation of multidimensional
integrals on hyper-rectangles based on Markovian stochastic exploration of the
integration domain while the integrand is morphed starting from an appropriate
initial profile. Thanks to an abstract reformulation of Jarzynski's
equality applied in stochastic thermodynamics to evaluate the free-energy
profiles along selected reaction coordinates via non-equilibrium
transformations, it is possible to cast the original integral as an
exponential average over the distribution of the pseudo-work (which we may
term "computational work") performed during the function morphing, an average
that is straightforward to evaluate. Several tests illustrate the basic implementation of
the idea, and show its performance in terms of computational time, accuracy and
precision. The formulation for integrand functions with zeros and possible sign
changes is also presented. We stress that our usage of Jarzynski's
equality shares similarities with a practice already known in statistics as
Annealed Importance Sampling (AIS), when applied to computation of the
normalizing constants of distributions. In a sense, here we dress the AIS with
its "physical" counterpart borrowed from statistical mechanics.
Comment: 3 figures; Supplementary Material (pdf file named "JEMDI_SI.pdf")
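Under the reformulation described above, the integral of interest becomes the normalizing constant of a target profile reached by gradual morphing from a normalized starting profile. A minimal one-dimensional sketch in the AIS style (the geometric bridge, the Gaussian starting profile, and all parameters are illustrative assumptions, not the paper's setup): estimate ∫exp(−x²/2)dx = √(2π) by morphing from an N(0, 3²) density.

```python
import math
import random

def log_f(x):
    """Unnormalized target profile exp(-x^2/2); its integral is sqrt(2*pi)."""
    return -0.5 * x * x

def log_g(x):
    """Normalized starting profile: density of N(0, 3^2)."""
    return -0.5 * (x / 3.0) ** 2 - math.log(3.0 * math.sqrt(2.0 * math.pi))

def ais_log_integral(n_walkers=2000, n_steps=50, step=1.0):
    """Estimate log of the integral of f by steered morphing from g to f:
    accumulate the pseudo-work along a geometric bridge, mixing with
    Metropolis moves, then exponentially average (Jarzynski/AIS identity)."""
    def log_p(x, b):  # bridge profile g^(1-b) * f^b
        return (1.0 - b) * log_g(x) + b * log_f(x)
    log_ws = []
    for _ in range(n_walkers):
        x = random.gauss(0.0, 3.0)  # exact draw from the starting profile g
        lw = 0.0
        for k in range(1, n_steps + 1):
            b0, b1 = (k - 1) / n_steps, k / n_steps
            lw += log_p(x, b1) - log_p(x, b0)  # pseudo-work increment
            y = x + random.uniform(-step, step)  # Metropolis move at level b1
            if math.log(random.random() + 1e-300) < log_p(y, b1) - log_p(x, b1):
                x = y
        log_ws.append(lw)
    m = max(log_ws)  # log-sum-exp for the exponential average
    return m + math.log(sum(math.exp(w - m) for w in log_ws) / n_walkers)
```

Even with imperfect mixing the exponential average is unbiased; poor mixing only inflates the variance of the pseudo-work distribution.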
Using nonequilibrium fluctuation theorems to understand and correct errors in equilibrium and nonequilibrium discrete Langevin dynamics simulations
Common algorithms for computationally simulating Langevin dynamics must
discretize the stochastic differential equations of motion. These resulting
finite time step integrators necessarily have several practical issues in
common: Microscopic reversibility is violated, the sampled stationary
distribution differs from the desired equilibrium distribution, and the work
accumulated in nonequilibrium simulations is not directly usable in estimators
based on nonequilibrium work theorems. Here, we show that even with a
time-independent Hamiltonian, finite time step Langevin integrators can be
thought of as a driven, nonequilibrium physical process. Once an appropriate
work-like quantity is defined -- here called the shadow work -- recently
developed nonequilibrium fluctuation theorems can be used to measure or correct
for the errors introduced by the use of finite time steps. In particular, we
demonstrate that amending estimators based on nonequilibrium work theorems to
include this shadow work removes the time step dependent error from estimates
of free energies. We also quantify, for the first time, the magnitude of
deviations between the sampled stationary distribution and the desired
equilibrium distribution for equilibrium Langevin simulations of solvated
systems of varying size. While these deviations can be large, they can be
eliminated altogether by Metropolization or greatly diminished by small
reductions in the time step. Through this connection with driven processes,
further developments in nonequilibrium fluctuation theorems can provide
additional analytical tools for dealing with errors in finite time step
integrators.
Comment: 11 pages, 4 figures
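One of the effects described above, a sampled stationary distribution that differs from the target but is removable by Metropolization, can be seen in a toy setting. A minimal sketch (illustrative, not the authors' integrators) comparing an unadjusted Euler-Maruyama overdamped Langevin step with its Metropolized (MALA) counterpart for V(x) = x²/2 at kT = 1, where the exact stationary variance is 1:

```python
import math
import random

def ula_step(x, dt, rng):
    """Unadjusted Euler-Maruyama step for dx = -x dt + sqrt(2) dW;
    its stationary variance is 1/(1 - dt/2), not the exact value 1."""
    return x - dt * x + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)

def mala_step(x, dt, rng):
    """Same proposal plus a Metropolis test, which restores the exact
    stationary distribution proportional to exp(-x^2/2)."""
    y = x - dt * x + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)
    def log_q(b, a):  # log density of proposing b from a
        return -((b - a + dt * a) ** 2) / (4.0 * dt)
    log_acc = (-0.5 * y * y + log_q(x, y)) - (-0.5 * x * x + log_q(y, x))
    if math.log(rng.random() + 1e-300) < log_acc:
        return y
    return x

def stationary_variance(stepper, dt, n=200000, burn=2000, seed=2):
    """Long-run sample variance of the chain generated by one stepper."""
    rng = random.Random(seed)
    x, acc = 0.0, []
    for i in range(n + burn):
        x = stepper(x, dt, rng)
        if i >= burn:
            acc.append(x)
    mean = sum(acc) / len(acc)
    return sum((v - mean) ** 2 for v in acc) / len(acc)
```

At dt = 0.5 the unadjusted variance is about 1/(1 - 0.25) ≈ 1.33, a large time-step artifact, while the Metropolized chain recovers a value near 1, mirroring the paper's observation that Metropolization eliminates the discretization error and small time-step reductions diminish it.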
Simulating rare events using a Weighted Ensemble-based string method
We introduce an extension to the Weighted Ensemble (WE) path sampling method
to restrict sampling to a one dimensional path through a high dimensional phase
space. Our method, which is based on the finite-temperature string method,
permits efficient sampling of both equilibrium and non-equilibrium systems.
Sampling obtained from the WE method guides the adaptive refinement of a
Voronoi tessellation of order parameter space, whose generating points, upon
convergence, coincide with the principal reaction pathway. We demonstrate the
application of this method to several simple, two-dimensional models of driven
Brownian motion and to the conformational change of the nitrogen regulatory
protein C receiver domain using an elastic network model. The simplicity of the
two-dimensional models allows us to directly compare the efficiency of the WE
method to conventional brute force simulations and other path sampling
algorithms, while the example of protein conformational change demonstrates how
the method can be used to efficiently study transitions in the space of many
collective variables.
Convergence of large deviation estimators
We study the convergence of statistical estimators used in the estimation of
large deviation functions describing the fluctuations of equilibrium,
nonequilibrium, and manmade stochastic systems. We give conditions for the
convergence of these estimators with sample size, based on the boundedness or
unboundedness of the quantity sampled, and discuss how statistical errors
should be defined in different parts of the convergence region. Our results
shed light on previous reports of 'phase transitions' in the statistics of free
energy estimators and establish a general framework for reliably estimating
large deviation functions from simulation and experimental data and identifying
parameter regions where this estimation converges.
Comment: 13 pages, 6 figures. v2: corrections focusing the paper on large
deviations; v3: minor corrections, close to published version
Measuring the convergence of Monte Carlo free energy calculations
The nonequilibrium work fluctuation theorem provides a way to calculate
(equilibrium) free energies from work measurements of nonequilibrium,
finite-time processes and their reversed counterparts by applying Bennett's
acceptance ratio method. A nice property of this method is that each free
energy estimate readily yields an estimate of the asymptotic mean square error.
Assuming convergence, it is easy to specify the uncertainty of the results.
However, sample sizes often have to be balanced against experimental or
computational limitations and the question arises whether available samples of
work values are sufficiently large in order to ensure convergence. Here, we
propose a convergence measure for the two-sided free energy estimator and
characterize some of its properties, explain how it works, and test its
statistical behavior. Overall, we derive a convergence criterion for Bennett's
acceptance ratio method.
Comment: 14 pages, 17 figures
Comparison of Equilibrium and Nonequilibrium Approaches for Relative Binding Free Energy Predictions
Alchemical relative binding free energy calculations have recently found important applications in drug optimization. A series of congeneric compounds are generated from a preidentified lead compound, and their relative binding affinities to a protein are assessed in order to optimize candidate drugs. While methods based on equilibrium thermodynamics have been extensively studied, an approach based on nonequilibrium methods has recently been reported together with claims of its superiority. However, these claims pay insufficient attention to the basis and reliability of both methods. Here we report a comparative study of the two approaches across a large data set, comprising more than 500 ligand transformations spanning in excess of 300 ligands binding to a set of 14 diverse protein targets. Ensemble methods are essential to quantify the uncertainty in these calculations, not only for the reasons already established in the equilibrium approach but also to ensure that the nonequilibrium calculations reside within their domain of validity. If and only if ensemble methods are applied, we find that the nonequilibrium method can achieve accuracy and precision comparable to those of the equilibrium approach. Compared to the equilibrium method, the nonequilibrium approach can reduce computational costs but introduces higher computational complexity and longer wall clock times. There are, however, cases where the standard length of a nonequilibrium transition is not sufficient, necessitating a complete rerun of the entire set of transitions. This significantly increases the computational cost and proves to be highly inconvenient during large-scale applications. Our findings provide a key set of recommendations that should be adopted for the reliable implementation of nonequilibrium approaches to relative binding free energy calculations in ligand-protein systems