Non-reversible Parallel Tempering for Deep Posterior Approximation
Parallel tempering (PT), also known as replica exchange, is the go-to
workhorse for simulations of multi-modal distributions. The key to the success
of PT is to adopt efficient swap schemes. The popular deterministic even-odd
(DEO) scheme exploits the non-reversibility property and has successfully
reduced the communication cost from O(P^2) to O(P) given sufficiently many
chains. However, this advantage largely disappears in big data due to the
limited number of chains and the few bias-corrected swaps. To handle this issue, we
generalize the DEO scheme to promote non-reversibility and propose a few
solutions to tackle the underlying bias caused by the geometric stopping time.
Notably, in big data scenarios, we obtain an appealing communication cost of
O(P log P) based on the optimal window size. In addition, we adopt
stochastic gradient descent (SGD) with large and constant learning rates as
exploration kernels. This user-friendly design enables us to conduct
approximation tasks for complex posteriors without much tuning cost.
Comment: Accepted by AAAI 202
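The DEO swap scheme mentioned above can be illustrated with a minimal sketch: it alternates proposed swaps between even-indexed and odd-indexed adjacent chain pairs, which makes temperature indices flow non-reversibly. This is a generic toy implementation on a 1-D bimodal target, not the paper's algorithm; the target, step size, and temperature ladder are illustrative assumptions.

```python
import numpy as np

def deo_parallel_tempering(log_prob, betas, n_iters, step=0.5, seed=0):
    """Toy parallel tempering with the deterministic even-odd (DEO) swap
    scheme. Chain k targets log_prob scaled by inverse temperature betas[k];
    swaps alternate between even and odd adjacent pairs each iteration."""
    rng = np.random.default_rng(seed)
    P = len(betas)
    x = rng.standard_normal(P)  # one scalar state per chain
    for t in range(n_iters):
        # Exploration: one random-walk Metropolis step per chain.
        for k in range(P):
            prop = x[k] + step * rng.standard_normal()
            if np.log(rng.random()) < betas[k] * (log_prob(prop) - log_prob(x[k])):
                x[k] = prop
        # DEO swaps: even pairs on even iterations, odd pairs on odd ones.
        for k in range(t % 2, P - 1, 2):
            log_alpha = (betas[k] - betas[k + 1]) * (log_prob(x[k + 1]) - log_prob(x[k]))
            if np.log(rng.random()) < log_alpha:
                x[k], x[k + 1] = x[k + 1], x[k]
    return x

# Bimodal target: mixture of two unit Gaussians centered at -4 and +4.
log_prob = lambda v: np.logaddexp(-0.5 * (v - 4) ** 2, -0.5 * (v + 4) ** 2)
samples = deo_parallel_tempering(log_prob, betas=[1.0, 0.5, 0.25, 0.1], n_iters=2000)
```

The alternating even/odd schedule is what distinguishes DEO from reversible schemes that pick a random pair each round: successful swaps push a replica in the same direction on consecutive rounds instead of undoing each other.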
Molecular Dynamics Simulation
Condensed matter systems, ranging from simple fluids and solids to complex multicomponent materials and even biological matter, are governed by well understood laws of physics, within the formal theoretical framework of quantum theory and statistical mechanics. On the relevant scales of length and time, the appropriate "first-principles" description needs only the Schroedinger equation together with Gibbs averaging over the relevant statistical ensemble. However, this program cannot be carried out straightforwardly: dealing with electron correlations is still a challenge for the methods of quantum chemistry. Similarly, standard statistical mechanics makes precise explicit statements only on the properties of systems for which the many-body problem can be effectively reduced to one of independent particles or quasi-particles. [...]
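The workhorse propagator in classical molecular dynamics is velocity-Verlet integration of Newton's equations. As a minimal illustration (not from the text above; the harmonic force and parameters are assumptions for the example), a single particle in a harmonic well returns to its starting point after one period:

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, dt, n_steps, mass=1.0):
    """Velocity-Verlet integration: half-kick, drift, recompute forces,
    half-kick. Time-reversible and symplectic, hence the standard choice
    for molecular dynamics."""
    f = force_fn(pos)
    for _ in range(n_steps):
        vel = vel + 0.5 * dt * f / mass   # first half-kick
        pos = pos + dt * vel              # drift
        f = force_fn(pos)                 # forces at the new positions
        vel = vel + 0.5 * dt * f / mass   # second half-kick
    return pos, vel

# Harmonic oscillator with unit spring constant: force = -x, period 2*pi.
pos, vel = velocity_verlet(np.array([1.0]), np.array([0.0]),
                           lambda x: -x, dt=0.01, n_steps=628)
# After ~one period (628 * 0.01 ~ 2*pi) the trajectory returns near x = 1.
```

In a real simulation `force_fn` would evaluate an interatomic potential over all particle pairs; the integrator itself is unchanged.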
Approximating matrix eigenvalues by subspace iteration with repeated random sparsification
Traditional numerical methods for calculating matrix eigenvalues are
prohibitively expensive for high-dimensional problems. Iterative random
sparsification methods allow for the estimation of a single dominant eigenvalue
at reduced cost by leveraging repeated random sampling and averaging. We
present a general approach to extending such methods for the estimation of
multiple eigenvalues and demonstrate its performance for several benchmark
problems in quantum chemistry.
Comment: 31 pages, 7 figures
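The general idea of combining subspace iteration with unbiased random sparsification can be sketched as follows. This is a generic toy, not the authors' method: the compression rule (keep entry i with probability proportional to its magnitude, reweighting kept entries so the estimate stays unbiased), the test matrix, and all parameters are assumptions for illustration.

```python
import numpy as np

def sparsify(v, budget, rng):
    """Unbiased random sparsification: entry i survives with probability
    p_i = min(1, budget * |v_i| / ||v||_1) and is rescaled by 1/p_i, so the
    result equals v in expectation but has ~budget nonzeros."""
    p = np.minimum(1.0, budget * np.abs(v) / np.sum(np.abs(v)))
    keep = rng.random(v.shape) < p
    out = np.zeros_like(v)
    out[keep] = v[keep] / p[keep]
    return out

def sparse_subspace_iteration(A, k, budget, n_iters, n_burn, seed=0):
    """Estimate the k dominant eigenvalues of symmetric A by subspace
    iteration with per-step sparsification, averaging Rayleigh-Ritz
    values after a burn-in period to damp the sampling noise."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))
    estimates = []
    for t in range(n_iters):
        # Apply A to a sparsified copy of each basis vector, then re-orthogonalize.
        W = np.column_stack([A @ sparsify(V[:, j], budget, rng) for j in range(k)])
        V, _ = np.linalg.qr(W)
        if t >= n_burn:
            estimates.append(np.linalg.eigvalsh(V.T @ A @ V))  # Rayleigh-Ritz
    return np.mean(estimates, axis=0)  # ascending order

# Diagonal test matrix with known spectrum 1..20; top three are 18, 19, 20.
A = np.diag(np.arange(1.0, 21.0))
est = sparse_subspace_iteration(A, k=3, budget=40, n_iters=300, n_burn=100)
```

Averaging over iterations is what recovers accuracy at reduced per-step cost: each sparsified iterate is noisy, but the noise is zero-mean by construction of `sparsify`.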
Manifold Learning in Atomistic Simulations: A Conceptual Review
Analyzing large volumes of high-dimensional data requires dimensionality
reduction: finding meaningful low-dimensional structures hidden in their
high-dimensional observations. Such practice is needed in atomistic simulations
of complex systems where even thousands of degrees of freedom are sampled. An
abundance of such data makes gaining insight into a specific physical problem
strenuous. Our primary aim in this review is to focus on unsupervised machine
learning methods that can be used on simulation data to find a low-dimensional
manifold providing a collective and informative characterization of the studied
process. Such manifolds can be used for sampling long-timescale processes and
free-energy estimation. We describe methods that can work on datasets from
standard and enhanced sampling atomistic simulations. Unlike recent reviews on
manifold learning for atomistic simulations, we consider only methods that
construct low-dimensional manifolds based on Markov transition probabilities
between high-dimensional samples. We discuss these techniques from a conceptual
point of view, including their underlying theoretical frameworks and possible
limitations.
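A concrete instance of building a low-dimensional manifold from Markov transition probabilities between high-dimensional samples is the diffusion-map construction: form a row-stochastic transition matrix from kernel affinities and embed each sample using the leading non-trivial eigenvectors. The sketch below is a generic minimal version (the noisy-circle data, kernel bandwidth, and function names are assumptions for illustration), not any specific method from the review.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Diffusion-map embedding: Gaussian kernel affinities -> row-stochastic
    Markov transition matrix -> leading non-trivial eigenvectors, scaled by
    their eigenvalues, as low-dimensional coordinates."""
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise sq. distances
    K = np.exp(-D2 / eps)                   # kernel affinities
    P = K / K.sum(axis=1, keepdims=True)    # transition probabilities (rows sum to 1)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:1 + n_coords]             # skip the trivial constant eigenvector
    return vecs.real[:, idx] * vals.real[idx]

# Noisy circle embedded in 3-D: one intrinsic (angular) degree of freedom.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta),
                     0.05 * rng.standard_normal(200)])
Y = diffusion_map(X, eps=0.5)  # 2-D coordinates recovering the circle
```

In the atomistic setting described above, the rows of `X` would be simulation frames (or features computed from them) rather than synthetic points, and the transition matrix may come from estimated kinetics rather than a distance kernel; the eigendecomposition step is the same.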