Novel Monte Carlo Methods for Large-Scale Linear Algebra Operations
Linear algebra operations play a central role in scientific computing and data analysis. With the increasing volume and complexity of data in the Big Data era, linear algebra operations are essential tools for processing massive datasets. On one hand, the advent of modern high-performance computing architectures with ever-increasing computing power has greatly enhanced our capability to deal with large volumes of data. On the other hand, many classical, deterministic numerical linear algebra algorithms have difficulty scaling to handle large datasets.
Monte Carlo methods, which are based on statistical sampling, exhibit many attractive properties when dealing with large volumes of data, including fast approximate results, memory efficiency, reduced data access, natural parallelism, and inherent fault tolerance. In this dissertation, we present new Monte Carlo methods for a set of fundamental and ubiquitous large-scale linear algebra operations, including solving large-scale linear systems, constructing low-rank matrix approximations, and approximating extreme eigenvalues/eigenvectors, across modern distributed and parallel computing architectures. First, we revisit the classical Ulam-von Neumann Monte Carlo algorithm and derive the necessary and sufficient condition for its convergence. To support a broader family of linear systems, we develop Krylov subspace Monte Carlo solvers that go beyond the use of the Neumann series. New algorithms used in the Krylov subspace Monte Carlo solvers include (1) a Breakdown-Free Block Conjugate Gradient algorithm to address the potential rank deficiency problem that occurs in block Krylov subspace methods; (2) a Block Conjugate Gradient for Least Squares (BCGLS) algorithm to stably approximate the least squares solutions of general linear systems; (3) a BCGLS algorithm with deflation to accelerate convergence; and (4) a Monte Carlo Generalized Minimal Residual algorithm based on sampling matrix-vector products to provide fast approximations of solutions. Secondly, we design a rank-revealing randomized Singular Value Decomposition (R3SVD) algorithm for adaptively constructing low-rank matrix approximations that satisfy application-specific accuracy requirements. Thirdly, we study the block power method on Markov Chain Monte Carlo transition matrices and find that its convergence actually depends on the number of independent vectors in the block.
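The classical Ulam-von Neumann idea can be sketched as follows. This is a minimal illustration, not the dissertation's implementation, and it assumes the common choice of transition probabilities p_ij = |h_ij| with termination probability 1 - Σ_j |h_ij| per state; the abstract's necessary-and-sufficient condition characterizes exactly when such random-walk estimators converge.

```python
import numpy as np

def ulam_von_neumann(H, b, n_walks=20000, seed=0):
    """Estimate the solution of x = H x + b by averaging random-walk estimators.

    Transition probabilities are p_ij = |h_ij|, so each row must satisfy
    sum_j |h_ij| < 1; the walk terminates with the leftover probability.
    """
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    cum = np.cumsum(np.abs(H), axis=1)   # cumulative transition probabilities per row
    row_sum = cum[:, -1]                 # termination prob at state i is 1 - row_sum[i]
    assert np.all(row_sum < 1.0), "this transition choice needs sum_j |h_ij| < 1"
    x = np.zeros(n)
    for i0 in range(n):
        total = 0.0
        for _ in range(n_walks):
            i, w, est = i0, 1.0, b[i0]
            while True:
                u = rng.random()
                if u >= row_sum[i]:      # walk absorbed, estimator complete
                    break
                j = int(np.searchsorted(cum[i], u, side="right"))
                w *= np.sign(H[i, j])    # weight update h_ij / p_ij = sign(h_ij)
                i = j
                est += w * b[i]          # collision estimator accumulates W_k * b_{i_k}
            total += est
        x[i0] = total / n_walks
    return x
```

Each walk's estimator has expectation equal to the Neumann series (I + H + H^2 + ...) b evaluated at the starting index, which is why convergence of the method hinges on properties of H beyond mere convergence of the series.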
Correspondingly, we develop a sliding window power method to find the stationary distribution, which has demonstrated success in modeling stochastic luminal calcium release sites. Fourthly, we take advantage of hybrid CPU-GPU computing platforms to accelerate the Breakdown-Free Block Conjugate Gradient algorithm and the randomized Singular Value Decomposition algorithm. Finally, we design a Gaussian variant of Freivalds' algorithm to efficiently verify the correctness of matrix-matrix multiplication while avoiding undetectable fault patterns encountered in deterministic verification algorithms.
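The verification idea in the last step can be illustrated with a short sketch (a generic Gaussian Freivalds-style check, not necessarily the dissertation's exact formulation): instead of recomputing A·B in O(n^3) time, test whether A(Bx) = Cx for random Gaussian vectors x at O(n^2) cost per trial.

```python
import numpy as np

def verify_matmul_gaussian(A, B, C, n_trials=3, rtol=1e-8, seed=0):
    """Freivalds-style check of C == A @ B using Gaussian test vectors.

    For a Gaussian x, (C - A @ B) @ x is nonzero with probability 1 whenever
    C != A @ B, which avoids fault patterns that fixed {0,1} test vectors can
    fail to detect. Each trial costs O(n^2) instead of the O(n^3) of
    recomputing the product.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        x = rng.standard_normal(C.shape[1])
        residual = A @ (B @ x) - C @ x
        scale = max(1.0, np.linalg.norm(C @ x))
        if np.linalg.norm(residual) > rtol * scale:
            return False                 # a fault was detected
    return True                          # consistent with C == A @ B
```

The relative tolerance absorbs ordinary floating-point rounding in a correct product while still flagging even a single corrupted entry, whose contribution to the residual is on the order of the fault magnitude times a standard normal variate.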
The PHMC algorithm for simulations of dynamical fermions: I -- description and properties
We give a detailed description of the so-called Polynomial Hybrid Monte Carlo
(PHMC) algorithm. The effects of the correction factor, which is introduced to
render the algorithm exact, are discussed, stressing their relevance for the
statistical fluctuations and (almost) zero mode contributions to physical
observables. We also investigate rounding-error effects and propose several
ways to reduce memory requirements.
Comment: LaTeX2e file, 4 figures, 49 pages
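The role of the correction factor can be illustrated schematically: observables measured in the polynomial-approximated ensemble are reweighted so that expectation values become exact. A minimal sketch of generic reweighting, with hypothetical sample arrays, not the paper's implementation:

```python
import numpy as np

def reweighted_mean(obs, corr):
    """Exact expectation from an approximate ensemble via reweighting:
    <O>_exact = <O W>_approx / <W>_approx, where W is the correction factor
    compensating for the polynomial approximation of the fermion action.
    Fluctuations of W directly inflate the statistical error of the estimate.
    """
    obs = np.asarray(obs, dtype=float)
    corr = np.asarray(corr, dtype=float)
    return float(np.sum(obs * corr) / np.sum(corr))
```

When every correction factor equals 1 the estimator reduces to the plain sample mean; configurations where W fluctuates strongly (e.g. near zero modes) dominate the error, which is the sensitivity the abstract stresses.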
Eigenvalue Bounds on Restrictions of Reversible Nearly Uncoupled Markov Chains
In this paper we analyze decompositions of reversible nearly uncoupled Markov chains into rapidly mixing subchains. We state upper bounds on the second eigenvalue for restriction and stochastic complementation chains of reversible Markov chains, as well as a relation between them. We illustrate the obtained bounds analytically for bunkbed graphs, and furthermore apply them to restricted Markov chains that arise when analyzing the conformation dynamics of a small biomolecule.
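As a small numerical illustration of the objects involved (our own toy example, not the paper's bounds): for a reversible, nearly uncoupled random walk on a weighted graph, the stochastic complement of one block mixes much faster than the full chain, and this shows up in its second eigenvalue. The graph and coupling strength below are hypothetical.

```python
import numpy as np

# weighted graph: two tightly connected 2-state blocks, weakly coupled by eps
eps = 0.01
W = np.array([[1.0, 1.0, eps, 0.0],
              [1.0, 1.0, 0.0, eps],
              [eps, 0.0, 1.0, 1.0],
              [0.0, eps, 1.0, 1.0]])
P = W / W.sum(axis=1, keepdims=True)   # random walk; reversible w.r.t. degrees

def second_eigenvalue(M):
    """Largest-but-one eigenvalue (real, since the chain is reversible)."""
    ev = np.sort(np.linalg.eigvals(M).real)[::-1]
    return ev[1]

# stochastic complement of the first block: S = P_AA + P_AB (I - P_BB)^{-1} P_BA
A, B = [0, 1], [2, 3]
P_AA, P_AB = P[np.ix_(A, A)], P[np.ix_(A, B)]
P_BA, P_BB = P[np.ix_(B, A)], P[np.ix_(B, B)]
S = P_AA + P_AB @ np.linalg.inv(np.eye(2) - P_BB) @ P_BA

# the full chain has a slow metastable mode (eigenvalue near 1);
# the complement chain of a single block does not
lam2_full, lam2_block = second_eigenvalue(P), second_eigenvalue(S)
```

Here lam2_full is close to 1 (the slow switching between blocks), while lam2_block is close to 0, matching the picture of rapidly mixing subchains that the bounds formalize.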
A weak characterization of slow variables in stochastic dynamical systems
We present a novel characterization of slow variables for continuous Markov
processes that provably preserve the slow timescales. These slow variables are
known as reaction coordinates in molecular dynamical applications, where they
play a key role in system analysis and coarse graining. The defining
characteristic of these slow variables is that they parametrize a so-called
transition manifold, a low-dimensional manifold in a certain density function
space that emerges with progressive equilibration of the system's fast
variables. The existence of said manifold was previously predicted for certain
classes of metastable and slow-fast systems. However, in the original work, the
existence of the manifold hinges on the pointwise convergence of the system's
transition density functions towards it. We show in this work that a
convergence in average with respect to the system's stationary measure is
sufficient to yield reaction coordinates with the same key qualities. This
allows one to accurately predict the timescale preservation in systems where
the old theory is not applicable or would give overly pessimistic results.
Moreover, the new characterization is still constructive, in that it allows for
the algorithmic identification of a good slow variable. The improved
characterization, the error prediction and the variable construction are
demonstrated on a small metastable system.
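A toy linear example (our own illustration, not the paper's system) shows the sense in which a good slow variable preserves the slow timescale: for a linear drift with one slow and one fast eigenvalue, projecting onto the slow left eigenvector yields a one-dimensional dynamic with exactly the slow rate.

```python
import numpy as np

# slow-fast linear drift: eigenvalues -0.1 (timescale 10) and -10.0 (timescale
# 0.1), mixed into the observed coordinates by an invertible change of basis T
T = np.array([[1.0, 1.0],
              [0.5, -1.0]])
A = T @ np.diag([-0.1, -10.0]) @ np.linalg.inv(T)

# the slow variable xi(x) = w @ x uses the slow *left* eigenvector w of A,
# i.e. the row of T^{-1} paired with the eigenvalue -0.1
w = np.linalg.inv(T)[0]

# along trajectories of dx/dt = A x, d(xi)/dt = w @ A @ x = -0.1 * xi(x),
# so the projected dynamic retains the slow rate exactly
assert np.allclose(w @ A, -0.1 * w)

# implied timescales of the full system, slowest first
timescales = np.sort(-1.0 / np.linalg.eigvals(A).real)[::-1]
```

In the nonlinear, stochastic setting of the paper no exact eigenvector projection exists; the transition-manifold characterization is what guarantees that an analogous (approximate) timescale preservation still holds for the constructed reaction coordinate.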