Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond
In this and a set of companion whitepapers, the USQCD Collaboration lays out
a program of science and computing for lattice gauge theory. These whitepapers
describe how calculation using lattice QCD (and other gauge theories) can aid
the interpretation of ongoing and upcoming experiments in particle and nuclear
physics, as well as inspire new ones. Comment: 44 pages; one of the USQCD whitepapers.
Data Assimilation using a GPU Accelerated Path Integral Monte Carlo Approach
The answers to data assimilation questions can be expressed as path integrals
over all possible state and parameter histories. We show how these path
integrals can be evaluated numerically using a Markov Chain Monte Carlo method
designed to run in parallel on a Graphics Processing Unit (GPU). We demonstrate
the application of the method to an example with a transmembrane voltage time
series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron
model. By taking advantage of GPU computing, we gain a parallel speedup factor
of up to about 300, compared to an equivalent serial computation on a CPU, with
performance increasing as the length of the observation time used for data
assimilation increases. Comment: 5 figures; submitted to the Journal of Computational Physics.
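The many-chain parallelism described above can be sketched in plain NumPy, where vectorizing the proposal and accept/reject steps across chains stands in for the GPU threads. This is an illustrative toy (the function name, step size, and the one-dimensional Gaussian target are assumptions, not the paper's code):

```python
import numpy as np

def parallel_metropolis(log_prob, x0, n_steps, step_size=0.5, seed=0):
    """Run many independent Metropolis chains in lockstep.

    x0 has shape (n_chains,); every chain proposes and accepts/rejects
    in one vectorized operation, mirroring one GPU thread per chain.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    lp = log_prob(x)
    for _ in range(n_steps):
        prop = x + step_size * rng.standard_normal(x.shape)
        lp_prop = log_prob(prop)
        # Metropolis acceptance test, evaluated for all chains at once.
        accept = np.log(rng.random(x.shape)) < lp_prop - lp
        x = np.where(accept, prop, x)
        lp = np.where(accept, lp_prop, lp)
    return x

# Toy target: a standard normal. After burn-in, the pooled samples
# across chains should have mean ~0 and standard deviation ~1.
samples = parallel_metropolis(lambda x: -0.5 * x**2,
                              np.zeros(10_000), n_steps=2_000)
```

On an actual GPU the same structure maps naturally onto CUDA: each chain's state lives in one thread's registers, so the speedup scales with the number of resident chains.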
Novel Monte Carlo Methods for Large-Scale Linear Algebra Operations
Linear algebra operations play an important role in scientific computing and data analysis. With increasing data volume and complexity in the Big Data era, linear algebra operations are important tools for processing massive datasets. On one hand, the advent of modern high-performance computing architectures with ever-increasing computing power has greatly enhanced our capability to deal with large volumes of data. On the other hand, many classical, deterministic numerical linear algebra algorithms have difficulty scaling to handle large datasets.
Monte Carlo methods, which are based on statistical sampling, exhibit many attractive properties in dealing with large datasets, including fast approximate results, memory efficiency, reduced data accesses, natural parallelism, and inherent fault tolerance. In this dissertation, we present new Monte Carlo methods to accommodate a set of fundamental and ubiquitous large-scale linear algebra operations, including solving large-scale linear systems, constructing low-rank matrix approximations, and approximating the extreme eigenvalues/eigenvectors, across modern distributed and parallel computing architectures. First of all, we revisit the classical Ulam-von Neumann Monte Carlo algorithm and derive the necessary and sufficient condition for its convergence. To support a broader family of linear systems, we develop Krylov subspace Monte Carlo solvers that go beyond the use of the Neumann series. New algorithms used in the Krylov subspace Monte Carlo solvers include (1) a Breakdown-Free Block Conjugate Gradient algorithm to address the potential rank deficiency problem that can occur in block Krylov subspace methods; (2) a Block Conjugate Gradient for Least Squares (BCGLS) algorithm to stably approximate the least squares solutions of general linear systems; (3) a BCGLS algorithm with deflation to accelerate convergence; and (4) a Monte Carlo Generalized Minimal Residual algorithm based on sampling matrix-vector products to provide fast approximations of solutions. Secondly, we design a rank-revealing randomized Singular Value Decomposition (R3SVD) algorithm for adaptively constructing low-rank matrix approximations to satisfy application-specific accuracy requirements. Thirdly, we study the block power method on Markov Chain Monte Carlo transition matrices and find that its convergence actually depends on the number of independent vectors in the block.
Correspondingly, we develop a sliding-window power method to find stationary distributions, which has demonstrated success in modeling a stochastic luminal calcium release site. Fourthly, we take advantage of hybrid CPU-GPU computing platforms to accelerate the performance of the Breakdown-Free Block Conjugate Gradient algorithm and the randomized Singular Value Decomposition algorithm. Finally, we design a Gaussian variant of Freivalds’ algorithm to efficiently verify the correctness of matrix-matrix multiplication while avoiding undetectable fault patterns encountered in deterministic algorithms.
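The randomized verification idea in the last step can be illustrated with a minimal sketch. Classical Freivalds' algorithm tests C == A·B with random 0/1 vectors at the cost of a few matrix-vector products; drawing Gaussian vectors instead avoids fault patterns that a fixed discrete distribution can cancel. The function name and tolerance below are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

def gaussian_freivalds(A, B, C, n_trials=5, tol=1e-8, seed=0):
    """Probabilistically verify C == A @ B with Gaussian test vectors.

    Each trial costs three matrix-vector products rather than one
    full matrix-matrix product; a single discrepant trial proves
    C != A @ B, while n_trials clean passes make an error unlikely.
    """
    rng = np.random.default_rng(seed)
    n = B.shape[1]
    for _ in range(n_trials):
        x = rng.standard_normal(n)
        if np.linalg.norm(A @ (B @ x) - C @ x) > tol * np.linalg.norm(x):
            return False          # witnessed a discrepancy
    return True                   # no discrepancy found in any trial

rng_data = np.random.default_rng(1)
A = rng_data.random((50, 40))
B = rng_data.random((40, 30))
C = A @ B
assert gaussian_freivalds(A, B, C)        # correct product passes
C[3, 7] += 1e-3                           # inject a single-entry fault
assert not gaussian_freivalds(A, B, C)    # the fault is detected
```

With a Gaussian x, the residual of a faulty entry is nonzero with probability 1, so even a single corrupted element is caught with overwhelming probability after a handful of trials.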
Variational Quantum Monte Carlo Method with a Neural-Network Ansatz for Open Quantum Systems
The possibility to simulate the properties of many-body open quantum systems
with a large number of degrees of freedom is the premise to the solution of
several outstanding problems in quantum science and quantum information. The
challenge posed by this task lies in the complexity of the density matrix
increasing exponentially with the system size. Here, we develop a variational
method to efficiently simulate the non-equilibrium steady state of Markovian
open quantum systems based on variational Monte Carlo and on a neural network
representation of the density matrix. Thanks to the stochastic reconfiguration
scheme, the application of the variational principle is translated into the
actual integration of the quantum master equation. We test the effectiveness of
the method by modeling the two-dimensional dissipative XYZ spin model on a
lattice.
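As a point of reference for what such a variational ansatz approximates, the non-equilibrium steady state of a very small Markovian open system can be computed exactly by vectorizing the Lindblad master equation and finding the kernel of the Liouvillian. The sketch below does this for a single driven, dissipative qubit; it is a brute-force benchmark under assumed parameter values, not the paper's neural-network method, and it scales exponentially in system size, which is exactly why variational approaches are needed:

```python
import numpy as np

# Pauli and lowering operators for one qubit (basis |g>, |e>).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^-: |e> -> |g>
I2 = np.eye(2, dtype=complex)

def liouvillian(H, L, gamma):
    """Vectorized Lindblad generator, column-stacking convention:
    vec(A rho B) = (B.T kron A) vec(rho)."""
    D = L.conj().T @ L
    return (-1j * (np.kron(I2, H) - np.kron(H.T, I2))
            + gamma * (np.kron(L.conj(), L)
                       - 0.5 * np.kron(I2, D)
                       - 0.5 * np.kron(D.T, I2)))

# Driven, dissipative qubit: H = (Delta/2) sz + (Omega/2) sx, decay via sm.
Lsup = liouvillian(0.5 * 1.0 * sz + 0.5 * 0.8 * sx, sm, gamma=1.0)

# The steady state spans the kernel of the Liouvillian: pick the
# eigenvector with eigenvalue closest to zero, then normalize.
w, v = np.linalg.eig(Lsup)
rho = v[:, np.argmin(np.abs(w))].reshape(2, 2, order="F")
rho = rho / np.trace(rho)               # fix the arbitrary scale/phase
rho = (rho + rho.conj().T) / 2          # clean residual asymmetry
```

The resulting density matrix is Hermitian, trace-one, and annihilated by the Liouvillian; for a 2D spin lattice the density matrix has 4^N entries, so this direct diagonalization is only feasible for a few sites.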