
    Randomly sparsified Richardson iteration is really fast

    Recently, a class of algorithms combining classical fixed point iterations with repeated random sparsification of approximate solution vectors has been successfully applied to eigenproblems with matrices as large as $10^{108} \times 10^{108}$. So far, a complete mathematical explanation for their success has proven elusive. Additionally, the methods have not been extended to linear system solves. In this paper we propose a new scheme based on repeated random sparsification that is capable of solving linear systems in extremely high dimensions. We provide a complete mathematical analysis of this new algorithm. Our analysis establishes a faster-than-Monte Carlo convergence rate and justifies use of the scheme even when the solution vector itself is too large to store. Comment: 27 pages, 2 figures
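    The abstract states the idea but not the scheme itself; as a rough illustration of what a randomly sparsified fixed point iteration can look like, here is a minimal NumPy sketch. The step size `omega`, sparsity budget `m`, multinomial compression, and iterate averaging are illustrative assumptions, not the paper's algorithm, and plain Richardson converges only when the spectral radius of I - omega*A is below one.

    ```python
    import numpy as np

    def sparsify(x, m, rng):
        """Unbiased multinomial sparsification: draw m indices with
        probability proportional to |x_i| and rescale, so E[output] = x."""
        weights = np.abs(x)
        total = weights.sum()
        if total == 0.0:
            return x.copy()
        p = weights / total
        draws = rng.choice(len(x), size=m, replace=True, p=p)
        counts = np.bincount(draws, minlength=len(x))
        out = np.zeros_like(x)
        hit = counts > 0
        out[hit] = x[hit] * counts[hit] / (m * p[hit])
        return out

    def sparse_richardson(A, b, omega=0.5, m=50, n_iter=2000, burn_in=500, seed=0):
        """Richardson iteration x <- x + omega*(b - A@x), sparsifying each
        iterate; the solution is estimated by averaging later iterates."""
        rng = np.random.default_rng(seed)
        x = np.zeros_like(b, dtype=float)
        running = np.zeros_like(x)
        for k in range(n_iter):
            x = sparsify(x + omega * (b - A @ x), m, rng)
            if k >= burn_in:
                running += x
        return running / (n_iter - burn_in)
    ```

    The sparsification keeps each iterate cheap to store while remaining unbiased, and averaging over iterations damps the sampling noise it injects.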

    Rayleigh-Gauss-Newton optimization with enhanced sampling for variational Monte Carlo

    Variational Monte Carlo (VMC) is an approach for computing ground-state wavefunctions that has recently become more powerful due to the introduction of neural network-based wavefunction parametrizations. However, efficiently training neural wavefunctions to converge to an energy minimum remains a difficult problem. In this work, we analyze optimization and sampling methods used in VMC and introduce alterations to improve their performance. First, based on theoretical convergence analysis in a noiseless setting, we motivate a new optimizer that we call the Rayleigh-Gauss-Newton (RGN) method, which can improve upon gradient descent and natural gradient descent to achieve superlinear convergence with little added computational cost. Second, in order to realize this favorable comparison in the presence of stochastic noise, we analyze the effect of sampling error on VMC parameter updates and experimentally demonstrate that it can be reduced by the parallel tempering method. In particular, we demonstrate that RGN can be made robust to energy spikes that occur when new regions of configuration space become available to the sampler over the course of optimization. Finally, putting theory into practice, we apply our enhanced optimization and sampling methods to the transverse-field Ising and XXZ models on large lattices, yielding ground-state energy estimates with remarkably high accuracy after just 200-500 parameter updates. Comment: 12 pages, 7 figures
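    The RGN update itself is not given in the abstract, but parallel tempering is standard and easy to illustrate. The toy sketch below samples a double-well density with replica-exchange moves between temperature levels; the potential, temperature ladder, and step size are illustrative choices, not the paper's setup.

    ```python
    import numpy as np

    def energy(x):
        # Toy double-well potential; its cold density has two well-separated
        # modes, which traps a single-temperature Metropolis sampler.
        return (x**2 - 1.0) ** 2

    def parallel_tempering(betas, n_steps=20000, step=0.3, swap_every=10, seed=0):
        """Metropolis sampling of exp(-beta * E) at several inverse
        temperatures, with replica-exchange moves between adjacent levels."""
        rng = np.random.default_rng(seed)
        x = rng.normal(size=len(betas))
        cold_samples = np.empty(n_steps)
        for t in range(n_steps):
            # Local Metropolis move in every replica.
            prop = x + step * rng.normal(size=len(betas))
            log_acc = -betas * (energy(prop) - energy(x))
            x = np.where(np.log(rng.random(len(betas))) < log_acc, prop, x)
            # Swap move: exchange configurations of a random adjacent pair,
            # accepted with probability min(1, exp((b_i - b_j)(E_i - E_j))).
            if t % swap_every == 0:
                i = rng.integers(len(betas) - 1)
                log_r = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
                if np.log(rng.random()) < log_r:
                    x[i], x[i + 1] = x[i + 1], x[i]
            cold_samples[t] = x[0]  # the coldest replica targets the distribution of interest
        return cold_samples

    samples = parallel_tempering(betas=np.array([8.0, 3.0, 1.0]))
    ```

    Hot replicas cross the barrier freely and hand well-mixed configurations down the ladder, which is the mechanism the paper exploits to reduce sampling error in VMC updates.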

    Randomized algorithms for low-rank matrix approximation: Design, analysis, and applications

    This survey explores modern approaches for computing low-rank approximations of high-dimensional matrices by means of the randomized SVD, randomized subspace iteration, and randomized block Krylov iteration. The paper compares the procedures via theoretical analyses and numerical studies to highlight how the best choice of algorithm depends on spectral properties of the matrix and the computational resources available. Despite superior performance for many problems, randomized block Krylov iteration has not been widely adopted in computational science. The paper strengthens the case for this method in three ways. First, it presents new pseudocode that can significantly reduce computational costs. Second, it provides a new analysis that yields simple, precise, and informative error bounds. Last, it showcases applications to challenging scientific problems, including principal component analysis for genetic data and spectral clustering for molecular dynamics data. Comment: 60 pages, 14 figures
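    As a point of reference for the first two methods the survey compares, here is a minimal randomized SVD with optional subspace (power) iterations in the usual Halko-Martinsson-Tropp pattern; the oversampling and iteration counts are illustrative defaults. The block Krylov variant the paper advocates additionally retains every intermediate block in the sketch basis, which is not shown here.

    ```python
    import numpy as np

    def randomized_svd(A, rank, n_oversample=10, n_power=2, seed=0):
        """Basic randomized SVD: sketch the range of A with a Gaussian test
        matrix, optionally refine it with power (subspace) iterations, then
        solve a small dense SVD on the projected matrix."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        k = rank + n_oversample
        # Range finder: orthonormal basis for A applied to a random sketch.
        Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k)))
        for _ in range(n_power):
            # Power iterations sharpen the basis; re-orthogonalize each
            # half-step for numerical stability.
            Q, _ = np.linalg.qr(A.T @ Q)
            Q, _ = np.linalg.qr(A @ Q)
        # Small SVD of the k x n projected matrix, then lift back.
        U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]
    ```

    More power iterations help when the singular values decay slowly, at the cost of extra matrix products, which is exactly the trade-off the survey quantifies.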

    Improved Fast Randomized Iteration Approach to Full Configuration Interaction

    We present three modifications to our recently introduced fast randomized iteration method for full configuration interaction (FCI-FRI) and investigate their effects on the method's performance for Ne, H$_2$O, and N$_2$. The initiator approximation, originally developed for full configuration interaction quantum Monte Carlo, significantly reduces statistical error in FCI-FRI when few samples are used in compression operations, enabling its application to larger chemical systems. The semi-stochastic extension, which involves exactly preserving a fixed subset of elements in each compression, improves statistical efficiency in some cases but reduces it in others. We also developed a new approach to sampling excitations that yields consistent improvements in statistical efficiency and reductions in computational cost. We discuss possible strategies based on our findings for improving the performance of stochastic quantum chemistry methods more generally. Comment: 13 pages, 5 figures
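    The semi-stochastic idea (preserve a subset of elements exactly, sample the rest) can be illustrated in a few lines. This sketch uses a simple multinomial sampler for the stochastic part; the actual FCI-FRI compressions use more statistically efficient sampling schemes, so treat the following as a schematic with `n_exact` and `n_sample` as illustrative parameters.

    ```python
    import numpy as np

    def semi_stochastic_compress(x, n_exact, n_sample, rng):
        """Keep the n_exact largest-magnitude entries of x exactly; compress
        the remaining entries with an unbiased multinomial sparsification
        (sample with probability ~ |x_i|, rescale), so E[output] = x.
        Assumes 0 < n_exact < len(x)."""
        out = np.zeros_like(x)
        order = np.argsort(np.abs(x))
        exact, rest = order[-n_exact:], order[:-n_exact]
        out[exact] = x[exact]  # deterministic subset, preserved exactly
        weights = np.abs(x[rest])
        total = weights.sum()
        if total > 0.0:
            p = weights / total
            draws = rng.choice(len(rest), size=n_sample, replace=True, p=p)
            counts = np.bincount(draws, minlength=len(rest))
            hit = counts > 0
            out[rest[hit]] = x[rest[hit]] * counts[hit] / (n_sample * p[hit])
        return out
    ```

    Treating the largest elements deterministically removes their sampling variance entirely, which is the intuition behind the efficiency gains (and occasional losses, when the budget is better spent elsewhere) reported in the abstract.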

    Approximating matrix eigenvalues by subspace iteration with repeated random sparsification

    Traditional numerical methods for calculating matrix eigenvalues are prohibitively expensive for high-dimensional problems. Iterative random sparsification methods allow for the estimation of a single dominant eigenvalue at reduced cost by leveraging repeated random sampling and averaging. We present a general approach to extending such methods for the estimation of multiple eigenvalues and demonstrate its performance for several benchmark problems in quantum chemistry. Comment: 31 pages, 7 figures
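    As a schematic of how subspace iteration can be combined with per-iteration sparsification and trajectory averaging to reach multiple eigenvalues, consider the sketch below for a symmetric matrix; the compression scheme, budgets, and the choice to average Rayleigh-Ritz values over iterations are illustrative assumptions, not the paper's algorithm.

    ```python
    import numpy as np

    def sparsify(x, m, rng):
        """Unbiased multinomial sparsification (as in the Richardson sketch)."""
        weights = np.abs(x)
        total = weights.sum()
        if total == 0.0:
            return x.copy()
        p = weights / total
        counts = np.bincount(rng.choice(len(x), size=m, replace=True, p=p),
                             minlength=len(x))
        out = np.zeros_like(x)
        hit = counts > 0
        out[hit] = x[hit] * counts[hit] / (m * p[hit])
        return out

    def sparsified_subspace_iteration(A, k, m, n_iter=1000, burn_in=200, seed=0):
        """Estimate the k dominant eigenvalues of a symmetric matrix A by
        subspace iteration, sparsifying each column of the iterate to ~m
        samples and averaging Rayleigh-Ritz estimates over iterations."""
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((A.shape[0], k))
        estimates = []
        for t in range(n_iter):
            Y = np.column_stack([sparsify(A @ X[:, j], m, rng) for j in range(k)])
            Q, _ = np.linalg.qr(Y)  # re-orthogonalize the block
            if t >= burn_in:
                estimates.append(np.sort(np.linalg.eigvalsh(Q.T @ A @ Q)))
            X = Q
        return np.mean(estimates, axis=0)
    ```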

    Understanding and eliminating spurious modes in variational Monte Carlo using collective variables

    The use of neural network parametrizations to represent the ground state in variational Monte Carlo (VMC) calculations has generated intense interest in recent years. However, as we demonstrate in the context of the periodic Heisenberg spin chain, this approach can produce unreliable wave function approximations. One of the most obvious signs of failure is the occurrence of random, persistent spikes in the energy estimate during training. These energy spikes are caused by regions of configuration space that are over-represented by the wave function density, which are called "spurious modes" in the machine learning literature. After exploring these spurious modes in detail, we demonstrate that a collective-variable-based penalization yields a substantially more robust training procedure, preventing the formation of spurious modes and improving the accuracy of energy estimates. Because the penalization scheme is cheap to implement and is not specific to the particular model studied here, it can be extended to other applications of VMC where a reasonable choice of collective variable is available. Comment: 12 pages, 13 figures
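    The abstract does not spell out the penalty, but one convenient way to impose a collective-variable (CV) restraint in VMC is to fold it into the local energy: minimizing the sample average of E_loc(x) + lam * V(s(x)) under |psi|^2 minimizes the energy plus lam times the expected penalty. The hinge-squared form, threshold, and weight below are illustrative guesses, not the paper's scheme.

    ```python
    import numpy as np

    def penalized_local_energy(e_loc, cv, cv_max, lam=10.0):
        """Augment sampled local energies with a hinge-squared penalty that
        activates when the collective variable s(x) exceeds cv_max, steering
        probability mass away from over-represented (spurious) regions.
        (Penalty form, cv_max, and lam are illustrative, not the paper's.)"""
        return e_loc + lam * np.maximum(0.0, cv - cv_max) ** 2
    ```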

    Error bounds for dynamical spectral estimation

    Dynamical spectral estimation is a well-established numerical approach for estimating eigenvalues and eigenfunctions of the Markov transition operator from trajectory data. Although the approach has been widely applied in biomolecular simulations, its error properties remain poorly understood. Here we analyze the error of a dynamical spectral estimation method called "the variational approach to conformational dynamics" (VAC). We bound the approximation error and estimation error for VAC estimates. Our analysis establishes VAC's convergence properties and suggests new strategies for tuning VAC to improve accuracy. Comment: 34 pages, 7 figures
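    VAC itself is compact enough to state: evaluate a set of basis functions along the trajectory, form instantaneous and time-lagged covariance matrices, and solve a generalized eigenproblem. A minimal sketch follows, assuming a well-conditioned basis (in practice C(0) is typically regularized or whitened first).

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def vac(traj_basis, lag):
        """VAC estimates of transition-operator eigenvalues/eigenfunctions.
        traj_basis: array of shape (T, n_basis) holding basis functions
        evaluated along a trajectory. Solves C(lag) v = lambda C(0) v."""
        X, Y = traj_basis[:-lag], traj_basis[lag:]
        C0 = X.T @ X / len(X)                    # instantaneous covariance
        Ct = (X.T @ Y + Y.T @ X) / (2 * len(X))  # symmetrized lagged covariance
        evals, evecs = eigh(Ct, C0)              # generalized symmetric eigenproblem
        order = np.argsort(evals)[::-1]          # largest eigenvalue ~ slowest mode
        return evals[order], evecs[:, order]
    ```

    The eigenvalues estimate the transition operator's spectrum at the chosen lag time, and the lag is the main tuning knob whose effect on approximation and estimation error the paper analyzes.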