
    Variance Reduction and Cluster Decomposition

    It is a common problem in lattice QCD calculations of the mass of a hadron with an annihilation channel that the signal falls off in time while the noise remains constant. In addition, the disconnected-insertion calculation of the three-point function and the calculation of the neutron electric dipole moment with the $\theta$ term suffer from a noise problem due to the $\sqrt{V}$ fluctuation. We identify these problems as having the same origin and show that the $\sqrt{V}$ problem can be overcome by utilizing the cluster decomposition principle. We demonstrate this by considering the calculations of the glueball mass, the strangeness content in the nucleon, and the CP-violation angle in the nucleon due to the $\theta$ term. It is found that for lattices with physical sizes of 4.5 - 5.5 fm, the statistical errors of these quantities can be reduced by a factor of 3 to 4. The systematic errors can be estimated from the Akaike information criterion. For the strangeness content, we find that the systematic error is of the same size as the statistical one when the cluster decomposition principle is utilized. This results in a 2 to 3 times reduction in the overall error.
    Comment: 7 pages, 5 figures, appendix added to address the systematic error
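
    To illustrate the mechanism (not the actual lattice QCD setup), here is a minimal toy sketch in Python: a disconnected two-point sum over a periodic 1D "lattice" is restricted to relative separations |r| <= R, so the far region, which carries no correlation and only adds volume-type noise, is excluded. All sizes, operators, and noise levels below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 64        # 1D lattice extent (illustrative)
R = 8         # cut radius; beyond the correlation length the signal is negligible
n_cfg = 500   # number of gauge-configuration stand-ins

def correlator(a, b, r_max):
    """Sum a(x) * b(x + r) over all sites x and separations |r| <= r_max (periodic)."""
    return sum(np.sum(a * np.roll(b, -r)) for r in range(-r_max, r_max + 1))

full, cut = [], []
for _ in range(n_cfg):
    # toy "operators": a shared short-ranged signal plus independent volume-wide noise
    signal = rng.normal(size=L)
    a = signal + rng.normal(scale=2.0, size=L)
    b = np.roll(signal, -1) + rng.normal(scale=2.0, size=L)
    full.append(correlator(a, b, L // 2 - 1))  # whole-volume sum: noise grows like sqrt(V)
    cut.append(correlator(a, b, R))            # cluster-decomposition cut

full, cut = np.array(full), np.array(cut)
print("relative error, full-volume sum:", full.std() / abs(full.mean()))
print("relative error, |r| <= R cut   :", cut.std() / abs(cut.mean()))
```

    Both estimators have the same expectation (the correlated piece sits entirely inside the cut), but the truncated sum discards the purely noisy far-separation terms, so its relative error is visibly smaller.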

    Variance reduction in MCMC

    We propose a general-purpose variance reduction technique for MCMC estimators. The idea is obtained by combining standard variance reduction principles known for regular Monte Carlo simulations (Ripley, 1987) and the Zero-Variance principle introduced in the physics literature (Assaraf and Caffarel, 1999). The potential of the new idea is illustrated with some toy examples and an application to Bayesian estimation.
    Keywords: Markov chain Monte Carlo, Metropolis-Hastings algorithm, variance reduction, Zero-Variance principle
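
    As a concrete illustration of the Zero-Variance principle with a first-degree polynomial, the following sketch builds a control variate from the score function $\nabla \log \pi(x)$, whose expectation under the target is zero, and picks its coefficient by a least-squares fit. The toy target, the random-walk Metropolis-Hastings sampler, and the tuning values are assumptions of this sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):               # slightly non-Gaussian toy target (an assumption of this sketch)
    return -0.5 * x**2 - 0.1 * x**4

def grad_log_target(x):          # score function; its expectation under the target is zero
    return -x - 0.4 * x**3

# Random-walk Metropolis-Hastings
n, x = 50_000, 0.0
chain = np.empty(n)
for t in range(n):
    prop = x + rng.normal()
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    chain[t] = x

h = chain                        # plain estimator of the posterior mean
z = grad_log_target(chain)       # control variate with known mean zero

# First-degree Zero-Variance estimator: h + a*z with a chosen by least squares
C = np.cov(z, h)
a = -C[0, 1] / C[0, 0]
h_zv = h + a * z

print("plain MCMC    : mean = %.4f  variance = %.4f" % (h.mean(), h.var()))
print("zero-variance : mean = %.4f  variance = %.4f" % (h_zv.mean(), h_zv.var()))
```

    Because the score is strongly (negatively) correlated with the identity function, the corrected estimator has the same mean but a far smaller variance; for a Gaussian target the reduction would be exact, hence the name.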

    Online Variance Reduction for Stochastic Optimization

    Modern stochastic optimization methods often rely on uniform sampling, which is agnostic to the underlying characteristics of the data. This might degrade convergence by yielding estimates that suffer from high variance. A possible remedy is to employ non-uniform importance sampling techniques, which take the structure of the dataset into account. In this work, we investigate a recently proposed setting which poses variance reduction as an online optimization problem with bandit feedback. We devise a novel and efficient algorithm for this setting that finds a sequence of importance sampling distributions competitive with the best fixed distribution in hindsight, the first result of this kind. While we present our method for sampling datapoints, it naturally extends to selecting coordinates or even blocks thereof. Empirical validations underline the benefits of our method in several settings.
    Comment: COLT 201
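
    The sketch below shows only the underlying importance-sampling estimator: datapoint i is drawn with probability p_i and its gradient is reweighted by 1/(n p_i), which keeps the stochastic gradient unbiased while a well-chosen non-uniform p_i lowers its variance. Here the p_i are simply kept proportional to the last observed per-example gradient norms, mixed with the uniform distribution for stability; the paper's actual contribution, an online bandit-feedback algorithm with hindsight guarantees for choosing these distributions, is not reproduced. The toy data and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy least-squares problem (sizes and data are illustrative stand-ins)
n, d = 1000, 5
X = rng.normal(size=(n, d))
X[: n // 10] *= 10.0                       # a few high-magnitude, "important" examples
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

w = np.zeros(d)
norm_est = np.ones(n)                      # last observed per-example gradient norms
lr = 1e-3
print("initial loss:", loss(w))

for step in range(20_000):
    # sampling distribution: proportional to gradient-norm estimates,
    # mixed with uniform so no probability collapses to zero
    p = 0.5 * norm_est / norm_est.sum() + 0.5 / n
    i = rng.choice(n, p=p)
    g_i = (X[i] @ w - y[i]) * X[i]              # gradient of the i-th squared loss
    norm_est[i] = np.linalg.norm(g_i) + 1e-12   # bandit-style feedback: only arm i is observed
    w -= lr * g_i / (n * p[i])                  # 1/(n*p_i) reweighting keeps the update unbiased

print("final loss  :", loss(w))
```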

    Accelerated Stochastic ADMM with Variance Reduction

    The Alternating Direction Method of Multipliers (ADMM) is a popular method for solving machine learning problems. Stochastic ADMM was first proposed in order to reduce the per-iteration computational complexity, which makes it more suitable for big-data problems. Recently, variance reduction techniques have been integrated with stochastic ADMM in order to obtain a fast convergence rate, such as SAG-ADMM and SVRG-ADMM, but the convergence is still suboptimal with respect to the smoothness constant. In this paper, we propose a new accelerated stochastic ADMM algorithm with variance reduction, which enjoys faster convergence than the other stochastic ADMM algorithms. We theoretically analyze its convergence rate and show that its dependence on the smoothness constant is optimal. We also empirically validate its effectiveness and show its superiority over other stochastic ADMM algorithms.
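
    For context, the variance-reduction building block used by SVRG-ADMM-type methods is an SVRG-style gradient estimator, $v = \nabla f_i(w) - \nabla f_i(\tilde{w}) + \nabla f(\tilde{w})$, formed against a periodically refreshed snapshot $\tilde{w}$. The sketch below shows just this estimator on a toy least-squares problem; the ADMM splitting, dual updates, and the acceleration (momentum) step of the paper are omitted, and all problem sizes and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy least-squares problem (sizes and step size are illustrative)
n, d = 500, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

def grad_i(w, i):                 # gradient of the i-th squared loss
    return (X[i] @ w - y[i]) * X[i]

def full_grad(w):                 # full-batch gradient
    return X.T @ (X @ w - y) / n

w, lr = np.zeros(d), 0.01
for epoch in range(30):
    w_snap = w.copy()             # snapshot point
    mu = full_grad(w_snap)        # full gradient at the snapshot (computed once per epoch)
    for _ in range(n):
        i = rng.integers(n)
        # SVRG estimator: unbiased, and its variance vanishes as w and w_snap
        # both approach the optimum, so no decaying step size is needed
        v = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= lr * v
    if epoch % 5 == 0 or epoch == 29:
        print(f"epoch {epoch:2d}  mean squared loss = {np.mean((X @ w - y) ** 2):.3e}")
```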