
    Functional Methods in Stochastic Systems

    Field-theoretic construction of functional representations of solutions of stochastic differential equations and master equations is reviewed. A generic expression for the generating function of Green functions of stochastic systems is put forward. The relation between ambiguities in stochastic differential equations and in the functional representations is discussed. Ordinary differential equations for expectation values and correlation functions are inferred with the aid of a variational approach. Comment: Plenary talk presented at Mathematical Modeling and Computational Science, International Conference, MMCP 2011, Stará Lesná, Slovakia, July 4-8, 2011

    Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization

    In this paper, we present a new stochastic algorithm, namely the stochastic block mirror descent (SBMD) method, for solving large-scale nonsmooth and stochastic optimization problems. The basic idea of this algorithm is to incorporate block-coordinate decomposition and an incremental block averaging scheme into the classic (stochastic) mirror-descent method, in order to significantly reduce the cost per iteration of the latter algorithm. We establish the rate of convergence of the SBMD method, along with its associated large-deviation results, for solving general nonsmooth and stochastic optimization problems. We also introduce different variants of this method and establish their rates of convergence for solving strongly convex, smooth, and composite optimization problems, as well as certain nonconvex optimization problems. To the best of our knowledge, all these developments related to the SBMD methods are new in the stochastic optimization literature. Moreover, some of our results also seem to be new for block coordinate descent methods for deterministic optimization.

    Bold Diagrammatic Monte Carlo in the Lens of Stochastic Iterative Methods

    This work aims to understand bold diagrammatic Monte Carlo (BDMC) methods for the stochastic summation of Feynman diagrams from the angle of stochastic iterative methods. The convergence-enhancement trick of BDMC is investigated through an analysis of the condition number and convergence of the stochastic iterative methods. Numerical experiments are carried out for model systems to compare BDMC with related stochastic iterative approaches.
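This is not the BDMC algorithm, but a hedged toy analogue of the same theme of summing a series stochastically: the classical Ulam-von Neumann random-walk estimator for a Neumann series x = (I - A)^{-1} b = Σ_k A^k b, which is itself a stochastic iterative/linear-solver construction. All parameter choices (kill probability, proposal weights) are illustrative.

```python
import numpy as np

def neumann_mc(A, b, i, walks=20000, seed=0):
    """Estimate component i of x = (I - A)^{-1} b = sum_k A^k b by
    sampling random walks: each walk accumulates b along its path with
    an importance weight correcting for the transition probabilities
    and the per-step kill probability."""
    n = len(b)
    rng = np.random.default_rng(seed)
    absA = np.abs(A)
    row = absA.sum(axis=1)          # proposal ~ |A| row-normalized
    p_kill = 0.3                    # probability of ending the walk each step
    est = 0.0
    for _ in range(walks):
        state, weight = i, 1.0
        total = b[state] * weight   # k = 0 term of the series
        while rng.random() > p_kill and row[state] > 0:
            probs = absA[state] / row[state]
            nxt = rng.choice(n, p=probs)
            weight *= A[state, nxt] / (probs[nxt] * (1.0 - p_kill))
            state = nxt
            total += b[state] * weight
        est += total
    return est / walks
```

The estimator's variance, like BDMC's convergence, depends on how well-conditioned the underlying iteration is.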

    Numerical Methods for Stochastic Differential Equations

    Stochastic differential equations (SDEs) play an important role in physics, but existing numerical methods for solving such equations are of low accuracy and poor stability. A general strategy for developing accurate and efficient schemes for solving stochastic equations is outlined here. High-order numerical methods are developed for the integration of stochastic differential equations with strong solutions. We demonstrate the accuracy of the resulting integration schemes by computing the errors in approximate solutions for SDEs which have known exact solutions.
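The paper's high-order schemes are not reproduced here; as a baseline, the simplest SDE integrator is Euler-Maruyama (strong order 0.5). The sketch below integrates geometric Brownian motion and compares against its known exact solution driven by the same Brownian increments, which is the kind of error test the abstract describes.

```python
import numpy as np

def euler_maruyama_gbm(x0, mu, sigma, T, n_steps, rng):
    """Euler-Maruyama integration of geometric Brownian motion
    dX = mu*X dt + sigma*X dW, returning both the numerical path
    endpoint and the exact solution built from the same Brownian
    increments, so the strong (pathwise) error is directly measurable."""
    dt = T / n_steps
    x = x0
    w = 0.0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x += mu * x * dt + sigma * x * dw          # Euler-Maruyama step
        w += dw                                    # accumulate the driving path
    exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * w)
    return x, exact
```

Halving dt shrinks the pathwise error by roughly sqrt(2) on average, which is exactly the low-order behavior the paper's schemes aim to improve on.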

    Introduction to stochastic error correction methods

    We propose a method for eliminating the truncation error associated with any subspace diagonalization calculation. The new method, called stochastic error correction, uses Monte Carlo sampling to compute the contribution of the remaining basis vectors not included in the initial diagonalization. The method is part of a new approach to computational quantum physics which combines both diagonalization and Monte Carlo techniques. Comment: 11 pages, 1 figure
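A hedged toy stand-in for the idea (not the paper's actual scheme): diagonalize exactly in a small subspace, then estimate the contribution of the excluded basis vectors by Monte Carlo, here by sampling a simple second-order correction term. The correction formula and sampling scheme are illustrative assumptions.

```python
import numpy as np

def corrected_ground_energy(H, subspace, n_samples, rng):
    """Diagonalize H in a small subspace, then sample the excluded
    basis states uniformly and average their second-order energy
    shifts |<j|H|psi0>|^2 / (e0 - H_jj) as a Monte Carlo estimate of
    the truncation correction."""
    S = np.ix_(subspace, subspace)
    evals, evecs = np.linalg.eigh(H[S])
    e0, psi0 = evals[0], evecs[:, 0]          # subspace ground state
    rest = [j for j in range(len(H)) if j not in subspace]
    total = 0.0
    for _ in range(n_samples):
        j = rest[rng.integers(len(rest))]     # sample an excluded state
        coupling = psi0 @ H[subspace, j]      # <j|H|psi0> in this basis
        total += len(rest) * coupling**2 / (e0 - H[j, j])
    return e0 + total / n_samples
```

For weak coupling to the excluded states, the corrected energy lands much closer to the exact ground eigenvalue than the bare subspace result.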

    Hybrid Deterministic-Stochastic Methods for Data Fitting

    Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling a subset of the terms in the sum. These methods can make great progress initially, but often slow as they approach a solution. In contrast, full-gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both approaches. Rate-of-convergence analysis shows that by controlling the sample size in an incremental gradient algorithm, it is possible to maintain the steady convergence rates of full-gradient methods. We detail a practical quasi-Newton implementation based on this approach. Numerical experiments illustrate its potential benefits. Comment: 26 pages. Revised proofs of Theorems 2.6 and 3.1, results unchanged
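A minimal sketch of the sample-size-control idea on least squares, assuming a plain gradient step rather than the paper's quasi-Newton implementation; the growth schedule and step size below are illustrative, not the paper's choices: early iterations use a cheap sampled gradient, and the sample grows toward the full sum so late convergence matches a full-gradient method.

```python
import numpy as np

def growing_batch_lsq(A, b, steps=300, seed=0):
    """Hybrid incremental/full-gradient least squares: start with a
    small sampled gradient and geometrically grow the sample size until
    it covers all measurements, then the iteration is ordinary
    full-gradient descent."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    # conservative step: 1 / (max per-sample curvature), stable for any sample
    L = np.max(np.sum(A * A, axis=1))
    batch = max(1, m // 20)
    for _ in range(steps):
        idx = rng.choice(m, size=min(batch, m), replace=False)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)  # sampled gradient
        x -= g / L
        batch = int(batch * 1.1) + 1                     # grow the sample size
    return x
```

Once the batch reaches m, each iteration is a deterministic full-gradient step, so the iterates inherit the steady linear convergence of the full-gradient method.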