On the convergence of spectral deferred correction methods
In this work we analyze the convergence properties of the Spectral Deferred
Correction (SDC) method originally proposed by Dutt et al. [BIT, 40 (2000), pp.
241--266]. This high-order ordinary differential equation (ODE) solver is
typically described as follows: a low-order approximation (such as forward or
backward Euler) is lifted to higher-order accuracy by applying the same
low-order method to an error equation and then adding the resulting defect to
correct the solution. Our focus is not on solving the error equation
to increase the order of accuracy, but on rewriting the solver as an iterative
Picard integral equation solver. In doing so, our chief finding is that it is
not the low-order solver that picks up the order of accuracy with each
correction, but it is the underlying quadrature rule of the right hand side
function that is solely responsible for picking up additional orders of
accuracy. Our proofs point to three sources of error that SDC methods carry:
the error at the current time point, the error from the previous iterate, and
the numerical integration error that comes from the total number of quadrature
nodes used for integration. The second of these three sources is what
separates SDC methods from Picard integral equation methods; our findings
indicate that as long as the difference between the current and previous
iterates is always multiplied by at least a constant multiple of the time step
size, high-order accuracy can be achieved even if the underlying "solver" is
inconsistent with the underlying ODE. From this vantage, we solidify the
prospects of extending spectral deferred correction methods to a larger class
of solvers, of which we present some examples.
Comment: 29 pages
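The Picard integral-equation view described in the abstract can be sketched in a few lines. The sketch below is illustrative, not the paper's exact scheme: it uses three uniform nodes per step and Simpson-type interpolatory quadrature weights (both assumptions), and each sweep simply re-evaluates the Picard integral of the previous iterate, so the attained accuracy is governed by the quadrature rule, as the abstract emphasizes.

```python
import math

# Illustrative sketch of the Picard integral-equation iteration behind SDC
# (uniform nodes and Simpson-type weights are assumptions for this demo).
# Each sweep applies
#   y^{k+1}(t_m) = y(t0) + integral_{t0}^{t_m} f(tau, y^k(tau)) dtau,
# with the integral taken over the quadratic interpolant through the nodes.

def picard_sweep_step(f, t0, y0, h, sweeps=8):
    """One step of size h for y' = f(t, y) via Picard/quadrature sweeps."""
    nodes = [t0, t0 + h / 2, t0 + h]
    # W[m][j]: weight of f(nodes[j]) in the integral from t0 to nodes[m];
    # the last row is Simpson's rule over the full step.
    W = [
        [0.0, 0.0, 0.0],
        [5 * h / 24, h / 3, -h / 24],
        [h / 6, 2 * h / 3, h / 6],
    ]
    y = [y0, y0, y0]  # provisional (constant) initial iterate
    for _ in range(sweeps):
        fvals = [f(t, ym) for t, ym in zip(nodes, y)]
        y = [y0 + sum(W[m][j] * fvals[j] for j in range(3)) for m in range(3)]
    return y[2]

# Example: y' = y, y(0) = 1; the exact solution is exp(t).
approx = picard_sweep_step(lambda t, y: y, 0.0, 1.0, 0.1)
print(abs(approx - math.exp(0.1)))  # small: accuracy set by the quadrature
```

Each sweep contracts the difference between successive iterates by a factor proportional to the step size, which mirrors the abstract's point that the correction term need only be multiplied by a constant multiple of the time step.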
Convergence and stability analysis of stochastic optimization algorithms
This thesis is concerned with stochastic optimization methods. The pioneering work in the field is the article “A stochastic approximation method” by Robbins and Monro [1], in which they proposed stochastic gradient descent, a stochastic version of the classical gradient descent algorithm. Since then, many improvements and extensions of the theory have been published, as well as new versions of the original algorithm. Despite this, a problem that many stochastic algorithms still share is their sensitivity to the choice of the step size/learning rate. One can view the stochastic gradient descent algorithm as a stochastic version of the explicit Euler scheme applied to the gradient flow equation. There are other schemes for solving differential equations numerically that allow for larger step sizes. In this thesis, we investigate the properties of some of these methods and how they perform when applied to stochastic optimization problems.
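The Euler-scheme correspondence mentioned above can be sketched on a toy quadratic loss. This is an illustrative example, not a method from the thesis: for L(theta) = theta^2 / 2, gradient descent is explicit Euler applied to the gradient flow theta' = -L'(theta), and the implicit Euler step happens to have a closed form, which shows why implicit-type schemes can tolerate larger step sizes.

```python
# Gradient descent as explicit Euler on the gradient flow theta' = -L'(theta),
# illustrated on the toy loss L(theta) = theta^2 / 2, so L'(theta) = theta.
# (Illustrative assumption; not a method or result from the thesis.)

def explicit_euler_gd(theta, eta, steps):
    """Explicit Euler = plain gradient descent; stable only for eta < 2 here."""
    for _ in range(steps):
        theta = theta - eta * theta
    return theta

def implicit_euler_gd(theta, eta, steps):
    """Implicit Euler: solve theta_new = theta - eta * theta_new in closed form.

    The update theta / (1 + eta) contracts for every eta > 0, illustrating
    why implicit-type schemes allow larger step sizes.
    """
    for _ in range(steps):
        theta = theta / (1 + eta)
    return theta

eta = 2.5  # deliberately larger than the explicit stability limit
print(explicit_euler_gd(1.0, eta, 20))  # iterates blow up
print(implicit_euler_gd(1.0, eta, 20))  # iterates contract toward 0
```

Replacing the exact gradient with a noisy estimate in the explicit update recovers stochastic gradient descent; the deterministic version is used here so the stability contrast is easy to see.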