
    Reaching the superlinear convergence phase of the CG method

    The convergence of the conjugate gradient method takes place in essentially three phases, with respectively a sublinear, a linear and a superlinear rate. The paper examines when the superlinear phase is reached. Two methods are used to do this. One is based on the K-condition number, thereby separating the eigenvalues into three sets: small and large outliers and intermediate eigenvalues. The other is based on annihilating polynomials for the eigenvalues and, assuming various analytical distributions of them, uses certain refined estimates. The results are illustrated for some typical distributions of eigenvalues and with some numerical tests.
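The three phases can be observed in a small experiment (a sketch under our own assumptions, not code from the paper): plain CG applied to an SPD matrix whose spectrum has a few small and large outliers around an intermediate cluster, recording the residual history.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=500):
    """Plain CG for SPD A; records the residual-norm history so the
    sublinear/linear/superlinear phases can be inspected."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    history = [np.sqrt(rs_old)]
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        history.append(np.sqrt(rs_new))
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, history

# Spectrum mimicking the paper's three eigenvalue sets:
# small outliers, an intermediate cluster, large outliers.
rng = np.random.default_rng(0)
eigs = np.concatenate([[1e-3, 2e-3],                # small outliers
                       rng.uniform(1.0, 2.0, 96),   # intermediate cluster
                       [50.0, 100.0]])              # large outliers
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))
A = Q @ np.diag(eigs) @ Q.T
A = (A + A.T) / 2          # symmetrize against rounding
b = rng.standard_normal(100)
x, hist = conjugate_gradient(A, b)
```

Once CG has implicitly "deflated" the four outliers, the effective condition number is that of the cluster and the residual drops rapidly, which is the superlinear phase the paper localizes.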

    Inexact inner-outer Golub-Kahan bidiagonalization method: A relaxation strategy

    We study an inexact inner-outer generalized Golub-Kahan algorithm for the solution of saddle-point problems with a two-by-two block structure. In each outer iteration, an inner system has to be solved, which in theory must be done exactly. When the system becomes large, however, an exact inner solver is no longer efficient or even feasible, and iterative methods must be used. This article focuses on a numerical study showing the influence of the accuracy of an inner iterative solution on the accuracy of the solution of the block system. Emphasis is further given to reducing the computational cost, defined as the total number of inner iterations. We develop relaxation techniques that dynamically change the inner tolerance at each outer iteration to further reduce the total number of inner iterations. We illustrate our findings on a Stokes problem and validate them on a mixed formulation of the Poisson problem. Comment: 25 pages, 9 figures
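A common way to implement such a relaxation (a generic inexact-Krylov heuristic in the spirit of the abstract, not the paper's exact rule; the function name and clamp values are illustrative) is to loosen the inner tolerance as the outer residual decreases:

```python
def relaxed_inner_tolerance(outer_residual_norm, outer_tol,
                            tol_min=1e-12, tol_max=1e-2):
    """Dynamic inner tolerance (illustrative heuristic): the smaller the
    current outer residual, the less accurately the inner system needs
    to be solved, which cuts the total number of inner iterations."""
    tol = outer_tol / max(outer_residual_norm, outer_tol)
    return min(max(tol, tol_min), tol_max)

# Early outer iterations (large residual) demand tight inner solves;
# later ones (small residual) tolerate loose inner solves.
early = relaxed_inner_tolerance(1.0, 1e-8)    # -> 1e-8 (tight)
late = relaxed_inner_tolerance(1e-6, 1e-8)    # -> ~1e-2 (loose)
```

The clamping keeps the inner tolerance within a sensible window regardless of how large or small the outer residual gets.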

    Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence

    We consider minimizing a smooth and strongly convex objective function using a stochastic Newton method. At each iteration, the algorithm is given oracle access to a stochastic estimate of the Hessian matrix. The oracle model includes popular algorithms such as Subsampled Newton and Newton Sketch, which can efficiently construct stochastic Hessian estimates for many tasks. Despite using second-order information, these existing methods do not exhibit superlinear convergence unless the stochastic noise is gradually reduced to zero during the iteration, which would lead to a computational blow-up in the per-iteration cost. We address this limitation with Hessian averaging: instead of using the most recent Hessian estimate, our algorithm maintains an average of all past estimates. This reduces the stochastic noise while avoiding the computational blow-up. We show that this scheme enjoys local Q-superlinear convergence with a non-asymptotic rate of (Υ√(log(t)/t))^t, where Υ is proportional to the level of stochastic noise in the Hessian oracle. A potential drawback of this (uniform averaging) approach is that the averaged estimates contain Hessian information from the global phase of the iteration, i.e., before the iterates converge to a local neighborhood. This leads to a distortion that may substantially delay the superlinear convergence until long after the local neighborhood is reached. To address this drawback, we study a number of weighted averaging schemes that assign larger weights to recent Hessians, so that the superlinear convergence arises sooner, albeit with a slightly slower rate. Remarkably, we show that there exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage, and still enjoys a superlinear convergence rate nearly matching (up to a logarithmic factor) that of uniform Hessian averaging. Comment: 40 pages, 16 figures
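The averaging update itself is simple to state in code. The sketch below is our illustration, not the authors' implementation; the "recent" weight choice w_t = 2/(t+1) is a hypothetical stand-in for the weighted schemes the abstract describes, and the toy problem is ours.

```python
import numpy as np

def averaged_newton(grad, hess_oracle, x0, steps=30, weights="uniform"):
    """Stochastic Newton with Hessian averaging (sketch).
    weights="uniform": running mean of all Hessian estimates (w_t = 1/t);
    weights="recent":  hypothetical weighting w_t = 2/(t+1) that favors
    newer estimates, in the spirit of the paper's weighted schemes."""
    x = np.asarray(x0, dtype=float).copy()
    H_bar = None
    for t in range(1, steps + 1):
        H_t = hess_oracle(x)
        w = 1.0 / t if weights == "uniform" else 2.0 / (t + 1)
        H_bar = H_t if H_bar is None else (1 - w) * H_bar + w * H_t
        x = x - np.linalg.solve(H_bar, grad(x))  # Newton step, averaged Hessian
    return x

# Toy problem: f(x) = 0.5 x^T A x - b^T x, exact Hessian A, noisy oracle.
rng = np.random.default_rng(1)
A = np.diag([1.0, 4.0, 9.0])
b = np.array([1.0, 2.0, 3.0])

def noisy_hessian(_x):
    E = 0.05 * rng.standard_normal((3, 3))
    return A + (E + E.T) / 2  # symmetric noise; averaging damps it

x = averaged_newton(lambda x: A @ x - b, noisy_hessian, np.zeros(3))
```

Note that the fixed point of the iteration is the true minimizer regardless of the (well-conditioned) averaged Hessian; averaging only affects how fast the error contracts, which is where the superlinear rate enters.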

    Weak KAM for commuting Hamiltonians

    For two commuting Tonelli Hamiltonians, we recover the commutation of the Lax-Oleinik semi-groups, a result of Barles and Tourin ([BT01]), using a direct geometrical method (Stokes' theorem). We also obtain a "generalization" of a theorem of Maderna ([Mad02]). More precisely, we prove that if the phase space is the cotangent bundle of a compact manifold, then the weak KAM solutions (or viscosity solutions of the critical stationary Hamilton-Jacobi equation) for G and for H are the same. As a corollary we obtain the equality of the Aubry sets, of the Peierls barrier, and of the flat parts of Mather's α functions. This is also related to works of Sorrentino ([Sor09]) and Bernard ([Ber07b]). Comment: 23 pages, accepted for publication in Nonlinearity (January 29th, 2010). Minor corrections; fifth part added on Mather's α function (or effective Hamiltonian).