
    Reaching the superlinear convergence phase of the CG method

    The convergence of the conjugate gradient method proceeds in essentially three phases, with respectively a sublinear, a linear, and a superlinear rate. The paper examines when the superlinear phase is reached. To do this, two methods are used. One is based on the K-condition number, thereby separating the eigenvalues into three sets: small and large outliers and intermediate eigenvalues. The other is based on annihilating polynomials for the eigenvalues and, assuming various analytical distributions of them, thereby using certain refined estimates. The results are illustrated for some typical distributions of eigenvalues and with some numerical tests.
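    The phase behavior described above can be seen in a toy experiment (a minimal sketch, not from the paper): running CG on a diagonal SPD system whose diagonal entries are the eigenvalues, with an intermediate cluster plus a few large outliers. After CG has effectively annihilated the outlier eigenvalues in its first few iterations, the residual drops off sharply.

    ```python
    # Minimal CG sketch on a diagonal SPD system A = diag(eigs), A x = b.
    # Pure Python; the eigenvalue layout below is an illustrative assumption.

    def cg(diag, b, iters):
        """Conjugate gradient for diag * x = b; returns solution and residual norms."""
        n = len(diag)
        x = [0.0] * n
        r = list(b)                       # residual r = b - A x (x starts at 0)
        p = list(r)                       # search direction
        history = []
        rr = sum(ri * ri for ri in r)
        for _ in range(iters):
            ap = [d * pi for d, pi in zip(diag, p)]           # A p
            alpha = rr / sum(pi * api for pi, api in zip(p, ap))
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, ap)]
            rr_new = sum(ri * ri for ri in r)
            history.append(rr_new ** 0.5)
            if rr_new < 1e-28:
                break
            p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
            rr = rr_new
        return x, history

    # Eigenvalues: an intermediate cluster near 1 plus two large outliers.
    diag = [1.0, 1.01, 1.02, 1.03, 100.0, 200.0]
    b = [1.0] * len(diag)
    x, res = cg(diag, b, 10)
    ```

    Since the system has six distinct eigenvalues, exact arithmetic would terminate within six iterations; in floating point the residual norms in `res` fall to roundoff level once the outliers are resolved.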

    DISH: A Distributed Hybrid Optimization Method Leveraging System Heterogeneity

    We study distributed optimization problems over multi-agent networks, including consensus and network flow problems. Existing distributed methods neglect the heterogeneity among agents' computational capabilities, limiting their effectiveness. To address this, we propose DISH, a distributed hybrid method that leverages system heterogeneity. DISH allows agents with higher computational capabilities or lower computational costs to perform local Newton-type updates while others adopt simpler gradient-type updates. Notably, DISH covers existing methods like EXTRA, DIGing, and ESOM-0 as special cases. To analyze DISH's performance with general update directions, we formulate distributed problems as minimax problems and introduce GRAND (gradient-related ascent and descent) and its alternating version, Alt-GRAND, for solving these problems. GRAND generalizes DISH to centralized minimax settings, accommodating various descent-ascent update directions, including gradient-type, Newton-type, scaled-gradient, and other general directions within acute angles to the partial gradients. Theoretical analysis establishes global sublinear and linear convergence rates for GRAND and Alt-GRAND in strongly-convex-nonconcave and strongly-convex-PL settings, providing linear rates for DISH. In addition, we derive the local superlinear convergence of Newton-based variations of GRAND in centralized settings. Numerical experiments validate the effectiveness of our methods.
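    The descent-ascent template underlying GRAND can be illustrated with a minimal sketch (an illustrative assumption, not the paper's DISH/GRAND code): plain gradient descent-ascent on a small strongly-convex-strongly-concave minimax problem. In the hybrid setting, some agents would replace the plain gradient step with a Newton-type or scaled-gradient direction lying within an acute angle of the partial gradient.

    ```python
    # Gradient descent-ascent on  min_x max_y f(x, y) = 0.5*x**2 + x*y - 0.5*y**2,
    # whose unique saddle point is (0, 0). Problem and step size are illustrative.

    def gda(x, y, step=0.1, iters=200):
        """Simultaneous gradient descent on x and ascent on y."""
        for _ in range(iters):
            gx = x + y          # partial derivative of f with respect to x
            gy = x - y          # partial derivative of f with respect to y
            x, y = x - step * gx, y + step * gy
        return x, y

    x, y = gda(2.0, -1.0)
    ```

    For this quadratic problem the iteration matrix has spectral radius below one for small step sizes, so the iterates contract linearly toward the saddle point, mirroring the linear rates established for GRAND.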

    Projected Newton methods and optimization of multicommodity flows

    By Dimitri P. Bertsekas and Eli M. Gafni. August 1981. Bibliography: p. 26-28. Partial support provided by National Science Foundation Grant ECS-79-20834 and Defense Advanced Research Project Agency Grant ONR-N00014-75-C-1183.