38 research outputs found

    On block diagonal and block triangular iterative schemes and preconditioners for stabilized saddle point problems

    We review the use of block diagonal and block lower/upper triangular splittings for constructing iterative methods and preconditioners for solving stabilized saddle point problems. We introduce new variants of these splittings and obtain new results on the convergence of the associated stationary iterations and new bounds on the eigenvalues of the corresponding preconditioned matrices. We further consider inexact versions as preconditioners for flexible Krylov subspace methods, and show experimentally that our techniques can be highly effective for solving linear systems of saddle point type arising from stabilized finite element discretizations of two model problems, one from incompressible fluid mechanics and the other from magnetostatics.
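The splitting-based stationary iteration described in this abstract can be sketched on a small dense example. The sketch below is illustrative and not the paper's method: it uses a block lower triangular splitting P = [[A, 0], [B, -S]] of a stabilized saddle point matrix K = [[A, Bᵀ], [B, -C]], with S the exact (negative) Schur complement; the matrix sizes, the stabilization term C, and the helper `solve_P` are assumptions for the toy problem. With the exact Schur complement, I - P⁻¹K is nilpotent of index 2, so the stationary iteration converges in two steps.

```python
import numpy as np
from scipy.linalg import solve

rng = np.random.default_rng(0)
n, m = 8, 4  # sizes of the two blocks (toy example)

# SPD (1,1) block A, constraint block B, SPD stabilization C
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)
B = rng.standard_normal((m, n))
C = 0.1 * np.eye(m)

# Stabilized saddle point matrix K = [[A, B^T], [B, -C]]
K = np.block([[A, B.T], [B, -C]])
b = rng.standard_normal(n + m)

# Block lower triangular splitting P = [[A, 0], [B, -S]],
# with the (negative) Schur complement S = C + B A^{-1} B^T
S = C + B @ solve(A, B.T)

def solve_P(r):
    """Forward-substitute through the block triangular factor P."""
    y = solve(A, r[:n])          # A y = r_1
    z = solve(S, B @ y - r[n:])  # B y - S z = r_2
    return np.concatenate([y, z])

# Stationary iteration x_{k+1} = x_k + P^{-1}(b - K x_k).
# With the exact Schur complement the error operator is nilpotent
# of index 2, so two sweeps suffice (up to roundoff).
x = np.zeros(n + m)
for _ in range(2):
    x = x + solve_P(b - K @ x)

residual = np.linalg.norm(K @ x - b)  # near machine precision
```

In practice S is never formed exactly; the paper's inexact variants replace the solves with A and S by approximations and use the result as a preconditioner for a flexible Krylov method.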

    Totally Asynchronous Primal-Dual Convex Optimization in Blocks

    We present a parallelized primal-dual algorithm for solving constrained convex optimization problems. The algorithm is "block-based," in that vectors of primal and dual variables are partitioned into blocks, each of which is updated only by a single processor. We consider four possible forms of asynchrony: in updates to primal variables, updates to dual variables, communications of primal variables, and communications of dual variables. We construct a family of explicit counterexamples to show the need to eliminate asynchronous communication of dual variables, though the other forms of asynchrony are permitted, all without requiring bounds on delays. A first-order primal-dual update law is developed and shown to be robust to asynchrony. We then derive convergence rates to a Lagrangian saddle point in terms of the operations agents execute, without specifying any timing or pattern with which they must be executed. These convergence rates include an "asynchrony penalty" that we quantify and present ways to mitigate. Numerical results illustrate these developments.
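The block-based first-order update law can be sketched synchronously on a toy problem; the asynchrony analysis is the paper's contribution and is not reproduced here. This sketch assumes a small inequality-constrained QP, an arbitrary step size `alpha`, and a two-way partition of the primal vector, none of which come from the paper: it performs block-wise primal gradient descent on the Lagrangian and projected dual ascent.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3

# Toy constrained QP: min 0.5*||x - c||^2  s.t.  G x <= h
c = rng.standard_normal(n)
G = rng.standard_normal((m, n))
h = G @ c - 1.0  # chosen so the constraints bind at the unconstrained optimum

def grad_x(x, lam):   # gradient of L(x, lam) in x
    return (x - c) + G.T @ lam

def grad_lam(x):      # gradient of L(x, lam) in lam
    return G @ x - h

x = np.zeros(n)
lam = np.zeros(m)
alpha = 0.02  # illustrative step size, small enough for this toy problem

# Primal variables partitioned into two blocks, each "owned" by one
# processor in the paper's setting; here both blocks use the gradient
# at the current iterate, mimicking one round of block updates.
blocks = [np.arange(0, n // 2), np.arange(n // 2, n)]

for _ in range(8000):
    g = grad_x(x, lam)
    for blk in blocks:
        x[blk] -= alpha * g[blk]                       # block primal descent
    lam = np.maximum(0.0, lam + alpha * grad_lam(x))   # projected dual ascent

# At a saddle point: primal feasibility and Lagrangian stationarity hold
feasibility_gap = np.max(G @ x - h)
stationarity = np.linalg.norm((x - c) + G.T @ lam)
```

In the asynchronous setting each block is updated by its own processor at its own rate from possibly stale copies of the other variables; the paper shows this is safe for everything except communication of dual variables.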

    Accelerated generalized SOR method for a class of complex systems of linear equations

    For solving a broad class of complex symmetric linear systems, Salkuyeh et al. recently recast the system in a real formulation and studied a generalized successive overrelaxation (GSOR) iterative method. In this paper, we introduce an accelerated GSOR (AGSOR) iterative method which involves two iteration parameters. We then theoretically study its convergence properties and determine its optimal iteration parameters and the corresponding optimal convergence factor. Finally, some numerical computations are presented to validate the theoretical results and compare the performance of the AGSOR method with that of the GSOR and MHSS methods.
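The real formulation and a two-parameter SOR-type sweep of the kind this abstract describes can be sketched as follows. This is a schematic illustration, not the published AGSOR scheme: the test matrices, the update ordering, and the parameter values `alpha`, `beta` are assumptions (the paper derives the optimal parameters; the ones below are arbitrary). The complex system (W + iT)u = b, with W symmetric positive definite and T symmetric, is rewritten as the real system [[W, -T], [T, W]][x; y] = [p; q] and relaxed block by block.

```python
import numpy as np
from scipy.linalg import solve

rng = np.random.default_rng(2)
n = 10

# Complex symmetric system (W + iT)u = b with W SPD and T symmetric
G = rng.standard_normal((n, n))
W = G @ G.T + n * np.eye(n)
T = rng.standard_normal((n, n))
T = T + T.T
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
p, q = b.real, b.imag

# Real equivalent formulation: W x - T y = p,  T x + W y = q
x = np.zeros(n)
y = np.zeros(n)
alpha, beta = 0.8, 0.8  # illustrative parameters, not the paper's optimal ones

for _ in range(500):
    # Relax the x-block, then the y-block using the fresh x
    x = (1 - alpha) * x + alpha * solve(W, p + T @ y)
    y = (1 - beta) * y + beta * solve(W, q - T @ x)

u = x + 1j * y
residual = np.linalg.norm((W + 1j * T) @ u - b)
```

Using two distinct relaxation parameters for the two blocks is what distinguishes the accelerated variant from GSOR, which uses a single parameter for both.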