60 research outputs found

    A fast algorithm to solve systems of nonlinear equations

    A new HSS-based algorithm for solving systems of nonlinear equations is presented and its semilocal convergence is proved. Spectral properties of the new method are investigated. A performance profile for the new scheme is computed and compared with that of the HSS algorithm. In addition, in a numerical example in which a two-dimensional nonlinear convection-diffusion equation is solved, we compare the new method with the Newton-HSS method. Numerical results show that the new scheme solves the problem faster than the Newton-HSS scheme in terms of CPU time and number of iterations. Moreover, the new method is found to be fast, reliable, flexible, and accurate, with small CPU times. This research was partially supported by Ministerio de Economia y Competitividad, Spain, under grant MTM2014-52016-C2-2-P, and by Generalitat Valenciana, Spain, under PROMETEO/2016/089. Amiri, A.; Cordero Barbero, A.; Darvishi, M.; Torregrosa Sánchez, J.R. (2019). A fast algorithm to solve systems of nonlinear equations. Journal of Computational and Applied Mathematics, 354:242-258. https://doi.org/10.1016/j.cam.2018.03.048
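
    As a rough, hedged illustration of the Newton-HSS approach that the abstract uses as its baseline (a generic sketch under simplifying assumptions, not the paper's new scheme): the outer loop is an inexact Newton method, and each Newton system J(x) s = -F(x) is solved approximately by a few Hermitian/skew-Hermitian splitting (HSS) sweeps. The test problem, the shift parameter alpha, and the iteration counts below are illustrative choices.

        # Sketch of a Newton-HSS-type inner-outer iteration (illustrative, not the
        # paper's algorithm): inexact Newton outer loop, HSS inner linear solver.
        import numpy as np

        def hss_solve(A, b, alpha=1.0, inner_iters=10, x0=None):
            """Approximate solution of A x = b by the HSS iteration."""
            n = A.shape[0]
            H = 0.5 * (A + A.conj().T)      # Hermitian part
            S = 0.5 * (A - A.conj().T)      # skew-Hermitian part
            I = np.eye(n)
            x = np.zeros(n) if x0 is None else x0.copy()
            for _ in range(inner_iters):
                x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
                x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
            return x

        def newton_hss(F, J, x0, alpha=1.0, tol=1e-8, max_outer=50, inner_iters=10):
            """Inexact Newton method using HSS as the inner linear solver."""
            x = x0.copy()
            for _ in range(max_outer):
                Fx = F(x)
                if np.linalg.norm(Fx) < tol:
                    break
                x = x + hss_solve(J(x), -Fx, alpha=alpha, inner_iters=inner_iters)
            return x

        # Small illustrative nonlinear system: F(x) = A x + x**3 - b (componentwise cube).
        A = np.array([[4.0, -1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        F = lambda x: A @ x + x**3 - b
        J = lambda x: A + np.diag(3 * x**2)
        print(newton_hss(F, J, np.zeros(2)))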

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the Generalized Minimum RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations, for an order-of-magnitude reduction in iterations, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^{2})$ instead of $O(1/k)$, where $k$ is the iteration index. Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT).
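
    A minimal sketch of the acceleration principle described above, on synthetic data rather than the paper's SeDuMi embedding: when the update map is affine, z -> T(z) = M z + c, its fixed point solves (I - M) z = c, and GMRES can be driven purely by evaluations of T, since (I - M) v = v - (T(v) - T(0)). The matrix M below is an illustrative contraction, not an actual ADMM operator.

        # Krylov acceleration of an affine fixed-point iteration (synthetic example).
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        rng = np.random.default_rng(0)
        n = 200
        M = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # illustrative contraction
        c = rng.standard_normal(n)
        T = lambda z: M @ z + c                               # stand-in for the affine update map

        # Plain (ADMM-style) fixed-point iteration.
        z_fp = np.zeros(n)
        for _ in range(15):
            z_fp = T(z_fp)

        # GMRES on (I - M) z = c, using only evaluations of the fixed-point map T.
        matvec = lambda v: v - (T(v) - T(np.zeros(n)))
        z_gm, info = gmres(LinearOperator((n, n), matvec=matvec), T(np.zeros(n)))

        z_star = np.linalg.solve(np.eye(n) - M, c)
        print("fixed-point error after 15 sweeps:", np.linalg.norm(z_fp - z_star))
        print("GMRES error:", np.linalg.norm(z_gm - z_star), "(info =", info, ")")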

    Spectral features of matrix-sequences, GLT, symbol, and application in preconditioning Krylov methods, image deblurring, and multigrid algorithms.

    The final purpose of any scientific discipline can be regarded as the solution of real-world problems. To this end, a mathematical model of the phenomenon under consideration is often indispensable. Closed-form solutions of the resulting functional equations are usually not available, so numerical discretization techniques are required. In this setting, the discretization of an infinite-dimensional linear equation via some linear approximation method leads to a sequence of linear systems of increasing dimension whose coefficient matrices can inherit a structure from the continuous problem. For instance, the numerical approximation by local methods of constant- or nonconstant-coefficient systems of Partial Differential Equations (PDEs) over multidimensional domains gives rise to multilevel block Toeplitz or to Generalized Locally Toeplitz (GLT) sequences, respectively. In the context of structured matrices, the convergence properties of iterative methods, such as multigrid or preconditioned Krylov techniques, are strictly related to the notion of symbol, a function whose role is to describe the asymptotic distribution of the spectrum. This thesis can be seen as a byproduct of the combined use of powerful tools like the symbol, spectral distribution, and GLT theory in the numerical solution of structured linear systems. We approach this issue from both a theoretical and a practical viewpoint. On the one hand, we extend some known spectral distribution tools by proving the eigenvalue distribution of matrix-sequences obtained as combinations of algebraic operations on multilevel block Toeplitz matrices. On the other hand, we take advantage of the obtained results to design efficient preconditioning techniques. Moreover, we focus on the numerical solution of structured linear systems coming from the following applications: image deblurring, fractional diffusion equations, and coupled PDEs. A spectral analysis of the arising structured sequences allows us either to study the convergence and predict the behavior of preconditioned Krylov and multigrid methods applied to the coefficient matrices, or to design effective preconditioners and multigrid solvers for the associated linear systems.
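
    To make the notion of symbol concrete, here is a hedged, textbook-style illustration (a standard example, not taken from the thesis): the Toeplitz matrices generated by f(theta) = 2 - 2 cos(theta) are the one-dimensional discrete Laplacians, and their eigenvalues behave like samples of f on a uniform grid; this is exactly the asymptotic spectral information that the symbol encodes and that preconditioner design exploits.

        # Eigenvalues of a Toeplitz matrix versus samples of its generating symbol.
        import numpy as np
        from scipy.linalg import toeplitz

        n = 100
        col = np.zeros(n)
        col[0], col[1] = 2.0, -1.0        # Fourier coefficients of f(theta) = 2 - 2 cos(theta)
        T = toeplitz(col)                 # tridiagonal Toeplitz matrix generated by f

        eigs = np.sort(np.linalg.eigvalsh(T))
        theta = np.pi * np.arange(1, n + 1) / (n + 1)
        samples = np.sort(2.0 - 2.0 * np.cos(theta))   # symbol sampled on a uniform grid

        # For this particular symbol the agreement is exact; in general the symbol
        # describes the asymptotic distribution of the spectrum as n grows.
        print("max deviation between eigenvalues and symbol samples:",
              np.max(np.abs(eigs - samples)))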

    Numerical solution of saddle point problems

    Natural preconditioning and iterative methods for saddle point systems

    The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true in both the continuous and the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example, interior point methods and the sequential quadratic programming approach to nonlinear optimization. This survey concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness, in terms of rapidity of convergence, is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.
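
    As a concrete instance of such a natural preconditioner (a standard Murphy-Golub-Wathen-type construction on random data, assumed here for illustration rather than quoted from the survey): preconditioning K = [[A, B^T], [B, 0]] by the block diagonal matrix blkdiag(A, B A^{-1} B^T) leaves only the three eigenvalues 1 and (1 +/- sqrt(5))/2, so MINRES converges in at most three iterations in exact arithmetic.

        # Block-diagonal (natural) preconditioning of a saddle point system with MINRES.
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, minres

        rng = np.random.default_rng(1)
        n, m = 60, 20
        G = rng.standard_normal((n, n))
        A = G @ G.T + n * np.eye(n)               # symmetric positive definite block
        B = rng.standard_normal((m, n))           # full-rank constraint block

        K = np.block([[A, B.T], [B, np.zeros((m, m))]])
        S = B @ np.linalg.solve(A, B.T)           # exact Schur complement
        P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

        # MINRES expects the preconditioner as an (approximate) inverse.
        P_inv = LinearOperator((n + m, n + m), matvec=lambda v: np.linalg.solve(P, v))

        rhs = rng.standard_normal(n + m)
        iters = []
        x, info = minres(K, rhs, M=P_inv, callback=lambda xk: iters.append(1))
        print("MINRES iterations with the natural preconditioner:", len(iters))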

    Operator Splitting Based Dynamic Iteration for Linear Port-Hamiltonian Systems

    A dynamic iteration scheme for linear differential-algebraic port-Hamiltonian systems based on Lions-Mercier-type operator splitting methods is developed. The dynamic iteration is monotone in the sense that the error is decreasing, and no stability conditions are required. The developed iteration scheme is new even for linear port-Hamiltonian systems. The obtained algorithm is applied to multibody systems and electrical networks. Comment: 29 pages, 6 figures
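
    For orientation only, a generic dynamic-iteration (waveform relaxation) sketch on a toy coupled linear ODE: each sweep integrates one subsystem using the other subsystem's waveform from the previous sweep. This is a plain Jacobi-type splitting, not the Lions-Mercier operator splitting nor the differential-algebraic port-Hamiltonian setting of the paper, and all coefficients are illustrative.

        # Jacobi-type dynamic iteration (waveform relaxation) for a coupled linear ODE.
        import numpy as np
        from scipy.integrate import solve_ivp

        A11, A12 = np.array([[-2.0]]), np.array([[0.5]])
        A21, A22 = np.array([[0.3]]), np.array([[-1.0]])
        t_grid = np.linspace(0.0, 2.0, 201)
        x0, y0 = np.array([1.0]), np.array([0.0])
        tols = dict(rtol=1e-8, atol=1e-10)

        # Reference: solve the coupled system monolithically.
        A = np.block([[A11, A12], [A21, A22]])
        ref = solve_ivp(lambda t, z: A @ z, (0, 2), np.concatenate([x0, y0]),
                        t_eval=t_grid, **tols).y

        x_wave = np.repeat(x0[:, None], len(t_grid), axis=1)
        y_wave = np.repeat(y0[:, None], len(t_grid), axis=1)
        for sweep in range(8):
            x_old = lambda t, w=x_wave: np.interp(t, t_grid, w[0])   # previous sweep's x
            y_old = lambda t, w=y_wave: np.interp(t, t_grid, w[0])   # previous sweep's y
            x_wave = solve_ivp(lambda t, x: A11 @ x + A12 @ [y_old(t)], (0, 2), x0,
                               t_eval=t_grid, **tols).y
            y_wave = solve_ivp(lambda t, y: A21 @ [x_old(t)] + A22 @ y, (0, 2), y0,
                               t_eval=t_grid, **tols).y
            err = max(np.max(np.abs(x_wave - ref[:1])), np.max(np.abs(y_wave - ref[1:])))
            print(f"sweep {sweep + 1}: max waveform error {err:.2e}")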

    Approximation and spectral analysis for large structured linear systems.

    In this work we are interested in standard and less standard structured linear systems coming from applications in various fields of computational mathematics, often modeled by integral and/or differential equations. Starting from the classical Toeplitz and circulant structures, we consider some extensions, such as g-Toeplitz and g-circulant matrices, appearing in several contexts in numerical analysis and applications. Then we consider special matrices arising from collocation methods for differential equations: also in this case, under suitable assumptions, we observe a Toeplitz structure. In more detail, we first propose a detailed study of the singular values and eigenvalues of g-circulant matrices and then provide an analysis of the distribution of g-Toeplitz sequences. Furthermore, when possible, we consider Krylov space methods, with special attention to the minimization of the computational work. When the involved dimensions are large, the Preconditioned Conjugate Gradient (PCG) method is recommended because of its much stronger robustness with respect to the propagation of errors. In that case, crucial issues are the convergence speed of this iterative solver, the use of special techniques (preconditioning, multilevel techniques) for accelerating the convergence, and a careful study of the spectral properties of such matrices. Finally, the use of radial basis functions allows us to determine and study the asymptotic behavior of the spectral radii of collocation matrices approximating elliptic boundary value problems.
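
    As a concrete, hedged example of the circulant-preconditioning idea (a standard Strang construction on a synthetic symmetric positive definite Toeplitz system, assumed here rather than taken from the thesis): the circulant C copies the central diagonals of T, and C^{-1} v is applied in O(n log n) operations via the FFT inside the conjugate gradient iteration.

        # Strang circulant preconditioner for a symmetric positive definite Toeplitz system.
        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.sparse.linalg import LinearOperator, cg

        n = 512
        t = 0.5 ** np.arange(n)               # first column t_k = (1/2)^k; SPD Toeplitz
        T = toeplitz(t)

        # Strang preconditioner: copy the central diagonals of T into a circulant C.
        c = t.copy()
        c[n // 2 + 1:] = t[1:n // 2][::-1]
        lam = np.fft.fft(c).real              # eigenvalues of C (real and positive here)
        apply_Cinv = lambda v: np.real(np.fft.ifft(np.fft.fft(v) / lam))

        b = np.ones(n)
        plain, prec = [], []
        x1, _ = cg(T, b, callback=lambda xk: plain.append(1))
        x2, _ = cg(T, b, M=LinearOperator((n, n), matvec=apply_Cinv),
                   callback=lambda xk: prec.append(1))
        print("CG iterations, unpreconditioned vs. circulant-preconditioned:",
              len(plain), "vs.", len(prec))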