
    Notes on a 3-term Conjugacy Recurrence for the Iterative Solution of Symmetric Linear Systems

    We consider a 3-term recurrence, namely CG_2step, for the iterative solution of symmetric linear systems. The new algorithm generates conjugate directions and extends some standard theoretical properties of the Conjugate Gradient (CG) method [10]. We prove the finite convergence of CG_2step and provide an error analysis. We then introduce preconditioning for CG_2step and prove that the standard error bounds for CG also hold for our proposal.
    Keywords: iterative methods, 3-term recurrences, Conjugate Gradient method, error analysis, preconditioning
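The abstract does not spell out CG_2step itself, but a classical three-term-recurrence form of CG illustrates the idea of updating the iterate from the two previous ones, with no explicit search-direction sequence. The sketch below is that textbook variant, not the authors' algorithm:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def cg_three_term(A, b, iters=50, tol=1e-12):
    """CG as a 3-term recurrence: x_{k+1} is built from x_k and x_{k-1}
    via gamma_k = (r_k, r_k)/(r_k, A r_k) and a scalar rho_k."""
    n = len(b)
    x_prev, x = [0.0] * n, [0.0] * n
    r_prev, r = [0.0] * n, list(b)      # r0 = b - A x0 with x0 = 0
    gamma_prev = rr_prev = rho_prev = None
    for _ in range(iters):
        rr = dot(r, r)
        if rr < tol:
            break
        Ar = matvec(A, r)
        gamma = rr / dot(r, Ar)
        if rho_prev is None:
            rho = 1.0                   # first step reduces to steepest descent
        else:
            rho = 1.0 / (1.0 - (gamma * rr) / (gamma_prev * rr_prev * rho_prev))
        x_next = [rho * (x[i] + gamma * r[i]) + (1.0 - rho) * x_prev[i]
                  for i in range(n)]
        r_next = [rho * (r[i] - gamma * Ar[i]) + (1.0 - rho) * r_prev[i]
                  for i in range(n)]
        x_prev, x, r_prev, r = x, x_next, r, r_next
        gamma_prev, rr_prev, rho_prev = gamma, rr, rho
    return x

# 2x2 symmetric positive definite example; exact solution is (1/11, 7/11)
x = cg_three_term([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

On a symmetric positive definite system this recurrence produces the same iterates as standard two-term CG, and terminates finitely in exact arithmetic.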

    A Framework of Conjugate Direction Methods for Symmetric Linear Systems in Optimization

    In this paper, we introduce a parameter-dependent class of Krylov-based methods, namely Conjugate Directions (Formula Presented.), for the solution of symmetric linear systems. We give evidence that our proposal generates sequences of conjugate directions, extending some properties of the standard conjugate gradient (CG) method in order to preserve conjugacy. For specific values of the parameters in our framework, we obtain schemes equivalent to both the CG and the scaled-CG. We also prove the finite convergence of the algorithms in (Formula Presented.), and we provide some error analysis. Finally, preconditioning is introduced for (Formula Presented.), and we show that standard error bounds for the preconditioned CG also hold for the preconditioned (Formula Presented.).
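The parameterized family is only named by a placeholder in the abstract, but the preconditioning claim can be illustrated with standard preconditioned CG. Below is a minimal sketch using a Jacobi (diagonal) preconditioner M = diag(A); it is the generic textbook scheme, not the paper's family:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def pcg_jacobi(A, b, iters=100, tol=1e-12):
    """Preconditioned CG with M = diag(A): each step applies M^{-1}
    to the residual and updates the search direction with beta."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                              # r0 = b - A x0 with x0 = 0
    z = [r[i] / A[i][i] for i in range(n)]   # z = M^{-1} r
    p = list(z)
    rz = dot(r, z)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_next = dot(r, z)
        p = [z[i] + (rz_next / rz) * p[i] for i in range(n)]
        rz = rz_next
    return x

# same 2x2 SPD example; exact solution is (1/11, 7/11)
x = pcg_jacobi([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```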

    A New CG-Algorithm with Self-Scaling VM-Update for Unconstraint Optimization

    In this paper, a new combined extended Conjugate-Gradient (CG) and Variable-Metric (VM) method is proposed for solving unconstrained large-scale numerical optimization problems. The basic idea is to choose a combination of the current gradient and some previous search directions as a new search direction, updated by Al-Bayati's SCVM-method to fit a new step-size parameter using Armijo Inexact Line Searches (ILS). The method is based on the ILS, and its numerical properties are discussed using different non-linear test functions with various dimensions. The global convergence property of the new algorithm is investigated under a few weak conditions. Numerical experiments show that the new algorithm seems to converge faster and is superior to some other similar methods in many situations.
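The SCVM update itself is not given in the abstract, but the Armijo inexact line search it relies on is standard and can be sketched as follows (function and parameter names are illustrative, not from the paper):

```python
def armijo_step(f, grad_f, x, d, c=1e-4, shrink=0.5, t0=1.0, max_shrinks=60):
    """Backtracking (Armijo) line search: shrink the step t until
    f(x + t*d) <= f(x) + c * t * <grad f(x), d>."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad_f(x), d))  # should be negative
    t = t0
    for _ in range(max_shrinks):
        trial = [xi + t * di for xi, di in zip(x, d)]
        if f(trial) <= fx + c * t * slope:
            break
        t *= shrink
    return t

# 1-D example: f(x) = x^2 at x = 3, steepest-descent direction d = -f'(3) = -6
f = lambda v: v[0] ** 2
grad_f = lambda v: [2.0 * v[0]]
t = armijo_step(f, grad_f, [3.0], [-6.0])   # t = 1 overshoots; t = 0.5 is accepted
```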

    ANALYSIS OF ITERATIVE METHODS FOR THE SOLUTION OF BOUNDARY INTEGRAL EQUATIONS WITH APPLICATIONS TO THE HELMHOLTZ PROBLEM

    This thesis is concerned with the numerical solution of boundary integral equations and the numerical analysis of iterative methods. In the first part, we assume the boundary to be smooth in order to work with compact operators, while in the second part we investigate the problems arising from allowing piecewise smooth boundaries. Although in principle most results of the thesis apply to general reformulations of boundary value problems as boundary integral equations and their subsequent numerical solution, we take the Helmholtz equation arising from acoustic problems as the main model problem. In Chapter 1, we present the background material on reformulating Helmholtz boundary value problems as boundary integral equations by either the indirect potential method or the direct method using integral formulae. The problem of ensuring unique solutions of integral equations for exterior problems is specifically discussed. In Chapter 2, we discuss useful numerical techniques for solving second-kind integral equations. In particular, we highlight the superconvergence properties of iterated projection methods and the important procedure of Nyström interpolation. In Chapter 3, multigrid-type methods as applied to smooth-boundary integral equations are studied. Using the residual correction principle, we propose some robust iterative variants, modifying the existing methods to seek efficient solutions. In Chapter 4, we concentrate on the conjugate gradient method and establish its fast convergence as applied to the linear systems arising from general boundary element equations. For boundary integral equations on smooth boundaries we have observed, as the underlying mesh sizes decrease, faster convergence of multigrid-type methods and fixed-step convergence of the conjugate gradient method.
In the case of non-smooth boundaries, we first derive the singular forms of the solutions of the boundary integral equations for Dirichlet problems and then discuss their numerical solution in Chapter 5. Iterative methods such as two-grid methods and the conjugate gradient method are successfully implemented in Chapter 6 to solve the non-smooth integral equations. The study of two-grid methods in a general setting, and much of the results on the conjugate gradient method, are new. Chapters 3, 4 and 5 are partially based on publications [4], [5] and [35], respectively.
Department of Mathematics and Statistics, Polytechnic South Wes
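As a concrete instance of the second-kind machinery discussed above, a minimal Nyström discretization of a Fredholm equation u(x) - ∫₀¹ K(x,t) u(t) dt = f(x) might look as follows; the kernel K(x,t) = xt and the right-hand side are illustrative choices, not taken from the thesis:

```python
def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(rhs)
    M = [row[:] for row in M]
    rhs = rhs[:]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        rhs[k], rhs[piv] = rhs[piv], rhs[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= m * M[k][j]
            rhs[i] -= m * rhs[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (rhs[i] - s) / M[i][i]
    return x

# Simpson's rule on [0, 1]: quadrature nodes and weights
nodes = [0.0, 0.5, 1.0]
weights = [1.0 / 6.0, 4.0 / 6.0, 1.0 / 6.0]

K = lambda x, t: x * t               # illustrative smooth kernel
f = lambda x: 2.0 * x / 3.0          # chosen so the exact solution is u(x) = x

# Nystrom system (I - K_n) u = f, collocated at the quadrature nodes
n = len(nodes)
M = [[(1.0 if i == j else 0.0) - weights[j] * K(nodes[i], nodes[j])
      for j in range(n)] for i in range(n)]
u = solve_dense(M, [f(x) for x in nodes])
```

Because Simpson's rule integrates the integrand K(x,t)u(t) = x t² exactly here, the discrete solution matches u(x) = x at the nodes; between nodes, the Nyström interpolant u(x) = f(x) + Σⱼ wⱼ K(x,tⱼ) uⱼ recovers the superconvergent values.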

    Global convergence properties of conjugate gradient methods for optimization

    Projet PROMATH
    We study the convergence of nonlinear conjugate gradient methods without restarts and with practical line searches. The analysis covers two classes of methods that are globally convergent on smooth, nonconvex functions. Some properties of the Fletcher-Reeves method play an important role in the first family, whereas the second family shares an important property with the Polak-Ribière method. Numerical experiments are presented.
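A generic sketch of a nonlinear CG iteration in the Polak-Ribière family (here the PR+ safeguard beta = max(0, beta_PR) with Armijo backtracking; the paper's exact line-search conditions and method classes are not reproduced):

```python
def norm2(v):
    return sum(x * x for x in v)

def nonlinear_cg(f, grad_f, x0, iters=500, tol=1e-10):
    """Nonlinear CG with the PR+ rule beta = max(0, beta_PR) and an
    Armijo backtracking line search; beta = 0 acts as a
    steepest-descent restart."""
    x = list(x0)
    g = grad_f(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        if norm2(g) < tol:
            break
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0.0:                 # safeguard: restart if not a descent direction
            d = [-gi for gi in g]
            slope = -norm2(g)
        t, fx = 1.0, f(x)
        for _ in range(60):              # Armijo backtracking
            if f([xi + t * di for xi, di in zip(x, d)]) <= fx + 1e-4 * t * slope:
                break
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad_f(x)
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g)) / norm2(g))
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# smooth convex quadratic test problem, minimizer (1, -2)
f = lambda v: (v[0] - 1.0) ** 2 + 10.0 * (v[1] + 2.0) ** 2
grad_f = lambda v: [2.0 * (v[0] - 1.0), 20.0 * (v[1] + 2.0)]
x = nonlinear_cg(f, grad_f, [0.0, 0.0])
```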

    A rational conjugate gradient method for linear ill-conditioned problems

    We consider linear ill-conditioned operator equations in a Hilbert space setting. Motivated by the aggregation method, we consider approximate solutions constructed from linear combinations of Tikhonov regularizations, which amounts to finding solutions in a rational Krylov space. By mixing these with the usual Krylov spaces, we consider least-squares problems in these mixed rational spaces. Applying the Arnoldi method leads to a sparse, pentadiagonal representation of the forward operator, and we introduce the Lanczos method for solving the least-squares problem by factorizing this matrix. Finally, we present an equivalent conjugate-gradient-type method that does not rely on explicit orthogonalization but uses short-term recursions and Tikhonov regularization in every second step. We illustrate the convergence and regularization properties with some numerical examples.
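The Tikhonov building block referred to above can be sketched via the regularized normal equations; the small example matrix below is illustrative, not from the paper:

```python
def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(rhs)
    M = [row[:] for row in M]
    rhs = rhs[:]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        rhs[k], rhs[piv] = rhs[piv], rhs[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= m * M[k][j]
            rhs[i] -= m * rhs[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (rhs[i] - s) / M[i][i]
    return x

def tikhonov(A, b, alpha):
    """x_alpha = argmin ||A x - b||^2 + alpha ||x||^2, computed from the
    regularized normal equations (A^T A + alpha I) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) + (alpha if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve_dense(AtA, Atb)

# mildly ill-conditioned example: a small alpha approximately recovers (1, 2)
A = [[1.0, 0.0], [0.0, 1e-3]]
b = [1.0, 2e-3]                      # = A applied to (1, 2)
x = tikhonov(A, b, 1e-8)
```

Varying alpha traces out the family of regularized solutions; the aggregation idea in the abstract combines several such x_alpha, which is what places the combined solution in a rational Krylov space.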