9 research outputs found

    CGIHT: Conjugate Gradient Iterative Hard Thresholding for Compressed Sensing and Matrix Completion

    We introduce the Conjugate Gradient Iterative Hard Thresholding (CGIHT) family of algorithms for the efficient solution of constrained underdetermined linear systems of equations arising in compressed sensing, row sparse approximation, and matrix completion. CGIHT is designed to balance the low per iteration complexity of simple hard thresholding algorithms with the fast asymptotic convergence rate of employing the conjugate gradient method. We establish provable recovery guarantees and stability to noise for variants of CGIHT with sufficient conditions in terms of the restricted isometry constants of the sensing operators. Extensive empirical performance comparisons establish significant computational advantages for CGIHT both in terms of the size of problems which can be accurately approximated and in terms of overall computation time.
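    For orientation, the hard-thresholding baseline referred to in the abstract can be sketched in a few lines. The following is a minimal plain iterative hard thresholding routine, not the authors' CGIHT variant; the step size, iteration count, and the small demo problem are illustrative assumptions.

```python
import numpy as np

def iht(A, y, s, mu=1.0, iters=200):
    """Plain iterative hard thresholding for y ≈ A x with x s-sparse.

    Illustrative baseline only (not CGIHT): alternate a gradient step on
    ||y - A x||^2 / 2 with projection onto the s-sparse vectors.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient step on the least-squares objective.
        x = x + mu * A.T @ (y - A @ x)
        # Hard thresholding: keep only the s largest-magnitude entries.
        idx = np.argpartition(np.abs(x), -s)[-s:]
        z = np.zeros(n)
        z[idx] = x[idx]
        x = z
    return x

# Small demo: recover a 5-sparse vector from 80 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = iht(A, y, s=5)
print(np.linalg.norm(x_hat - x_true))
```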

    Michael James David Powell: 29 July 1936 – 19 April 2015

    Michael James David Powell was a British numerical analyst who was among the pioneers of computational mathematics. During a long and distinguished career, first at the Atomic Energy Research Establishment (AERE) Harwell and subsequently as the John Humphrey Plummer Professor of Applied Numerical Analysis in Cambridge, he contributed decisively towards establishing optimization theory as an effective tool of scientific enquiry, replete with highly effective methods and mathematical sophistication. He also made crucial contributions to approximation theory, in particular to the theory of spline functions and of radial basis functions. In a subject that roughly divides into practical designers of algorithms and theoreticians who seek to underpin algorithms with solid mathematical foundations, Mike Powell refused to follow this dichotomy. His achievements span the entire range from difficult and intricate convergence proofs to the design of algorithms and production of software. He was among the leaders of a subject area that is at the nexus of mathematical enquiry and applications throughout science and engineering.

    A Bayesian conjugate gradient method (with Discussion)

    A fundamental task in numerical computation is the solution of large linear systems. The conjugate gradient method is an iterative method which offers rapid convergence to the solution, particularly when an effective preconditioner is employed. However, for more challenging systems a substantial error can be present even after many iterations have been performed. The estimates obtained in this case are of little value unless further information can be provided about the numerical error. In this paper we propose a novel statistical model for this numerical error, set in a Bayesian framework. Our approach is a strict generalisation of the conjugate gradient method, which is recovered as the posterior mean for a particular choice of prior. The estimates obtained are analysed with Krylov subspace methods and a contraction result for the posterior is presented. The method is then analysed in a simulation study as well as being applied to a challenging problem in medical imaging.
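    As a reference point, here is a minimal sketch of the classical conjugate gradient method that the paper generalises, for a symmetric positive definite system A x = b. The tolerance, iteration cap, and demo system are illustrative defaults, not taken from the paper.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, maxiter=None):
    """Classical conjugate gradient for A x = b with A symmetric positive definite."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    maxiter = n if maxiter is None else maxiter
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # keep directions A-conjugate
        rs = rs_new
    return x

# Demo on a random symmetric positive definite system.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))
```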

    Round-off error analysis of descent methods for solving linear equations


    Choice of parameters in gradient methods for the unconstrained optimization problems

    The problem under consideration is unconstrained optimization. Many methods exist for solving unconstrained optimization problems; the investigation here is motivated by the need for methods that converge quickly. The aim is to systematize known results and to analyse, both theoretically and numerically, the possibility of introducing parameters into gradient methods. First, the problem of minimizing a convex function of several variables is considered. This problem is solved here without computing the Hessian matrix, which is particularly relevant for large-scale systems and for optimization problems in which neither the exact value of the objective function nor the exact value of its gradient is available. Part of the motivation also lies in problems whose objective function is the result of simulations. The numerical results presented in Chapter 6 show that introducing a certain parameter can be useful, i.e., it accelerates the underlying optimization method. A new hybrid conjugate gradient method is also presented, in which the conjugate gradient parameter is a convex combination of two known conjugate gradient parameters.
    The first chapter describes the motivation and the basic concepts needed for the remaining chapters. The second chapter surveys some first- and second-order gradient methods. The fourth chapter surveys basic concepts and results concerning conjugate gradient methods. These chapters review known results, while the original contributions are presented in Chapters 3, 5, and 6. The third chapter describes a modification of a particular method that uses a randomly chosen multiplicative parameter, and linear convergence of the resulting method is proved. The fifth chapter contains original results on conjugate gradient methods: a new hybrid conjugate gradient method is presented as a convex combination of two known conjugate gradient methods. The sixth chapter gives the results of numerical experiments, performed on a set of test functions, for the methods from Chapters 3 and 5. All considered algorithms are implemented in Mathematica, and the comparison criterion is CPU time.
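    The hybrid construction described in the abstract, where the conjugate gradient parameter is a convex combination of two known parameters, can be illustrated with a short sketch. The abstract does not name the two parameters being combined, so the example below assumes the Fletcher–Reeves and Polak–Ribière formulas, a fixed mixing weight theta, and a backtracking line search; none of these specifics are claimed to match the thesis.

```python
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-6, maxiter=500):
    """Nonlinear CG with beta = theta*beta_PRP + (1-theta)*beta_FR (illustrative only)."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(maxiter):
        if np.linalg.norm(g) < tol:
            break
        if d @ g >= 0:              # restart with steepest descent if needed
            d = -g
        # Backtracking (Armijo) line search along d.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_prp = max(0.0, (g_new @ (g_new - g)) / (g @ g))
        beta = theta * beta_prp + (1 - theta) * beta_fr   # convex combination
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Demo on the 2-D Rosenbrock function.
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                           200 * (v[1] - v[0]**2)])
print(hybrid_cg(f, grad, np.array([-1.2, 1.0])))
```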