9 research outputs found
CGIHT: Conjugate Gradient Iterative Hard Thresholding for Compressed Sensing and Matrix Completion
We introduce the Conjugate Gradient Iterative Hard Thresholding (CGIHT) family of algorithms for the efficient solution of constrained underdetermined linear systems of equations arising in compressed sensing, row sparse approximation, and matrix completion. CGIHT is designed to balance the low per-iteration complexity of simple hard thresholding algorithms with the fast asymptotic convergence rate of the conjugate gradient method. We establish provable recovery guarantees and stability to noise for variants of CGIHT, with sufficient conditions in terms of the restricted isometry constants of the sensing operators. Extensive empirical performance comparisons establish significant computational advantages for CGIHT, both in terms of the size of problems which can be accurately approximated and in terms of overall computation time.
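Hard thresholding is the building block CGIHT shares with simpler recovery algorithms. As a hedged illustration, the sketch below implements plain iterative hard thresholding (IHT) with a fixed gradient step on a synthetic compressed-sensing problem; CGIHT replaces the fixed step with conjugate-gradient search directions. The function names (`iht`, `hard_threshold`) and all parameter choices here are illustrative, not taken from the paper.

```python
import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries of x and zero out the rest.
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht(A, y, k, iters=200, step=None):
    """Plain iterative hard thresholding for min ||y - Ax||^2 s.t. ||x||_0 <= k.
    A simplified relative of CGIHT: it takes a fixed gradient step rather than
    conjugate-gradient search directions."""
    m, n = A.shape
    if step is None:
        # Conservative step size 1/||A||_2^2 keeps the gradient step stable.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - y)          # gradient of the data-fit term
        x = hard_threshold(x - step * grad, k)
    return x

# Synthetic demo: recover a 5-sparse vector from 50 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 50, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # roughly unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht(A, y, k)
```

On well-conditioned problems like this, the fixed-step iteration already recovers the sparse vector; the paper's point is that conjugate-gradient directions reach the same accuracy in far fewer, still cheap, iterations.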
Michael James David Powell: 29 July 1936 – 19 April 2015
Michael James David Powell was a British numerical analyst who was among the pioneers of computational mathematics. During a long and distinguished career, first at the Atomic Energy Research Establishment (AERE) Harwell and subsequently as the John Humphrey Plummer Professor of Applied Numerical Analysis in Cambridge, he contributed decisively towards establishing optimization theory as an effective tool of scientific enquiry, replete with highly effective methods and mathematical sophistication. He also made crucial contributions to approximation theory, in particular to the theory of spline functions and of radial basis functions. In a subject that roughly divides into practical designers of algorithms and theoreticians who seek to underpin algorithms with solid mathematical foundations, Mike Powell refused to follow this dichotomy. His achievements span the entire range from difficult and intricate convergence proofs to the design of algorithms and production of software. He was among the leaders of a subject area that is at the nexus of mathematical enquiry and applications throughout science and engineering.
A Bayesian conjugate gradient method (with Discussion)
A fundamental task in numerical computation is the solution of large linear systems. The conjugate gradient method is an iterative method which offers rapid convergence to the solution, particularly when an effective preconditioner is employed. However, for more challenging systems a substantial error can be present even after many iterations have been performed. The estimates obtained in this case are of little value unless further information can be provided about the numerical error. In this paper we propose a novel statistical model for this numerical error, set in a Bayesian framework. Our approach is a strict generalisation of the conjugate gradient method, which is recovered as the posterior mean for a particular choice of prior. The estimates obtained are analysed with Krylov subspace methods and a contraction result for the posterior is presented. The method is then analysed in a simulation study as well as being applied to a challenging problem in medical imaging.
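For reference, the classical conjugate gradient method that the paper generalises can be sketched in a few lines. This is a standard textbook implementation for symmetric positive definite systems, not the paper's Bayesian variant:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Classical conjugate gradient for A x = b with A symmetric positive
    definite. Returns the final iterate and the number of iterations."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                       # residual
    p = r.copy()                        # search direction
    rs = r @ r
    max_iter = n if max_iter is None else max_iter
    for it in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)           # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, it + 1
        p = r + (rs_new / rs) * p       # new A-conjugate direction
        rs = rs_new
    return x, max_iter

# In exact arithmetic CG terminates in at most n steps; in floating point,
# convergence speed depends on the conditioning of A.
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)           # well-conditioned SPD test matrix
b = rng.standard_normal(30)
x, iters = conjugate_gradient(A, b)
```

The paper's contribution is to place a prior on the solution so that the entire posterior, not just this point estimate, quantifies the remaining numerical error after early termination.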
Choice of parameters in gradient methods for unconstrained optimization problems
The problem under consideration is an unconstrained optimization problem. Many different methods exist for solving unconstrained optimization problems; the investigation here is motivated by the need for methods that converge quickly. The main goal is a systematization of known results, together with a theoretical and numerical analysis of the possibilities of introducing parameters into gradient methods. First, the problem of minimizing a convex function of several variables is considered. This problem is solved here without computing the Hessian, which is particularly relevant for large-scale systems, as well as for optimization problems in which neither exact objective-function values nor exact gradient values are available. Part of the motivation also lies in the existence of problems whose objective function is the result of simulations. Numerical results, presented in Chapter 6, show that introducing a certain parameter can be useful, i.e., that it accelerates the optimization method in question. A new hybrid conjugate gradient method is also presented, in which the conjugate gradient parameter is a convex combination of two known conjugate gradient parameters. The first chapter describes the motivation and the basic concepts needed for the remaining chapters. The second chapter surveys some first-order and second-order gradient methods. The fourth chapter surveys basic concepts and results concerning conjugate gradient methods. These chapters review known results, while the original contributions are presented in the third, fifth, and sixth chapters. The third chapter describes a modification of a particular method that uses a randomly chosen multiplicative parameter, and linear convergence of the resulting method is proved. The fifth chapter contains original results on conjugate gradient methods: namely, a new hybrid conjugate gradient method that is a convex combination of two known conjugate gradient methods. The sixth chapter presents the results of numerical experiments on a set of test functions for the methods of the third and fifth chapters. All algorithms were implemented in Mathematica; the comparison criterion is CPU time.
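The hybrid rule described above, a conjugate gradient parameter formed as a convex combination of two known parameters, can be sketched as follows. The abstract does not state which two parameters are combined, so this sketch assumes the Fletcher-Reeves and Polak-Ribiere (plus) parameters with a fixed mixing weight `theta`; the function `hybrid_cg` and its line search are illustrative, not the thesis's exact method.

```python
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-8, max_iter=500):
    """Nonlinear conjugate gradient whose CG parameter is the convex
    combination beta = theta*beta_PR+ + (1 - theta)*beta_FR of the
    Polak-Ribiere (plus) and Fletcher-Reeves parameters (an assumed pair)."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search enforcing the Armijo sufficient-decrease condition.
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)
        beta_pr = max(0.0, g_new @ (g_new - g) / (g @ g))
        beta = theta * beta_pr + (1 - theta) * beta_fr   # convex combination
        d = -g_new + beta * d
        if g_new @ d >= 0:        # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Smoke test on a convex quadratic with minimizer at the origin.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
x_min = hybrid_cg(f, grad, np.array([4.0, -3.0]))
```

Any theta in [0, 1] yields a valid convex combination; the thesis's contribution concerns how such a mixture can inherit good properties of both parent methods.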
Service Improvement and Cost Reduction for Airlines: Optimal Policies for Managing Arrival and Departure Operations under Uncertainty
Annual U.S. air travel demand has grown steadily by 4-5% over the last decade, and it is estimated that demand will nearly double in the next twenty years. The International Civil Aviation Organization has also estimated that global demand for commercial aircraft will increase at an average annual rate of 4.1% through 2034 (IATA, 2014). However, airport expansions and aviation infrastructure upgrades have not kept pace with the increase in air traffic demand: only 3% of all new airport projects around the world are planned in the U.S. (CAPA, 2015). Thus, operation rates at existing airports are likely to increase significantly, implying a greater need to increase the utilization of currently available runway capacity.
With steadily increasing demand for air travel and limited airport capacity, delay is ubiquitous. Approximately 25% of flights experience delays of at least 15 minutes each year, resulting in significant passenger service issues and costs to airlines and society, both economic and environmental. Delays are the top service complaint against airlines, and they also increase airline costs directly through additional fuel, crew, and maintenance expenses. Recent studies estimate the cost of air transportation delay to the American economy at around $41 billion a year. My results suggest savings of $29 million if such implementations are adopted by major airports in the U.S.; of these savings, $22 million per year would be realized if the proposed policies are implemented. I also find that the optimal metering configurations are largely robust across different operating conditions. In addition, my results suggest that early spacing adjustments near the top of descent (TOD) are of greater value for larger volumes of air traffic.
In the third and fourth problems, I study optimal departure operations at airports in the context of departure metering, an airport surface management procedure that limits the number of aircraft on the runway by holding aircraft in a pre-designed metering area.
More specifically, in the third problem, I develop a stochastic dynamic programming framework for tactical management of pushback operations at gates and for determining the optimal number of aircraft to be directed to the runway from the metering areas. I introduce four easy-to-implement practical departure metering policies and conduct a comparative analysis of these practical policies against the optimal numerical solutions. I also perform a sensitivity analysis of the departure metering policies with respect to the state variables.
In the fourth problem, I study the optimal metering area capacity at the strategic level. Building on the dynamic programming framework of the third problem, I identify the optimal metering area capacity using marginal analysis to minimize expected overall costs. Numerical simulations are conducted and potential savings are identified for sample U.S. airports at varying capacity levels. The optimal metering area capacity is then determined from these numerical results to further improve the overall efficiency and sustainability of departure operations. I also analyze the benefits to airlines in terms of annual savings from such policies, and find that annual savings could reach $31 million if the optimal departure metering policies are implemented at the top ten major airports in the U.S.
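To make the modelling style concrete, the toy below sketches a finite-horizon stochastic dynamic program in the spirit of departure metering: the state tracks aircraft queued at the runway and aircraft held in the metering area, and the action is how many to release each period. All costs, probabilities, and the function name `metering_dp` are invented for illustration and are not the dissertation's actual model.

```python
import numpy as np

def metering_dp(horizon=12, max_queue=5, max_hold=5,
                c_delay=2.0, c_taxi=1.0, p_depart=0.7, c_terminal=20.0):
    """Toy finite-horizon stochastic DP for departure metering (illustrative).
    State (q, m): q aircraft queued at the runway, m held in the metering
    area. Action a: aircraft released from the metering area to the queue.
    Every aircraft not yet departed accrues a delay cost; queued aircraft
    additionally burn fuel (taxi cost). At most one departure per period,
    occurring with probability p_depart when the queue is nonempty."""
    # Terminal cost penalises every aircraft that has not departed.
    V = c_terminal * np.add.outer(np.arange(max_queue + 1),
                                  np.arange(max_hold + 1)).astype(float)
    policies = []
    for _ in range(horizon):                       # backward induction
        V_new = np.empty_like(V)
        pol = np.zeros(V.shape, dtype=int)
        for q in range(max_queue + 1):
            for m in range(max_hold + 1):
                best = np.inf
                for a in range(min(m, max_queue - q) + 1):
                    q2, m2 = q + a, m - a          # state after release
                    stage = c_delay * (q2 + m2) + c_taxi * q2
                    if q2 > 0:                     # random departure
                        ev = p_depart * V[q2 - 1, m2] + (1 - p_depart) * V[q2, m2]
                    else:
                        ev = V[0, m2]
                    if stage + ev < best:
                        best, pol[q, m] = stage + ev, a
                V_new[q, m] = best
        V = V_new
        policies.insert(0, pol)                    # policies[t] for period t
    return V, policies

V, policies = metering_dp()
```

The optimal policy in this toy exhibits the metering logic: release aircraft when the runway queue would otherwise empty, and hold them at the cheaper metering area when the queue is already fed.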
Overall, as one of the few studies of stochasticity in arrival and departure operations, I derive both tactical and strategic policies that improve efficiency and sustainability for airlines and society, which can enhance service quality and strengthen the market position of the airlines involved.