    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. It contains the scientific program, both in overview form and in full detail, as well as information on the social program, the venue, special meetings, and more.

    On Polynomial Code Generation

    In static analysis, one often has to deal with polynomials in the program control variables, either native to the source code or created by enabling analyses. We have explained elsewhere how to compute dependences in such situations and use them for building polynomial schedules. It remains to explain how to generate polynomial code. The present proposal is to target new parallel programming languages of the async/finish family, like X10 or Habanero, which are "polynomial friendly" and for which efficient compilers exist. Both these languages have barrier-like constructs (clocks for X10 and phasers for Habanero) which may be used to synchronize activities. To understand the behaviour of a clocked program, one has to count the number of clock advance operations since the creation of each activity. Advances with equal counts are synchronized, and these counts may be polynomials. The trick is therefore to ensure that, before executing an operation, its activity has executed as many advances as the current value of its schedule. This can be obtained by inserting auxiliary loops that execute the necessary advances. This scheme fails if the schedule is not monotonically increasing with respect to the execution order in each activity. This problem may be solved by reordering the activities (which is possible since the real execution order is given by the schedule) or, in extreme cases, by index set splitting.
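
    As a rough sketch of this scheme (illustrative only, not code from the paper): the Python fragment below uses threading.Barrier as a stand-in for an X10 clock or a Habanero phaser, and assumes a hypothetical polynomial schedule s(i) = i*i shared by all activities. Before each operation, an activity runs an auxiliary loop of advances until its advance count reaches the schedule value, so operations with equal schedule values execute in the same phase.

```python
import threading

N = 4                # iterations per activity (illustrative)
NUM_ACTIVITIES = 2   # all activities share the same schedule here

# Barrier as a stand-in for a clock/phaser: one wait() per clock advance.
clock = threading.Barrier(NUM_ACTIVITIES)

def activity(ident):
    advances = 0                  # clock advances since creation
    for i in range(N):
        s = i * i                 # hypothetical polynomial schedule
        # Auxiliary loop: catch up to the schedule before the operation,
        # so operations with equal schedule values are synchronized.
        while advances < s:
            clock.wait()
            advances += 1
        print(f"activity {ident}: operation {i} after {advances} advances")

threads = [threading.Thread(target=activity, args=(a,))
           for a in range(NUM_ACTIVITIES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

    Note that a Barrier, unlike a phaser, offers no deregistration, so this sketch only runs correctly because every activity executes the same total number of advances; the paper's scheme does not carry this restriction.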

    Acceleration Methods

    This monograph covers some recent advances in a range of acceleration techniques frequently used in convex optimization. We first use quadratic optimization problems to introduce two key families of methods, namely momentum and nested optimization schemes; in the quadratic case they coincide, forming the Chebyshev method. We discuss momentum methods in detail, starting with the seminal work of Nesterov, and structure convergence proofs using a few master templates, such as that for optimized gradient methods; these templates provide the key benefit of showing how momentum methods optimize convergence guarantees. We further cover proximal acceleration, at the heart of the Catalyst and Accelerated Hybrid Proximal Extragradient frameworks, using similar algorithmic patterns. Common acceleration techniques rely directly on knowledge of some of the regularity parameters of the problem at hand. We conclude by discussing restart schemes, a set of simple techniques for reaching nearly optimal convergence rates while adapting to unobserved regularity parameters. Published in Foundations and Trends in Optimization (see https://www.nowpublishers.com/article/Details/OPT-036).
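
    As a small illustration of the momentum family discussed here (an assumption-laden sketch, not code from the monograph): the Python fragment below compares plain gradient descent with Nesterov's constant-momentum scheme on a random strongly convex quadratic f(x) = 0.5 x^T A x - b^T x, using the classical coefficient beta = (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)); the problem data and iteration counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
Q = rng.standard_normal((n, n))
A = Q.T @ Q + np.eye(n)          # symmetric positive definite Hessian
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)   # exact minimizer, for reference

eigs = np.linalg.eigvalsh(A)
L, mu = eigs[-1], eigs[0]        # smoothness / strong convexity constants

def grad(x):
    return A @ x - b             # gradient of f(x) = 0.5*x'Ax - b'x

# Plain gradient descent with step size 1/L.
x = np.zeros(n)
for _ in range(100):
    x = x - grad(x) / L

# Nesterov's method with constant momentum for strongly convex problems.
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
x_prev = np.zeros(n)
y = np.zeros(n)
for _ in range(100):
    x_next = y - grad(y) / L     # gradient step at the extrapolated point
    y = x_next + beta * (x_next - x_prev)
    x_prev = x_next

print("gradient descent error:", np.linalg.norm(x - x_star))
print("Nesterov error:        ", np.linalg.norm(x_prev - x_star))
```

    On such a quadratic, the momentum iteration typically drives the error down much faster than plain gradient descent, reflecting the sqrt(kappa)-versus-kappa dependence on the condition number.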