
    Slope And Geometry In Variational Mathematics

    Structure permeates both theory and practice in modern optimization. To make progress, optimizers often presuppose a particular algebraic description of the problem at hand, namely whether the functional components are affine, polynomial, smooth, sparse, etc., along with a qualification (transversality) condition guaranteeing that the components do not interact wildly. This thesis also deals with structure, but in an intrinsic and geometric sense, independent of functional representation. On one hand, we emphasize the slope - the fastest instantaneous rate of decrease of a function - as an elegant and powerful tool for studying nonsmooth phenomena. The slope yields a verifiable condition for the existence of exact error bounds - a Lipschitz-like dependence of a function's sublevel sets on its values. This relationship, in particular, is key for the convergence analysis of the method of alternating projections and for the existence theory of steepest descent curves (appropriately defined in the absence of differentiability). On the other hand, the slope and the derived concept of the subdifferential may be of limited use in general because of various pathologies: for example, the subdifferential graph may be large (full-dimensional in the ambient space), or the critical value set may be dense in the image space. Such pathologies, however, rarely appear in practice. Semi-algebraic functions - those whose graphs are composed of finitely many sets, each defined by finitely many polynomial inequalities - nicely represent concrete functions arising in optimization and are devoid of such pathologies. To illustrate, we show that semi-algebraic subdifferential graphs are, in a precise mathematical sense, small. Moreover, using the slope in tandem with semi-algebraic techniques, we significantly strengthen the convergence theory of the method of alternating projections and prove new regularity properties of steepest descent curves in the semi-algebraic setting. For instance, under reasonable conditions, bounded steepest descent curves of semi-algebraic functions have finite length and converge to local minimizers - properties that decisively fail in the absence of semi-algebraicity. We conclude the thesis with a fresh look at active sets in optimization from the perspective of representation independence. The underlying idea is simple: around a solution of an optimization problem, an "identifiable" subset of the feasible region is one containing all nearby solutions after small perturbations to the problem. A quest for only the most essential ingredients of sensitivity analysis leads us to consider identifiable sets that are "minimal". In the context of standard nonlinear programming, this concept reduces to the active-set philosophy; identifiability, however, is much broader, being independent of the functional representation of the problem. This new notion lays a broad and intuitive variational-analytic foundation for optimality conditions, sensitivity, and active-set methods. In the last chapter of the thesis, we illustrate the robustness of the concept in the context of eigenvalue optimization.
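    For readers meeting the term for the first time, the slope used here is the standard strong slope of variational analysis (the notation below is ours, not necessarily the thesis's):

```latex
% Strong slope of f at \bar{x}, assuming f(\bar{x}) is finite:
\[
  |\nabla f|(\bar{x})
    \;=\; \limsup_{x \to \bar{x},\; x \neq \bar{x}}
          \frac{\bigl(f(\bar{x}) - f(x)\bigr)^{+}}{\lVert x - \bar{x}\rVert},
  \qquad t^{+} := \max\{t,\, 0\}.
\]
```

    The method of alternating projections mentioned above admits a very short generic implementation. The sketch below assumes the caller supplies the two projection maps proj_A and proj_B; it illustrates the iteration being analyzed and is not code from the thesis:

```python
import numpy as np

def alternating_projections(x0, proj_A, proj_B, tol=1e-10, max_iter=10_000):
    """Alternately project onto sets A and B to locate a point near their intersection."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_A(x)                           # project the current iterate onto A
        x_new = proj_B(y)                       # then project the result onto B
        if np.linalg.norm(x_new - x) < tol:     # iterates have stabilized
            return x_new
        x = x_new
    return x

# Example: the line A = {x : x[1] = 0} and the unit ball B.
proj_A = lambda x: np.array([x[0], 0.0])
proj_B = lambda x: x / max(1.0, np.linalg.norm(x))
print(alternating_projections(np.array([3.0, 2.0]), proj_A, proj_B))
```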

    Inexact reduced gradient methods in smooth nonconvex optimization

    This paper proposes and develops new line search methods with inexact gradient information for finding stationary points of nonconvex continuously differentiable functions on finite-dimensional spaces. Some abstract convergence results for a broad class of line search methods are reviewed and extended. A general scheme for inexact reduced gradient (IRG) methods with different stepsize selections is proposed to construct sequences of iterates with stationary accumulation points. Convergence results with convergence rates for the developed IRG methods are established under the Kurdyka-Łojasiewicz property. Numerical experiments confirm the efficiency of the proposed algorithms.
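    As a rough illustration of the setting (a generic sketch, not the paper's IRG scheme), the loop below runs a backtracking line search along an inexactly computed negative gradient; grad_inexact is a hypothetical oracle assumed to return the true gradient up to a small bounded error:

```python
import numpy as np

def inexact_linesearch_descent(f, grad_inexact, x0, c=1e-4, rho=0.5,
                               t0=1.0, tol=1e-8, max_iter=10_000):
    """Backtracking (Armijo-type) descent driven by an inexact gradient oracle."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_inexact(x)                # approximate gradient at x
        if np.linalg.norm(g) < tol:        # approximate stationarity reached
            return x
        t = t0
        # Shrink the step until a sufficient-decrease test holds along -g.
        while f(x - t * g) > f(x) - c * t * (g @ g) and t > 1e-16:
            t *= rho
        x = x - t * g
    return x

# Example: minimize a quadratic with a slightly noisy gradient oracle.
f = lambda x: 0.5 * x @ x
grad_inexact = lambda x: x + 1e-9 * np.random.randn(x.size)
print(inexact_linesearch_descent(f, grad_inexact, np.ones(3)))
```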

    Strong Metric (Sub)regularity of KKT Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization

    This work concerns the local convergence theory of Newton and quasi-Newton methods for convex-composite optimization: minimize f(x) := h(c(x)), where h is an infinite-valued proper convex function and c is C^2-smooth. We focus on the case where h is infinite-valued, piecewise linear-quadratic, and convex. Such problems include nonlinear programming, minimax optimization, and the estimation of nonlinear dynamics with non-Gaussian noise, as well as many modern approaches to large-scale data analysis and machine learning. Our approach embeds the optimality conditions for convex-composite optimization problems into a generalized equation. We establish conditions for strong metric subregularity and strong metric regularity of the corresponding set-valued mappings. This allows us to extend the classical convergence theory of Newton and quasi-Newton methods to the broader class of non-finite-valued piecewise linear-quadratic convex-composite optimization problems. In particular, we establish local quadratic convergence of the Newton method under conditions that parallel those in nonlinear programming when h is non-finite-valued and piecewise linear.
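    One standard way to carry out such an embedding (our notation; the paper's precise formulation may differ) introduces a multiplier y for the inner map and uses the conjugate h^*, via the equivalence between y being a subgradient of h at u and u being a subgradient of h^* at y, to write the first-order conditions as a generalized equation:

```latex
% First-order optimality for min h(c(x)), with multiplier y:
\[
  0 = \nabla c(x)^{\mathsf T} y, \qquad y \in \partial h\bigl(c(x)\bigr),
\]
% which becomes a generalized equation in the pair (x, y):
\[
  0 \in
  \begin{pmatrix} \nabla c(x)^{\mathsf T} y \\ -\,c(x) \end{pmatrix}
  +
  \begin{pmatrix} 0 \\ \partial h^{*}(y) \end{pmatrix}.
\]
```

    Strong metric (sub)regularity is then asked of the set-valued map on the right-hand side at a solution pair, which is what drives the local quadratic convergence of Newton-type iterations.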

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both in overview and in full detail, together with information on the social program, the venue, special meetings, and more.

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas is presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not, and they can be used as an adequate alternative for estimating the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
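    For background, the sketch below builds Laurie's basic averaged Gaussian rule for the Legendre weight on [-1, 1] - the mean of the n-point Gauss rule and the (n+1)-point anti-Gauss rule - and uses it to estimate the error of the Gauss rule. It illustrates the averaging idea only; the optimal generalized averaged formulas studied in the paper use a more refined construction:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def legendre_recurrence(m):
    """Three-term recurrence coefficients alpha_k, beta_k for Legendre polynomials."""
    alpha = np.zeros(m)                          # alpha_0, ..., alpha_{m-1} (all zero)
    k = np.arange(1, m)
    beta = k**2 / (4.0 * k**2 - 1.0)             # beta_1, ..., beta_{m-1}
    return alpha, beta

def gauss_from_jacobi(alpha, beta, mu0=2.0):
    """Golub-Welsch: nodes and weights from the symmetric tridiagonal Jacobi matrix."""
    nodes, vecs = eigh_tridiagonal(alpha, np.sqrt(beta))
    weights = mu0 * vecs[0, :]**2                # mu0 = integral of the weight function
    return nodes, weights

def gauss_and_averaged(n):
    """n-point Gauss rule and the (2n+1)-point averaged rule (Gauss + anti-Gauss)/2."""
    alpha, beta = legendre_recurrence(n + 1)
    xg, wg = gauss_from_jacobi(alpha[:n], beta[:n - 1])
    beta_anti = beta.copy()
    beta_anti[-1] *= 2.0                         # Laurie's anti-Gauss rule: double beta_n
    xa, wa = gauss_from_jacobi(alpha, beta_anti)
    x_avg = np.concatenate([xg, xa])
    w_avg = np.concatenate([wg, wa]) / 2.0
    return (xg, wg), (x_avg, w_avg)

# Error estimate for the 5-point Gauss rule applied to exp on [-1, 1].
(xg, wg), (xa, wa) = gauss_and_averaged(5)
f = np.exp
approx, check = wg @ f(xg), wa @ f(xa)
print(approx, abs(check - approx))               # quadrature value and its error estimate
```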

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics that continues to attract considerable attention. This seminar brings together speakers with expertise in a wide variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.