166 research outputs found

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    Get PDF
    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both in survey form and in full detail, along with information on the social program, the venue, special meetings, and more.

    Exterior-point Optimization for Nonconvex Learning

    Full text link
    In this paper, we present the nonconvex exterior-point optimization solver (NExOS), a novel first-order algorithm tailored to constrained nonconvex learning problems. We consider the problem of minimizing a convex function over nonconvex constraints, where the projection onto the constraint set is single-valued around local minima. A wide range of nonconvex learning problems have this structure, including (but not limited to) sparse and low-rank optimization problems. By exploiting the underlying geometry of the constraint set, NExOS finds a locally optimal point by solving a sequence of penalized problems with strictly decreasing penalty parameters. NExOS solves each penalized problem by applying a first-order algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero. We implement NExOS in the open-source Julia package NExOS.jl, which has been extensively tested on many instances from a wide variety of learning problems. We demonstrate that our algorithm, despite being general-purpose, outperforms specialized methods on several well-known nonconvex learning problems involving sparse and low-rank optimization. For sparse regression problems, NExOS finds locally optimal solutions that dominate glmnet in terms of support recovery, while achieving a training loss that is smaller by an order of magnitude. For low-rank optimization with real-world data, NExOS recovers solutions with a 3-fold reduction in training loss and a proportion of explained variance that is 2 times better than that of the nuclear-norm heuristic. Comment: 40 pages, 6 figures.
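
    The abstract describes a penalty-continuation scheme: each penalized problem trades off the objective against the squared distance to the nonconvex constraint set, and the penalty parameter shrinks toward zero. Below is a minimal sketch of that general idea, not the authors' exact NExOS iteration; proj_sparse, the step size, and the continuation schedule are illustrative choices, and the closed-form prox of the distance-squared penalty used here is exact for convex sets and only locally valid for nonconvex sets where the projection is single-valued.

        import numpy as np

        def proj_sparse(x, k):
            # Euclidean projection onto {x : ||x||_0 <= k}: keep the k
            # largest-magnitude entries (single-valued away from ties).
            z = np.zeros_like(x)
            idx = np.argsort(np.abs(x))[-k:]
            z[idx] = x[idx]
            return z

        def penalty_continuation(grad_f, x0, k, mu=1.0, mu_min=1e-6,
                                 rho=0.5, step=1e-2, inner_iters=500):
            # Solve a sequence of penalized problems
            #   minimize f(x) + (1/(2*mu)) * dist^2(x, {x : ||x||_0 <= k})
            # with the penalty parameter mu strictly decreasing toward zero.
            x = x0.copy()
            while mu > mu_min:
                for _ in range(inner_iters):
                    v = x - step * grad_f(x)                    # gradient step on f
                    w = step / mu
                    # prox of (1/(2*mu)) * dist^2: pull v toward its projection
                    x = (v + w * proj_sparse(v, k)) / (1.0 + w)
                mu *= rho                                       # tighten the penalty
            return proj_sparse(x, k)

    For a sparse regression instance, grad_f could be lambda x: A.T @ (A @ x - b) for given data A and b.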

    A proximal method for composite minimization

    Get PDF
    Abstract. We consider minimization of functions that are compositions of prox-regular functions with smooth vector functions. A wide variety of important optimization problems can be formulated in this way. We describe a subproblem constructed from a linearized approximation to the objective and a regularization term, investigating the properties of local solutions of this subproblem and showing that they eventually identify a manifold containing the solution of the original problem. We propose an algorithmic framework based on this subproblem and prove a global convergence result.
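
    To make the subproblem concrete, take the composite objective f(x) = h(c(x)) with the convex prox-regular choice h = || . ||_1 and a smooth vector function c; the subproblem then minimizes the linearization of the composition plus a quadratic regularization term. A minimal sketch, assuming hypothetical inputs c_val = c(x), the Jacobian J of c at x, and a regularization weight mu, and using cvxpy as an off-the-shelf subproblem solver rather than anything specific to the paper:

        import cvxpy as cp
        import numpy as np

        def prox_linear_subproblem(c_val, J, mu):
            # minimize ||c(x) + J d||_1 + (mu/2) ||d||^2 over the step d,
            # i.e., h applied to the linearized c, plus regularization.
            n = J.shape[1]
            d = cp.Variable(n)
            objective = cp.norm1(c_val + J @ d) + (mu / 2) * cp.sum_squares(d)
            cp.Problem(cp.Minimize(objective)).solve()
            return d.value

    Iterating x <- x + d, with c_val and J refreshed at each new x, gives the basic prox-linear scheme whose subproblem solutions, per the abstract, eventually identify the relevant manifold.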

    Optimization Methods and Algorithms for Classes of Black-Box and Grey-Box Problems

    Get PDF
    There are many optimization problems in physics, chemistry, finance, computer science, engineering, and operations research for which analytical expressions of the objective and/or the constraints are unavailable. These are black-box problems, where derivative information is often unavailable or too expensive to approximate numerically. When derivative information is absent, it becomes challenging to optimize and to guarantee optimality of the solution. The objective of this Ph.D. work is to propose methods and algorithms that address some of the challenges of black-box optimization (BBO). A top-down approach is taken: an easier class of black-box problems is addressed first, and the difficulty and complexity of the problems are then gradually increased.

    In the first part of the dissertation, a class of grey-box problems is considered for which the closed forms of the objective and/or constraints are unknown, but it is possible to obtain a global upper bound on the diagonal Hessian elements. This allows the construction of an edge-concave underestimator with a vertex polyhedral solution. This lower-bounding technique is implemented within a branch-and-bound framework with guaranteed convergence to global optimality. The technique is applied to the optimization of problems with an embedded system of ordinary differential equations (ODEs). Time-dependent bounds on the state variables and the diagonal elements of the Hessian are computed by solving an auxiliary set of ODEs derived using differential inequalities; a sketch of the resulting lower bound follows this abstract.

    In the second part of the dissertation, general box-constrained black-box problems are addressed for which only simulations can be performed. A novel optimization method, UNIPOPT (Univariate Projection-based Optimization), based on projection onto a univariate space, is proposed. A special function is identified in this space that also contains the global minima of the original function. Computational experiments suggest that UNIPOPT often has better space-exploration features than other approaches.

    The third part of the dissertation addresses general black-box problems with constraints of both known and unknown algebraic forms. An efficient two-phase algorithm based on a trust-region framework is proposed, targeting in particular problems with high function-evaluation cost. The performance of the approach is illustrated through computational experiments that evaluate its ability to reduce a merit function and find the optima.
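
    The edge-concave underestimator in the first part admits a compact illustration. If d_i >= 0 is a global upper bound on the i-th diagonal Hessian element of f over a box, then phi(x) = f(x) - (1/2) * sum_i d_i x_i^2 is concave along each coordinate, so its minimum over the box is attained at a vertex; adding back the separable minimum of the subtracted quadratic gives a valid lower bound for branch-and-bound. A minimal sketch under these assumptions (function names and interfaces are illustrative, not from the dissertation):

        import itertools
        import numpy as np

        def edge_concave_lower_bound(f, d, lo, hi):
            # Lower bound on min of f over the box [lo, hi], given
            # d[i] >= max of the i-th diagonal Hessian element (d[i] >= 0).
            # phi = f - 0.5*sum(d_i x_i^2) is concave in each coordinate,
            # so its box minimum is attained at one of the 2^n vertices.
            phi_min = min(
                f(np.array(v)) - 0.5 * np.dot(d, np.square(v))
                for v in itertools.product(*zip(lo, hi))
            )
            # The separable quadratic 0.5*d_i*x_i^2 is minimized coordinate-wise:
            # at 0 if the interval contains it, else at the nearer endpoint.
            q_min = sum(
                0.5 * di * min(li * li, ui * ui) if li * ui > 0 else 0.0
                for di, li, ui in zip(d, lo, hi)
            )
            return phi_min + q_min

    A branch-and-bound loop would bisect the box, recompute this bound on each child, and prune any child whose bound exceeds the incumbent objective value.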