    Primal-dual interior point multigrid method for topology optimization

    An interior point method for structural topology optimization is proposed. The linear systems arising in the method are solved by the conjugate gradient method preconditioned by geometric multigrid. The resulting method is then compared with the so-called optimality condition method, an established technique in topology optimization, which is likewise equipped with the multigrid-preconditioned conjugate gradient algorithm. We conclude that, for large-scale problems, the interior point method with an inexact iterative linear solver is superior to every other variant studied in the paper.
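    As a rough illustration of the linear-algebra kernel described above, the sketch below runs conjugate gradients preconditioned by a single geometric two-grid V-cycle on a 1D Poisson system. The model problem, grid-transfer operators, smoother, and all function names are illustrative stand-ins, not the paper's topology-optimization systems.

```python
# A minimal sketch (assumed setup, not the paper's implementation): CG
# preconditioned by one geometric two-grid V-cycle, applied to a 1D Poisson
# system standing in for the SPD systems arising in the interior point method.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_1d(n):
    """Sparse SPD 1D Laplacian on n interior points."""
    return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

def two_grid_preconditioner(A, n):
    """One V-cycle: weighted-Jacobi smoothing plus an exact coarse-grid solve."""
    nc = (n - 1) // 2
    P = sp.lil_matrix((n, nc))            # linear-interpolation prolongation
    for j in range(nc):
        i = 2 * j + 1                     # fine-grid index of coarse node j
        P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
    P = P.tocsr()
    R = 0.5 * P.T                         # restriction = scaled transpose
    coarse_solve = spla.factorized((R @ A @ P).tocsc())
    d_inv, omega = 1.0 / A.diagonal(), 2.0 / 3.0

    def vcycle(r):
        x = omega * d_inv * r                      # pre-smoothing (one Jacobi sweep)
        x = x + P @ coarse_solve(R @ (r - A @ x))  # coarse-grid correction
        return x + omega * d_inv * (r - A @ x)     # post-smoothing

    return spla.LinearOperator(A.shape, matvec=vcycle, dtype=A.dtype)

n = 255
A, b = poisson_1d(n), np.ones(n)
x, info = spla.cg(A, b, M=two_grid_preconditioner(A, n))
print("converged:", info == 0, " residual norm:", np.linalg.norm(b - A @ x))
```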

    Multigrid methods in convex optimization with application to structural design

    This dissertation has investigated the use of multigrid methods in certain classes of optimization problems, with emphasis on structural optimization, namely topology optimization. We have investigated the solution of bound-constrained optimization problems arising from discretization by the finite element method, such as elliptic variational inequalities. For these problems we have proposed a "direct" multigrid approach which is a generalization of existing multigrid methods for variational inequalities. We have proposed a nonlinear first-order method as a smoother; it reduces memory requirements and improves the efficiency of the resulting algorithm compared to a second-order (Newton-type) smoother, as documented on several numerical examples. The project further investigates the use of multigrid techniques in topology optimization. Topology optimization is a very practical and efficient tool for the design of lightweight structures and has many applications, among others in the automotive and aircraft industries. The project studies the employment of multigrid methods in the solution of very large linear systems with sparse symmetric positive definite matrices arising in interior point methods, where, traditionally, direct techniques are used. The proposed multigrid approach proves to be more efficient than the variant with direct solvers; in particular, it exhibits linear dependence of the computational effort on the problem size.
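    The abstract does not spell out the smoother, so the sketch below only illustrates the general idea of a first-order smoother for a bound-constrained quadratic: a few projected gradient sweeps on an obstacle-type problem min 0.5 x'Ax - b'x subject to x >= 0. The problem data, step length, and sweep count are illustrative assumptions, not the dissertation's actual algorithm or multigrid hierarchy.

```python
# A hedged sketch of a first-order smoother for a bound-constrained quadratic
# (obstacle-type) problem; not the dissertation's smoother or hierarchy.
import numpy as np
import scipy.sparse as sp

def projected_gradient_sweeps(A, b, lb, x, sweeps, step):
    """Repeat x <- max(x - step * (A x - b), lb) for a few sweeps."""
    for _ in range(sweeps):
        x = np.maximum(x - step * (A @ x - b), lb)
    return x

n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.where(np.arange(n) < n // 2, 0.05, -0.05)  # load: up on one half, down on the other
lb = np.zeros(n)                                  # obstacle constraint x >= 0
step = 0.25                                       # safe: eigenvalues of this A lie in (0, 4)
x = projected_gradient_sweeps(A, b, lb, np.zeros(n), sweeps=500, step=step)
print("constraint active at", int(np.sum(x == 0.0)), "of", n, "nodes")
```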

    Efficient Trust Region Methods for Nonconvex Optimization

    For decades, a great deal of nonlinear optimization research has focused on modeling and solving convex problems. This has been due to the fact that convex models often represent satisfactory approximations of real-world phenomena, and convex objects have very nice mathematical properties that make their analysis relatively straightforward. However, this focus has been changing. In various important applications, such as large-scale data fitting and learning problems, researchers are starting to turn away from simple, convex models toward more challenging nonconvex models that better represent real-world behaviors and can offer more useful solutions.

    To contribute to this new focus on nonconvex optimization models, we discuss and present new techniques for solving nonconvex optimization problems that possess attractive theoretical and practical properties. First, we propose a trust region algorithm that, in the worst case, is able to drive the norm of the gradient of the objective function below a prescribed threshold $\epsilon \in (0,\infty)$ after at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for some other trust region algorithms and matches the $\mathcal{O}(\epsilon^{-3/2})$ bound for the recently proposed Adaptive Regularisation framework using Cubics, also known as the ARC algorithm. Our algorithm, entitled TRACE, follows a trust region framework, but employs modified step acceptance criteria and a novel trust region update mechanism that allow the algorithm to achieve such a worst-case global complexity bound. Importantly, we prove that our algorithm also attains global and fast local convergence guarantees under assumptions similar to those made for other trust region algorithms. We also prove a worst-case upper bound on the number of iterations the algorithm requires to obtain an approximate second-order stationary point.

    The aforementioned algorithm is based on techniques that require an exact subproblem solution in every iteration. This is a reasonable assumption for small- to medium-scale problems, but is intractable for large-scale optimization. To address this issue, the second project of this thesis proposes a general inexact framework, which contains a wide range of algorithms with optimal complexity bounds, by defining a novel primal-dual subproblem and a set of loose conditions for an inexact solution of it. The proposed framework enjoys the same worst-case iteration complexity bounds for locating approximate first- and second-order stationary points as TRACE, yet it does not require one to solve subproblems exactly. In addition, the framework allows one to use inexact Newton steps whenever possible, a feature which allows the algorithm to use Hessian matrix-free approaches such as the conjugate gradient method. This improves the practical performance of the algorithm, as our numerical experiments show.

    We close by proposing a globally convergent trust funnel algorithm for equality constrained optimization. Under some standard assumptions, the proposed algorithm is able to find an approximate first-order stationary point after at most $\mathcal{O}(\epsilon^{-3/2})$ iterations. This matches the complexity bound of the recently proposed Short-Step ARC algorithm. Our proposed algorithm uses the step decomposition and feasibility control mechanism of a trust funnel algorithm, but incorporates ideas from our TRACE framework in order to achieve good complexity bounds.
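    To make the trust-region machinery concrete, the sketch below implements a plain textbook trust-region loop (Cauchy-point steps, standard acceptance ratio, simple radius update) on the Rosenbrock function. It is only meant to fix notation; TRACE's modified acceptance criteria and radius update, which are what yield the $\mathcal{O}(\epsilon^{-3/2})$ bound, are not reproduced here, and all parameter values are illustrative.

```python
# A hedged sketch of a textbook trust-region loop, not the TRACE algorithm.
import numpy as np

def f(x):  # Rosenbrock function as a stand-in nonconvex objective
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0] ** 2)])

def hess(x):
    return np.array([[1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                     [-400.0 * x[0], 200.0]])

def cauchy_point(g, H, delta):
    """Minimizer of the quadratic model along -g inside the trust region."""
    gHg = g @ H @ g
    t = delta / np.linalg.norm(g)
    if gHg > 0:
        t = min(t, (g @ g) / gHg)
    return -t * g

x, delta = np.array([-1.2, 1.0]), 1.0
for k in range(200):
    g, H = grad(x), hess(x)
    if np.linalg.norm(g) < 1e-8:
        break
    s = cauchy_point(g, H, delta)
    pred = -(g @ s + 0.5 * s @ H @ s)            # predicted model decrease (> 0)
    rho = (f(x) - f(x + s)) / pred               # actual-to-predicted reduction ratio
    if rho >= 0.1:                               # accept the step
        x = x + s
    delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)

print("iterations:", k, " x =", x, " ||grad|| =", np.linalg.norm(grad(x)))
```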

    Singular Value Computation and Subspace Clustering

    In this dissertation we discuss two problems. In the first part, we consider the problem of computing a few extreme eigenvalues of a symmetric definite generalized eigenvalue problem, or a few extreme singular values of a large and sparse matrix. The standard methods of choice for computing a few extreme eigenvalues of a large symmetric matrix are the Lanczos method and the implicitly restarted Lanczos method. These methods usually employ a shift-and-invert transformation to accelerate convergence, which is not practical for truly large problems. With this in mind, Golub and Ye proposed an inverse-free preconditioned Krylov subspace method, which uses preconditioning instead of shift-and-invert to accelerate convergence. To compute several eigenvalues, Wielandt deflation is used in a straightforward manner. However, Wielandt deflation alters the structure of the problem and may cause difficulties in certain applications, such as singular value computation. We therefore first propose a deflation-by-restriction scheme for the inverse-free Krylov subspace method, and we generalize the original convergence theory for the inverse-free preconditioned Krylov subspace method to justify this deflation scheme. We next extend the inverse-free Krylov subspace method with deflation by restriction to the singular value problem, and we consider preconditioning based on robust incomplete factorization to accelerate convergence. Numerical examples are provided to demonstrate the efficiency and robustness of the new algorithm. In the second part of this thesis, we consider the so-called subspace clustering problem, which aims to extract a multi-subspace structure from a collection of points lying in a high-dimensional space. Recently, methods based on the self-expressiveness property (SEP), such as Sparse Subspace Clustering and Low Rank Representation, have been shown to enjoy superior performance compared with other methods. However, methods based on SEP may produce representations that are not amenable to clustering through graph partitioning. We propose a method in which the points are expressed in terms of an orthonormal basis, chosen optimally in the sense that the representation of all points is sparsest. Numerical results are given to illustrate the effectiveness and efficiency of this method.
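    For context, the sketch below shows the standard Lanczos-based route that the first part contrasts with: computing a few extreme singular values of a large sparse matrix with SciPy's svds and checking them against eigenvalues of the Gram matrix. The inverse-free preconditioned method and the deflation-by-restriction scheme themselves are not implemented here; matrix sizes and density are arbitrary.

```python
# A small sketch of the standard Lanczos-type route for extreme singular values;
# the thesis' inverse-free preconditioned method is not reproduced here.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

C = sp.random(2000, 500, density=0.01, random_state=0, format="csr")

# Largest few singular values via an implicitly restarted Lanczos-type solver
sigma = spla.svds(C, k=3, which="LM", return_singular_vectors=False)

# The same values recovered as eigenvalues of the (smaller) Gram matrix C^T C
lam = spla.eigsh(C.T @ C, k=3, which="LM", return_eigenvectors=False)

print(np.sort(sigma))
print(np.sort(np.sqrt(lam)))
```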

    Rigorous optimization recipes for sparse and low rank inverse problems with applications in data sciences

    Many natural and man-made signals can be described as having a few degrees of freedom relative to their size due to natural parameterizations or constraints; examples include bandlimited signals, collections of signals observed from multiple viewpoints in a network of sensors, and per-flow traffic measurements of the Internet. Low-dimensional models (LDMs) mathematically capture the inherent structure of such signals via combinatorial and geometric data models, such as sparsity, unions of subspaces, low-rankness, manifolds, and mixtures of factor analyzers, and are emerging to revolutionize the way we treat inverse problems (e.g., signal recovery, parameter estimation, or structure learning) from dimensionality-reduced or incomplete data. Assuming our problem resides in an LDM space, in this thesis we investigate how to integrate such models into convex and non-convex optimization algorithms for significant gains in computational complexity. We mostly focus on two LDMs: (i) sparsity and (ii) low-rankness. We study trade-offs and their implications in order to develop efficient and provable optimization algorithms, and, more importantly, to exploit convex and combinatorial optimization in ways that enable cross-pollination of decades of research in both.
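    As a minimal illustration of the sparsity model mentioned above, the sketch below runs proximal gradient descent (ISTA) on a LASSO problem, recovering a synthetic sparse signal from compressed measurements. The sensing matrix, problem sizes, step 1/L, and regularization weight are all illustrative assumptions and are not taken from the thesis.

```python
# A hedged sketch: ISTA (proximal gradient) for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# a convex baseline exploiting the sparsity low-dimensional model.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 400, 150, 10                        # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

lam = 0.02                                    # l1 regularization weight
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(n)
for _ in range(1000):
    x = soft(x - A.T @ (A @ x - b) / L, lam / L)   # gradient step, then l1 prox

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```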

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program, both in survey style and in full detail, together with information on the social program, the venue, special meetings, and more.

    A General Framework of Large-Scale Convex Optimization Using Jensen Surrogates and Acceleration Techniques

    In a world where data rates are growing faster than computing power, algorithmic acceleration based on developments in mathematical optimization plays a crucial role in narrowing the gap between the two. As the scale of optimization problems in many fields grows, we need faster optimization methods that not only work well in theory but also work well in practice by exploiting underlying state-of-the-art computing technology. In this document, we introduce a unified framework for large-scale convex optimization using Jensen surrogates, an iterative optimization technique that has been used in different fields since the 1970s. After this general treatment, we present a non-asymptotic convergence analysis of this family of methods and the motivation behind developing accelerated variants. Moreover, we discuss widely used acceleration techniques for convex optimization and then investigate acceleration techniques that can be used within the Jensen surrogate framework, proposing several novel acceleration methods. Furthermore, we show that the proposed methods perform competitively with or better than state-of-the-art algorithms for several applications, including Sparse Linear Regression (Image Deblurring), Positron Emission Tomography, X-Ray Transmission Tomography, Logistic Regression, Sparse Logistic Regression, and Automatic Relevance Determination for X-Ray Transmission Tomography.
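    As one concrete instance of a Jensen surrogate (an illustrative example, not the framework developed in this thesis), the sketch below applies Jensen's inequality to 0.5*||Ax - b||^2 to obtain a separable quadratic majorizer; minimizing it amounts to a diagonally scaled gradient step with curvatures d_j = sum_i |a_ij| * sum_k |a_ik|, and each iteration decreases the objective monotonically.

```python
# An illustrative Jensen (majorize-minimize) surrogate for least squares,
# not the thesis' general framework: 0.5*||Ax - b||^2 is majorized at the
# current iterate by a separable quadratic with curvatures
# d_j = sum_i |a_ij| * sum_k |a_ik|; minimizing the surrogate gives a
# diagonally scaled gradient step that monotonically decreases the objective.
import numpy as np

rng = np.random.default_rng(0)
m, n = 300, 100
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)

d = np.abs(A).T @ (np.abs(A) @ np.ones(n))   # separable surrogate curvatures
x = np.zeros(n)
for _ in range(300):
    x = x - A.T @ (A @ x - b) / d            # minimize the Jensen surrogate

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```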