A Unified Bregman Alternating Minimization Algorithm for Generalized DC Programming with Application to Imaging Data
In this paper, we consider a class of nonconvex (not necessarily
differentiable) optimization problems called generalized DC
(Difference-of-Convex functions) programming, which is minimizing the sum of
two separable DC parts and one two-block-variable coupling function. To
circumvent the nonconvexity and nonseparability of the problem under
consideration, we accordingly introduce a Unified Bregman Alternating
Minimization Algorithm (UBAMA) by maximally exploiting the favorable DC
structure of the objective. Specifically, we first follow the spirit of
alternating minimization to update each block variable in a sequential order,
which can efficiently tackle the nonseparability caused by the coupling
function. Then, we employ the Fenchel-Young inequality to approximate the
second DC components (i.e., concave parts) so that each subproblem reduces to a
convex optimization problem, thereby alleviating the computational burden of
the nonconvex DC parts. Moreover, each subproblem absorbs a Bregman proximal
regularization term, which is usually beneficial for inducing closed-form
solutions of subproblems for many cases via choosing appropriate Bregman kernel
functions. Remarkably, our algorithm not only provides an algorithmic
framework for understanding the iterative schemes of several recently proposed
algorithms, but also admits implementable schemes with easier subproblems than
some state-of-the-art first-order algorithms developed for generic nonconvex
and nonsmooth optimization problems. Theoretically, we prove that the sequence
generated by our algorithm globally converges to a critical point under the
Kurdyka-{\L}ojasiewicz (K{\L}) condition. Besides, we estimate the local
convergence rates of our algorithm when we further know the prior information
of the K{\L} exponent. Comment: 44 pages, 7 figures, 5 tables. Any comments are welcome.
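The linearize-then-alternate pattern the abstract describes can be illustrated on a toy instance. The following sketch is an invented two-scalar example of the general scheme (linearize the concave DC parts via subgradients, then minimize each block's convex subproblem with a Euclidean Bregman proximal term); the problem data, parameters, and closed-form updates are assumptions for illustration, not the paper's algorithm or experiments.

```python
import numpy as np

# Toy generalized DC instance (invented for illustration):
#   min_{x,y}  f1(x) - f2(x) + g1(y) + H(x, y)
# with f1(x) = 0.5*x^2, f2(x) = |x| (both convex), g1(y) = 0.5*y^2,
# and bilinear coupling H(x, y) = c*x*y.
c, beta = 0.5, 1.0      # coupling strength and proximal weight (Euclidean kernel)
x, y = 1.0, 0.0         # initial iterate

for _ in range(200):
    # Fenchel-Young step: replace the concave part -f2 by its linearization
    # at the current iterate, via a subgradient u of f2(x) = |x|.
    u = np.sign(x)
    # x-block: closed-form minimizer of the convex quadratic subproblem
    #   0.5*x^2 - u*x + c*x*y + (beta/2)*(x - x_k)^2
    x = (u - c * y + beta * x) / (1.0 + beta)
    # y-block: closed-form minimizer of
    #   0.5*y^2 + c*x*y + (beta/2)*(y - y_k)^2
    y = (-c * x + beta * y) / (1.0 + beta)

# For x > 0 the stationarity conditions are x - 1 + c*y = 0 and y + c*x = 0,
# i.e. x* = 1/(1 - c^2) = 4/3 and y* = -c/(1 - c^2) = -2/3.
```

Because every subproblem is a strongly convex quadratic with a closed-form minimizer, each iteration costs only a few arithmetic operations; this is the computational benefit the abstract attributes to choosing suitable Bregman kernels.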
Computational Methods for Sparse Solution of Linear Inverse Problems
The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
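One of the greedy pursuit methods typically covered in such surveys is Orthogonal Matching Pursuit (OMP). The sketch below is a minimal illustration on synthetic data; the dictionary size, sparsity level, and coefficient values are invented for the example, not taken from the paper.

```python
import numpy as np

def omp(D, b, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D to approximate b."""
    residual, support = b.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # least-squares fit on the selected atoms, then refresh the residual
        coef, *_ = np.linalg.lstsq(D[:, support], b, rcond=None)
        residual = b - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((40, 80))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(80)
x_true[[5, 17, 42]] = [1.5, -2.0, 0.7]    # 3-sparse target coefficients
b = D @ x_true                             # noiseless measurements
x_hat = omp(D, b, k=3)
```

With a random Gaussian dictionary and a sufficiently sparse target, greedy selection typically recovers the true support exactly, which is the kind of circumstance-dependent guarantee the survey analyzes.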
Computing Large-Scale Matrix and Tensor Decomposition with Structured Factors: A Unified Nonconvex Optimization Perspective
This article aims to offer a comprehensive tutorial on the computational
aspects of structured matrix and tensor factorization. Unlike
existing tutorials that mainly focus on {\it algorithmic procedures} for a
small set of problems, e.g., nonnegativity or sparsity-constrained
factorization, we take a {\it top-down} approach: we start with general
optimization theory (e.g., inexact and accelerated block coordinate descent,
stochastic optimization, and Gauss-Newton methods) that covers a wide range of
factorization problems with diverse constraints and regularization terms of
engineering interest. Then, we go `under the hood' to showcase specific
algorithm design under these introduced principles. We pay particular
attention to recent algorithmic developments in structured tensor and matrix
factorization (e.g., random sketching and adaptive step size based stochastic
optimization and structure-exploiting second-order algorithms), which are the
state of the art---yet much less touched upon in the literature compared to
{\it block coordinate descent} (BCD)-based methods. We expect the article
to have educational value in the field of structured factorization and hope
to stimulate more research in this important and exciting direction. Comment: Final Version; to appear in IEEE Signal Processing Magazine; title
revised to comply with the journal's rules.
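The block-alternating idea behind BCD-style factorization can be illustrated with nonnegative matrix factorization (NMF), one of the constrained problems such tutorials cover. The sketch below uses the classic multiplicative-update rules as a simple alternating baseline (the kind of method the article contrasts with its more recent sketching and second-order techniques); the problem sizes and iteration count are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic nonnegative data with an exact rank-5 factorization
W0, H0 = rng.random((40, 5)), rng.random((5, 30))
X = W0 @ H0

# random nonnegative initialization of the two factor blocks
W, H = rng.random((40, 5)), rng.random((5, 30))
eps = 1e-12                                   # guard against division by zero
for _ in range(500):
    # block 1: update H with W fixed (multiplicative rule keeps H >= 0)
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    # block 2: update W with H fixed
    W *= (X @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Each block update is cheap and monotonically decreases the Frobenius fitting error, which is why alternating schemes of this flavor dominate the factorization literature; the article's point is that stochastic and second-order alternatives can scale better on large structured problems.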
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program in survey style as well as with all details, and information on the social program, the venue, special meetings, and more.