Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems
Optimization methods are at the core of many problems in signal/image
processing, computer vision, and machine learning. It has long been
recognized that looking at the dual of an optimization problem may drastically
simplify its solution. Deriving efficient strategies that jointly bring into
play the primal and the dual problems is, however, a more recent idea, which has
generated many important contributions in recent years. These novel
developments are grounded in recent advances in convex analysis, discrete
optimization, parallel processing, and non-smooth optimization, with an emphasis
on sparsity issues. In this paper, we present the principles of
primal-dual approaches while giving an overview of the numerical methods that
have been proposed in different contexts. We show the benefits that can be
drawn from primal-dual algorithms for solving both large-scale convex
optimization problems and discrete ones, and we provide various application
examples to illustrate their usefulness.
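As a concrete illustration of the primal-dual idea surveyed above, the sketch below implements a Chambolle-Pock-style primal-dual iteration for 1-D total-variation denoising, min_x 0.5*||x - c||^2 + lam*||Dx||_1 with D the forward-difference operator. This is a generic textbook instance, not any specific algorithm from the paper; all function names and step sizes are illustrative.

```python
import numpy as np

def grad_op(x):
    """Forward differences D x (length n-1)."""
    return x[1:] - x[:-1]

def grad_op_T(y):
    """Adjoint D^T y (length n)."""
    return np.concatenate(([-y[0]], y[:-1] - y[1:], [y[-1]]))

def tv_denoise_pdhg(c, lam=1.0, n_iter=2000, tau=0.5, sigma=0.5):
    """Primal-dual hybrid gradient for 1-D TV denoising (illustrative sketch)."""
    # Step-size condition tau*sigma*||D||^2 <= 1 holds since ||D||^2 <= 4.
    x = c.copy()
    x_bar = c.copy()
    y = np.zeros(len(c) - 1)
    for _ in range(n_iter):
        # Dual ascent + projection onto the l-infinity ball of radius lam
        # (the prox of the conjugate of lam*||.||_1).
        y = np.clip(y + sigma * grad_op(x_bar), -lam, lam)
        # Primal descent: prox of 0.5*||. - c||^2 is a weighted average.
        x_new = (x - tau * grad_op_T(y) + tau * c) / (1.0 + tau)
        # Over-relaxed ("bar") variable characteristic of primal-dual schemes.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

def tv_objective(x, c, lam):
    return 0.5 * np.sum((x - c) ** 2) + lam * np.sum(np.abs(grad_op(x)))

# Small demo: recover a piecewise-constant step from noisy data.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)])
noisy = signal + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_pdhg(noisy, lam=0.5)
```

The dual variable y lives on the edges of the signal, and both variables are updated in lockstep, which is exactly the "jointly bring into play the primal and the dual" structure the abstract refers to.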
A primal-dual flow for affine constrained convex optimization
We introduce a novel primal-dual flow for affine constrained convex
optimization problems. As a modification of the standard saddle-point system,
our primal-dual flow is proved to possess an exponential decay property in
terms of a tailored Lyapunov function. A class of primal-dual methods for
the original optimization problem is then obtained from numerical discretizations
of the continuous flow, and nonergodic convergence rates are established via a
unified discrete Lyapunov function. Among these algorithms, we can
recover the (linearized) augmented Lagrangian method and the quadratic penalty
method with a continuation technique. New methods are also proposed whose inner
problem, a linear symmetric positive definite system or a nonlinear
equation, can be solved efficiently via the semi-smooth Newton method.
In particular, numerical tests on linearly constrained ℓ1 minimization show
that our method outperforms the accelerated linearized Bregman method.
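To make the "standard saddle-point system" mentioned above concrete, the sketch below integrates it with forward Euler for min_x 0.5*||x||^2 subject to Ax = b. This is only the baseline flow that the paper modifies (the modified flow and its Lyapunov analysis are not reproduced here); the step size and problem data are illustrative.

```python
import numpy as np

def saddle_point_flow(A, b, h=0.05, n_steps=4000):
    """Forward-Euler discretization of the standard saddle-point flow
       x' = -grad f(x) - A^T lam,   lam' = A x - b,
    for f(x) = 0.5*||x||^2 (illustrative baseline, not the paper's flow)."""
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)
    for _ in range(n_steps):
        x_dot = -x - A.T @ lam   # descent in the primal variable
        lam_dot = A @ x - b      # ascent in the multiplier
        x = x + h * x_dot
        lam = lam + h * lam_dot
    return x, lam

# Minimum-norm solution of x1 + x2 = 1 is (0.5, 0.5).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = saddle_point_flow(A, b)
```

Because f is strongly convex here, the continuous flow converges to the unique saddle point; each discretization of such a flow yields a concrete primal-dual iteration, which is the construction pattern the abstract describes.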
Fixed-Time Gradient Flows for Solving Constrained Optimization: A Unified Approach
Accelerated methods for solving optimization problems have long been an
active topic. Based on the fixed-time (FxT) stability of nonlinear dynamical
systems, we provide a unified approach for designing FxT gradient flows
(FxTGFs). First, a general class of nonlinear functions for designing FxTGFs is
provided. A unified method for designing first-order FxTGFs is shown under the
Polyak-Łojasiewicz inequality assumption, a condition weaker than strong
convexity. When both bounded and vanishing disturbances are present in the
gradient flow, a specific class of nonsmooth robust FxTGFs with disturbance
rejection is presented. Under the strict convexity assumption, Newton-based
FxTGFs are given and further extended to solve time-varying optimization.
Besides, the proposed FxTGFs are further used for solving equation-constrained
optimization. Moreover, an FxT proximal gradient flow with a wide range of
parameters is provided for solving nonsmooth composite optimization. To show
the effectiveness of the various FxTGFs, static regret analyses for several
typical FxTGFs are also provided in detail. Finally, the proposed FxTGFs are
applied to two network problems, namely the network consensus problem and
solving a system of linear equations, from the perspective of
optimization. In particular, by choosing componentwise sign-preserving
functions, these problems can be solved in a distributed way, which extends
existing results. The accelerated convergence and robustness of the proposed
FxTGFs are validated in several numerical examples stemming from practical
applications.
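A commonly used fixed-time gradient flow combines two power-rescaled gradient terms, x' = -grad_f * (||grad_f||^(alpha-1) + ||grad_f||^(beta-1)) with alpha in (0, 1) and beta > 1. The sketch below integrates such a flow with forward Euler; the gains, exponents, and stopping rule are illustrative assumptions, not the paper's exact design. The quadratic objective f(x) = 0.5*||x - a||^2 satisfies the Polyak-Łojasiewicz inequality, the key assumption above.

```python
import numpy as np

def fxt_gradient_flow(grad, x0, alpha=0.5, beta=1.5, h=0.01,
                      n_steps=5000, tol=1e-3):
    """Forward-Euler integration of a fixed-time gradient flow
    (illustrative form; exponents and step size are assumptions)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        g = grad(x)
        norm = np.linalg.norm(g)
        if norm < tol:
            # Stop here: the alpha-term is non-Lipschitz at the optimum,
            # so a fixed-step Euler scheme would chatter below this scale.
            break
        # Two rescaled gradient terms: the beta-term dominates far from the
        # optimum, the alpha-term dominates near it, giving fixed-time decay.
        x = x - h * g * (norm ** (alpha - 1.0) + norm ** (beta - 1.0))
    return x

# Demo on a PL objective: f(x) = 0.5*||x - a||^2, grad f(x) = x - a.
a = np.array([2.0, -1.0])
x_final = fxt_gradient_flow(lambda x: x - a, x0=np.array([10.0, 10.0]))
```

Unlike the plain flow x' = -grad_f, whose convergence time grows with the distance of x0 from the optimum, the two rescaled terms bound the settling time uniformly over all initial points, which is what "fixed-time" refers to.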
- …