Lagrange optimality system for a class of nonsmooth convex optimization
In this paper, we revisit the augmented Lagrangian method for a class of
nonsmooth convex optimization problems. We present the Lagrange optimality
system of the
augmented Lagrangian associated with the problems, and establish its
connections with the standard optimality condition and the saddle point
condition of the augmented Lagrangian, which provides a powerful tool for
developing numerical algorithms. We apply a linear Newton method to the
Lagrange optimality system to obtain a novel algorithm applicable to a variety
of nonsmooth convex optimization problems arising in practical applications.
Under suitable conditions, we prove the nonsingularity of the Newton system and
the local convergence of the algorithm.
Comment: 19 pages
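
For context, the kind of augmented Lagrangian scheme the paper builds on can be sketched in a few lines. The following is a minimal illustration (an ADMM-style method of multipliers for an l_1-regularized least-squares problem); it is not the linear Newton method of the paper, and the choice of problem and all names are illustrative assumptions.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1 (handles the nonsmooth term)
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def method_of_multipliers(A, b, mu, sigma=1.0, iters=200):
        # Illustrative augmented Lagrangian splitting for
        #   min_x 0.5*||Ax - b||^2 + mu*||z||_1   s.t.  x = z
        m, n = A.shape
        x, z, lam = np.zeros(n), np.zeros(n), np.zeros(n)
        H = A.T @ A + sigma * np.eye(n)
        Atb = A.T @ b
        for _ in range(iters):
            # x-update: minimize the smooth part of the augmented Lagrangian
            x = np.linalg.solve(H, Atb + sigma * z - lam)
            # z-update: proximal step on the nonsmooth term
            z = soft_threshold(x + lam / sigma, mu / sigma)
            # multiplier update (dual ascent)
            lam = lam + sigma * (x - z)
        return x

The paper's contribution is, roughly, to replace such first-order updates by a linear Newton method applied to the Lagrange optimality system of the augmented Lagrangian.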
Augmented Lagrangian Functions for Cone Constrained Optimization: the Existence of Global Saddle Points and Exact Penalty Property
In this article we present a general theory of augmented Lagrangian functions
for cone constrained optimization problems that allows one to study almost all
known augmented Lagrangians for cone constrained programs within a unified
framework. We develop a new general method for proving the existence of global
saddle points of augmented Lagrangian functions, called the localization
principle. The localization principle unifies, generalizes and sharpens most of
the known results on existence of global saddle points, and, in essence,
reduces the problem of the existence of saddle points to a local analysis of
optimality conditions. With the use of the localization principle, we obtain
the first necessary and sufficient conditions for the existence of a global
saddle point of an augmented Lagrangian for cone constrained minimax problems,
via both second and first order optimality conditions. In the second part of the paper,
we present a general approach to the construction of globally exact augmented
Lagrangian functions. This general approach allows us not only to sharpen most
of the existing results on globally exact augmented Lagrangians, but also to
construct the first globally exact augmented Lagrangian functions for equality
constrained optimization problems, for nonlinear second order cone programs,
and for nonlinear semidefinite programs. These globally
exact augmented Lagrangians can be utilized to design new superlinearly (or
even quadratically) convergent optimization methods for cone constrained
optimization problems.
Comment: This is a preprint of an article published by Springer in the Journal
of Global Optimization (2018). The final authenticated version is available
online at: http://dx.doi.org/10.1007/s10898-017-0603-
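
As a concrete special case of the framework (for the nonnegative-orthant cone, i.e., ordinary inequality constraints g_i(x) <= 0), the classical Hestenes-Powell-Rockafellar augmented Lagrangian reads

\[
  \mathcal{L}_c(x,\lambda) \;=\; f(x) \;+\; \frac{1}{2c}\sum_{i=1}^{m}\Big(\max\{0,\ \lambda_i + c\,g_i(x)\}^2 - \lambda_i^2\Big),
\]

and a pair (x^*, \lambda^*) is a global saddle point when

\[
  \mathcal{L}_c(x^*,\lambda) \;\le\; \mathcal{L}_c(x^*,\lambda^*) \;\le\; \mathcal{L}_c(x,\lambda^*)
  \qquad \text{for all } x \text{ and all } \lambda \ge 0.
\]

This is only the simplest member of the family the article treats; the paper's general construction covers arbitrary cone constraints.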
Universal Compressed Sensing
In this paper, the problem of developing universal algorithms for compressed
sensing of stochastic processes is studied. First, Rényi's notion of
information dimension (ID) is generalized to analog stationary processes. This
provides a measure of complexity for such processes and is connected to the
number of measurements required for their accurate recovery. Then a minimum
entropy pursuit (MEP) optimization approach is proposed, and it is proven that
it can reliably recover any stationary process satisfying certain mixing
conditions from a sufficient number of randomized linear measurements, without
any prior information about the distribution of the process. It is
proved that a Lagrangian-type approximation of the MEP optimization problem,
referred to as Lagrangian-MEP problem, is identical to a heuristic
implementable algorithm proposed by Baron et al. It is shown that for the right
choice of parameters the Lagrangian-MEP algorithm, in addition to having the
same asymptotic performance as MEP optimization, is also robust to the
measurement noise. For memoryless sources with a discrete-continuous mixture
distribution, the fundamental limit on the minimum number of measurements
required by a non-universal compressed sensing decoder was characterized by
Wu et al. For such sources, it is proved that universality incurs no loss: both
the MEP and the Lagrangian-MEP algorithms asymptotically achieve the optimal
performance.
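
The quantity being generalized is Rényi's information dimension of a real random variable X, defined (when the limit exists) through the entropy of its uniform quantizations:

\[
  d(X) \;=\; \lim_{m\to\infty} \frac{H\!\big(\langle X\rangle_m\big)}{\log m},
  \qquad \langle X\rangle_m = \frac{\lfloor mX\rfloor}{m}.
\]

For a discrete-continuous mixture X ~ (1 - \gamma) P_d + \gamma P_c with finite H(\lfloor X\rfloor), one has d(X) = \gamma, the weight of the continuous component; by the Wu et al. result cited above, this is the minimal measurement rate, which the MEP and Lagrangian-MEP schemes are shown to achieve without knowing the source distribution.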
Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms with Directed Gossip Communication
We study distributed optimization in networked systems, where nodes cooperate
to find the optimal value of a quantity of common interest, x = x^\star. The
objective function of the corresponding optimization problem is the sum of
private convex node objectives (each known only to its own node), and each node
imposes a private convex constraint on the allowed values of x. We solve this
problem for generic connected network topologies with asymmetric random link
failures, using a novel distributed, decentralized algorithm. We refer to this
algorithm as AL-G (augmented Lagrangian gossiping), and to its variants as
AL-MG (augmented Lagrangian multi-neighbor gossiping) and AL-BG (augmented
Lagrangian broadcast gossiping). The AL-G algorithm is based on the augmented
Lagrangian dual function. Dual variables are updated by the standard method of
multipliers, at a slow time scale. To update the primal variables, we propose a
novel randomized Gauss-Seidel-type algorithm that operates at a fast time
scale. AL-G uses unidirectional gossip communication, only between immediate
neighbors in the network, and is resilient to random link failures. For
networks with reliable communication (i.e., no failures), the simplified AL-BG
algorithm reduces communication, computation, and data storage costs. We prove
convergence for all proposed algorithms and demonstrate their effectiveness by
simulations on two applications: l_1-regularized logistic regression for
classification, and cooperative spectrum sensing for cognitive radio networks.
Comment: 28 pages, journal; revised
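
The two-time-scale structure (multiplier updates at a slow time scale, randomized per-node primal updates at a fast one) can be illustrated on a toy consensus problem. This is a minimal centralized simulation of the idea, not the AL-G algorithm itself; the ring topology, quadratic objectives, and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Ring network of n nodes; node i privately holds a_i and the objective
    # f_i(x) = 0.5 * (x - a_i)^2, so the network-wide optimum is x* = mean(a).
    n = 8
    a = rng.normal(size=n)
    edges = [(i, (i + 1) % n) for i in range(n)]
    neighbors = {i: [] for i in range(n)}
    for e, (i, j) in enumerate(edges):
        neighbors[i].append((e, j, +1.0))  # +1: node i is the head of edge e
        neighbors[j].append((e, i, -1.0))  # -1: node j is the tail of edge e

    rho = 1.0
    x = np.zeros(n)             # one primal copy of x per node
    lam = np.zeros(len(edges))  # one multiplier per consensus constraint x_i = x_j

    for outer in range(300):        # slow time scale: method of multipliers
        for inner in range(20):     # fast time scale: randomized Gauss-Seidel
            i = rng.integers(n)     # pick a random node; use only neighbor values
            s = sum(sgn * lam[e] for e, _, sgn in neighbors[i])
            nbr = sum(x[j] for _, j, _ in neighbors[i])
            deg = len(neighbors[i])
            # exact minimizer of the augmented Lagrangian in x_i alone
            x[i] = (a[i] - s + rho * nbr) / (1.0 + rho * deg)
        for e, (i, j) in enumerate(edges):
            lam[e] += rho * (x[i] - x[j])  # dual ascent on x_i - x_j = 0

    print(x.round(4), "target:", round(float(a.mean()), 4))

In the actual AL-G algorithm each node update is triggered by unidirectional gossip with an immediate neighbor and must tolerate random link failures, which is what the paper's convergence analysis addresses.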
Constrained Deep Networks: Lagrangian Optimization via Log-Barrier Extensions
This study investigates the optimization aspects of imposing hard inequality
constraints on the outputs of CNNs. In the context of deep networks,
constraints are commonly handled with penalties, owing to their simplicity and
despite their well-known limitations. Lagrangian-dual optimization has been
largely avoided, except for a few recent works, mainly due to the computational
complexity and stability/convergence issues caused by alternating explicit dual
updates/projections and stochastic optimization. Several studies have shown
that, surprisingly, for deep CNNs the theoretical and practical advantages of
Lagrangian optimization over penalties do not materialize in practice. We
propose log-barrier extensions, which approximate Lagrangian optimization of
constrained-CNN problems with a sequence of unconstrained losses. Unlike
standard interior-point and log-barrier methods, our formulation does not need
an initial feasible solution. Furthermore, we provide a new technical result,
which shows that the proposed extensions yield an upper bound on the duality
gap. This generalizes the duality-gap result of standard log-barriers, yielding
sub-optimality certificates for feasible solutions. While sub-optimality is not
guaranteed for non-convex problems, our result shows that log-barrier
extensions are a principled way to approximate Lagrangian optimization for
constrained CNNs via implicit dual variables. We report comprehensive weakly
supervised segmentation experiments, with various constraints, showing that our
formulation substantially outperforms the existing constrained-CNN methods in
terms of accuracy, constraint satisfaction, and training stability, all the
more so when dealing with a large number of constraints.
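
A minimal sketch of a log-barrier extension for a single constraint z <= 0: the standard barrier -(1/t) log(-z) is kept where z <= -1/t^2 and is extended by its tangent line beyond that point, so the loss stays finite and differentiable even at infeasible points (hence no initial feasible solution is needed). The junction at -1/t^2 and the usage example are assumptions for illustration, not necessarily the paper's exact definition.

    import math
    import torch

    def log_barrier_extension(z: torch.Tensor, t: float) -> torch.Tensor:
        # Smooth penalty for the constraint z <= 0: standard log-barrier
        # -(1/t)*log(-z) for z <= -1/t**2, extended linearly (matching value
        # and slope at the junction) elsewhere, so it is finite for all z.
        thresh = -1.0 / t**2
        safe_z = z.clamp(max=thresh)                 # keeps the log argument positive
        barrier = -(1.0 / t) * torch.log(-safe_z)
        linear = t * z - (1.0 / t) * math.log(1.0 / t**2) + 1.0 / t
        return torch.where(z <= thresh, barrier, linear)

    # Hypothetical usage: penalize violation of a size constraint on soft
    # network outputs, sum(probs) <= 900, with the barrier parameter t raised
    # over training to tighten the approximation.
    probs = torch.sigmoid(torch.randn(4, 1000))      # stand-in for CNN outputs
    z = probs.sum(dim=1) - 900.0
    loss_constraint = log_barrier_extension(z, t=5.0).mean()

Raising t over training plays the role of the implicit dual variable: the penalty's slope at a violated constraint grows, mimicking a dual update without explicit projections.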
On Lagrangian Duality in Vector Optimization. Applications to the linear case.
The paper deals with vector constrained extremum problems. A separation scheme is recalled; starting from it, a vector Lagrangian duality theory is developed. The linear duality due to Isermann can be embedded in this separation approach. Some classical applications are extended to the multiobjective framework in the linear case, exploiting the duality theory of Isermann.
Keywords: Vector Optimization, Separation, Image Space Analysis, Lagrangian Duality, Set-Valued Function.
