Lipschitz Behavior of Solutions to Convex Minimization Problems
We derive the Lipschitz dependence of the set of solutions of a convex minimization problem, and of its Lagrange multipliers, upon the natural parameters from an Inverse Function Theorem for set-valued maps. This requires the use of contingent and Clarke derivatives of set-valued maps, as well as generalized second derivatives of convex functions.
Nonsmooth Analysis
This survey of nonsmooth analysis sets out to prove an inverse function theorem for set-valued maps. The inverse function theorem for the more usual smooth maps plays a very important role in the solution of many problems in pure and applied analysis, and we can expect such an adaptation of this theorem to be of equally great value. For example, it can be used to solve convex minimization problems and to prove the Lipschitz behavior of their solutions when the natural parameters vary--a very important problem in marginal theory in economics.
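As a sketch of the kind of statement such a survey builds toward, here is an Aubin-type conclusion for a set-valued map F; the exact hypotheses, constants, and neighborhoods are omitted, and this formulation is an assumption for illustration, not quoted from the survey:

```latex
% If the contingent derivative of F at (x_0, y_0) is surjective, then the
% inverse map F^{-1} is pseudo-Lipschitz (Aubin continuous) around (y_0, x_0):
\[
  F^{-1}(y) \cap B(x_0, \delta) \;\subset\; F^{-1}(y') + L \,\|y - y'\|\, B
  \qquad \text{for all } y, y' \text{ near } y_0,
\]
% where B is the closed unit ball and \delta, L > 0 are suitable constants.
```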
Convex optimization over intersection of simple sets: improved convergence rate guarantees via an exact penalty approach
We consider the problem of minimizing a convex function over the intersection
of finitely many simple sets which are easy to project onto. This is an
important problem arising in various domains such as machine learning. The main
difficulty lies in finding the projection of a point in the intersection of
many sets. Existing approaches for nonsmooth problems yield an infeasible
point with no guarantees on the infeasibility. By reformulating the problem
through exact penalty functions, we derive first-order algorithms which not
only guarantee that the distance to the intersection is small but also improve
the convergence rate for smooth functions. For
composite and smooth problems, this is achieved through a saddle-point
reformulation where the proximal operators required by the primal-dual
algorithms can be computed in closed form. We illustrate the benefits of our
approach on a graph transduction problem and on graph matching.
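The exact-penalty idea above can be sketched on a toy instance: minimize a smooth convex function over the intersection of two "simple" sets by running a plain subgradient method on the penalized objective f(x) + lam * sum_i dist(x, C_i). The sets, the penalty weight, and the subgradient scheme here are illustrative assumptions, not the paper's accelerated primal-dual algorithms.

```python
import numpy as np

# Toy problem: minimize ||x - a||^2 over (box [-1,1]^2) ∩ {x : x1 + x2 <= 1}.
# Both sets have closed-form projections, so the exact-penalty objective
# F(x) = ||x - a||^2 + lam * (dist(x, box) + dist(x, halfspace))
# can be minimized with a simple subgradient method.

a = np.array([2.0, 2.0])

def proj_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)                 # projection onto the box

def proj_halfspace(x, w=np.array([1.0, 1.0]), b=1.0):
    v = w @ x - b                             # projection onto {x : w.x <= b}
    return x if v <= 0 else x - v * w / (w @ w)

def dist_subgrad(x, proj):                    # subgradient of x -> dist(x, C)
    p = proj(x)
    d = np.linalg.norm(x - p)
    return np.zeros_like(x) if d == 0.0 else (x - p) / d

lam = 10.0                                    # penalty weight, assumed large
x = np.zeros(2)                               # enough for exactness
for k in range(1, 2001):
    g = 2 * (x - a)                           # gradient of the smooth part
    for proj in (proj_box, proj_halfspace):
        g = g + lam * dist_subgrad(x, proj)
    x = x - (0.5 / k) * g                     # diminishing step size

print(x)
```

With a large enough penalty weight, the unconstrained minimizer of F coincides with the constrained minimizer (here the projection of a onto the feasible set), which is the exactness property the reformulation exploits.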
MM Algorithms for Minimizing Nonsmoothly Penalized Objective Functions
In this paper, we propose a general class of algorithms for optimizing an
extensive variety of nonsmoothly penalized objective functions that satisfy
certain regularity conditions. The proposed framework utilizes the
majorization-minimization (MM) algorithm as its core optimization engine. The
resulting algorithms rely on iterated soft-thresholding, implemented
componentwise, allowing for fast, stable updating that avoids the need for any
high-dimensional matrix inversion. We establish a local convergence theory for
this class of algorithms under weaker assumptions than previously considered in
the statistical literature. We also demonstrate the exceptional effectiveness
of new acceleration methods, originally proposed for the EM algorithm, in this
class of problems. Simulation results and a microarray data example are
provided to demonstrate the algorithm's capabilities and versatility.
Comment: A revised version of this paper has been published in the Electronic
Journal of Statistics.
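The iterated componentwise soft-thresholding described above can be sketched on the simplest instance, a lasso objective: the quadratic loss is majorized by a separable surrogate (step 1/L with L an upper bound on the curvature), so each MM update is a closed-form soft-threshold with no matrix inversion. The data, names, and penalty value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# MM on the lasso objective 0.5*||y - X b||^2 + lam*||b||_1.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]              # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def soft(z, t):                               # componentwise soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

L = np.linalg.eigvalsh(X.T @ X).max()         # majorizer curvature bound
lam = 5.0
b = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ b - y)
    b = soft(b - grad / L, lam / L)           # minimize the separable majorizer

print(np.round(b, 2))
```

Each update touches one coordinate at a time in closed form, which is why this scheme stays fast and stable in high dimensions, where a direct solve would require inverting a p-by-p matrix.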
Partial Sum Minimization of Singular Values in Robust PCA: Algorithm and Applications
Robust Principal Component Analysis (RPCA) via rank minimization is a
powerful tool for recovering underlying low-rank structure of clean data
corrupted with sparse noise/outliers. In many low-level vision problems, not
only is it known that the underlying structure of clean data is low-rank, but
the exact rank of clean data is also known. Yet, when applying conventional
rank minimization for those problems, the objective function is formulated in a
way that does not fully utilize a priori target rank information about the
problems. This observation motivates us to investigate whether there is a
better alternative solution when using rank minimization. In this paper,
instead of minimizing the nuclear norm, we propose to minimize the partial sum
of singular values, which implicitly encourages the target rank constraint. Our
experimental analyses show that, when the number of samples is deficient, our
approach leads to a higher success rate than conventional rank minimization,
while the solutions obtained by the two approaches are almost identical when
the number of samples is more than sufficient. We apply our approach to various
low-level vision problems, e.g. high dynamic range imaging, motion edge
detection, photometric stereo, image alignment and recovery, and show that our
results outperform those obtained by the conventional nuclear norm rank
minimization method.
Comment: Accepted in Transactions on Pattern Analysis and Machine Intelligence
(TPAMI). To appear.
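The key operator behind this idea can be sketched as a partial singular value thresholding step: unlike full singular value thresholding (the proximal map of the nuclear norm), the largest r singular values are kept intact and only the tail beyond the target rank is soft-thresholded. The function name, data, and parameter values below are illustrative assumptions, not the paper's full algorithm.

```python
import numpy as np

def partial_svt(A, r, tau):
    """Keep the top-r singular values; soft-threshold the rest by tau."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_out = s.copy()
    s_out[r:] = np.maximum(s_out[r:] - tau, 0.0)  # shrink only the tail
    return U @ np.diag(s_out) @ Vt

# Toy demo: a rank-2 matrix corrupted by small dense noise.
rng = np.random.default_rng(1)
low_rank = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
noisy = low_rank + 0.05 * rng.standard_normal((50, 50))

denoised = partial_svt(noisy, r=2, tau=1.0)
print(np.linalg.matrix_rank(denoised, tol=1e-6))
```

Because the top-r singular values are never shrunk, a known target rank incurs no bias on the dominant structure, while the tail is cleaned up exactly as in nuclear-norm thresholding.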