H\"older Error Bounds and H\"older Calmness with Applications to Convex Semi-Infinite Optimization
Using techniques of variational analysis, necessary and sufficient
subdifferential conditions for Hölder error bounds are investigated and some
new estimates for the corresponding modulus are obtained. As an application, we
consider the setting of convex semi-infinite optimization and give a
characterization of the Hölder calmness of the argmin mapping in terms of the
level set mapping (with respect to the objective function) and a special
supremum function. We also estimate the Hölder calmness modulus of the argmin
mapping in the framework of linear programming.
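For context, the central property here can be written as follows; this is a standard formulation from the error-bound literature, with notation chosen by us rather than taken from the paper. For a function $f$ with solution set $S = \{x : f(x) \le 0\}$, a Hölder error bound of order $q \in (0,1]$ asks for a constant $c > 0$ such that

```latex
% Hölder error bound of order q \in (0,1] for f with solution set
% S = { x : f(x) <= 0 }; [t]_+ = max{t, 0} is the positive part.
\[
  d(x, S) \;\le\; c\,\bigl[f(x)\bigr]_+^{q}
  \qquad \text{for all } x \text{ near } S .
\]
```

The case $q = 1$ recovers the classical Lipschitzian error bound; the modulus mentioned in the abstract is, roughly, the infimum of the constants $c$ for which the inequality holds.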
From error bounds to the complexity of first-order descent methods for convex functions
This paper shows that error bounds can be used as effective tools for
deriving complexity results for first-order descent methods in convex
minimization. In a first stage, this objective led us to revisit the interplay
between error bounds and the Kurdyka-Łojasiewicz (KL) inequality. One can
show the equivalence between the two concepts for convex functions having a
moderately flat profile near the set of minimizers (such as functions with
Hölderian growth). A counterexample shows that the equivalence is no longer
true for extremely flat functions. This fact reveals the relevance of an
approach based on the KL inequality. In a second stage, we show how KL inequalities
can in turn be employed to compute new complexity bounds for a wealth of
descent methods for convex problems. Our approach is completely original and
makes use of a one-dimensional worst-case proximal sequence in the spirit of
the famous majorant method of Kantorovich. Our result applies to a very simple
abstract scheme that covers a wide class of descent methods. As a byproduct of
our study, we also provide new results for the globalization of KL inequalities
in the convex framework.
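For readers who want the two properties side by side, here is one standard way to write them (our notation, not the paper's): the KL inequality with desingularizing function $\varphi$, and Hölderian growth of order $p$.

```latex
% KL inequality with desingularizing function \varphi
% (concave, increasing, \varphi(0) = 0); dist(0, \partial f(x))
% is the minimal norm of subgradients of f at x.
\[
  \varphi'\bigl(f(x) - \min f\bigr)\,
  \operatorname{dist}\bigl(0, \partial f(x)\bigr) \;\ge\; 1 .
\]
% Hölderian growth of order p >= 1 near the minimizer set:
\[
  f(x) - \min f \;\ge\; \gamma\,
  \operatorname{dist}\bigl(x, \operatorname{argmin} f\bigr)^{p} .
\]
```

Hölderian growth corresponds to the "moderately flat" profiles above; the power-type desingularizing function $\varphi(s) = c\,s^{1/p}$ is the one attached to growth of order $p$.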
Our main results inaugurate a simple methodology: derive an error bound,
compute the desingularizing function whenever possible, identify essential
constants in the descent method and finally compute the complexity using the
one-dimensional worst-case proximal sequence. Our method is illustrated through
projection methods for feasibility problems, and through the famous iterative
shrinkage thresholding algorithm (ISTA), for which we show that the complexity
bound is of the form $O(q^{k})$, where the constituents of the bound only
depend on error bound constants obtained for an arbitrary least squares
objective with $\ell^1$ regularization.
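Since ISTA is the paper's headline illustration, a minimal sketch of the algorithm may help fix ideas. This is our own illustrative implementation of the standard forward-backward iteration for $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1$, not code from the paper; the constant step size $1/L$ with $L = \|A\|_2^2$ is the usual choice.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iters=500):
    """ISTA sketch for min_x 0.5*||A x - b||^2 + lam*||x||_1.

    Uses step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
    of the gradient of the least-squares term.
    """
    L = np.linalg.norm(A, 2) ** 2      # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)       # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)  # forward-backward step
    return x
```

The error-bound analysis described above is what upgrades the generic $O(1/k)$ rate of such a scheme to the geometric bound $O(q^{k})$ reported in the abstract.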
Error Bounds and Hölder Metric Subregularity
The Hölder setting of the metric subregularity property of set-valued
mappings between general metric or Banach/Asplund spaces is investigated in the
framework of the theory of error bounds for extended real-valued functions of
two variables. A classification scheme for the general Hölder metric
subregularity criteria is presented. The criteria are formulated in terms of
several kinds of primal and subdifferential slopes.
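For reference, Hölder metric subregularity is commonly defined as follows (a standard formulation, in our notation): a set-valued mapping $F : X \rightrightarrows Y$ is metrically subregular of order $q \in (0,1]$ at a point $(\bar{x}, \bar{y})$ in its graph if there exist $c > 0$ and a neighbourhood $U$ of $\bar{x}$ such that

```latex
% Hölder metric subregularity of order q \in (0,1]
% at (\bar{x}, \bar{y}) \in \operatorname{gph} F:
\[
  d\bigl(x, F^{-1}(\bar{y})\bigr) \;\le\;
  c\, d\bigl(\bar{y}, F(x)\bigr)^{q}
  \qquad \text{for all } x \in U .
\]
```

Taking $q = 1$ gives the usual Lipschitz-type metric subregularity; the slopes mentioned in the abstract are the quantities used to certify such inequalities.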
Metric Regularity of the Sum of Multifunctions and Applications
In this work, we use the theory of error bounds to study metric regularity of
the sum of two multifunctions, as well as some important properties of
variational systems. We use an approach based on the metric regularity of
epigraphical multifunctions. Our results subsume some recent results by Durea
and Strugariu.
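Metric regularity, the property studied here, strengthens subregularity by letting both points vary; in the standard formulation (notation ours), $F : X \rightrightarrows Y$ is metrically regular at $(\bar{x}, \bar{y}) \in \operatorname{gph} F$ with modulus $\kappa > 0$ if

```latex
% Metric regularity of F at (\bar{x}, \bar{y}) with modulus \kappa:
\[
  d\bigl(x, F^{-1}(y)\bigr) \;\le\; \kappa\, d\bigl(y, F(x)\bigr)
  \qquad \text{for all } (x, y) \text{ near } (\bar{x}, \bar{y}) .
\]
```

Roughly speaking, the approach via epigraphical multifunctions mentioned in the abstract relates such estimates for a sum of mappings to error bounds for an associated extended real-valued function.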