Characterizations of Super-regularity and its Variants
Convergence of projection-based methods for nonconvex set feasibility
problems has been established for sets with ever weaker regularity assumptions.
What has not kept pace with these developments are analogous convergence
results for optimization problems with correspondingly weak assumptions on
the value functions. Indeed, one of the earliest classes of nonconvex sets for
which convergence results were obtainable, the class of so-called super-regular
sets introduced by Lewis, Luke and Malick (2009), has no functional
counterpart. In this work, we close this gap in the theory by establishing the
equivalence between a property slightly stronger than super-regularity, which
we call Clarke super-regularity, and subsmoothness of sets as introduced by
Aussel, Daniilidis and Thibault (2004). The bridge to functions shows that
approximately convex functions studied by Ngai, Luc and Théra (2000) are
those which have Clarke super-regular epigraphs. Further classes of regularity
of functions based on the corresponding regularity of their epigraph are also
discussed.
Comment: 15 pages, 2 figures
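For reference, the notion of approximate convexity that enters this
equivalence can be stated as follows (a standard formulation of the
Ngai-Luc-Théra definition; the paper's precise variant may differ in
details such as the neighborhood used):

    % f is approximately convex at \bar{x} if for every \epsilon > 0
    % there exists \delta > 0 such that for all x, y \in B(\bar{x}, \delta)
    % and all t \in (0, 1):
    f(t x + (1-t) y) \le t f(x) + (1-t) f(y) + \epsilon\, t (1-t) \|x - y\|.

The bridge result quoted above says that the functions satisfying this
one-sided relaxation of the convexity inequality are precisely those whose
epigraphs are Clarke super-regular.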
Activity Identification and Local Linear Convergence of Douglas--Rachford/ADMM under Partial Smoothness
Convex optimization has become ubiquitous in most quantitative disciplines of
science, including variational image processing. Proximal splitting algorithms
are becoming popular to solve such structured convex optimization problems.
Within this class of algorithms, Douglas--Rachford (DR) and alternating
direction method of multipliers (ADMM) are designed to minimize the sum of two
proper lower semi-continuous convex functions whose proximity operators are
easy to compute. The goal of this work is to understand the local convergence
behaviour of DR (resp. ADMM) when the involved functions (resp. their
Legendre-Fenchel conjugates) are moreover partly smooth. More precisely, when
both of the two functions (resp. their conjugates) are partly smooth relative
to their respective manifolds, we show that DR (resp. ADMM) identifies these
manifolds in finite time. Moreover, when these manifolds are affine or linear,
we prove that DR/ADMM is locally linearly convergent. When both functions are
locally polyhedral, we show that the optimal convergence rate is given in
terms of the cosine of the Friedrichs angle between the tangent spaces of the
identified manifolds. This is illustrated by several concrete examples and
supported by numerical experiments.
Comment: 17 pages, 1 figure, published in the proceedings of the Fifth
International Conference on Scale Space and Variational Methods in Computer
Vision
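To make the scheme concrete, the following is a minimal sketch of the
Douglas-Rachford iteration for minimizing a sum f + g via proximity
operators. The instance f = ||.||_1 and g = (1/2)||Ax - b||^2 is our own
illustrative choice of a polyhedral-plus-smooth pair, not the paper's
experimental setup.

    import numpy as np

    def prox_l1(v, gamma):
        # Soft-thresholding: proximity operator of gamma * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

    def prox_l2sq(v, gamma, A, b):
        # Proximity operator of gamma/2 * ||A x - b||^2:
        # solves (I + gamma A^T A) x = v + gamma A^T b.
        n = A.shape[1]
        return np.linalg.solve(np.eye(n) + gamma * (A.T @ A),
                               v + gamma * (A.T @ b))

    def douglas_rachford(A, b, gamma=1.0, iters=300):
        # Illustrative instance, not the paper's setup.
        # DR iteration: x = prox_f(z); y = prox_g(2x - z); z += y - x.
        z = np.zeros(A.shape[1])
        for _ in range(iters):
            x = prox_l1(z, gamma)
            y = prox_l2sq(2 * x - z, gamma, A, b)
            z = z + y - x
        return prox_l1(z, gamma)  # the "shadow" sequence yields a minimizer

In such a setting the finite-time activity identification described above is
visible numerically: the support of the iterate x typically freezes after
finitely many iterations, after which convergence is locally linear.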
Convergence and Perturbation Resilience of Dynamic String-Averaging Projection Methods
We consider the convex feasibility problem (CFP) in Hilbert space and
concentrate on the study of string-averaging projection (SAP) methods for the
CFP, analyzing their convergence and their perturbation resilience. In the
past, SAP methods were formulated with a single predetermined set of strings
and a single predetermined set of weights. Here we extend the scope of the
family of SAP methods to allow iteration-index-dependent variable strings and
weights and term such methods dynamic string-averaging projection (DSAP)
methods. The bounded perturbation resilience of DSAP methods is relevant and
important for their possible use in the framework of the recently developed
superiorization heuristic methodology for constrained minimization problems.
Comment: Computational Optimization and Applications, accepted for publication
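To fix ideas, one sweep of a string-averaging projection step over
halfspace constraints a_i^T x <= b_i can be sketched as follows; the
halfspace model and the particular strings and weights are illustrative
choices, not the paper's general Hilbert-space setting.

    import numpy as np

    def project_halfspace(x, a, b):
        # Orthogonal projection of x onto the halfspace {y : <a, y> <= b}.
        viol = a @ x - b
        return x if viol <= 0 else x - (viol / (a @ a)) * a

    def dsap_sweep(x, A, b, strings, weights):
        # Illustrative sketch: apply the projections along each string
        # sequentially, then take a convex combination of the string
        # end-points with the given weights.
        endpoints = []
        for string in strings:
            y = x.copy()
            for i in string:
                y = project_halfspace(y, A[i], b[i])
            endpoints.append(y)
        return sum(w * y for w, y in zip(weights, endpoints))

The dynamic variant studied above simply allows the caller to pass
different strings and weights at every iteration, and its bounded
perturbation resilience permits injecting superiorization steps between
sweeps.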
Linear Superiorization for Infeasible Linear Programming
Linear superiorization (abbreviated: LinSup) considers linear programming
(LP) problems wherein the constraints as well as the objective function are
linear. It makes it possible to steer the iterates of a feasibility-seeking iterative
process toward feasible points that have lower (not necessarily minimal) values
of the objective function than points that would have been reached by the same
feasibility-seeking iterative process without superiorization. Using a
feasibility-seeking iterative process that converges even if the linear
feasible set is empty, LinSup generates an iterative sequence that converges to
a point that minimizes a proximity function that measures the violation of the
linear constraints. In addition, owing to LinSup's repeated objective-function
reduction steps, such a point will most probably have a reduced objective
function value. We present an exploratory experimental result that illustrates
the behavior of LinSup on an infeasible LP problem.
Comment: arXiv admin note: substantial text overlap with arXiv:1612.0653
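A sketch of how objective-reduction steps interleave with a
feasibility-seeking sweep is given below. The simultaneous (Cimmino-type)
projection step and the geometric step sizes beta * q^k are generic
illustrative choices standing in for whatever feasibility-seeking process
and summable step sequence are actually used.

    import numpy as np

    def linsup(A, b, c, x0, beta=1.0, q=0.99, sweeps=500):
        # Illustrative sketch: steer a feasibility-seeking process for
        # A x <= b toward lower values of <c, x> by bounded perturbations.
        x = x0.astype(float)
        step = beta
        for _ in range(sweeps):
            x = x - step * c / np.linalg.norm(c)  # objective-reduction step
            step *= q                  # summable steps => bounded perturbations
            # Simultaneous projection step onto the halfspaces; this type of
            # step converges even when the system A x <= b is infeasible.
            viol = np.maximum(A @ x - b, 0.0)
            x = x - (A.T @ (viol / np.sum(A * A, axis=1))) / len(b)
        return x

If the feasible set is empty, the projection steps alone drive the iterates
toward a minimizer of a proximity function measuring the constraint
violation, and the perturbation steps bias that limit point toward lower
objective values, as described above.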
Model and Feature Selection in Hidden Conditional Random Fields with Group Regularization
Proceedings of: 8th International Conference on Hybrid Artificial Intelligence Systems (HAIS 2013), Salamanca, September 11-13, 2013.
Sequence classification is an important problem in computer vision, speech analysis and computational biology. This paper presents a new training strategy for the Hidden Conditional Random Field sequence classifier that incorporates model and feature selection. The standard Lasso regularization employed in the estimation of model parameters is replaced by overlapping group-L1 regularization. Depending on the configuration of the overlapping groups, model selection, feature selection, or both are performed. The sequence classifiers trained in this way have better predictive performance. The application of the proposed method to a human action recognition task confirms this.
This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485).
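The group-L1 penalty is what couples model and feature selection here: a
group norm sum_g ||w_g||_2 zeroes out whole blocks of parameters at once.
As a minimal sketch, the proximal operator of this penalty for
non-overlapping groups is block soft-thresholding (the overlapping case
used in the paper requires variable duplication or a more involved prox;
the groups below are purely illustrative):

    import numpy as np

    def prox_group_l1(w, groups, lam):
        # Block soft-thresholding: prox of lam * sum_g ||w_g||_2
        # for a partition of the parameter vector into groups.
        out = w.copy()
        for g in groups:
            norm = np.linalg.norm(w[g])
            out[g] = 0.0 if norm <= lam else (1.0 - lam / norm) * w[g]
        return out

    # Illustrative groups: blocks whose norm falls below lam are zeroed out
    # entirely, which is what removes whole features from the model.
    w = np.array([0.1, -0.2, 2.0, 3.0, 0.05])
    groups = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
    print(prox_group_l1(w, groups, lam=0.5))  # first and last blocks vanish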
Improved success rate and stability for phase retrieval by including randomized overrelaxation in the hybrid input output algorithm
In this paper, we study the success rate of the reconstruction of an object of
finite extent given the magnitude of its Fourier transform and its geometrical
shape. We demonstrate that the commonly used combination of the hybrid
input-output and error-reduction algorithms is significantly outperformed by an
extension of this algorithm based on randomized overrelaxation. In most cases,
this extension tremendously enhances the success rate of reconstructions for a
fixed number of iterations as compared to reconstructions solely based on the
traditional algorithm. The good scaling properties in terms of computational
time and memory requirements of the original algorithm are not influenced by
this extension.
Comment: 14 pages, 8 figures
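For orientation, here is a minimal sketch of one baseline HIO step in the
two-dimensional setting; beta is the usual feedback parameter, and the
place where the randomized overrelaxation would enter is marked, since the
paper's exact randomization scheme is not reproduced here.

    import numpy as np

    def hio_step(g, magnitude, support, beta=0.9):
        # One hybrid input-output iteration for phase retrieval (sketch).
        G = np.fft.fft2(g)
        G = magnitude * np.exp(1j * np.angle(G))  # impose measured magnitude
        g_new = np.real(np.fft.ifft2(G))
        ok = support & (g_new >= 0)  # object-domain constraints satisfied
        # HIO feedback outside the constraint set; the randomized
        # overrelaxation of the paper would modify this update with a
        # random relaxation parameter per iteration (not reproduced here).
        return np.where(ok, g_new, g - beta * g_new)

The companion error-reduction step simply zeroes the iterate outside the
constraint set (np.where(ok, g_new, 0.0)); alternating blocks of HIO and
ER is the traditional combination against which the randomized extension
is compared.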
An additive subfamily of enlargements of a maximally monotone operator
We introduce a subfamily of additive enlargements of a maximally monotone
operator. Our definition is inspired by the early work of Simon Fitzpatrick.
These enlargements constitute a subfamily of the family of enlargements
introduced by Svaiter. When the operator under consideration is the
subdifferential of a convex lower semicontinuous proper function, we prove that
some members of the subfamily are smaller than the classical
epsilon-subdifferential enlargement widely used in convex analysis. We also
recover the epsilon-subdifferential within the subfamily. Since they are all
additive, the enlargements in our subfamily can be seen as structurally closer
to the epsilon-subdifferential enlargement.
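For context, the two classical objects compared above are standardly
defined as follows (standard definitions, stated for a convex lower
semicontinuous proper function f and a maximally monotone operator T on a
Hilbert space; the paper's subfamily itself is not reproduced here):

    % epsilon-subdifferential of f:
    \partial_\varepsilon f(x) = \{ v : f(y) \ge f(x)
        + \langle v, y - x \rangle - \varepsilon \ \text{for all } y \},
    % classical (Burachik-Iusem-Svaiter) enlargement of T:
    T^{\varepsilon}(x) = \{ v : \langle v - u, x - y \rangle \ge -\varepsilon
        \ \text{for all } (y, u) \in \operatorname{gra} T \}.

For T = \partial f one always has \partial_\varepsilon f(x) \subseteq
(\partial f)^{\varepsilon}(x), and the inclusion may be strict; the
abstract above states that some members of the new additive subfamily lie
below the epsilon-subdifferential enlargement, while the
epsilon-subdifferential itself is recovered within the subfamily.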