Best Approximation from the Kuhn-Tucker Set of Composite Monotone Inclusions
Kuhn-Tucker points play a fundamental role in the analysis and the numerical
solution of monotone inclusion problems, providing in particular both primal
and dual solutions. We propose a class of strongly convergent algorithms for
constructing the best approximation to a reference point from the set of
Kuhn-Tucker points of a general Hilbertian composite monotone inclusion
problem. Applications to systems of coupled monotone inclusions are presented.
Our framework does not impose additional assumptions on the operators present
in the formulation, and it requires neither knowledge of the norms of the linear
operators involved in the compositions nor the inversion of linear operators.
Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods
The convex feasibility problem (CFP) is at the core of the modeling of many
problems in various areas of science. Subgradient projection methods are
important tools for solving the CFP because they enable the use of subgradient
calculations instead of orthogonal projections onto the individual sets of the
problem. Working in a real Hilbert space, we show that the sequential
subgradient projection method is perturbation resilient. By this we mean that
under appropriate conditions the sequence generated by the method converges
weakly, and sometimes also strongly, to a point in the intersection of the
given subsets of the feasibility problem, despite certain perturbations which
are allowed in each iterative step. Unlike previous works on solving the convex
feasibility problem, the involved functions, which induce the feasibility
problem's subsets, need not be convex. Instead, we allow them to belong to a
wider and richer class of functions satisfying a weaker condition, that we call
"zero-convexity". This class, which is introduced and discussed here, holds a
promise to solve optimization problems in various areas, especially in
non-smooth and non-convex optimization. The relevance of this study to
approximate minimization and to the recent superiorization methodology for
constrained optimization is explained.
Comment: Mathematical Programming Series A, accepted for publication
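The sequential subgradient projection iteration described in the abstract can be sketched as follows. This is a minimal illustration of the classical scheme, not the paper's exact algorithm: when the current iterate violates a constraint f_i(x) <= 0, it steps along the negative subgradient, scaled so that the linearization of f_i reaches zero. The function names, the cyclic control, and the relaxation handling are assumptions for this sketch, and the example uses smooth functions so gradients serve as subgradients.

```python
import numpy as np

def sequential_subgradient_projection(x0, funcs, subgrads, relax=1.0, sweeps=200):
    """Seek a point x with f_i(x) <= 0 for all i (convex feasibility problem).

    For each violated constraint, take a subgradient projection step:
        x <- x - relax * f_i(x) / ||g||^2 * g,   g in the subdifferential of f_i at x.
    Cycles through the constraints for a fixed number of sweeps.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for f, g in zip(funcs, subgrads):
            val = f(x)
            if val > 0.0:                 # constraint violated at x
                sg = g(x)
                x = x - relax * val / (sg @ sg) * sg
    return x

# Example: two half-spaces, x[0] >= 1 and x[1] >= 2, starting from the origin.
funcs = [lambda x: 1.0 - x[0], lambda x: 2.0 - x[1]]
subgrads = [lambda x: np.array([-1.0, 0.0]), lambda x: np.array([0.0, -1.0])]
x = sequential_subgradient_projection(np.zeros(2), funcs, subgrads)
# x lies in the intersection of the two half-spaces.
```

For affine constraints, as here, each step is an exact projection onto the corresponding half-space; the paper's contribution is that weak (and sometimes strong) convergence survives both bounded perturbations of these steps and the relaxation of convexity of the f_i to the weaker "zero-convexity" condition.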