Superiorization and Perturbation Resilience of Algorithms: A Continuously Updated Bibliography
This document presents a (mostly) chronologically ordered bibliography of
scientific publications on the superiorization methodology and perturbation
resilience of algorithms, which is compiled and continuously updated by us at:
http://math.haifa.ac.il/yair/bib-superiorization-censor.html. We have tried to
trace the work published on this topic since its inception. To the best of our
knowledge this bibliography represents all available publications on this topic
to date; while the URL is continuously updated, we will revise this document
and bring it up to date on arXiv approximately once a year. Abstracts of the
cited works, and some links and downloadable files of preprints or reprints,
are available on the above-mentioned Internet page. If you know of a related
scientific work in any form that should be included here, kindly write to me
at [email protected] with full bibliographic details, a DOI if available,
and a PDF copy of the work if possible. The Internet page was initiated on
March 7, 2015, and was last updated on March 12, 2020.
Comment: Original report: June 13, 2015, contained 41 items. First revision:
March 9, 2017, contained 64 items. Second revision: March 8, 2018, contained 76
items. Third revision: March 11, 2019, contains 90 items. Fourth revision:
March 16, 2020, contains 112 items.
Bounded perturbation resilience of projected scaled gradient methods
We investigate projected scaled gradient (PSG) methods for convex
minimization problems. These methods perform a descent step along a diagonally
scaled gradient direction followed by a feasibility regaining step via
orthogonal projection onto the constraint set. This constitutes a generalized
algorithmic structure that encompasses as special cases the gradient projection
method, the projected Newton method, the projected Landweber-type methods and
the generalized Expectation-Maximization (EM)-type methods. We prove the
convergence of the PSG methods in the presence of bounded perturbations. This
resilience to bounded perturbations is relevant to the ability to apply the
recently developed superiorization methodology to PSG methods, in particular to
the EM algorithm.
Comment: Computational Optimization and Applications, accepted for publication.
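The generic PSG step described above (a descent step along a diagonally scaled gradient, followed by orthogonal projection onto the constraint set, with bounded perturbations allowed) can be illustrated with a minimal sketch. This is not the authors' implementation; the toy objective, the diagonal scaling matrix, and the geometrically decaying (hence summable and bounded) perturbation sequence are all illustrative assumptions.

```python
import numpy as np

def projected_scaled_gradient(grad, project, x0, D, step=0.1, n_iter=200, perturb=None):
    """PSG sketch: descent along a diagonally scaled gradient direction,
    then a feasibility-regaining projection onto the constraint set.
    `perturb(k)` may return a bounded, summable perturbation vector."""
    x = x0.astype(float)
    for k in range(n_iter):
        e = perturb(k) if perturb is not None else 0.0
        x = project(x - step * D * grad(x) + e)   # scaled descent + projection
    return x

# Toy problem (assumed for illustration): minimize ||x - c||^2 over x >= 0.
c = np.array([1.0, -2.0, 3.0])
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.maximum(x, 0.0)        # projection onto the nonnegative orthant
D = np.array([1.0, 0.5, 0.25])                # diagonal scaling (hypothetical choice)
perturb = lambda k: 0.5 ** k * np.ones(3)     # summable perturbations

x_star = projected_scaled_gradient(grad, project, np.zeros(3), D, perturb=perturb)
# The constrained minimizer is max(c, 0) = [1, 0, 3]; the perturbed
# iteration still converges to it, illustrating bounded perturbation resilience.
```

The same loop recovers the gradient projection method when `D` is the identity, which is one of the special cases the abstract mentions.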
Bounded perturbation resilience of extragradient-type methods and their applications
In this paper we study the bounded perturbation resilience of the
extragradient and the subgradient extragradient methods for solving the
variational inequality (VI) problem in real Hilbert spaces. This is an important property
of algorithms which guarantees the convergence of the scheme under summable
errors, meaning that an inexact version of the methods can also be considered.
Moreover, once an algorithm is proved to be bounded perturbation resilient,
superiorization can be used, and this allows flexibility in choosing the bounded
perturbations in order to obtain a superior solution, as is well explained in the
paper. We also discuss some inertial extragradient methods. Under mild and
standard assumptions of monotonicity and Lipschitz continuity of the VI's
associated mapping, convergence of the perturbed extragradient and subgradient
extragradient methods is proved. In addition, we establish the rate at which
the perturbed algorithms converge. Numerical illustrations are given to
demonstrate the performance of the algorithms.
Comment: Accepted for publication in The Journal of Inequalities and
Applications. arXiv admin note: text overlap with arXiv:1711.01936 and with
arXiv:1507.07302 by other authors.
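The extragradient scheme the abstract refers to takes a predictor step using the VI's mapping, then a corrector step using the mapping evaluated at the predictor, each followed by projection onto the feasible set. A minimal sketch follows; the toy mapping, step size, and perturbation sequence are assumptions, not the paper's experiments.

```python
import numpy as np

def extragradient(F, project, x0, tau=0.2, n_iter=300, perturb=None):
    """Extragradient sketch for the VI problem: find x* in C with
    <F(x*), x - x*> >= 0 for all x in C. A predictor y_k is computed,
    then the corrector step uses F(y_k); summable perturbations e_k are
    tolerated (bounded perturbation resilience)."""
    x = x0.astype(float)
    for k in range(n_iter):
        e = perturb(k) if perturb is not None else 0.0
        y = project(x - tau * F(x))       # predictor step
        x = project(x - tau * F(y) + e)   # perturbed corrector step
    return x

# Toy VI (assumed): F(x) = x - c over the nonnegative orthant.
# Its solution is the projection of c onto the orthant, i.e. max(c, 0).
c = np.array([2.0, -1.0])
F = lambda x: x - c
project = lambda x: np.maximum(x, 0.0)
perturb = lambda k: 0.5 ** k * np.ones(2)   # summable perturbation sequence

sol = extragradient(F, project, np.zeros(2), perturb=perturb)
```

Here `F` is monotone and 1-Lipschitz, so the standard step-size condition (tau below the reciprocal of the Lipschitz constant) holds, matching the mild assumptions the abstract names.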
Accelerating two projection methods via perturbations with application to Intensity-Modulated Radiation Therapy
Constrained convex optimization problems arise naturally in many real-world
applications. One strategy to solve them in an approximate way is to translate
them into a sequence of convex feasibility problems via the recently developed
level set scheme and then solve each feasibility problem using projection
methods. However, if the problem is ill-conditioned, projection methods often
show zigzagging behavior and therefore converge slowly.
To address this issue, we exploit the bounded perturbation resilience of the
projection methods and introduce two new perturbations which avoid zigzagging
behavior. The first perturbation is in the spirit of -step methods and uses
gradient information from previous iterates. The second uses the approach of
surrogate constraint methods combined with relaxed, averaged projections.
We apply two different projection methods in the unperturbed version, as well
as the two perturbed versions, to linear feasibility problems along with
nonlinear optimization problems arising from intensity-modulated radiation
therapy (IMRT) treatment planning. We demonstrate that for all the considered
problems the perturbations can significantly accelerate the convergence of the
projection methods and hence the overall procedure of the level set scheme. For
the IMRT optimization problems the perturbed projection methods found an
approximate solution up to 4 times faster than the unperturbed methods while at
the same time achieving objective function values which were 0.5 to 5.1% lower.
Comment: Accepted for publication in Applied Mathematics & Optimization.
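The first perturbation described above builds an accelerating direction from previous iterates. A minimal sketch of that idea, using a momentum-style term as the bounded perturbation inside a cyclic projection loop, is given below; the feasibility problem, the momentum weight, and the loop structure are illustrative assumptions, not the paper's IMRT setup.

```python
import numpy as np

def perturbed_projections(projections, x0, n_iter=100, beta=0.5):
    """Cyclic projections with a momentum-style perturbation built from the
    previous iterate, in the spirit of multi-step accelerating perturbations
    that counteract zigzagging."""
    x_prev = x0.astype(float)
    x = x0.astype(float)
    for k in range(n_iter):
        v = beta * (x - x_prev)   # bounded perturbation from past iterates
        x_prev = x
        y = x + v                 # perturbed point
        for P in projections:     # one sweep of cyclic projections
            y = P(y)
        x = y
    return x

def proj_halfspace(a, b):
    # Projection onto the halfspace {x : a.x >= b}.
    def P(x):
        r = a @ x - b
        return x if r >= 0 else x - r * a / (a @ a)
    return P

# Toy linear feasibility problem (assumed): x1 + x2 >= 1 and x >= 0.
P1 = proj_halfspace(np.array([1.0, 1.0]), 1.0)
P2 = lambda x: np.maximum(x, 0.0)
x_feas = perturbed_projections([P1, P2], np.array([-2.0, -2.0]))
# x_feas lies in the intersection of the two sets.
```

Setting `beta = 0` recovers the unperturbed cyclic projection method, which is the baseline the abstract compares against.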
Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods
The convex feasibility problem (CFP) is at the core of the modeling of many
problems in various areas of science. Subgradient projection methods are
important tools for solving the CFP because they enable the use of subgradient
calculations instead of orthogonal projections onto the individual sets of the
problem. Working in a real Hilbert space, we show that the sequential
subgradient projection method is perturbation resilient. By this we mean that
under appropriate conditions the sequence generated by the method converges
weakly, and sometimes also strongly, to a point in the intersection of the
given subsets of the feasibility problem, despite certain perturbations which
are allowed in each iterative step. Unlike previous works on solving the convex
feasibility problem, the involved functions, which induce the feasibility
problem's subsets, need not be convex. Instead, we allow them to belong to a
wider and richer class of functions satisfying a weaker condition that we call
"zero-convexity". This class, which is introduced and discussed here, holds a
promise to solve optimization problems in various areas, especially in
non-smooth and non-convex optimization. The relevance of this study to
approximate minimization and to the recent superiorization methodology for
constrained optimization is explained.
Comment: Mathematical Programming Series A, accepted for publication.
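The key idea above is that a subgradient step can stand in for the exact orthogonal projection when the sets are given as sublevel sets C_i = {x : g_i(x) <= 0}. A minimal sketch of a sequential subgradient projection loop with summable perturbations follows; the two sample sets and the perturbation sequence are assumptions for illustration, and the sketch uses convex g_i rather than the paper's more general zero-convex class.

```python
import numpy as np

def subgradient_projection(gs, subgrads, x0, n_iter=200, perturb=None):
    """Sequential subgradient projection sketch for the CFP with sets
    C_i = {x : g_i(x) <= 0}: when g_i(x) > 0, step along a subgradient
    of g_i instead of computing the (often unavailable) exact projection.
    Summable perturbations `perturb(k)` are tolerated."""
    x = x0.astype(float)
    for k in range(n_iter):
        for g, sg in zip(gs, subgrads):   # sweep through the sets sequentially
            v = g(x)
            if v > 0:                     # constraint violated
                t = sg(x)
                x = x - v / (t @ t) * t   # subgradient "projection" step
        if perturb is not None:
            x = x + perturb(k)            # allowed perturbation in each sweep
    return x

# Toy CFP (assumed): unit ball {||x|| <= 1} intersected with {x1 >= 0.5}.
g1 = lambda x: x @ x - 1.0
sg1 = lambda x: 2.0 * x
g2 = lambda x: 0.5 - x[0]
sg2 = lambda x: np.array([-1.0, 0.0])
perturb = lambda k: 0.5 ** k * np.array([0.0, 0.01])   # summable perturbations

x_cfp = subgradient_projection([g1, g2], [sg1, sg2], np.array([3.0, 3.0]), perturb=perturb)
# x_cfp lies (up to small tolerance) in the intersection of both sets.
```

Note that only a subgradient of each g_i is evaluated per step, which is exactly the computational advantage over orthogonal projections that the abstract highlights.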