The Douglas-Rachford algorithm for two (not necessarily intersecting) affine subspaces
The Douglas-Rachford algorithm is a classical and very successful splitting
method for finding zeros of sums of monotone operators. When the
underlying operators are normal cone operators, the algorithm solves a convex
feasibility problem. In this paper, we provide a detailed study of the
Douglas-Rachford iterates and the corresponding shadow sequence when the
sets are affine subspaces that do not necessarily intersect. We prove strong
convergence of the shadows to the nearest generalized solution. Our results
extend recent work from the consistent to the inconsistent case. Various
examples are provided to illustrate the results.
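For readers who want to experiment, here is a minimal numerical sketch (ours, not the paper's code) of the Douglas-Rachford iteration for two parallel lines in the plane, the simplest instance of non-intersecting affine subspaces; the sets A and B and the starting point are illustrative.

    import numpy as np

    # Two parallel (hence non-intersecting) affine subspaces of R^2:
    # A = {(x, 0) : x real} and B = {(x, 1) : x real}.
    def P_A(z):
        return np.array([z[0], 0.0])   # projection onto A

    def P_B(z):
        return np.array([z[0], 1.0])   # projection onto B

    def T(z):
        # Douglas-Rachford operator: T = Id - P_A + P_B(2 P_A - Id)
        return z - P_A(z) + P_B(2.0 * P_A(z) - z)

    z = np.array([3.0, -2.0])          # arbitrary starting point
    for n in range(5):
        print(n, "iterate:", z, "shadow:", P_A(z))
        z = T(z)

In this example the governing iterates drift off to infinity (their second coordinate grows without bound), while the shadow sequence P_A(z_n) is constant at (3, 0); this divergence of the iterates alongside convergence of the shadows is the kind of behaviour the paper analyses in general.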
On the order of the operators in the Douglas-Rachford algorithm
The Douglas-Rachford algorithm is a popular method for finding zeros of sums
of monotone operators. By its definition, the Douglas-Rachford operator is not
symmetric with respect to the order of the two operators. In this paper we
provide a systematic study of the two possible Douglas-Rachford operators. We
show that the reflectors of the underlying operators act as bijections between
the fixed point sets of the two Douglas-Rachford operators. Some elegant
formulae arise under additional assumptions. Various examples illustrate our
results.
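In common notation (our sketch of the setup, with $J_A=(\mathrm{Id}+A)^{-1}$ the resolvent and $R_A=2J_A-\mathrm{Id}$ the reflector of $A$), the two possible operators are
\[
T_{(A,B)} = \mathrm{Id} - J_A + J_B R_A
\qquad\text{and}\qquad
T_{(B,A)} = \mathrm{Id} - J_B + J_A R_B,
\]
and the bijection result says that the reflector $R_A$ maps $\mathrm{Fix}\,T_{(A,B)}$ onto $\mathrm{Fix}\,T_{(B,A)}$ (and similarly with the roles of $A$ and $B$ swapped).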
The magnitude of the minimal displacement vector for compositions and convex combinations of firmly nonexpansive mappings
Maximally monotone operators and firmly nonexpansive mappings play key roles
in modern optimization and nonlinear analysis. Five years ago, it was shown
that if finitely many firmly nonexpansive operators are all asymptotically
regular (i.e., they have or "almost have" fixed points), then the same is true
for compositions and convex combinations. In this paper, we derive bounds on
the magnitude of the minimal displacement vectors of compositions and of convex
combinations in terms of the displacement vectors of the underlying operators.
Our results completely generalize earlier works. Moreover, we present various
examples illustrating that our bounds are sharp.
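For context, the standard definition (not specific to this paper): the minimal displacement vector of a nonexpansive operator $T$ is
\[
v_T = P_{\overline{\mathrm{ran}}(\mathrm{Id}-T)}(0),
\]
the unique element of minimum norm in the closure of the range of $\mathrm{Id}-T$; for a firmly nonexpansive $T$, asymptotic regularity is equivalent to $v_T = 0$, i.e., to $T$ "almost having" fixed points in the sense above.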
On the Douglas-Rachford algorithm
The Douglas-Rachford algorithm is a very popular splitting technique for
finding a zero of the sum of two maximally monotone operators. However, the
behaviour of the algorithm remains mysterious in the general inconsistent case,
i.e., when the sum problem has no zeros. More than a decade ago, it
was shown that in the (possibly inconsistent) convex feasibility setting, the
shadow sequence remains bounded and its weak cluster points solve a best
approximation problem.
In this paper, we advance the understanding of the inconsistent case
significantly by providing a complete proof of the full weak convergence in the
convex feasibility setting. In fact, a more general sufficient condition for
the weak convergence in the general case is presented. Several examples
illustrate the results.
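For orientation (a standard formulation, with sets $A$ and $B$ and notation assumed): in the feasibility setting the best approximation problem referred to above is to
\[
\text{minimize } \|a-b\| \text{ over } (a,b) \in A \times B,
\]
and the shadow sequence is $(P_A x_n)$, the projections onto $A$ of the governing Douglas-Rachford iterates $(x_n)$.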
Affine nonexpansive operators, Attouch-Théra duality and the Douglas-Rachford algorithm
The Douglas-Rachford splitting algorithm was originally proposed in 1956 to
solve a system of linear equations arising from the discretization of a partial
differential equation. In 1979, Lions and Mercier brought forward a very
powerful extension of this method suitable for solving optimization problems.
In this paper, we revisit the original affine setting. We provide a powerful
convergence result for finding a zero of the sum of two maximally monotone
affine relations. As a by-product of our analysis, we obtain results concerning
the convergence of iterates of affine nonexpansive mappings as well as
Attouch-Théra duality. Numerous examples are presented.
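Recall (standard definitions, notation ours): given maximally monotone operators $A$ and $B$, the primal problem is to find $x$ with $0 \in Ax + Bx$, and the Attouch-Théra dual problem is
\[
\text{find } u \text{ such that } 0 \in A^{-1}u + \widetilde{B}u,
\qquad \widetilde{B} := (-\mathrm{Id}) \circ B^{-1} \circ (-\mathrm{Id}),
\]
and the duality theorem guarantees that the primal problem has a solution if and only if the dual problem does.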
On Fejér monotone sequences and nonexpansive mappings
The notion of Fejér monotonicity has proven to be a fruitful concept in
fixed point theory and optimization. In this paper, we present new conditions
sufficient for convergence of Fejér monotone sequences, and we also provide
applications to the study of nonexpansive mappings. Various examples illustrate
our results.
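Recall the standard definition: a sequence $(x_n)$ in a Hilbert space is Fejér monotone with respect to a nonempty set $C$ if
\[
\|x_{n+1} - c\| \le \|x_n - c\| \qquad \text{for all } c \in C \text{ and all } n \in \mathbb{N}.
\]
For example, the iterates of a nonexpansive mapping are Fejér monotone with respect to its fixed point set (when that set is nonempty), which is one reason the notion pervades fixed point theory.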
Generalized monotone operators and their averaged resolvents
The correspondence between the monotonicity of a (possibly) set-valued
operator and the firm nonexpansiveness of its resolvent is a key ingredient in
the convergence analysis of many optimization algorithms. Firmly nonexpansive
operators form a proper subclass of the more general (but still pleasant from
an algorithmic perspective) class of averaged operators. In this paper, we
introduce the new notion of conically nonexpansive operators, which generalize
nonexpansive mappings. We characterize averaged operators as being resolvents
of comonotone operators under appropriate scaling. As a consequence, we
characterize the proximal point mappings associated with hypoconvex functions
as cocoercive operators, or equivalently, as displacement mappings of conically
nonexpansive operators. Several examples illustrate our analysis and
demonstrate the tightness of our results.
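The classical correspondence alluded to in the first sentence is Minty's: with $J_A = (\mathrm{Id}+A)^{-1}$ denoting the resolvent,
\[
A \text{ is (maximally) monotone} \iff J_A \text{ is firmly nonexpansive (with full domain)},
\]
and $T$ is $\alpha$-averaged for $\alpha \in (0,1)$ if $T=(1-\alpha)\mathrm{Id} + \alpha N$ for some nonexpansive $N$; firm nonexpansiveness is the case $\alpha = 1/2$. The paper's scaling result extends this picture from monotone to comonotone operators.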
Maximally monotone operators with ranges whose closures are not convex and an answer to a recent question by Stephen Simons
In his recent Proceedings of the AMS paper "Gossez's skew linear map and its
pathological maximally monotone multifunctions", Stephen Simons proved that the
closure of the range of the sum of the Gossez operator and a multiple of the
duality map is nonconvex whenever the scalar is between 0 and 4. The problem of
the convexity of that range when the scalar is equal to 4 was explicitly
stated. In this paper, we answer this question in the negative for any scalar
greater than or equal to 4. We derive this result from an abstract framework
that allows us to also obtain a corresponding result for the Fitzpatrick-Phelps
integral operator.
On a result of Pazy concerning the asymptotic behaviour of nonexpansive mappings
In 1971, Pazy presented a beautiful trichotomy result concerning the
asymptotic behaviour of the iterates of a nonexpansive mapping. In this note,
we analyze the fixed-point free case in more detail. Our results and examples
give credence to the conjecture that the iterates always converge cosmically.
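In its common form (stated here for orientation; $v$ denotes the minimal displacement vector, the least-norm element of $\overline{\mathrm{ran}}(\mathrm{Id}-T)$), Pazy's result gives the strong limit
\[
\frac{T^n x}{n} \to -v \qquad \text{for every starting point } x,
\]
and the trichotomy distinguishes cases according to the behaviour of $(\|T^n x\|)$; cosmic convergence asks, roughly, whether the directions $T^n x/\|T^n x\|$ converge when $\|T^n x\| \to \infty$.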
A Derivative-Free CoMirror Algorithm
We consider the constrained convex optimization problem
$\min\{f(x) : g(x) \le 0,\ x \in X\}$, where $X$ is a compact convex
subset of $\mathbb{R}^m$ and $f$ and $g$ are continuous convex functions defined on
an open neighbourhood of $X$. We work in the setting of derivative-free
optimization, assuming that $f$ and $g$ are available through a black-box that
provides only function values for a lower-$\mathcal{C}^2$ representation of the
functions. We present a derivative-free optimization variant of the
$\varepsilon$-comirror algorithm [BBTGBT2010]. Algorithmic convergence hinges on
the ability to accurately approximate subgradients of lower-$\mathcal{C}^2$
functions, which we prove is possible through linear interpolation. We provide
a convergence analysis that quantifies the difference between the function values
of the iterates and the optimal function value. We find that the DFO algorithm
we develop has the same convergence result as the original gradient-based
algorithm. We present some numerical testing that demonstrates the practical
feasibility of the algorithm, and conclude with some directions for further
research.
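The interpolation step at the heart of the method can be illustrated with a generic simplex-gradient computation (our sketch, not the paper's exact algorithm; all names are illustrative):

    import numpy as np

    def simplex_gradient(f, x, h=1e-4):
        # Estimate a (sub)gradient of f at x by linear interpolation
        # over the sample set {x, x + h*e_1, ..., x + h*e_m}: solve
        # S g = df, where row i of S is the displacement h*e_i and
        # df[i] = f(x + h*e_i) - f(x).  With coordinate directions
        # this reduces to forward differences.
        m = x.size
        f0 = f(x)
        S = h * np.eye(m)
        df = np.array([f(x + h * e) - f0 for e in np.eye(m)])
        return np.linalg.solve(S, df)

    # Example: a nonsmooth convex function; away from its kinks the
    # simplex gradient approximates the unique subgradient.
    f = lambda z: max(z[0], z[1]) + abs(z[0])
    print(simplex_gradient(f, np.array([1.0, -0.5])))  # approx. [2, 0]

The accuracy of such approximations for well-behaved nonsmooth functions is exactly what the paper's convergence analysis relies on.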