
    On Some Properties of a Class of Eventually Locally Mixed Cyclic/Acyclic Multivalued Self-Mappings with Application Examples

    In this paper, a multivalued self-mapping is defined on the union of a finite number p (≥ 2) of subsets of a metric space and is, in general, of a mixed cyclic and acyclic nature, in the sense that it can perform some iterations within each subset before switching to the right-adjacent one when generating orbits. The self-mapping can combine locally contractive, non-contractive/non-expansive, and locally expansive properties for some of the switches between different pairs of adjacent subsets. Asymptotic boundedness of the distances associated with the elements of the orbits is obtained under certain conditions of global dominance of the contractivity of groups of consecutive iterations of the self-mapping, where each such group is not necessarily of fixed size. If the metric space is a uniformly convex Banach space and the subsets are closed and convex, some particular results on the convergence of the sequences of iterates to the best proximity points of the adjacent subsets are obtained in the absence of eventual local expansivity for the switches between all pairs of adjacent subsets. An application to the stabilization of a discrete dynamic system subject to impulsive effects in its dynamics, due to finite discontinuity jumps in its state, is also discussed.
    Basque Government, Grant IT1555-22
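    The following is a minimal sketch, not the paper's construction, of the kind of "mixed cyclic" orbit described above: a self-map on p = 3 closed intervals of the real line that iterates a few times within each subset (here with a local contraction toward the subset's midpoint) before switching to the right-adjacent subset. All subset data, iteration counts, and the contraction factor are illustrative assumptions.

    # Illustrative sketch of an orbit of a mixed cyclic self-mapping on
    # p = 3 intervals A_1, A_2, A_3; all constants are assumptions.
    subsets = [(0.0, 1.0), (2.0, 3.0), (5.0, 6.0)]   # A_1, A_2, A_3
    iters_within = [2, 3, 1]                          # iterations before each switch
    contraction = 0.5                                 # local contraction factor

    def clamp(x, lo, hi):
        return min(max(x, lo), hi)

    def orbit(x0, n_steps=20):
        xs, i, x = [x0], 0, x0
        while len(xs) < n_steps:
            lo, hi = subsets[i]
            mid = 0.5 * (lo + hi)
            for _ in range(iters_within[i]):          # iterate within A_i
                x = mid + contraction * (x - mid)
                xs.append(x)
            nxt = (i + 1) % len(subsets)              # switch to right-adjacent subset
            x = clamp(x, *subsets[nxt])               # jump into A_{i+1}
            xs.append(x)
            i = nxt
        return xs[:n_steps]

    print(orbit(0.9))

    Under the assumed local contractions, the generated distances stay bounded along the orbit, which is the kind of behavior the paper establishes under its dominance-of-contractivity conditions.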

    A convergent relaxation of the Douglas-Rachford algorithm

    This paper proposes an algorithm for solving structured optimization problems that covers both the backward-backward and the Douglas-Rachford algorithms as special cases, and analyzes its convergence. The set of fixed points of the algorithm is characterized in several cases, and convergence criteria are established in terms of general fixed-point operators. When applied to nonconvex feasibility problems, including the inconsistent case, we prove local linear convergence results under mild assumptions on the regularity of the individual sets and of the collection of sets, which need not intersect. In this special case, we refine known linear convergence criteria for the Douglas-Rachford algorithm (DR). As a consequence, for feasibility problems in which one of the sets is affine, we establish criteria for linear and sublinear convergence of convex combinations of the alternating projection and DR methods. These results appear to be new. We also demonstrate the seemingly improved numerical performance of this algorithm compared to the RAAR algorithm for both consistent and inconsistent sparse feasibility problems.
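    As a rough illustration of the objects mentioned above (not the paper's exact relaxation), the sketch below implements the classical Douglas-Rachford operator T_DR = (1/2)(I + R_A R_B) for a two-set feasibility problem and mixes it with the alternating-projection map P_A P_B through a convex combination with parameter lam. The sets (an affine line A and the unit circle B in R^2, a nonconvex example) and the value of lam are illustrative assumptions.

    import numpy as np

    def P_A(x):                       # projection onto the line {x2 = 0.5}
        return np.array([x[0], 0.5])

    def P_B(x):                       # projection onto the unit circle (nonconvex)
        n = np.linalg.norm(x)
        return x / n if n > 0 else np.array([1.0, 0.0])

    def R(P, x):                      # reflection through a set via its projector
        return 2.0 * P(x) - x

    def relaxed_dr(x, lam=0.7, iters=100):
        for _ in range(iters):
            dr = 0.5 * (x + R(P_A, R(P_B, x)))        # Douglas-Rachford step
            ap = P_A(P_B(x))                          # alternating-projection step
            x = lam * dr + (1.0 - lam) * ap           # convex combination of the two
        return x, P_B(x)                              # shadow point P_B(x)

    x_final, shadow = relaxed_dr(np.array([2.0, -3.0]))
    print(shadow)   # should (locally) approach an intersection point of line and circle

    In this toy setup the line meets the circle transversally, which is the regime where the local linear convergence criteria discussed in the abstract are expected to apply; the shadow sequence P_B(x_k), rather than x_k itself, is the natural candidate for a feasible point.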

    Convergence in Distribution of Randomized Algorithms: The Case of Partially Separable Optimization

    We present a Markov-chain analysis of blockwise-stochastic algorithms for solving partially block-separable optimization problems. Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms, in particular the regularity of the Markov operators and the rates of convergence of the distributions of the corresponding Markov chains. This provides a detailed characterization of the moments of the iterate sequences beyond just their expected behavior, and it also serves as a case study of how randomization restores favorable properties that iterating on only partial information destroys. We demonstrate this on stochastic blockwise implementations of the forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a special case, convex) nonsmooth optimization.
    Comment: 25 pages, 43 references
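    The sketch below is a minimal example, in the spirit of the blockwise-stochastic forward-backward methods named above but not taken from the paper, of a randomized block proximal-gradient iteration. The problem instance (a least-squares term plus an l1 penalty), the block partition, and the step size are illustrative assumptions.

    import numpy as np

    # minimize 0.5*||A x - b||^2 + lam*||x||_1 by updating one random block
    # per step: a gradient step on the smooth term, then the soft-thresholding
    # prox of the l1 term, restricted to that block (forward-backward on a block).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 12))
    b = rng.standard_normal(30)
    lam, step = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2
    blocks = [range(0, 4), range(4, 8), range(8, 12)]   # a partition into 3 blocks

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x = np.zeros(12)
    for k in range(2000):
        i = rng.integers(len(blocks))                   # pick a block at random
        idx = list(blocks[i])
        grad = A.T @ (A @ x - b)                        # gradient of the smooth term
        x[idx] = soft_threshold(x[idx] - step * grad[idx], step * lam)

    print(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())

    Because the block index is drawn at random at every step, the iterates form a Markov chain; the abstract's analysis concerns the distributions of such chains rather than only the expected value of the objective along the iterates.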