
    A Class of Randomized Primal-Dual Algorithms for Distributed Optimization

    Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in [Combettes, Pesquet, 2014], several variants of block-coordinate primal-dual algorithms are designed to solve a wide array of monotone inclusion problems. These methods rely on a sweep of blocks of variables that are activated at each iteration according to a random rule, and they allow for stochastic errors in the evaluation of the involved operators. This framework is then employed to derive block-coordinate primal-dual proximal algorithms for solving composite convex variational problems. The resulting implementations can reduce computational complexity and memory requirements. Furthermore, we show that the proposed approach can be used to develop novel asynchronous distributed primal-dual algorithms in a multi-agent context.
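
    The random-activation idea at the core of this framework can be illustrated with the base ingredient the abstract cites, a randomized block-coordinate forward-backward step. The following Python sketch applies it to a toy lasso problem; the problem, block partition, and activation probability are illustrative assumptions, not the paper's preconditioned primal-dual variants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: lasso, min_x 0.5*||A x - b||^2 + lam*||x||_1
# (an illustrative stand-in; the paper treats general monotone inclusions).
m, n, n_blocks = 60, 40, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1
blocks = np.array_split(np.arange(n), n_blocks)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step <= 1/L with L = ||A||_2^2

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for k in range(2000):
    # For clarity the full gradient is formed; a practical block-coordinate
    # implementation would evaluate only the activated components.
    grad = A.T @ (A @ x - b)
    # Random sweeping rule: each block is activated independently with
    # probability 1/2 at every iteration.
    for j in range(n_blocks):
        if rng.random() < 0.5:
            idx = blocks[j]
            # Forward-backward step restricted to the activated block.
            x[idx] = soft_threshold(x[idx] - gamma * grad[idx], gamma * lam)
```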

    Distributed Optimization and Control using Operator Splitting Methods

    The significant progress made in recent years both in hardware implementations and in numerical computing has rendered real-time optimization-based control a viable option for advanced industrial applications. At the same time, the field of big data has emerged, seeking solutions to problems that classical optimization algorithms are incapable of providing. Though for different reasons, both application areas have triggered renewed interest in the family of optimization algorithms commonly known as decomposition schemes or operator splitting methods. This revived interest can mainly be attributed to two characteristics: a computationally low per-iteration cost along with a small memory footprint, valuable for embedded applications, and the capacity to deal with problems of vast scale via decomposition, valuable for machine-learning applications. In this thesis, we design decomposition methods that tackle both small-scale centralized control problems and larger-scale multi-agent distributed control problems. In addition to the classical objective of devising faster methods, we also delve into less usual aspects of operator splitting schemes that are nonetheless critical for control. In the centralized case, we propose an algorithm that uses decomposition to exactly solve a classical optimal control problem that could otherwise be solved only approximately. In the multi-agent framework, we propose two algorithms: one that achieves faster convergence and a second that reduces communication requirements.
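
    As a concrete illustration of decomposition in a multi-agent setting, the following Python sketch runs consensus ADMM on a toy problem with quadratic local costs; the cost functions, penalty parameter, and centralized averaging step are illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

# N agents jointly minimize sum_i 0.5*||x - a_i||^2 by consensus ADMM:
# each agent solves only its local subproblem and shares a consensus iterate.
N, d, rho = 6, 3, 1.0
a = rng.standard_normal((N, d))   # data held locally by each agent

x = np.zeros((N, d))              # local copies of the decision variable
u = np.zeros((N, d))              # scaled dual variables
z = np.zeros(d)                   # consensus variable
for k in range(100):
    # Local step, executable in parallel: closed-form minimizer of
    # 0.5*||x - a_i||^2 + (rho/2)*||x - z + u_i||^2.
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Coordination step: average the shifted local copies.
    z = (x + u).mean(axis=0)
    # Dual update that drives the local copies to consensus.
    u += x - z

# z approaches the average of the a_i, the global minimizer.
```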

    A first-order stochastic primal-dual algorithm with correction step

    We investigate the convergence properties of a stochastic primal-dual splitting algorithm for solving structured monotone inclusions involving the sum of a cocoercive operator and a composite monotone operator. The proposed method is the stochastic extension to monotone inclusions of a proximal method studied in Y. Drori, S. Sabach, and M. Teboulle, "A simple algorithm for a class of nonsmooth convex-concave saddle-point problems" (2015) and I. Loris and C. Verhoeven, "On a generalization of the iterative soft-thresholding algorithm for the case of non-separable penalty" (2011) for saddle-point problems. It consists of a forward step determined by the stochastic evaluation of the cocoercive operator, a backward step in the dual variables involving the resolvent of the monotone operator, and an additional forward step using the stochastic evaluation of the cocoercive operator introduced in the first step. We prove weak almost sure convergence of the iterates by showing that the primal-dual sequence generated by the method is stochastic quasi-Fejér monotone with respect to the set of zeros of the considered primal and dual inclusions. Additional results on ergodic convergence in expectation are given for the special case of saddle-point models.
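
    The three-step structure described above (forward step, dual backward step, corrective forward step reusing the first evaluation) can be sketched in its deterministic form, following the Loris-Verhoeven/Drori-Sabach-Teboulle template; the stochastic method replaces the gradient with an unbiased estimate. In this Python sketch, the total-variation toy problem and step-size choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: min_x 0.5*||x - b||^2 + lam*||D x||_1 (1-D total variation),
# a stand-in for the structured problems treated in the paper.
n = 100
b = np.cumsum(rng.standard_normal(n))
lam = 2.0
D = np.diff(np.eye(n), axis=0)            # finite-difference operator

grad_f = lambda x: x - b                  # gradient of the cocoercive part
tau = 1.0                                 # = 1/L, since L = 1 here
sigma = 1.0 / (tau * np.linalg.norm(D, 2) ** 2)

x = np.zeros(n)
y = np.zeros(n - 1)
for k in range(500):
    g = grad_f(x)                         # forward step: one evaluation
    x_bar = x - tau * (g + D.T @ y)
    # Backward step in the dual variables: the resolvent of the conjugate
    # of lam*||.||_1 is the projection onto [-lam, lam].
    y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
    # Correction step: the same evaluation g is reused with the new dual.
    x = x - tau * (g + D.T @ y)
```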

    Forward-Half-Reflected-Partial inverse-Backward Splitting Algorithm for Solving Monotone Inclusions

    In this article, we propose a method for numerically solving monotone inclusions in real Hilbert spaces that involve the sum of a maximally monotone operator, a monotone Lipschitzian operator, a cocoercive operator, and the normal cone to a vector subspace. Our algorithm splits and exploits the intrinsic properties of each operator involved in the inclusion. The proposed method is derived by combining partial inverse techniques with the forward-half-reflected-backward (FHRB) splitting method proposed by Malitsky and Tam (2020). Our method inherits the advantages of FHRB, requiring only one activation of the Lipschitzian operator, one activation of the cocoercive operator, two projections onto the closed vector subspace, and one calculation of the resolvent of the maximally monotone operator. Furthermore, we develop a method for solving primal-dual inclusions involving a mixture of sums, linear compositions, parallel sums, Lipschitzian operators, cocoercive operators, and normal cones. We apply our method to constrained composite convex optimization problems as a specific example. Finally, in order to compare our proposed method with existing methods in the literature, we provide numerical experiments on constrained total-variation least-squares problems. The numerical results are promising.
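
    To make the FHRB building block concrete, here is a minimal Python sketch of a forward-half-reflected-backward iteration on a toy inclusion, without the partial-inverse component that is the article's contribution. The toy operators and the step-size bound are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy inclusion 0 in A x + B x + C x with
#   A = normal cone of the box [0, 1]^n (resolvent = projection),
#   B = skew-symmetric linear map (monotone and 1-Lipschitz),
#   C x = x - b (gradient of a smooth term, 1-cocoercive).
n = 50
b = rng.standard_normal(n)
S = rng.standard_normal((n, n))
M = (S - S.T) / np.linalg.norm(S - S.T, 2)
proj_box = lambda v: np.clip(v, 0.0, 1.0)

L_B, beta = 1.0, 1.0
gamma = 0.9 / (2 * L_B + 1 / (2 * beta))  # conservative illustrative choice

x = np.zeros(n)
Bx_prev = M @ x
for k in range(2000):
    Bx = M @ x                            # single activation of B per iteration
    # Reflected term 2*Bx - Bx_prev for the Lipschitz part, a plain forward
    # step for the cocoercive part, then the resolvent of A.
    x = proj_box(x - gamma * (2 * Bx - Bx_prev) - gamma * (x - b))
    Bx_prev = Bx
```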

    Stochastic Quasi-Fej\'er Block-Coordinate Fixed Point Iterations with Random Sweeping

    This work proposes block-coordinate fixed point algorithms with applications to nonlinear analysis and optimization in Hilbert spaces. The asymptotic analysis relies on a notion of stochastic quasi-Fejér monotonicity, which is thoroughly investigated. The iterative methods under consideration feature random sweeping rules to select arbitrarily the blocks of variables that are activated over the course of the iterations, and they allow for stochastic errors in the evaluation of the operators. Algorithms using quasinonexpansive operators or compositions of averaged nonexpansive operators are constructed, and weak and strong convergence results are established for the sequences they generate. As a by-product, novel block-coordinate operator splitting methods are obtained for solving structured monotone inclusion and convex minimization problems. In particular, the proposed framework leads to random block-coordinate versions of the Douglas-Rachford and forward-backward algorithms and of some of their variants. Even in the standard case of a single block (m = 1), our results remain new, as they incorporate stochastic perturbations.
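
    A minimal instance of the random sweeping rule: a Krasnosel'skii-Mann iteration for a composition of two projections (an averaged operator), where each block of coordinates is relaxed toward the operator's output only when activated. In the Python sketch below, the feasibility problem, block partition, and activation probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Feasibility problem: find x in the intersection of the box [-1, 1]^n and
# the halfspace {x : <a, x> <= c}.  T = P_box o P_half is a composition of
# averaged operators whose fixed points are exactly that intersection.
n, n_blocks = 40, 5
a = rng.standard_normal(n)
a /= np.linalg.norm(a)
c = 0.3
blocks = np.array_split(np.arange(n), n_blocks)

def P_half(v):
    return v - max(a @ v - c, 0.0) * a    # projection onto the halfspace

def P_box(v):
    return np.clip(v, -1.0, 1.0)          # projection onto the box

x = 5.0 * rng.standard_normal(n)
lam = 0.8                                 # relaxation parameter in (0, 1]
for k in range(3000):
    Tx = P_box(P_half(x))                 # evaluate the averaged operator
    # Random sweeping: only activated blocks move toward T x; every block
    # must be activated with positive probability.
    for j in range(n_blocks):
        if rng.random() < 0.5:
            idx = blocks[j]
            x[idx] += lam * (Tx[idx] - x[idx])

print("halfspace residual:", max(a @ x - c, 0.0),
      "box residual:", max(abs(x).max() - 1.0, 0.0))
```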

    Block-proximal methods with spatially adapted acceleration

    We study and develop (stochastic) primal-dual block-coordinate descent methods for convex problems based on the method due to Chambolle and Pock. Our methods have known convergence rates for the iterates and the ergodic gap: O(1/N^2) if each block is strongly convex, O(1/N) if no strong convexity is present, and more generally a mixed rate O(1/N^2) + O(1/N) if only some blocks are strongly convex. Additional novelties of our methods include blockwise-adapted step lengths and acceleration, as well as the ability to update both the primal and dual variables randomly in blocks under a very light compatibility condition; in other words, these variants of our methods are doubly stochastic. We test the proposed methods on various image processing problems, where we employ pixelwise-adapted acceleration.
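
    One simple way to realize blockwise (here, coordinatewise) adapted step lengths in a Chambolle-Pock iteration is the diagonal preconditioning rule of Pock and Chambolle; the Python sketch below applies it to a toy l1-regularized least-squares problem. The problem and this specific preconditioner are illustrative assumptions and do not reproduce the paper's blockwise acceleration or its doubly stochastic updates.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy saddle-point form: min_x lam*||x||_1 + 0.5*||A x - b||^2 with
# G = lam*||.||_1 and F = 0.5*||. - b||^2, solved by Chambolle-Pock with
# the diagonal preconditioning tau_j = 1/sum_i |A_ij|, sigma_i = 1/sum_j |A_ij|.
m, n = 60, 40
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1

tau = 1.0 / np.abs(A).sum(axis=0)         # per-coordinate primal steps
sigma = 1.0 / np.abs(A).sum(axis=1)       # per-row dual steps

x = np.zeros(n)
x_bar = x.copy()
y = np.zeros(m)
for k in range(2000):
    # Dual step: prox of sigma*F*, with F*(y) = 0.5*||y||^2 + <b, y>.
    y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
    x_old = x
    # Primal step: prox of tau*G is coordinatewise soft-thresholding.
    v = x - tau * (A.T @ y)
    x = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
    x_bar = 2 * x - x_old                 # extrapolation with theta = 1
```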