
    Incremental Stochastic Subgradient Algorithms for Convex Optimization

    In this paper we study the effect of stochastic errors on two constrained incremental subgradient algorithms. We view the incremental subgradient algorithms as decentralized network optimization algorithms applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. We first study the standard cyclic incremental subgradient algorithm, in which the agents form a ring structure and pass the iterate in a cycle. We consider the method with stochastic errors in the subgradient evaluations and provide sufficient conditions on the moments of the stochastic errors that guarantee almost sure convergence when a diminishing step-size is used. We also obtain almost sure bounds on the algorithm's performance when a constant step-size is used. We then consider the Markov randomized incremental subgradient method, a non-cyclic version of the incremental algorithm in which the sequence of computing agents is modeled as a time non-homogeneous Markov chain. Such a model is appropriate for mobile networks, where the network topology changes over time. We establish convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes, respectively.
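    A minimal sketch of the cyclic variant described above, assuming each component function is available through a subgradient oracle and that the errors are zero-mean Gaussian; the names subgrads, project, num_cycles, and noise_std are illustrative, not from the paper:

        import numpy as np

        def cyclic_incremental_subgradient(subgrads, project, x0, num_cycles, noise_std=0.0):
            # subgrads[i](x): a subgradient of the i-th component function at x
            # project(y): Euclidean projection of y onto the common constraint set
            rng = np.random.default_rng()
            x = np.array(x0, dtype=float)
            m = len(subgrads)
            for k in range(1, num_cycles + 1):
                step = 1.0 / k  # diminishing step-size (sum diverges, sum of squares converges)
                for i in range(m):  # one pass of the iterate around the ring of agents
                    noisy_sg = subgrads[i](x) + noise_std * rng.standard_normal(x.shape)
                    x = project(x - step * noisy_sg)
            return x

    With noise_std = 0 this reduces to the error-free cyclic method; the Markov randomized variant would instead draw each successive computing agent from a (possibly time-varying) Markov transition matrix rather than visiting agents in a fixed cycle.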

    Distributed Stochastic Subgradient Projection Algorithms for Convex Optimization

    We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus among the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
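    A minimal sketch of one round of this projected consensus-plus-subgradient scheme, assuming a fixed stochastic weight matrix W encoding the network (the paper's setting is more general); subgrads, project, and the Gaussian error model are illustrative:

        import numpy as np

        def distributed_subgradient_projection(W, subgrads, project, X0, num_iters, noise_std=0.0):
            # X0: (m, n) array with one iterate per agent; W: m x m stochastic
            # weight matrix whose sparsity pattern matches the communication links
            rng = np.random.default_rng()
            X = np.array(X0, dtype=float)
            for k in range(1, num_iters + 1):
                step = 1.0 / k  # diminishing stepsize
                V = W @ X  # each agent forms a weighted average of the received iterates
                for i in range(X.shape[0]):
                    noisy_sg = subgrads[i](V[i]) + noise_std * rng.standard_normal(X.shape[1])
                    X[i] = project(V[i] - step * noisy_sg)  # local subgradient step, then project
            return X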

    Distributed subgradient projection algorithm for convex optimization

    We consider constrained minimization of a sum of convex functions over a convex and compact set, when each component function is known only to a specific agent in a time-varying peer-to-peer network. We study an iterative optimization algorithm in which each agent obtains a weighted average of its own iterate with the iterates of its neighbors, updates the average using the subgradient of its local function, and then projects onto the constraint set to generate the new iterate. We obtain error bounds on the limit of the function value when a constant stepsize is used.
    Index Terms: distributed optimization, time-varying network, subgradient algorithm.
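    A hypothetical sketch of the constant-stepsize iteration over a time-varying network, using one weight matrix per round to represent the links active at that time; the names W_seq, subgrads, project, and step are illustrative, not from the paper:

        import numpy as np

        def time_varying_projection_method(W_seq, subgrads, project, X0, step):
            # W_seq: iterable of m x m weight matrices, one per round, encoding
            # the links active in the time-varying peer-to-peer network
            X = np.array(X0, dtype=float)
            for W in W_seq:
                V = W @ X  # weighted average with the neighbors currently reachable
                for i in range(X.shape[0]):
                    X[i] = project(V[i] - step * subgrads[i](V[i]))  # constant stepsize
            return X

    Consistent with the abstract's error bound, with a constant stepsize the function values can only be expected to approach the optimum up to a residual error that scales with the stepsize.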

    Asynchronous gossip algorithms for stochastic optimization

    We consider a distributed multi-agent network system where the goal is to minimize an objective function that can be written as the sum of component functions, each of which is known partially (with stochastic errors) to a specific network agent. We propose an asynchronous algorithm motivated by random gossip schemes in which each agent has a local Poisson clock. At each tick of its local clock, the agent averages its estimate with a randomly chosen neighbor and adjusts the average using the gradient of its local function, computed with stochastic errors. We investigate the convergence properties of the algorithm for two different classes of functions. First, we consider differentiable, but not necessarily convex, functions and prove that the gradients converge to zero with probability 1. Then, we consider convex, but not necessarily differentiable, functions and show that the iterates converge to an optimal solution almost surely.
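    A minimal sketch of the gossip scheme, simulating the superposition of the local Poisson clocks by waking one uniformly random agent per global tick; the assumption that the contacted neighbor keeps the plain average, and the names neighbors, grads, step_fn, and noise_std, are illustrative, not from the paper:

        import numpy as np

        def asynchronous_gossip(neighbors, grads, X0, num_ticks, step_fn, noise_std=0.0):
            # neighbors[i]: list of agents adjacent to agent i
            # grads[i](x): gradient of the i-th local function, evaluated without error
            rng = np.random.default_rng()
            X = np.array(X0, dtype=float)
            m = X.shape[0]
            for k in range(1, num_ticks + 1):
                i = rng.integers(m)           # agent whose local Poisson clock ticks
                j = rng.choice(neighbors[i])  # randomly chosen neighbor
                avg = 0.5 * (X[i] + X[j])     # gossip average of the two estimates
                noise = noise_std * rng.standard_normal(X.shape[1])
                X[j] = avg  # assumption: the contacted neighbor keeps the plain average
                X[i] = avg - step_fn(k) * (grads[i](avg) + noise)  # noisy local adjustment
            return X

    For example, step_fn = lambda k: 1.0 / k gives a diminishing stepsize of the kind used in the almost-sure convergence results.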