3,618 research outputs found

    FROST -- Fast row-stochastic optimization with uncoordinated step-sizes

    In this paper, we discuss distributed optimization over directed graphs, where doubly stochastic weights cannot be constructed. Most existing algorithms overcome this issue by applying push-sum consensus, which uses column-stochastic weights. Constructing column-stochastic weights requires each agent to know (at least) its out-degree, which may be impractical in, e.g., broadcast-based communication protocols. In contrast, we describe FROST (Fast Row-stochastic-Optimization with uncoordinated STep-sizes), an optimization algorithm applicable to directed graphs that does not require knowledge of out-degrees; its implementation is straightforward, as each agent locally assigns weights to the incoming information and locally chooses a suitable step-size. We show that FROST converges linearly to the optimal solution for smooth and strongly convex functions, provided that the largest step-size is positive and sufficiently small.
    Comment: Submitted for journal publication, currently under review.
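
    The abstract does not reproduce FROST's recursion, so the following is a minimal sketch of the row-stochastic gradient-tracking update as it is commonly stated: each agent mixes incoming iterates with weights it assigns locally, runs an eigenvector-estimation sequence v to correct for the non-uniform stationary distribution of the row-stochastic matrix, and tracks the rescaled average gradient with an auxiliary variable z. The graph, cost functions, and step-size values below are illustrative, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 5, 3                          # number of agents, decision dimension

        # Local costs f_i(x) = 0.5 * ||x - b_i||^2; the global minimizer is mean(b).
        b = rng.standard_normal((n, d))
        grad = lambda i, x: x - b[i]

        # Row-stochastic weights on a directed graph with self-loops: each agent
        # normalizes the weights on its *incoming* links only, so no out-degree
        # information is needed anywhere.
        A = np.zeros((n, n))
        for i in range(n):
            in_nbrs = [i, (i - 1) % n] + ([2] if i == 0 else [])
            for j in in_nbrs:
                A[i, j] = 1.0 / len(in_nbrs)

        alpha = rng.uniform(0.01, 0.05, n)   # uncoordinated, heterogeneous step-sizes

        x = np.zeros((n, d))                 # decision variables, one row per agent
        v = np.eye(n)                        # eigenvector estimates, v_i(0) = e_i
        z = np.array([grad(i, x[i]) for i in range(n)])  # gradient tracker

        for _ in range(2000):
            g_old = np.array([grad(i, x[i]) / v[i, i] for i in range(n)])
            v = A @ v                        # v[i, i] converges to the i-th Perron entry of A
            x = A @ x - alpha[:, None] * z   # local mixing minus a local gradient step
            g_new = np.array([grad(i, x[i]) / v[i, i] for i in range(n)])
            z = A @ z + g_new - g_old        # track the average rescaled gradient

        print("max distance to optimum:", np.abs(x - b.mean(0)).max())

    The division by v[i, i] is the key correction: a plain row-stochastic mixing step would pull the iterates toward a Perron-weighted average of the minimizers, not the true optimum of the sum.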

    Accelerated AB/Push-Pull Methods for Distributed Optimization over Time-Varying Directed Networks

    This paper investigates a novel approach to the distributed optimization problem, in which multiple agents collaborate to find the global decision that minimizes the sum of their individual cost functions. First, the AB/Push-Pull gradient-based algorithm is considered, which employs row- and column-stochastic weights simultaneously to track the optimal decision and the gradient of the global cost function, ensuring consensus on the optimal decision. Building on this algorithm, we then develop a general algorithm that incorporates acceleration techniques, such as heavy-ball momentum and Nesterov momentum, as well as their combination with non-identical momentum parameters. Previous literature has established the effectiveness of acceleration methods for various gradient-based distributed algorithms and demonstrated linear convergence for static directed communication networks. In contrast, we focus on time-varying directed communication networks and establish linear convergence of the methods to the optimal solution when the agents' cost functions are smooth and strongly convex. Additionally, we provide explicit bounds on the step-size and momentum parameters, based on the properties of the cost functions, the mixing matrices, and the graph connectivity structures. Our numerical results illustrate the benefits of the proposed acceleration techniques on the AB/Push-Pull algorithm.
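
    The recursion itself is likewise not spelled out in the abstract; the sketch below follows the AB/Push-Pull updates with heavy-ball momentum as they are commonly written: a row-stochastic matrix R "pulls" the decision variables toward consensus, while a column-stochastic matrix C "pushes" gradient information so that the tracker y preserves the sum of the local gradients. Two alternating directed rings stand in for a time-varying network; all problem data and parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n, d = 5, 3                            # number of agents, decision dimension

        # Local costs f_i(x) = 0.5 * ||x - b_i||^2; the global minimizer is mean(b).
        b = rng.standard_normal((n, d))
        full_grad = lambda X: X - b            # row i holds grad f_i evaluated at x_i

        def row_stochastic(edges):
            # Agent i normalizes over its in-neighbors (edge (i, j): i listens to j).
            R = np.zeros((n, n))
            for i, j in edges:
                R[i, j] = 1.0
            return R / R.sum(axis=1, keepdims=True)

        def col_stochastic(edges):
            # Agent j splits its outgoing mass evenly over its out-neighbors.
            C = np.zeros((n, n))
            for i, j in edges:
                C[i, j] = 1.0
            return C / C.sum(axis=0, keepdims=True)

        # Two directed rings with self-loops that alternate over time; each is
        # strongly connected, a common time-varying connectivity assumption.
        ring1 = [(i, i) for i in range(n)] + [(i, (i - 1) % n) for i in range(n)]
        ring2 = [(i, i) for i in range(n)] + [(i, (i + 1) % n) for i in range(n)]
        graphs = [ring1, ring2]

        alpha, beta = 0.05, 0.2                # step-size, heavy-ball momentum
        x = np.zeros((n, d))
        x_prev = x.copy()
        y = full_grad(x)                       # gradient tracker, y_0 = grad f(x_0)

        for k in range(2000):
            R = row_stochastic(graphs[k % 2])
            C = col_stochastic(graphs[k % 2])
            g_old = full_grad(x)
            x_next = R @ x - alpha * y + beta * (x - x_prev)  # pull step + momentum
            y = C @ y + full_grad(x_next) - g_old             # push the gradient sum
            x, x_prev = x_next, x

        print("max distance to optimum:", np.abs(x - b.mean(0)).max())

    Because C is column-stochastic, the agent-wise sum of y is invariant, so it always equals the sum of the current local gradients; at consensus the x-update forces y to zero, which in turn forces the common iterate to the global minimizer.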