23,279 research outputs found

    A unified algorithmic approach to distributed optimization

    We address general optimization problems formulated on networks. Each node in the network has a function, and the goal is to find a vector x ∈ R^n that minimizes the sum of all the functions. We assume that each function depends on a set of components of x, not necessarily on all of them. This creates additional structure in the problem, which can be captured by the classification scheme we develop. This scheme not only enables us to design an algorithm that solves very general distributed optimization problems, but also allows us to categorize prior algorithms and applications. Our general-purpose algorithm shows performance superior to prior algorithms, including algorithms that are application-specific. Index Terms: Distributed optimization, sensor networks
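    As a concrete reading of the structure described in this abstract, the problem class can be written as a partially separable sum; the node count P and the index sets S_p below are illustrative notation, not symbols taken from the paper:

```latex
% Sketch of the problem class described in the abstract above.
% The symbols P (number of nodes), f_p, and S_p are illustrative notation.
\begin{equation*}
  \min_{x \in \mathbb{R}^{n}} \; \sum_{p=1}^{P} f_p\!\left(x_{S_p}\right),
  \qquad S_p \subseteq \{1, \dots, n\},
\end{equation*}
% where node p holds the private function f_p, which depends only on the
% components of x indexed by S_p. When every S_p = {1, ..., n}, this reduces
% to the standard consensus formulation  min_x  sum_p f_p(x).
```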

    Communication-Efficient Gradient Descent-Ascent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates

    Distributed and federated learning algorithms and techniques are associated primarily with minimization problems. However, with the increasing prominence of minimax optimization and variational inequality problems in machine learning, the necessity of designing efficient distributed/federated learning approaches for these problems is becoming more apparent. In this paper, we provide a unified convergence analysis of communication-efficient local training methods for distributed variational inequality problems (VIPs). Our approach is based on a general key assumption on the stochastic estimates that allows us to propose and analyze several novel local training algorithms under a single framework for solving a class of structured non-monotone VIPs. We present the first local gradient descent-ascent algorithms with provable improved communication complexity for solving distributed variational inequalities on heterogeneous data. The general algorithmic framework recovers state-of-the-art algorithms and their sharp convergence guarantees when the setting is specialized to minimization or minimax optimization problems. Finally, we demonstrate the strong performance of the proposed algorithms compared to state-of-the-art methods when solving federated minimax optimization problems.
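    To make the local-update idea concrete, here is a minimal, hypothetical sketch of local gradient descent-ascent for a federated saddle-point problem. The quadratic objectives, step size, and averaging schedule are assumptions for illustration only, not the algorithms or guarantees analyzed in the paper:

```python
# Hypothetical sketch of local (federated) gradient descent-ascent for
# min_x max_y (1/M) * sum_m f_m(x, y), with illustrative quadratic objectives
# f_m(x, y) = x^T A_m y + ||x||^2 / 2 - ||y||^2 / 2.  Not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
M, d = 4, 5                                           # clients, problem dimension
A = [rng.standard_normal((d, d)) for _ in range(M)]   # client-specific couplings

def grads(m, x, y):
    """Gradients of f_m(x, y) with respect to x and y."""
    gx = A[m] @ y + x
    gy = A[m].T @ x - y
    return gx, gy

x, y = np.zeros(d), np.zeros(d)
step, local_steps, rounds = 0.02, 10, 50

for _ in range(rounds):              # communication rounds
    xs, ys = [], []
    for m in range(M):               # each client starts from the shared point
        xm, ym = x.copy(), y.copy()
        for _ in range(local_steps): # local updates without communication
            gx, gy = grads(m, xm, ym)
            xm -= step * gx          # descent step in x
            ym += step * gy          # ascent step in y
        xs.append(xm)
        ys.append(ym)
    x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)    # server averages

print("||x||, ||y|| after training:", np.linalg.norm(x), np.linalg.norm(y))
```

    For this regularized problem the saddle point is (0, 0), so the printed norms shrinking toward zero indicates that the averaged local updates are making progress despite the infrequent communication.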