3 research outputs found

    Convergence of Limited Communications Gradient Methods

    Distributed optimization increasingly plays a central role in the economical and sustainable operation of cyber-physical systems. Nevertheless, the full potential of the technology has not yet been exploited in practice, due to the communication limitations posed by real-world infrastructures. This work investigates fundamental properties of distributed optimization based on gradient methods, where gradient information is communicated using a limited number of bits. In particular, a general class of quantized gradient methods is studied in which the gradient direction is approximated by a finite quantization set. Necessary and sufficient conditions on such a quantization set are provided to guarantee that the methods minimize any convex objective function with a Lipschitz continuous gradient and a nonempty, bounded set of optimizers. A lower bound on the cardinality of the quantization set is given, along with specific examples of minimal quantizations. Convergence rate results are established that connect the fineness of the quantization to the number of iterations needed to reach a predefined solution accuracy. Generalizations of the results to a relevant class of constrained problems using projections are also considered. Finally, the results are illustrated by simulations of practical systems.
    Comment: 16 pages, 8 figures
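    As a rough illustration of the kind of method described above, the sketch below runs gradient descent in which each step uses only the element of a finite quantization set that best aligns with the current gradient. The axis-aligned quantization set, the step size, and the quadratic test function are assumptions made for the example; this is not the paper's construction.

```python
import numpy as np

def quantize_direction(grad, directions):
    # Pick the element of the finite quantization set with the largest
    # inner product with the gradient (a coarse approximation of its direction).
    scores = directions @ grad
    return directions[np.argmax(scores)]

def quantized_gradient_descent(grad_fn, x0, directions, step=0.05, iters=200):
    # Gradient descent where only the quantized direction is used at each step,
    # mimicking a setting in which the gradient is communicated with few bits.
    x = x0.copy()
    for _ in range(iters):
        d = quantize_direction(grad_fn(x), directions)
        x = x - step * d
    return x

# Example: minimize f(x) = ||x||^2 in R^2 with an axis-aligned quantization set.
directions = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
x_hat = quantized_gradient_descent(lambda x: 2.0 * x, np.array([3.0, -4.0]), directions)
print(x_hat)  # close to the origin, up to the step-size resolution
```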

    Compressed Gradient Methods with Hessian-Aided Error Compensation

    The emergence of big data has caused a dramatic shift in the operating regime for optimization algorithms. The performance bottleneck, which used to be computation, is now often communication. Several gradient compression techniques have been proposed to reduce the communication load at the price of a loss in solution accuracy. Recently, it has been shown how compression errors can be compensated for in the optimization algorithm to improve solution accuracy. Even though convergence guarantees for error-compensated algorithms have been established, there is very limited theoretical support for quantifying the observed improvements in solution accuracy. In this paper, we show that Hessian-aided error compensation, unlike other existing schemes, avoids the accumulation of compression errors on quadratic problems. We also present strong convergence guarantees of Hessian-based error compensation for stochastic gradient descent. Our numerical experiments highlight the benefits of Hessian-based error compensation and demonstrate that similar convergence improvements are attained when only a diagonal Hessian approximation is used.
    Comment: 15 pages, 7 figures
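    The sketch below illustrates the general idea of error-compensated compressed gradient descent on a quadratic, where the carried-over compression error is propagated through (I - step * Hessian) so that it is damped rather than accumulated. The top-k compressor, the specific memory update, and the test problem are illustrative assumptions in the spirit of the abstract, not the paper's exact algorithm.

```python
import numpy as np

def top_k(v, k):
    # Simple top-k sparsifier: keep only the k largest-magnitude entries.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_gd_hessian_compensation(grad_fn, hess_fn, x0, step=0.1, k=1, iters=300):
    # Compressed gradient descent with a compression-error memory that is
    # scaled by (I - step * H); a hypothetical sketch of Hessian-aided
    # error compensation, not the published update rule.
    x = x0.copy()
    e = np.zeros_like(x0)                    # compression-error memory
    for _ in range(iters):
        g = grad_fn(x)
        c = top_k(g + e, k)                  # only the compressed vector is "sent"
        x = x - step * c
        e = (np.eye(len(x)) - step * hess_fn(x)) @ (g + e - c)
    return x

# Quadratic example f(x) = 0.5 * x^T A x, whose Hessian is the constant matrix A.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
x_hat = compressed_gd_hessian_compensation(lambda x: A @ x, lambda x: A,
                                            np.array([2.0, -2.0]))
print(x_hat)  # approaches the minimizer at the origin
```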

    FedCau: A Proactive Stop Policy for Communication and Computation Efficient Federated Learning

    This paper investigates efficient distributed training of a Federated Learning (FL) model over a network of wireless devices. The communication iterations of the distributed training algorithm may be substantially degraded or even blocked by the effects of the devices' background traffic, packet losses, congestion, or latency. We abstract these communication-computation impacts as an 'iteration cost' and propose a cost-aware causal FL algorithm (FedCau) to tackle this problem. We propose an iteration-termination method that trades off training performance against networking costs. We apply our approach when clients use the slotted-ALOHA, carrier-sense multiple access with collision avoidance (CSMA/CA), and orthogonal frequency-division multiple access (OFDMA) protocols. We show that, given a total cost budget, the training performance degrades as either the background communication traffic or the dimension of the training problem increases. Our results demonstrate the importance of proactively designing optimal, cost-efficient stopping criteria so that unnecessary communication and computation costs are not spent on only marginal improvements in FL training. We validate our method by training and testing FL over the MNIST dataset. Finally, we apply our approach to existing communication-efficient FL methods from the literature, achieving further efficiency. We conclude that cost-efficient stopping criteria are essential for the success of practical FL over wireless networks.
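    As a hypothetical illustration of a cost-aware stopping rule in the spirit of the abstract (not the FedCau criterion itself), the sketch below terminates training once a total cost budget is exhausted or once the relative loss improvement per unit of iteration cost falls below a threshold. The loss curve, the per-round cost model, and the threshold value are all assumptions made for the example.

```python
def cost_aware_stop(losses, costs, budget, min_improvement_per_cost=1e-3):
    # Return the round at which training should stop: either the cumulative
    # iteration cost exceeds the budget, or the relative loss improvement
    # per unit of cost is no longer worth paying for.
    total_cost = 0.0
    for t in range(1, len(losses)):
        total_cost += costs[t]
        if total_cost > budget:
            return t                                  # cost budget exhausted
        rel_improvement = (losses[t - 1] - losses[t]) / max(losses[t - 1], 1e-12)
        if rel_improvement / max(costs[t], 1e-12) < min_improvement_per_cost:
            return t                                  # marginal gain not worth the cost
    return len(losses) - 1

# Example: a slowly decaying loss curve with a fixed per-round communication cost.
losses = [1.0 / (t + 1) for t in range(50)]
costs = [0.2] * 50
print(cost_aware_stop(losses, costs, budget=8.0))  # stops once the budget is spent
```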