3 research outputs found

    Mini-Batch Stochastic ADMMs for Nonconvex Nonsmooth Optimization

    With the rapid rise of complex data, nonconvex models such as nonconvex loss functions and nonconvex regularizers are widely used in machine learning and pattern recognition. In this paper, we propose a class of mini-batch stochastic ADMMs (alternating direction methods of multipliers) for solving large-scale nonconvex nonsmooth problems. We prove that, given an appropriate mini-batch size, the mini-batch stochastic ADMM without the variance reduction (VR) technique is convergent and reaches a convergence rate of $O(1/T)$ to obtain a stationary point of the nonconvex optimization, where $T$ denotes the number of iterations. Moreover, we extend the mini-batch stochastic gradient method to both the nonconvex SVRG-ADMM and SAGA-ADMM proposed in our initial manuscript \cite{huang2016stochastic}, and prove that these mini-batch stochastic ADMMs also reach the convergence rate of $O(1/T)$ without any condition on the mini-batch size. In particular, we provide a specific parameter selection for the step size $\eta$ of the stochastic gradients and the penalty parameter $\rho$ of the augmented Lagrangian function. Finally, extensive experimental results on both simulated and real-world data demonstrate the effectiveness of the proposed algorithms.
    Comment: We have fixed some errors in the proofs. arXiv admin note: text overlap with arXiv:1610.0275
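    The abstract does not spell out the update rules, but the general shape of a mini-batch stochastic ADMM for min_{x,y} f(x) + g(y) s.t. Ax + By = c, with f an average of n component losses, can be sketched as below. This is a hypothetical illustration under that standard problem form: the helper names (grad_f_i, y_subproblem) and the exact linearized x-update are assumptions, not the paper's algorithm; only the roles of the step size eta and penalty parameter rho follow the abstract.

```python
import numpy as np

def minibatch_stochastic_admm(grad_f_i, y_subproblem, A, B, c, x0, y0,
                              n_samples, batch_size, eta, rho, T, rng=None):
    """Hypothetical sketch of a mini-batch stochastic ADMM loop for
        min_{x,y} f(x) + g(y)   s.t.   A x + B y = c,
    where f(x) = (1/n) * sum_i f_i(x) and only stochastic gradients of f
    are available.  grad_f_i(x, i) returns the gradient of f_i at x;
    y_subproblem(x, y, lam) solves (or approximates) the y-minimization."""
    rng = np.random.default_rng() if rng is None else rng
    x, y = x0.copy(), y0.copy()
    lam = np.zeros_like(c)                      # dual variable
    for _ in range(T):
        # mini-batch stochastic gradient of f at the current x
        idx = rng.choice(n_samples, size=batch_size, replace=False)
        g = np.mean([grad_f_i(x, i) for i in idx], axis=0)
        # linearized x-update with step size eta on the augmented Lagrangian
        x = x - eta * (g + A.T @ (lam + rho * (A @ x + B @ y - c)))
        # y-update (e.g. a proximal step on g when B is the identity)
        y = y_subproblem(x, y, lam)
        # dual ascent with penalty parameter rho
        lam = lam + rho * (A @ x + B @ y - c)
    return x, y, lam
```

    The x-update uses only a size-batch_size stochastic gradient, which is where the mini-batch size condition discussed in the abstract's convergence analysis would enter.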

    Fully Decentralized Federated Learning Based Beamforming Design for UAV Communications

    To handle the data explosion in the era of the Internet of Things (IoT), it is of interest to investigate decentralized networks, with the aim of relaxing the burden on the central server while preserving data privacy. In this work, we develop a fully decentralized federated learning (FL) framework with an inexact stochastic parallel random walk alternating direction method of multipliers (ISPW-ADMM). Offering higher communication efficiency and enhanced privacy preservation compared with the current state of the art, the proposed ISPW-ADMM is partially immune to the impacts of time-varying dynamic networks and stochastic data collection, while still converging quickly. Benefiting from stochastic gradients and biased first-order moment estimation, the proposed framework can be applied to any decentralized FL task over time-varying graphs. To further demonstrate the practicability of this framework in providing fast convergence, high communication efficiency, and system robustness, we study the extreme learning machine (ELM)-based FL model for robust beamforming (BF) design in UAV communications, as verified by numerical simulations.
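    As a rough sketch of the kind of token-based update a random-walk ADMM performs (a simplification, not the ISPW-ADMM of the paper: it runs a single walk rather than parallel walks, uses plain stochastic gradients rather than biased first-order moment estimates, and the consensus/token update shown is an assumption):

```python
import numpy as np

def random_walk_admm(local_grad, neighbors, x0, n_agents, eta, rho, T, rng=None):
    """Simplified, hypothetical single-walk sketch: a token carrying the
    shared model z walks over the communication graph; the visited agent
    takes one inexact (stochastic-gradient) step on its local augmented
    Lagrangian with consensus constraint x_i = z, updates its multiplier,
    refreshes the token, and passes it to a random neighbor."""
    rng = np.random.default_rng() if rng is None else rng
    z = x0.copy()                                        # token / shared model
    x = [x0.copy() for _ in range(n_agents)]             # local models
    lam = [np.zeros_like(x0) for _ in range(n_agents)]   # local multipliers
    i = int(rng.integers(n_agents))                      # start of the walk
    for _ in range(T):
        # inexact local update: one stochastic gradient step for agent i
        g = local_grad(i, x[i])
        x[i] = x[i] - eta * (g + lam[i] + rho * (x[i] - z))
        # dual update for the visited agent
        lam[i] = lam[i] + rho * (x[i] - z)
        # refresh the token using agent i's information (a simplification)
        z = x[i] + lam[i] / rho
        # pass the token along the random walk
        i = int(rng.choice(neighbors[i]))
    return z
```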

    Faster Stochastic Alternating Direction Method of Multipliers for Nonconvex Optimization

    In this paper, we propose a faster stochastic alternating direction method of multipliers (ADMM) for nonconvex optimization by using a new stochastic path-integrated differential estimator (SPIDER), called SPIDER-ADMM. Moreover, we prove that SPIDER-ADMM achieves a record-breaking incremental first-order oracle (IFO) complexity of $\mathcal{O}(n+n^{1/2}\epsilon^{-1})$ for finding an $\epsilon$-approximate stationary point, which improves on the deterministic ADMM by a factor of $\mathcal{O}(n^{1/2})$, where $n$ denotes the sample size. As one of the major contributions of this paper, we provide a new theoretical analysis framework for nonconvex stochastic ADMM methods that yields the optimal IFO complexity. Based on this new analysis framework, we study the previously unresolved optimal IFO complexity of the existing nonconvex SVRG-ADMM and SAGA-ADMM methods, and prove that they have the optimal IFO complexity of $\mathcal{O}(n+n^{2/3}\epsilon^{-1})$. Thus, SPIDER-ADMM improves on the existing stochastic ADMM methods by a factor of $\mathcal{O}(n^{1/6})$. Moreover, we extend SPIDER-ADMM to the online setting and propose a faster online SPIDER-ADMM. Our theoretical analysis shows that the online SPIDER-ADMM has an IFO complexity of $\mathcal{O}(\epsilon^{-\frac{3}{2}})$, which improves the existing best results by a factor of $\mathcal{O}(\epsilon^{-\frac{1}{2}})$. Finally, experimental results on benchmark datasets validate that the proposed algorithms converge faster than existing ADMM algorithms for nonconvex optimization.
    Comment: Published in ICML 2019, 43 pages. arXiv admin note: text overlap with arXiv:1907.1346
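    The distinguishing ingredient here is the SPIDER estimator, which refreshes a full-batch gradient every q iterations and otherwise corrects the previous estimate with a mini-batch gradient difference between consecutive iterates. Below is a minimal sketch of the estimator alone, under the usual formulation; the helper grad_i and the parameter names q and batch_size are hypothetical, not taken from the paper.

```python
import numpy as np

def spider_estimate(grad_i, x, x_prev, v_prev, t, q, n_samples, batch_size, rng):
    """Hypothetical sketch of the SPIDER gradient estimator: recompute a
    full-batch gradient every q iterations, otherwise correct the previous
    estimate with a mini-batch gradient difference between the current and
    previous iterates.  grad_i(x, i) returns the gradient of the i-th
    component function at x."""
    if t % q == 0:
        # periodic full-batch refresh
        return np.mean([grad_i(x, i) for i in range(n_samples)], axis=0)
    idx = rng.choice(n_samples, size=batch_size, replace=False)
    # variance-reduced recursion: v_t = v_{t-1} + g_B(x_t) - g_B(x_{t-1})
    return v_prev + np.mean([grad_i(x, i) - grad_i(x_prev, i) for i in idx],
                            axis=0)
```

    Inside a SPIDER-based ADMM, such an estimate would take the place of the plain mini-batch gradient in the linearized x-update sketched for the first paper above.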