
    Distributed Stochastic Nonconvex Optimization and Learning based on Successive Convex Approximation

    We study distributed stochastic nonconvex optimization in multi-agent networks. We introduce a novel algorithmic framework for the distributed minimization of the sum of the expected value of a smooth (possibly nonconvex) function (the agents' sum-utility) plus a convex (possibly nonsmooth) regularizer. The proposed method hinges on successive convex approximation (SCA) techniques, leveraging dynamic consensus as a mechanism to track the average gradient among the agents, and recursive averaging to recover the expected gradient of the sum-utility function. Almost sure convergence to (stationary) solutions of the nonconvex problem is established. Finally, the method is applied to distributed stochastic training of neural networks. Numerical results confirm the theoretical claims, and illustrate the advantages of the proposed method with respect to other methods available in the literature.
    Comment: Proceedings of the 2019 Asilomar Conference on Signals, Systems, and Computers
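
    To make the ingredients named in the abstract concrete (a convex surrogate solved locally, dynamic consensus tracking of the network-average gradient, and recursive averaging of the stochastic gradients), the NumPy sketch below shows one synchronous round of such a scheme. It is a minimal illustration under assumed choices, not the authors' exact algorithm: the mixing matrix W, the quadratic surrogate with an l1 regularizer, and the diminishing step-size rules are all placeholder assumptions.

```python
# One synchronous round of a distributed stochastic SCA-style update with
# gradient tracking. Illustration only; surrogate, step sizes, and the l1
# regularizer are assumptions, not the algorithm from the paper.
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1, giving the closed-form solution of the surrogate."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_sca_step(x, y, g_avg, W, stoch_grad, k, tau=1.0, lam=0.1):
    """x, y, g_avg: (n_agents, d) local iterates, gradient trackers, and
    recursive gradient averages; W: doubly stochastic mixing matrix;
    stoch_grad(i, x_i): noisy gradient of agent i's expected loss at x_i;
    k: iteration counter driving the diminishing weights (assumed rules)."""
    n, d = x.shape
    rho = 1.0 / (k + 1)            # weight for recursive gradient averaging
    alpha = 1.0 / (k + 1) ** 0.6   # diminishing step size (assumption)

    g_new = np.stack([stoch_grad(i, x[i]) for i in range(n)])
    g_avg_new = (1 - rho) * g_avg + rho * g_new      # recursive averaging

    # Dynamic consensus: track the network-average gradient.
    y_new = W @ y + (g_avg_new - g_avg)

    # Strongly convex surrogate: linearize the smooth part via the tracked
    # gradient, add a proximal term, keep the l1 regularizer; closed form.
    x_hat = soft_threshold(x - (1.0 / tau) * y_new, lam / tau)

    # Local convex-combination step followed by a consensus (mixing) step.
    x_new = W @ (x + alpha * (x_hat - x))
    return x_new, y_new, g_avg_new
```

    In a full run, y and g_avg would be initialized with the first local stochastic gradients and the step repeated until the iterates of all agents reach consensus on a stationary point.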

    Energy efficiency optimization in MIMO interference channels: A successive pseudoconvex approximation approach

    In this paper, we consider the (global and sum) energy efficiency optimization problem in downlink multi-input multi-output multi-cell systems, where all users suffer from multi-user interference. This is a challenging problem for several reasons: 1) it is a nonconvex fractional programming problem, 2) the transmission rate functions are characterized by (complex-valued) transmit covariance matrices, and 3) the processing-related power consumption may depend on the transmission rate. We tackle this problem by the successive pseudoconvex approximation approach, and we argue that pseudoconvex optimization plays a fundamental role in designing novel iterative algorithms, not only because every locally optimal point of a pseudoconvex optimization problem is also globally optimal, but also because a descent direction is easily obtained from every optimal point of a pseudoconvex approximate problem. The proposed algorithms have the following advantages: 1) fast convergence, as the structure of the original optimization problem is preserved as much as possible in the approximate problem solved in each iteration, 2) easy implementation, as each approximate problem is suitable for parallel computation and its solution has a closed-form expression, and 3) guaranteed convergence to a stationary point or a Karush-Kuhn-Tucker point. The advantages of the proposed algorithm are also illustrated numerically.
    Comment: submitted to IEEE Transactions on Signal Processing
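
    To illustrate why pseudoconvexity matters here, the sketch below works through the simplest instance of the idea: for a single link with parallel subchannels, the global energy efficiency is a concave-over-affine ratio, hence pseudoconcave, so its stationary point is globally optimal and can be found by the Dinkelbach procedure with closed-form inner solutions. This is a toy stand-in for the approximate problems in the paper, not the MIMO multi-cell algorithm itself; the channel gains, circuit power Pc, and per-subchannel cap Pmax are assumptions.

```python
# Toy pseudoconcave fractional program: maximize sum-rate over total power
# for one link with parallel subchannels, via the Dinkelbach procedure.
# Illustration only; values of g, Pc, Pmax are assumptions.
import numpy as np

def dinkelbach_ee(g, Pc, Pmax, tol=1e-9, max_iter=100):
    """Maximize sum_k log(1 + g_k p_k) / (Pc + sum_k p_k), 0 <= p_k <= Pmax."""
    p = np.full_like(g, Pmax / len(g))                  # feasible start
    lam = np.sum(np.log1p(g * p)) / (Pc + np.sum(p))    # current EE value
    for _ in range(max_iter):
        # Inner problem max_p sum_k log(1 + g_k p_k) - lam * sum_k p_k is
        # separable and concave; per-subchannel closed form (clipped).
        p = np.clip(1.0 / lam - 1.0 / g, 0.0, Pmax)
        lam_new = np.sum(np.log1p(g * p)) / (Pc + np.sum(p))
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return p, lam

# Example: 4 subchannels with unequal gains (assumed values).
gains = np.array([2.0, 1.0, 0.5, 0.1])
p_opt, ee_opt = dinkelbach_ee(gains, Pc=1.0, Pmax=5.0)
```

    Because the ratio is pseudoconcave, the multiplier lam increases monotonically to the optimal energy efficiency, which is the property the abstract highlights: any stationary point of such an approximate problem is already the global one, so each iteration yields a reliable ascent direction for the original nonconvex problem.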