
    A Distributed Newton Method for Network Utility Maximization

    Most existing work uses dual decomposition and subgradient methods to solve Network Utility Maximization (NUM) problems in a distributed manner, but these methods suffer from slow convergence rates. This work develops an alternative distributed Newton-type, fast-converging algorithm for solving network utility maximization problems with self-concordant utility functions. By using novel matrix splitting techniques, both primal and dual updates for the Newton step can be computed using iterative schemes in a decentralized manner with limited information exchange. Similarly, the stepsize can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the stepsize in our method are computed within some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate significant convergence rate improvement of our algorithm relative to the existing subgradient methods based on dual decomposition.
    Comment: 27 pages, 4 figures, LIDS report, submitted to CDC 201
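
    To make the matrix-splitting idea concrete, here is a minimal numerical sketch of a Jacobi-style splitting iteration for a diagonally dominant linear system, the kind of finitely truncated scheme the abstract describes for computing the Newton step. The function name, the 3x3 example system, and the iteration count are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def splitting_solve(M, rhs, num_iters=50):
    """Solve M w = rhs via the Jacobi-style splitting M = D - N, iterating
    w <- D^{-1} (N w + rhs). Updating one component only needs that row of M,
    which is what makes this style of scheme amenable to decentralized use."""
    D = np.diag(np.diag(M))     # diagonal part, kept local to each node/link
    N = D - M                   # off-diagonal remainder, i.e. neighbor terms
    w = np.zeros_like(rhs)
    for _ in range(num_iters):  # finite truncation -> inexact Newton direction
        w = np.linalg.solve(D, N @ w + rhs)
    return w

# Tiny diagonally dominant system standing in for the dual Newton system.
M = np.array([[4.0, -1.0, -1.0],
              [-1.0, 3.0, -1.0],
              [-1.0, -1.0, 5.0]])
rhs = np.array([1.0, 2.0, 3.0])
print(splitting_solve(M, rhs))  # close to np.linalg.solve(M, rhs)
```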

    On Dual Convergence of the Distributed Newton Method for Network Utility Maximization

    The existing distributed algorithms for Network Utility Maximization (NUM) problems mostly rely on dual decomposition and first-order (gradient or subgradient) methods, which suffer from slow convergence rates. Recent works [17] and [18] proposed an alternative distributed Newton-type second-order algorithm for solving NUM problems with self-concordant utility functions. This algorithm is implemented in the primal space and, at each primal iteration, computes the dual variables using a finitely terminated iterative scheme obtained through novel matrix splitting techniques. These works presented a convergence rate analysis for the primal iterations and showed that if the error level in the Newton direction (resulting from finite termination of the dual iterations) is below a certain threshold, then the algorithm achieves a local quadratic convergence rate to an error neighborhood of the optimal solution. This paper builds on these works and presents a convergence rate analysis for the dual iterations that enables us to explicitly compute, at each primal iteration, the number of dual steps that can satisfy the error level. This yields, for the first time, a fully distributed second-order method for NUM problems with a local quadratic convergence guarantee. Simulation results demonstrate significant convergence rate improvement of our algorithm, even when only one dual update is implemented per primal iteration, relative to the existing first-order methods based on dual decomposition.
    Funding: National Science Foundation (U.S.) CAREER Grant DMI-0545910; United States Air Force Office of Scientific Research, Multidisciplinary University Research Initiative (R6756-G2); United States Office of Naval Research, Multidisciplinary University Research Initiative (Grant N0001408107474); United States Army Research Office, Multidisciplinary University Research Initiative, Scalable; United States Air Force Office of Scientific Research, Complex Networks Program
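
    The key point of this abstract, terminating the dual iterations once the Newton-direction error is small enough, can be illustrated with a short sketch. Monitoring the residual directly (rather than using the paper's analytic bound on the number of dual steps) is a simplification made here, and the example matrix is arbitrary.

```python
import numpy as np

def truncated_dual_solve(M, rhs, tol, max_iters=1000):
    """Run a Jacobi-style splitting iteration on the dual system M w = rhs and
    stop once the residual drops below tol, returning the iterate and the number
    of dual steps used. The paper derives this step count analytically; checking
    the residual directly is an assumption made for illustration."""
    D = np.diag(np.diag(M))
    N = D - M
    w = np.zeros_like(rhs)
    for k in range(1, max_iters + 1):
        w = np.linalg.solve(D, N @ w + rhs)
        if np.linalg.norm(M @ w - rhs) <= tol:
            return w, k
    return w, max_iters

M = np.array([[4.0, -1.0], [-1.0, 3.0]])
rhs = np.array([1.0, 2.0])
for tol in (1e-2, 1e-4, 1e-6):
    _, steps = truncated_dual_solve(M, rhs, tol)
    print(f"error level {tol:g}: {steps} dual steps")
```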

    Distributed Newton-type algorithms for network resource allocation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 99-101).
    Most of today's communication networks are large-scale and comprise agents with local information and heterogeneous preferences, making centralized control and coordination impractical. This has motivated much interest in developing and studying distributed algorithms for network resource allocation problems, such as Internet routing, data collection and processing in sensor networks, and cross-layer communication network design. Existing works on network resource allocation problems rely on dual decomposition and first-order (gradient or subgradient) methods, which involve simple computations and can be implemented in a distributed manner, yet suffer from slow convergence rates. Second-order methods are faster, but their direct implementation requires computation-intensive matrix inversion operations, which couple information across the network and hence cannot be implemented in a decentralized way. This thesis develops and analyzes Newton-type (second-order) distributed methods for network resource allocation problems. In particular, we focus on two general formulations: Network Utility Maximization (NUM) and network flow cost minimization problems.
    For NUM problems, we develop a distributed Newton-type fast-converging algorithm using the properties of self-concordant utility functions. Our algorithm utilizes novel matrix splitting techniques, which enable both primal and dual Newton steps to be computed using iterative schemes in a decentralized manner with limited information exchange. Moreover, the step-size used in our method can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the step-size in our method are computed within some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate significant convergence rate improvement of our algorithm relative to the existing subgradient methods based on dual decomposition.
    The second part of the thesis presents a distributed approach based on a Newton-type method for solving network flow cost minimization problems. The key component of our method is to represent the dual Newton direction as the limit of an iterative procedure involving the graph Laplacian, which can be implemented based only on local information. Using standard Lipschitz conditions, we provide analysis for the convergence properties of our algorithm and show that the method converges superlinearly to an explicitly characterized error neighborhood, even when the iterative schemes used for computing the Newton direction and the stepsize are truncated. We also present some simulation results to illustrate the significant performance gains of this method over the subgradient methods currently used.
    by Ermin Wei. S.M.
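
    As a rough illustration of the second part of the thesis, the sketch below approximates a dual direction by iterating on a weighted graph Laplacian system built from a node-edge incidence matrix. The quadratic-cost flow interpretation of the weights, the Richardson-type update, and the step size alpha are assumptions made for illustration rather than the thesis's exact scheme.

```python
import numpy as np

def laplacian_dual_direction(E, weights, residual, num_iters=200, alpha=0.1):
    """Approximate the solution of the weighted-Laplacian system L y = residual,
    with L = E diag(weights) E^T, by the Richardson-type iteration
    y <- y + alpha * (residual - L y). Each node's update involves only its
    incident edges, mirroring the local-information property described above."""
    L = E @ np.diag(weights) @ E.T
    y = np.zeros_like(residual)
    for _ in range(num_iters):   # finite truncation -> inexact dual direction
        y = y + alpha * (residual - L @ y)
    return y

# Path graph on 3 nodes (edges 0-1 and 1-2), node-edge incidence matrix.
E = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])
weights = np.array([1.0, 2.0])         # e.g. inverse second derivatives of edge costs
residual = np.array([1.0, 0.0, -1.0])  # sums to zero, so it lies in range(L)
print(laplacian_dual_direction(E, weights, residual))
```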

    Distributed Large Scale Network Utility Maximization

    Recent work by Zymnis et al. proposes an efficient primal-dual interior-point method, using a truncated Newton method, for solving the network utility maximization (NUM) problem. This method has shown superior performance relative to the traditional dual-decomposition approach. Other recent work by Bickson et al. shows how to compute the Newton step, which is the main computational bottleneck of the Newton method, efficiently and in a distributed fashion using the Gaussian belief propagation algorithm. In the current work, we combine both approaches to create an efficient distributed algorithm for solving the NUM problem. Unlike the work of Zymnis et al., which uses a centralized approach, our new algorithm is easily distributed. Using an empirical evaluation, we show that our new method outperforms previous approaches, including the truncated Newton method and dual-decomposition methods. As an additional contribution, this is the first work that evaluates the performance of the Gaussian belief propagation algorithm versus the preconditioned conjugate gradient method for a large-scale problem.
    Comment: In the International Symposium on Information Theory (ISIT) 200
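
    For readers unfamiliar with Gaussian belief propagation as a linear solver, the following is a bare-bones synchronous GaBP sketch for a symmetric, diagonally dominant system of the kind that arises in the Newton step. It is not Bickson et al.'s optimized implementation, and the small tridiagonal example system is an assumption.

```python
import numpy as np

def gabp_solve(A, b, num_iters=30):
    """Synchronous Gaussian belief propagation for A x = b, with A symmetric and
    diagonally dominant so that the message passing converges. Messages are only
    exchanged between pairs (i, j) with A[i, j] != 0, i.e. along network edges."""
    n = len(b)
    P_self = np.diag(A).copy()
    mu_self = b / P_self
    P = np.zeros((n, n))    # precision message P[i, j] sent from node i to node j
    mu = np.zeros((n, n))   # mean message mu[i, j] sent from node i to node j
    for _ in range(num_iters):
        P_new, mu_new = np.zeros_like(P), np.zeros_like(mu)
        for i in range(n):
            for j in range(n):
                if i != j and A[i, j] != 0.0:
                    nbrs = A[:, i] != 0.0
                    nbrs[i] = False
                    nbrs[j] = False     # exclude the recipient's own message
                    P_excl = P_self[i] + P[nbrs, i].sum()
                    mu_excl = (P_self[i] * mu_self[i]
                               + (P[nbrs, i] * mu[nbrs, i]).sum()) / P_excl
                    P_new[i, j] = -A[i, j] ** 2 / P_excl
                    mu_new[i, j] = P_excl * mu_excl / A[i, j]
        P, mu = P_new, mu_new
    # node marginals give the solution components
    x = np.zeros(n)
    for i in range(n):
        nbrs = A[:, i] != 0.0
        nbrs[i] = False
        P_i = P_self[i] + P[nbrs, i].sum()
        x[i] = (P_self[i] * mu_self[i] + (P[nbrs, i] * mu[nbrs, i]).sum()) / P_i
    return x

A = np.array([[3.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 3.0]])
b = np.array([4.0, 5.0, 4.0])
print(gabp_solve(A, b))   # close to np.linalg.solve(A, b) = [1, 1, 1]
```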

    Optimization flow control with Newton-like algorithm

    We earlier proposed an optimization approach to reactive flow control in which the objective of the control is to maximize the aggregate utility of all sources over their transmission rates. The control mechanism is derived as a gradient projection algorithm to solve the dual problem. In this paper we extend the algorithm to a scaled gradient projection. The diagonal scaling matrix approximates the diagonal terms of the Hessian and can be computed at individual links using the same information required by the unscaled algorithm. We prove the convergence of the scaled algorithm and present simulation results that illustrate its superiority to the unscaled algorithm.
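
    Below is a minimal sketch of the scaled (Newton-like) dual update described here, written for log utilities so that the source rates and the diagonal Hessian terms have closed forms. The routing matrix, capacities, fixed step size, and the log-utility choice are illustrative assumptions rather than the paper's general setting.

```python
import numpy as np

def scaled_dual_flow_control(R, c, num_iters=200, gamma=1.0):
    """Link-price (dual) iteration for NUM with log utilities, where the gradient
    step at each link is scaled by the corresponding diagonal entry of the dual
    Hessian (the 'Newton-like' variant). R is a 0/1 routing matrix
    (links x sources) and c holds link capacities; both are toy assumptions."""
    p = np.ones(R.shape[0])              # initial link prices
    for _ in range(num_iters):
        q = R.T @ p                      # path price seen by each source
        x = 1.0 / q                      # utility-maximizing rate for U(x) = log x
        y = R @ x                        # resulting load on each link
        h = R @ (x ** 2)                 # diagonal of the dual Hessian at this point
        p = np.maximum(p + gamma * (y - c) / h, 1e-8)   # scaled projected step
    return x, p

# Two links, three sources: source 0 uses both links, sources 1 and 2 one each.
R = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
c = np.array([1.0, 2.0])
rates, prices = scaled_dual_flow_control(R, c)
print("rates:", rates, "prices:", prices)
```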

    Fast, Accurate Second Order Methods for Network Optimization

    Dual descent methods are commonly used to solve network flow optimization problems, since their implementation can be distributed over the network. These algorithms, however, often exhibit slow convergence rates. Approximate Newton methods, which compute descent directions locally, have been proposed as alternatives to accelerate the convergence of conventional dual descent. The effectiveness of these methods is limited by the accuracy of such approximations. In this paper, we propose an efficient and accurate distributed second-order method for network flow problems. The proposed approach utilizes the sparsity pattern of the dual Hessian to approximate the Newton direction using a novel distributed solver for symmetric diagonally dominant linear equations. Our solver is based on a distributed implementation of a recent parallel solver of Spielman and Peng (2014). We analyze the properties of the proposed algorithm and show that, similar to conventional Newton methods, superlinear convergence within a neighborhood of the optimal value is attained. We finally demonstrate the effectiveness of the approach in a set of experiments on randomly generated networks.
    Comment: arXiv admin note: text overlap with arXiv:1502.0315
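
    The Spielman-Peng solver itself is involved, so the sketch below substitutes a generic SDD solver (Jacobi-preconditioned conjugate gradient) to show how an approximate Newton direction can be obtained from a sparse, symmetric diagonally dominant dual Hessian. The stand-in solver, the example matrix, and the function name are assumptions, not the paper's method.

```python
import numpy as np

def approx_newton_direction(L, grad, tol=1e-8, max_iters=500):
    """Approximate the Newton direction d solving L d = -grad, where L is a
    symmetric diagonally dominant (SDD) dual Hessian. A Jacobi-preconditioned
    conjugate gradient iteration is used here purely as a generic SDD solver;
    the paper instead builds on a distributed Spielman-Peng solver."""
    d = np.zeros_like(grad)
    r = -grad - L @ d                    # initial residual
    inv_diag = 1.0 / np.diag(L)          # Jacobi preconditioner
    z = inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iters):
        Lp = L @ p
        alpha = rz / (p @ Lp)
        d += alpha * p
        r -= alpha * Lp
        if np.linalg.norm(r) <= tol:
            break
        z = inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return d

# Weighted Laplacian of a triangle plus a small diagonal shift (SDD and invertible).
L = np.array([[ 2.1, -1.0, -1.0],
              [-1.0,  3.1, -2.0],
              [-1.0, -2.0,  3.1]])
grad = np.array([0.5, -1.0, 0.5])
print(approx_newton_direction(L, grad))   # close to np.linalg.solve(L, -grad)
```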