
    An extension of the projected gradient method to a Banach space setting with application in structural topology optimization

    For the minimization of a nonlinear cost functional $j$ under convex constraints, the relaxed projected gradient process $\varphi_{k+1} = \varphi_k + \alpha_k \left( P_H(\varphi_k - \lambda_k \nabla_H j(\varphi_k)) - \varphi_k \right)$ is a well-known method. The analysis is classically performed in a Hilbert space $H$. We generalize this method to functionals $j$ which are differentiable in a Banach space. Thus it is possible to perform, e.g., an $L^2$ gradient method if $j$ is only differentiable in $L^\infty$. We show global convergence using Armijo backtracking in $\alpha_k$ and allow the inner product and the scaling $\lambda_k$ to change in every iteration. As an application we present a structural topology optimization problem based on a phase field model, where the reduced cost functional $j$ is differentiable in $H^1 \cap L^\infty$. The presented numerical results using the $H^1$ inner product and a pointwise chosen metric including second-order information show the expected mesh independence of the iteration numbers. The latter yields an additional, drastic decrease in iteration numbers as well as in computation time. Moreover, we present numerical results using a BFGS update of the $H^1$ inner product for further optimization problems based on phase field models.
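    As a concrete illustration of the iteration, the following minimal sketch (Python/NumPy, a finite-dimensional Euclidean stand-in for the Hilbert- or Banach-space setting of the paper) performs the relaxed projected gradient step with Armijo backtracking in alpha_k; the cost j, gradient grad_j, projection project and all parameter values are illustrative assumptions, not the authors' implementation.

    # Sketch of the relaxed projected gradient iteration
    #   phi_{k+1} = phi_k + alpha_k * (P(phi_k - lambda_k * grad j(phi_k)) - phi_k)
    # with Armijo backtracking in alpha_k. All problem data below are assumptions.
    import numpy as np

    def projected_gradient(j, grad_j, project, phi0, lam=1.0, beta=0.5,
                           sigma=1e-4, max_iter=200, tol=1e-8):
        phi = phi0.copy()
        for _ in range(max_iter):
            g = grad_j(phi)
            d = project(phi - lam * g) - phi      # search direction
            if np.linalg.norm(d) < tol:           # stationary: projected step vanishes
                break
            alpha, j_phi, slope = 1.0, j(phi), g @ d
            while j(phi + alpha * d) > j_phi + sigma * alpha * slope:
                alpha *= beta                     # Armijo backtracking in alpha_k
            phi = phi + alpha * d
        return phi

    # Example: project b onto the box [0, 1]^3 by minimizing ||phi - b||^2.
    b = np.array([1.5, -0.3, 0.7])
    j = lambda phi: np.sum((phi - b) ** 2)
    grad_j = lambda phi: 2.0 * (phi - b)
    project = lambda phi: np.clip(phi, 0.0, 1.0)
    print(projected_gradient(j, grad_j, project, np.zeros(3)))   # -> [1.  0.  0.7]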

    A Distributed Newton Method for Network Utility Maximization

    Most existing work uses dual decomposition and subgradient methods to solve Network Utility Maximization (NUM) problems in a distributed manner, which suffer from slow convergence. This work develops an alternative, fast-converging distributed Newton-type algorithm for solving network utility maximization problems with self-concordant utility functions. By using novel matrix splitting techniques, both primal and dual updates for the Newton step can be computed using iterative schemes in a decentralized manner with limited information exchange. Similarly, the stepsize can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the stepsize in our method are computed within some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate significant convergence rate improvement of our algorithm relative to the existing subgradient methods based on dual decomposition. Comment: 27 pages, 4 figures, LIDS report, submitted to CDC 201
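    The decentralized computation of the Newton step can be pictured with a generic matrix-splitting iteration: write the Newton system H d = -g with H = D + R (D the locally available diagonal part, R the off-diagonal part exchanged with neighbors) and iterate d ← D^{-1}(-g - R d). The Python/NumPy sketch below only illustrates this idea on a small diagonally dominant system; it is not the particular splitting or the primal-dual scheme developed in the paper, and the matrices are made-up examples.

    # Jacobi-type matrix splitting for an (inexact) Newton direction H d = -g.
    import numpy as np

    def split_newton_direction(H, g, num_iters=100):
        D = np.diag(np.diag(H))      # diagonal part, available locally at each node
        R = H - D                    # off-diagonal part, requires neighbor exchange
        d = np.zeros_like(g)
        for _ in range(num_iters):   # finite truncation -> direction computed within some error
            d = np.linalg.solve(D, -g - R @ d)
        return d

    # Illustrative diagonally dominant "Hessian", so the splitting iteration converges.
    H = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 5.0]])
    g = np.array([1.0, -2.0, 0.5])
    print(np.allclose(split_newton_direction(H, g), np.linalg.solve(H, -g), atol=1e-6))   # True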

    Distributed Newton-type algorithms for network resource allocation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010, by Ermin Wei. Cataloged from PDF version of thesis. Includes bibliographical references (p. 99-101).

    Most of today's communication networks are large-scale and comprise agents with local information and heterogeneous preferences, making centralized control and coordination impractical. This has motivated much interest in developing and studying distributed algorithms for network resource allocation problems, such as Internet routing, data collection and processing in sensor networks, and cross-layer communication network design. Existing work on network resource allocation problems relies on dual decomposition and first-order (gradient or subgradient) methods, which involve simple computations and can be implemented in a distributed manner, yet suffer from slow convergence. Second-order methods are faster, but their direct implementation requires computation-intensive matrix inversion operations, which couple information across the network and hence cannot be implemented in a decentralized way. This thesis develops and analyzes Newton-type (second-order) distributed methods for network resource allocation problems. In particular, we focus on two general formulations: Network Utility Maximization (NUM) and network flow cost minimization problems.

    For NUM problems, we develop a distributed, fast-converging Newton-type algorithm using the properties of self-concordant utility functions. Our algorithm utilizes novel matrix splitting techniques, which enable both primal and dual Newton steps to be computed using iterative schemes in a decentralized manner with limited information exchange. Moreover, the step-size used in our method can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the step-size in our method are computed within some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate significant convergence rate improvement of our algorithm relative to the existing subgradient methods based on dual decomposition.

    The second part of the thesis presents a distributed approach based on a Newton-type method for solving network flow cost minimization problems. The key component of our method is to represent the dual Newton direction as the limit of an iterative procedure involving the graph Laplacian, which can be implemented based only on local information. Using standard Lipschitz conditions, we provide an analysis of the convergence properties of our algorithm and show that the method converges superlinearly to an explicitly characterized error neighborhood, even when the iterative schemes used for computing the Newton direction and the stepsize are truncated. We also present simulation results to illustrate the significant performance gains of this method over the subgradient methods currently used.
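    For the network flow part, the idea of obtaining the dual Newton direction as the limit of a graph-Laplacian iteration can be sketched as follows (Python/NumPy): a Richardson-type update w ← w + gamma*(b - L w) touches, at each node, only that node's and its neighbors' values, and converges to a solution of L w = b when sum(b) = 0 and gamma is small enough. The graph, weights, right-hand side and step size below are illustrative assumptions, not the thesis' actual scheme.

    # Laplacian system L w = b (dual-direction style) solved by purely local updates.
    import numpy as np

    # Weighted adjacency of an assumed 4-node path graph 0 - 1 - 2 - 3.
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 2, 0],
                  [0, 2, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian

    b = np.array([1.0, -0.5, -1.0, 0.5])      # consistent right-hand side: sum(b) = 0
    gamma = 1.0 / np.max(np.diag(L))          # step size, small enough for this example
    w = np.zeros_like(b)
    for _ in range(2000):                     # truncating earlier yields an inexact direction
        w = w + gamma * (b - L @ w)           # node i uses only w_j of its neighbors j

    print(np.allclose(L @ w, b, atol=1e-8))   # True; w is unique up to adding a constant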