
    Distributed Big-Data Optimization via Block Communications

    We study distributed multi-agent large-scale optimization problems, wherein the cost function is composed of a smooth, possibly nonconvex sum-utility plus a DC (Difference-of-Convex) regularizer. We consider the scenario where the dimension of the optimization variables is so large that optimizing and/or transmitting the entire set of variables would cause unaffordable computation and communication overhead. To address this issue, we propose the first distributed algorithm whereby agents optimize and communicate only a portion of their local variables. The scheme hinges on successive convex approximation (SCA) to handle the nonconvexity of the objective function, coupled with a novel block-signal tracking scheme that locally estimates the average of the agents' gradients. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Numerical results on a sparse regression problem show the effectiveness of the proposed algorithm and the impact of the block size on its practical convergence speed and communication cost.
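    The gradient-tracking idea behind the block-signal tracking scheme can be illustrated on a toy consensus problem. The sketch below is a minimal full-variable gradient-tracking loop (DIGing-style) on synthetic least-squares data, not the paper's block-wise algorithm; the data, mixing matrix, step size, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 6

# Synthetic local least-squares costs f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((8, dim)) / 3.0 for _ in range(n_agents)]
b = [rng.standard_normal(8) for _ in range(n_agents)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a 4-agent ring.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros((n_agents, dim))
# Tracker initialized at the local gradients; its network average stays
# equal to the average gradient at every iteration.
y = np.array([grad(i, x[i]) for i in range(n_agents)])
alpha = 0.05
for _ in range(4000):
    x_new = W @ x - alpha * y  # consensus step driven by the tracked gradient
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i])
                          for i in range(n_agents)])
    x = x_new

# All agents agree on the minimizer of the sum of the local costs.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
```

    Each y_i asymptotically tracks the network-average gradient, which is the quantity the paper's block-signal tracking scheme estimates block by block instead of in full.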

    ADMM for MPC with state and input constraints, and input nonlinearity

    In this paper, we propose an Alternating Direction Method of Multipliers (ADMM) algorithm for solving a Model Predictive Control (MPC) optimization problem in which the system has state and input constraints and a nonlinear input map. The resulting optimization is nonconvex, and we provide a proof of convergence to a point satisfying necessary conditions for optimality. This general method is proposed as a solution for blended-mode control of hybrid electric vehicles, to allow optimization in real time. To demonstrate the properties of the algorithm we conduct numerical experiments on randomly generated problems, and show that the algorithm is effective for achieving an approximate solution, but has limitations when an exact solution is required.
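    The ADMM iteration structure the abstract builds on can be sketched on a convex toy problem, box-constrained least squares with the consensus split x = z; the data, penalty parameter, and iteration count below are illustrative assumptions, and the paper's nonlinear input map and MPC structure are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lo, hi = -0.1, 0.1  # box constraints on the decision variable (illustrative)
rho = 1.0           # ADMM penalty parameter

# ADMM for  min 0.5*||A x - b||^2  s.t.  lo <= x <= hi,  via the split x = z.
x = np.zeros(n)
z = np.zeros(n)
u = np.zeros(n)  # scaled dual variable
M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # factor once, reuse every iteration
for _ in range(1000):
    x = M @ (A.T @ b + rho * (z - u))  # unconstrained quadratic x-update
    z = np.clip(x + u, lo, hi)         # z-update is a projection onto the box
    u += x - z                         # dual ascent on the consensus constraint

# At convergence the primal residual ||x - z|| vanishes and z is feasible.
```

    The same three-step pattern (smooth-subproblem update, projection, dual update) carries over to the paper's setting, where the nonlinear input map makes the subproblems nonconvex.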

    Nonnegative Matrix Inequalities and their Application to Nonconvex Power Control Optimization

    Maximizing the sum rates in a multiuser Gaussian channel by power control is a nonconvex NP-hard problem that finds engineering application in code division multiple access (CDMA) wireless communication networks. In this paper, we extend and apply several fundamental nonnegative matrix inequalities initiated by Friedland and Karlin in a 1975 paper to solve this nonconvex power control optimization problem. Leveraging tools such as the Perron–Frobenius theorem in nonnegative matrix theory, we (1) show that this problem in the power domain can be reformulated as an equivalent convex maximization problem over a closed unbounded convex set in the logarithmic signal-to-interference-noise ratio domain, (2) propose two relaxation techniques that utilize the structure of the reformulated problem and convexification by Lagrange dual relaxation to compute progressively tight bounds, and (3) propose a global optimization algorithm with ϵ-suboptimality to compute the optimal power control allocation. A byproduct of our analysis is the application of Friedland–Karlin inequalities to inverse problems in nonnegative matrix theory.
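    The change of variables underlying the log-domain reformulation is the standard substitution p = e^q, under which log SINR_i becomes concave in q: a constant plus q_i minus a log of a positive sum of exponentials of affine functions. A small numerical midpoint-concavity check, with illustrative channel gains and noise powers (this is the standard transformation only, not the paper's full reformulation):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 3
G = rng.uniform(0.1, 1.0, (K, K)) + np.eye(K)  # channel gain matrix (illustrative)
noise = 0.1 * np.ones(K)                       # receiver noise powers

def log_sinr(q):
    """log SINR_i under powers p = exp(q); concave in q coordinate-wise."""
    p = np.exp(q)
    interference = G @ p - np.diag(G) * p + noise  # sum_{j != i} G_ij p_j + n_i
    return np.log(np.diag(G) * p / interference)

# Midpoint-concavity check along a random segment in the q-domain:
# f((q1+q2)/2) >= (f(q1)+f(q2))/2 holds for every coordinate.
q1, q2 = rng.standard_normal(K), rng.standard_normal(K)
mid = log_sinr(0.5 * (q1 + q2))
avg = 0.5 * (log_sinr(q1) + log_sinr(q2))
```

    In the original power domain the same function is not concave, which is why the log-SINR domain is the natural setting for the convex reformulation.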

    Optimization approach with ρ-proximal convexification for Internet traffic control, Journal of Telecommunications and Information Technology, 2005, nr 3

    The optimization flow control algorithm for traffic control in computer networks, introduced by Steven H. Low, works only for concave utility functions. This assumption is rather optimistic and leads to several problems, especially with streaming applications. In an earlier paper we introduced a modification of the algorithm based on the idea of proximal convexification. In this paper we extend this approach, replacing the proximal method with the ρ-proximal method. The new method mixes the quadratic proximal term with higher-order terms, achieving better results. The algorithms are compared in a simple numerical experiment.
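    The plain (quadratic) proximal convexification that the paper extends can be sketched in one dimension: a sigmoid-like, nonconcave rate utility minus a link price is maximized by repeatedly solving a proximally regularized subproblem, which is concave for a large enough proximal weight. The utility, price, and proximal weight below are illustrative assumptions; the ρ-proximal variant with higher-order terms is not implemented here.

```python
import numpy as np

# Sigmoid-like (nonconcave) utility, typical of streaming traffic (illustrative).
U = lambda x: x**2 / (1.0 + x**2)
price = 0.2                          # link price charged per unit rate
grid = np.linspace(0.0, 5.0, 5001)   # rate search grid

def prox_step(x_ref, c):
    """argmax_x U(x) - price*x - (c/2)(x - x_ref)^2, by grid search."""
    vals = U(grid) - price * grid - 0.5 * c * (grid - x_ref) ** 2
    return grid[np.argmax(vals)]

x, c = 0.5, 1.0
for _ in range(100):
    x = prox_step(x, c)  # solve the convexified proximal subproblem

# At a fixed point the proximal term vanishes, so U'(x) ~= price.
Uprime = 2 * x / (1 + x**2) ** 2
```

    The fixed point satisfies the same stationarity condition as the original nonconcave problem, which is what lets the convexified algorithm recover a meaningful rate allocation.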