
    Iterative learning control for constrained linear systems

    This paper considers iterative learning control (ILC) for linear systems with convex control input constraints. First, the constrained ILC problem is formulated in a novel successive projection framework. Then, based on this projection method, two algorithms are proposed to solve the constrained ILC problem. The results show that, when perfect tracking is possible, both algorithms achieve it; the two algorithms differ, however, in that one requires much less computation than the other. When perfect tracking is not possible, both algorithms exhibit a form of practical convergence to a "best approximation". The effect of weighting matrices on the performance of the algorithms is also discussed and, finally, numerical simulations are given to demonstrate the effectiveness of the proposed methods.
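    The update-then-project structure is easy to picture in code. Below is a minimal, hypothetical sketch, not the paper's algorithm: a scalar plant, a fixed learning gain, and a box input constraint stand in for the general convex-constrained setting, with the Euclidean projection applied after each ILC trial.

```python
# Minimal sketch of constrained ILC: update the input from the tracking
# error, then project onto the convex constraint set. The plant, gain,
# and box constraint here are illustrative assumptions.
import numpy as np

def simulate(u, a=0.8, b=1.0):
    """Scalar plant y[t+1] = a*y[t] + b*u[t], zero initial state."""
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

def project_box(u, lo=-1.0, hi=1.0):
    """Euclidean projection onto the convex box {u : lo <= u <= hi}."""
    return np.clip(u, lo, hi)

T, gain = 50, 0.5
y_ref = np.sin(np.linspace(0, 2 * np.pi, T))   # reference trajectory
u = np.zeros(T)

for trial in range(200):                       # ILC trials
    e = y_ref - simulate(u)                    # tracking error of this trial
    e_shift = np.append(e[1:], 0.0)            # u[t] affects y[t+1]
    u = project_box(u + gain * e_shift)        # learn, then project

print("final tracking error:", np.linalg.norm(y_ref - simulate(u)))
```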

    Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)

    We consider the stochastic approximation problem where a convex function has to be minimized, given only unbiased estimates of its gradients at certain points, a framework which includes machine learning methods based on the minimization of the empirical risk. We focus on problems without strong convexity, for which all previously known algorithms achieve a convergence rate for function values of O(1/n^{1/2}). We consider and analyze two algorithms that achieve a rate of O(1/n) for classical supervised learning problems. For least-squares regression, we show that averaged stochastic gradient descent with constant step-size achieves the desired rate. For logistic regression, this is achieved by a simple novel stochastic gradient algorithm that (a) constructs successive local quadratic approximations of the loss functions, while (b) preserving the same running-time complexity as stochastic gradient descent. For these algorithms, we provide a non-asymptotic analysis of the generalization error (in expectation, and also in high probability for least-squares), and run extensive experiments on standard machine learning benchmarks showing that they often outperform existing approaches.
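    The least-squares recipe, constant step-size SGD plus a running (Polyak-Ruppert) average of the iterates, is short enough to sketch. The synthetic data, dimensions, and the particular step-size constant below are illustrative assumptions, not the paper's exact setting.

```python
# Sketch of averaged SGD with constant step-size for least squares,
# the scheme the abstract credits with the O(1/n) rate. Synthetic data
# and the 1/(4 R^2) step-size constant are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

R2 = np.mean(np.sum(X**2, axis=1))   # estimate of E[||x||^2]
step = 1.0 / (4.0 * R2)              # constant step-size

w = np.zeros(d)
w_avg = np.zeros(d)
for i in range(n):                   # one pass over the data stream
    xi, yi = X[i], y[i]
    grad = (xi @ w - yi) * xi        # unbiased gradient of 0.5*(x.w - y)^2
    w -= step * grad
    w_avg += (w - w_avg) / (i + 1)   # running average of the iterates

print("error of averaged iterate:", np.linalg.norm(w_avg - w_true))
```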

    A Unified Successive Pseudo-Convex Approximation Framework

    In this paper, we propose a successive pseudo-convex approximation algorithm to efficiently compute stationary points for a large class of possibly nonconvex optimization problems. The stationary points are obtained by solving a sequence of successively refined approximate problems, each of which is much easier to solve than the original problem. To achieve convergence, the approximate problem only needs to exhibit a weak form of convexity, namely, pseudo-convexity. We show that the proposed framework not only includes as special cases a number of existing methods, for example, the gradient method and the Jacobi algorithm, but also leads to new algorithms which enjoy easier implementation and faster convergence. We also propose a novel line search method for nondifferentiable optimization problems, which is carried out over a properly constructed differentiable function, with the benefit of a simplified implementation as compared to state-of-the-art line search techniques that directly operate on the original nondifferentiable objective function. The advantages of the proposed algorithm are shown, both theoretically and numerically, by several example applications, namely, MIMO broadcast channel capacity computation, energy efficiency maximization in massive MIMO systems, and LASSO in sparse signal recovery.
    Comment: submitted to IEEE Transactions on Signal Processing; original title: A Novel Iterative Convex Approximation Method
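    A minimal sketch of the approximate-then-step pattern is below. The surrogate here is a gradient linearization plus a quadratic proximal term (one of the special cases the abstract mentions, recovering the gradient method), combined with a standard Armijo backtracking line search; the test objective is hypothetical, and the paper's line search for nondifferentiable problems is not reproduced.

```python
# Sketch of a successive convex approximation loop: minimize a convex
# surrogate around the current iterate, use the minimizer to form a
# descent direction, then pick a step size by backtracking line search.
# The objective and constants are illustrative assumptions.
import numpy as np

def f(x):                              # nonconvex test objective
    return np.sum(x**2) + np.sum(np.sin(3 * x))

def grad_f(x):
    return 2 * x + 3 * np.cos(3 * x)

x, tau = np.full(4, 2.0), 1.0          # iterate, proximal weight
for t in range(100):
    # Surrogate: argmin_y  grad_f(x).(y - x) + (tau/2)||y - x||^2
    y = x - grad_f(x) / tau            # closed-form surrogate minimizer
    d = y - x                          # descent direction
    gamma, slope = 1.0, grad_f(x) @ d  # Armijo backtracking
    while f(x + gamma * d) > f(x) + 0.1 * gamma * slope:
        gamma *= 0.5
    x = x + gamma * d

print("stationary point:", x, " f =", f(x))
```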

    Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging

    In this paper, we study distributed big-data nonconvex optimization in multi-agent networks. We consider the (constrained) minimization of the sum of a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a convex (possibly) nonsmooth regularizer. Our interest is in big-data problems wherein there is a large number of variables to optimize. If treated by means of standard distributed optimization algorithms, these large-scale problems may be intractable, due to the prohibitive local computation and communication burden at each node. We propose a novel distributed solution method whereby at each iteration agents optimize and then communicate (in an uncoordinated fashion) only a subset of their decision variables. To deal with the nonconvexity of the cost function, the novel scheme hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimate gradient averages; and ii) a novel block-wise consensus-based protocol to perform local block-averaging operations and gradient tracking. Asymptotic convergence to stationary solutions of the nonconvex problem is established. Finally, numerical results show the effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and practical convergence speed.
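    A heavily simplified sketch of the block-wise idea follows: agents take a linearized step on one block of the decision vector, average that block with their neighbors, and update gradient-tracking variables on the same block, so only that block is ever communicated. Coordinated block selection, a complete graph, and local quadratic costs are simplifying assumptions; the paper's scheme is uncoordinated and far more general.

```python
# Toy sketch: consensus + gradient tracking restricted to one block per
# iteration, so only that block is communicated. Network, costs, block
# selection, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
I, B, b = 4, 3, 2                        # agents, blocks, block size
W = np.full((I, I), 1.0 / I)             # mixing matrix (complete graph)
A = [rng.standard_normal((5, B * b)) for _ in range(I)]
y = [rng.standard_normal(5) for _ in range(I)]

def grad(i, x):                          # gradient of agent i's local cost
    return A[i].T @ (A[i] @ x - y[i])

x = np.zeros((I, B * b))                 # one row per agent
g = np.array([grad(i, x[i]) for i in range(I)])   # gradient trackers
step = 0.05
for t in range(500):
    k = rng.integers(B)                  # block updated this iteration
    s = slice(k * b, (k + 1) * b)
    x_new = x.copy()
    x_new[:, s] = W @ (x[:, s] - step * g[:, s])   # step + block averaging
    g_new = g.copy()                     # track block-wise gradient averages
    g_new[:, s] = W @ g[:, s] + np.array(
        [grad(i, x_new[i])[s] - grad(i, x[i])[s] for i in range(I)])
    x, g = x_new, g_new
```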

    Energy efficiency optimization in MIMO interference channels: A successive pseudoconvex approximation approach

    In this paper, we consider the (global and sum) energy efficiency optimization problem in downlink multi-input multi-output multi-cell systems, where all users suffer from multi-user interference. This is a challenging problem for several reasons: 1) it is a nonconvex fractional programming problem, 2) the transmission rate functions are characterized by (complex-valued) transmit covariance matrices, and 3) the processing-related power consumption may depend on the transmission rate. We tackle this problem by the successive pseudoconvex approximation approach, and we argue that pseudoconvex optimization plays a fundamental role in designing novel iterative algorithms, not only because every locally optimal point of a pseudoconvex optimization problem is also globally optimal, but also because a descent direction is easily obtained from every optimal point of a pseudoconvex optimization problem. The proposed algorithms have the following advantages: 1) fast convergence, as the structure of the original optimization problem is preserved as much as possible in the approximate problem solved in each iteration; 2) easy implementation, as each approximate problem is suitable for parallel computation and its solution has a closed-form expression; and 3) guaranteed convergence to a stationary point or a Karush-Kuhn-Tucker point. The advantages of the proposed algorithm are also illustrated numerically.
    Comment: submitted to IEEE Transactions on Signal Processing
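    Why pseudoconvexity helps can be seen on a one-dimensional toy problem: an energy-efficiency ratio (concave rate over positive affine power) is pseudoconcave, so any local maximizer is global. The sketch below solves such a toy problem with a Dinkelbach-type loop, a related classical fractional-programming technique rather than the paper's successive pseudoconvex approximation; all constants are made up.

```python
# Toy single-link energy efficiency: maximize rate(p) / power(p).
# The ratio is pseudoconcave (concave over positive affine), so the
# Dinkelbach loop below reaches the global optimum. Constants are
# illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

P_c, P_max, g = 1.0, 10.0, 2.0            # circuit power, budget, channel gain

def rate(p):  return np.log2(1.0 + g * p)
def power(p): return p + P_c

lam = 0.0
for _ in range(20):                       # Dinkelbach iterations
    res = minimize_scalar(lambda p: -(rate(p) - lam * power(p)),
                          bounds=(0.0, P_max), method="bounded")
    p_opt = res.x
    lam = rate(p_opt) / power(p_opt)      # updated efficiency estimate

print(f"optimal EE {lam:.4f} bits/Joule at p = {p_opt:.4f}")
```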

    Improving Resource Efficiency with Partial Resource Muting for Future Wireless Networks

    We propose novel resource allocation algorithms that have the objective of finding a good tradeoff between resource reuse and interference avoidance in wireless networks. To this end, we first study properties of functions that relate the resource budget available to network elements to the optimal utility and to the optimal resource efficiency obtained by solving max-min utility optimization problems. From the asymptotic behavior of these functions, we obtain a transition point that indicates whether a network is operating in an efficient noise-limited regime or in an inefficient interference-limited regime for a given resource budget. For networks operating in the inefficient regime, we propose a novel partial resource muting scheme to improve the efficiency of the resource utilization. The framework is very general. It can be applied not only to the downlink of 4G networks, but also to 5G networks equipped with flexible duplex mechanisms. Numerical results show significant performance gains of the proposed scheme compared to the solution to the max-min utility optimization problem with full frequency reuse.
    Comment: 8 pages, 9 figures, to appear in WiMob 201
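    The transition-point idea can be reproduced numerically on a toy network: under full reuse, the max-min rate grows almost linearly with the power budget while noise dominates and then saturates once interference dominates. The two-link gains, noise level, and the simple SINR-balancing iteration below are illustrative assumptions, not the paper's method.

```python
# Toy illustration of the noise-limited vs. interference-limited
# regimes: sweep a total power budget and report the max-min rate of a
# 2-link network under full reuse. Gains and noise are made up.
import numpy as np

G = np.array([[1.0, 0.3],
              [0.4, 1.0]])               # G[i, j]: gain from tx j to rx i
noise = 1.0

def min_rate(budget, iters=500):
    """Approximate max-min rate via a simple SINR-balancing iteration."""
    p = np.full(2, budget / 2)
    for _ in range(iters):
        interf = G @ p - np.diag(G) * p + noise   # cross-link interference + noise
        sinr = np.diag(G) * p / interf
        p *= sinr.min() / sinr                    # shift power toward the weak link
        p *= budget / p.sum()                     # renormalize to the budget
    interf = G @ p - np.diag(G) * p + noise
    return np.log2(1 + np.diag(G) * p / interf).min()

for budget in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    # rate growth flattens at large budgets: interference-limited regime
    print(f"budget={budget:7.1f}  min-rate={min_rate(budget):.3f}")
```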

    Distributed Nonconvex Multiagent Optimization Over Time-Varying Networks

    We study nonconvex distributed optimization in multiagent networks where the communication between nodes is modeled as a time-varying sequence of arbitrary digraphs. We introduce a novel broadcast-based distributed algorithmic framework for the (constrained) minimization of the sum of a smooth (possibly nonconvex and nonseparable) function, i.e., the agents' sum-utility, plus a convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually employed to enforce some structure in the solution, typically sparsity. The proposed method hinges on Successive Convex Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental to locally estimate the gradients of the agents' cost functions; and ii) a novel broadcast protocol to disseminate information and distribute the computation among the agents. Asymptotic convergence to stationary solutions is established. A key feature of the proposed algorithm is that it requires neither double-stochasticity of the consensus matrices (column stochasticity suffices) nor knowledge of the graph sequence for its implementation. To the best of our knowledge, the proposed framework is the first broadcast-based distributed algorithm for convex and nonconvex constrained optimization over arbitrary, time-varying digraphs. Numerical results show that our algorithm outperforms current schemes on both convex and nonconvex problems.
    Comment: Copyright 2001 SS&C. Published in the Proceedings of the 50th annual Asilomar conference on signals, systems, and computers, Nov. 6-9, 2016, CA, US
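    The reason column stochasticity suffices is the classical push-sum mechanism: each node tracks both a value and a scalar weight mixed with the same matrix, and their ratio converges to the network-wide average even over digraphs. A minimal sketch on a fixed toy digraph (an illustrative assumption; the paper handles time-varying graphs) is below.

```python
# Push-sum averaging with a column-stochastic weight matrix: the ratio
# x/w at every node converges to the network-wide average, with no
# doubly-stochastic weights needed. Toy fixed 3-node digraph below.
import numpy as np

# columns sum to 1: each node splits its broadcast among its out-neighbors
C = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])

x = np.array([1.0, 5.0, 9.0])    # local values; network average is 5
w = np.ones(3)                   # push-sum weights

for t in range(50):
    x = C @ x                    # mix values
    w = C @ w                    # mix weights identically

print("push-sum ratios:", x / w)  # every entry converges to 5.0
```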