7,872 research outputs found

    NUM-Based Rate Allocation for Streaming Traffic via Sequential Convex Programming

    In recent years, there has been an increasing demand for ubiquitous streaming-like applications in data networks. In this paper, we concentrate on NUM-based rate allocation for streaming applications with so-called S-curve utility functions. Due to the non-concavity of such utility functions, the underlying NUM problem is non-convex, and dual methods may be ineffective for it. To tackle the non-convex problem, we use elementary techniques to make the network utility concave; however, this introduces reverse-convex constraints that keep the problem non-convex. To deal with the transformed NUM, we leverage the Sequential Convex Programming (SCP) approach to approximate the non-convex problem by a series of convex ones. Based on this approach, we propose a distributed rate allocation algorithm and demonstrate that, under mild conditions, it converges to a locally optimal solution of the original NUM. Numerical results validate the effectiveness, in terms of tractable convergence, of the proposed rate allocation algorithm. Comment: 6 pages, conference submission
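
    To make the SCP idea concrete, the sketch below runs a successive convex approximation loop on a toy single-link rate allocation problem with sigmoidal (S-curve) utilities: at each iterate the non-concave utilities are linearized, a proximal term is added, and the resulting convex subproblem is solved over the capacity set. The utility parameters, the proximal surrogate, and the off-the-shelf subproblem solver are illustrative assumptions, not the paper's exact transformation or its distributed algorithm.

```python
# A minimal sketch of sequential convex programming (SCP) on a toy
# single-link NUM problem with sigmoidal (S-curve) utilities.
# All parameter values and the proximal surrogate are illustrative
# assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

a, c = 2.0, 3.0                  # steepness / inflection point of the S-curve
C = 10.0                         # link capacity (assumed)
n = 4                            # number of flows

def U(x):                        # non-concave S-curve utility
    return 1.0 / (1.0 + np.exp(-a * (x - c)))

def dU(x):                       # its derivative
    s = U(x)
    return a * s * (1.0 - s)

def scp_rate_allocation(rho=1.0, iters=50):
    x = np.linspace(1.0, 2.0 * C / n - 1.0, n)   # feasible start, sums to C
    cons = [{"type": "ineq", "fun": lambda x: C - np.sum(x)}]
    bnds = [(0.0, C)] * n
    for _ in range(iters):
        xk = x.copy()
        # Convex surrogate: linearize each utility at xk and add a proximal
        # term, then maximize (minimize the negative) over the capacity set.
        def surrogate(x, xk=xk):
            lin = U(xk) + dU(xk) * (x - xk)
            return -np.sum(lin) + 0.5 * rho * np.sum((x - xk) ** 2)
        x = minimize(surrogate, xk, bounds=bnds, constraints=cons,
                     method="SLSQP").x
        if np.linalg.norm(x - xk) < 1e-6:        # no progress: stationary point
            break
    return x

# Prints a stationary rate allocation of the non-convex NUM; under suitable
# conditions such SCP iterates converge to a locally optimal solution.
print(scp_rate_allocation())
```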

    Non-convex Optimization for Machine Learning

    A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or else the objective itself is designed to be non-convex. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks. The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but such problems are often NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy and, even so, presents significant challenges for large-scale optimization. On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for the practitioner, as they frequently outperform relaxation-based techniques; popular heuristics include projected gradient descent and alternating minimization. However, these heuristics are often poorly understood in terms of their convergence and other properties. This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. It leads the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems. Comment: The official publication is available from now publishers via http://dx.doi.org/10.1561/220000005
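
    As an illustration of the direct, projection-based heuristics mentioned in the abstract, the sketch below applies projected gradient descent to a sparsity-constrained least-squares problem, i.e. iterative hard thresholding. The problem sizes, step size, and sparsity level are assumptions chosen for the example and are not taken from the monograph.

```python
# A minimal sketch of projected gradient descent for a non-convex,
# sparsity-constrained least-squares problem (iterative hard thresholding).
# Problem sizes, step size and sparsity level are illustrative assumptions.
import numpy as np

def project_sparse(w, s):
    """Projection onto the non-convex set {w : ||w||_0 <= s}: keep the s
    largest-magnitude entries and zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-s:]
    out[idx] = w[idx]
    return out

def iht(A, b, s, step=None, iters=200):
    """min_w 0.5*||Aw - b||^2  subject to  ||w||_0 <= s."""
    m, n = A.shape
    if step is None:
        # 1/L, with L an upper bound on the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    w = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ w - b)                 # gradient (forward) step
        w = project_sparse(w - step * grad, s)   # projection onto sparse set
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 300))
w_true = np.zeros(300); w_true[:5] = rng.standard_normal(5)
b = A @ w_true
w_hat = iht(A, b, s=5)
print(np.linalg.norm(w_hat - w_true))   # typically small: support is recovered
```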

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, they are computationally attractive, since each Newton iteration requires only the approximate solution of a linear system of usually small dimension.
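
    As a concrete illustration (not the paper's algorithms themselves), the sketch below evaluates the forward-backward envelope for one common composite instance, f(x) = 0.5*||Ax - b||^2 plus g(x) = lam*||x||_1, using the soft-thresholding proximal map; the specific f, g, and function names are assumptions made for this example.

```python
# A minimal sketch of the forward-backward envelope (FBE) for the composite
# problem  min_x f(x) + g(x)  with  f(x) = 0.5*||Ax - b||^2  and
# g(x) = lam*||x||_1.  The concrete f and g are assumptions for illustration;
# the paper works with general convex composite forms.
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fbe(x, A, b, lam, gamma):
    """FBE_gamma(x) = f(x) - (gamma/2)*||grad f(x)||^2
                      + g^gamma(x - gamma*grad f(x)),
    where g^gamma is the Moreau envelope of g with parameter gamma."""
    r = A @ x - b
    f = 0.5 * (r @ r)
    grad = A.T @ r
    z = x - gamma * grad                        # forward (gradient) step
    p = soft_threshold(z, gamma * lam)          # backward (proximal) step
    moreau = lam * np.abs(p).sum() + (p - z) @ (p - z) / (2.0 * gamma)
    return f - 0.5 * gamma * (grad @ grad) + moreau

# The FBE is continuously differentiable and shares its minimizers with
# f + g (for suitably small gamma), which is what lets Newton-type
# machinery be applied to a nonsmooth composite problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50)); b = rng.standard_normal(20)
x = np.zeros(50)
print(fbe(x, A, b, lam=0.1, gamma=0.5 / np.linalg.norm(A, 2) ** 2))
```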

    Fast exact variable order affine projection algorithm

    Variable order affine projection algorithms have recently been presented for use when not only the convergence speed of the algorithm has to be adjusted but also its computational cost and its final residual error. These kinds of affine projection (AP) algorithms improve the steady-state performance of the standard AP algorithm by reducing the residual mean square error. Furthermore, these algorithms optimize computational cost by dynamically adjusting their projection order to the convergence speed requirements. The main cost of the standard AP algorithm is due to the matrix inversion that appears in the coefficient update equation. Most efforts to decrease the computational cost of these algorithms have focused on optimizing this matrix inversion. This paper deals with optimizing the computational cost of variable order AP algorithms by recursive calculation of the inverse signal matrix; thus, a fast exact variable order AP algorithm is proposed. Exact iterative expressions to calculate the inverse matrix when the algorithm projection order either increases or decreases are incorporated into a variable order AP algorithm, leading to a reduced-complexity implementation. The simulation results show that the proposed algorithm performs similarly to existing variable order AP algorithms while having a lower computational complexity. © 2012 Elsevier B.V. All rights reserved. Partially supported by TEC2009-13741, PROMETEO 2009/0013, GV/2010/027, ACOMP/2010/006 and UPV PAID-06-09. Ferrer Contreras, M.; Gonzalez, A.; Diego Antón, M. D.; Piñero Sipán, M. G. (2012). Fast exact variable order affine projection algorithm. Signal Processing, 92(9), 2308-2314. https://doi.org/10.1016/j.sigpro.2012.03.007
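
    For context, the sketch below implements the standard AP coefficient update, including the regularized P x P matrix inversion whose recursive, variable-order computation the paper optimizes; the signal model, filter length, projection order, step size, and regularization constant are illustrative assumptions.

```python
# A minimal sketch of the standard affine projection (AP) filter update,
# whose per-iteration cost is dominated by the P x P matrix inversion that
# the fast variable order algorithm computes recursively.  Signal model,
# step size and regularization are illustrative assumptions.
import numpy as np

def affine_projection(x, d, L=8, P=4, mu=0.5, delta=1e-3):
    """Identify an L-tap FIR system from input x and desired output d,
    using projection order P."""
    w = np.zeros(L)
    for n in range(L + P - 1, len(x)):
        # P most recent length-L regressors stacked as rows of A (P x L)
        A = np.array([x[n - p - L + 1 : n - p + 1][::-1] for p in range(P)])
        e = d[n - P + 1 : n + 1][::-1] - A @ w
        # Coefficient update; (A A^T + delta*I) is the P x P matrix whose
        # inverse the variable order algorithm tracks as P changes.
        w = w + mu * A.T @ np.linalg.solve(A @ A.T + delta * np.eye(P), e)
    return w

# Example: identify an unknown 8-tap system from noisy observations.
rng = np.random.default_rng(2)
h = rng.standard_normal(8)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:5000] + 0.01 * rng.standard_normal(5000)
w_hat = affine_projection(x, d)
print(np.linalg.norm(w_hat - h))        # small once the filter has converged
```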