
    An ADMM Algorithm for a Class of Total Variation Regularized Estimation Problems

    We present an alternating augmented Lagrangian method for convex optimization problems where the cost function is the sum of two terms, one that is separable in the variable blocks, and a second that is separable in the difference between consecutive variable blocks. Examples of such problems include Fused Lasso estimation, total variation denoising, and multi-period portfolio optimization with transaction costs. In each iteration of our method, the first step involves separately optimizing over each variable block, which can be carried out in parallel. The second step is not separable in the variables, but can be carried out very efficiently. We apply the algorithm to segmentation of data based on changes in mean (l_1 mean filtering) or changes in variance (l_1 variance filtering). In a numerical example, we show that our implementation is around 10,000 times faster than the generic optimization solver SDPT3.
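    The two-step iteration described above is easy to illustrate on the l_1 mean filtering special case (total variation denoising of a scalar signal). The sketch below is a generic NumPy implementation of ADMM for that special case, not the paper's specialized solver; the function name, the penalty parameter rho, and the dense linear solve are illustrative assumptions.

```python
import numpy as np

def tv_denoise_admm(y, lam, rho=1.0, n_iter=200):
    """ADMM sketch for l_1 mean filtering:
        minimize 0.5*||x - y||^2 + lam*||D x||_1,
    where D is the first-difference operator.  Illustrative only."""
    y = np.asarray(y, dtype=float)
    n = y.size
    D = np.diff(np.eye(n), axis=0)                       # (n-1) x n first-difference matrix
    A_inv = np.linalg.inv(np.eye(n) + rho * D.T @ D)     # x-step system, factored once
    z = np.zeros(n - 1)                                  # copy of D x
    u = np.zeros(n - 1)                                  # scaled dual variable
    for _ in range(n_iter):
        x = A_inv @ (y + rho * D.T @ (z - u))            # quadratic x-step
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)  # soft threshold
        u += Dx - z                                      # dual update
    return x

# Illustrative use: recover a piecewise-constant mean from a noisy signal.
rng = np.random.default_rng(0)
noisy = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
estimate = tv_denoise_admm(noisy, lam=1.0)
```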

    Stochastic Programming with Probability

    In this work we study optimization problems subject to a failure constraint. This constraint is expressed in terms of a condition that causes failure, representing a physical or technical breakdown. We formulate the problem in terms of a probability constraint, where the level of "confidence" is a modelling parameter with the interpretation that the probability of failure should not exceed that level. Application of the stochastic Arrow-Hurwicz algorithm poses two difficulties: one is structural and arises from the lack of convexity of the probability constraint, and the other is the estimation of the gradient of the probability constraint. We develop two gradient estimators with decreasing bias, via a convolution method and a finite difference technique, respectively, and we provide a full convergence analysis of the algorithms. The convergence results are used to tune the parameters of the numerical algorithms so as to achieve the best convergence rates, and numerical results are included for an example application in finance.
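    A primal-dual (Arrow-Hurwicz) iteration with a finite-difference estimator of the probability-constraint gradient can be sketched as follows. This is a minimal illustration, assuming a generic failure indicator failure(x, xi), Gaussian scenario sampling, and fixed step sizes; all of these are placeholders, and the convolution-based estimator analysed in the work would replace the finite-difference loop.

```python
import numpy as np

def stochastic_arrow_hurwicz(grad_f, failure, x0, alpha,
                             n_iter=5000, batch=64, h=1e-1,
                             gamma=1e-2, eta=1e-2, seed=0):
    """Sketch of a stochastic Arrow-Hurwicz iteration for
        minimize f(x)  subject to  P[failure(x, xi) > 0] <= alpha,
    using a finite-difference estimator (step h) of the gradient of the
    failure probability.  Names, sampling model and step sizes are assumptions."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lam = 0.0                                              # multiplier of the probability constraint
    for _ in range(n_iter):
        xi = rng.standard_normal((batch, x.size))          # placeholder scenario sampling
        p_hat = np.mean(failure(x, xi) > 0)                # Monte Carlo failure probability
        g_prob = np.zeros_like(x)                          # finite-difference gradient estimate
        for j in range(x.size):
            e = np.zeros_like(x)
            e[j] = h
            g_prob[j] = (np.mean(failure(x + e, xi) > 0) - p_hat) / h
        x = x - gamma * (grad_f(x) + lam * g_prob)         # primal descent on the Lagrangian
        lam = max(0.0, lam + eta * (p_hat - alpha))        # dual ascent on the constraint
    return x, lam
```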

    Time and nodal decomposition with implicit non-anticipativity constraints in dynamic portfolio optimization

    We propose a decomposition method for the solution of a dynamic portfolio optimization problem which fits the formulation of a multistage stochastic programming problem. The method yields a time and nodal decomposition of the problem in its arborescent formulation by applying a discrete version of the Pontryagin Maximum Principle. The solutions of the decomposed problems are coordinated through a fixed-point weighted iterative scheme. Introducing an optimization step in the choice of the weights at each iteration allows the original problem to be solved very efficiently.
    Keywords: Stochastic programming, Discrete-time optimal control problem, Iterative scheme, Portfolio optimization
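    The coordination of the decomposed problems can be sketched abstractly as a weighted fixed-point loop in which the mixing weight is itself chosen by a small optimization. The code below is only a schematic of that idea: solve_subproblems and residual are placeholder callables, and the grid search over weights stands in for whatever weight-selection rule the paper actually uses.

```python
import numpy as np

def coordinate_fixed_point(solve_subproblems, residual, x0,
                           n_iter=50, weights=np.linspace(0.1, 1.0, 10), tol=1e-8):
    """Schematic weighted fixed-point coordination loop (illustrative only).
    solve_subproblems(x): solves the decomposed time/nodal subproblems given
    the current coordination variables x and returns the updated iterate.
    residual(x): scalar measure of how far x is from satisfying the coupling."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_new = solve_subproblems(x)                       # subproblems solved independently
        # "Optimization step" over the weight: keep the best convex combination.
        x = min(((1 - w) * x + w * x_new for w in weights), key=residual)
        if residual(x) < tol:
            break
    return x
```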

    Projection methods in conic optimization

    There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. These conic projections are in turn the workhorse of a range of algorithms in conic optimization, with a variety of applications in science, finance and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization, and applications in polynomial optimization. It is a presentation of the material of several recent research articles; we aim here at clarifying the ideas, presenting them in a general framework, and pointing out important techniques.
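    As a concrete instance of the basic building block, the sketch below uses Dykstra's alternating-projection scheme to project onto the intersection of the positive semidefinite cone and a simple affine constraint. The chapter's regularization algorithms are more elaborate; the example sets, tolerance and iteration count here are assumptions.

```python
import numpy as np

def dykstra_projection(y, proj_cone, proj_affine, n_iter=500, tol=1e-9):
    """Dykstra's scheme for projecting y onto the intersection of two closed
    convex sets, given the individual projections proj_cone and proj_affine."""
    x = np.array(y, dtype=float)
    p = np.zeros_like(x)                      # correction for the cone projection
    q = np.zeros_like(x)                      # correction for the affine projection
    for _ in range(n_iter):
        z = proj_cone(x + p)
        p = x + p - z
        x_new = proj_affine(z + q)
        q = z + q - x_new
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example sets (assumed for illustration): the PSD cone and {X : X[0, 0] = 1}.
def proj_psd(X):
    w, V = np.linalg.eigh((X + X.T) / 2)      # symmetrize, then clip negative eigenvalues
    return (V * np.maximum(w, 0.0)) @ V.T

def proj_fix_corner(X):
    Y = X.copy()
    Y[0, 0] = 1.0
    return Y

Y = np.random.default_rng(0).standard_normal((5, 5))
X_proj = dykstra_projection((Y + Y.T) / 2, proj_psd, proj_fix_corner)
```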

    Enhanced first-order methods for convex and nonconvex optimization

    First-order methods for convex and nonconvex optimization have been an important research topic in the past few years. This talk studies and develops efficient algorithms of first-order type to solve a variety of problems. We first focus on the widely studied gradient-based methods for composite convex optimization problems that arise extensively in compressed sensing and machine learning. In particular, we discuss an accelerated first-order scheme and its variants, which enjoy the “optimal” convergence rate for gradient methods in terms of complexity, and their practical behavior. In the second part of the talk, we present alternating-direction-type methods for solving structured nonlinear nonconvex problems. The problems we are interested in have a special structure that allows a convenient two-block variable splitting. Our methods rely on solving convex subproblems, and the limit points obtained can be guaranteed to satisfy the KKT conditions. Our approach includes the alternating direction method of multipliers (ADMM) and the alternating linearization method (ALM), and we provide convergence rate results for both classes of methods. Moreover, global optimization techniques from the polynomial optimization literature are applied to complement our local methods and to provide lower bounds. Applications include nonconvex problems that have recently arisen in portfolio selection, power systems, and other areas.
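    For the composite convex setting mentioned above, an accelerated first-order scheme of the FISTA type attains the O(1/k^2) rate usually referred to as "optimal". The sketch below is a textbook version of such a scheme, not the talk's specific variants; the proximal-operator interface and the LASSO example in the comments are assumptions.

```python
import numpy as np

def fista(grad_f, prox_g, x0, step, n_iter=500):
    """Accelerated proximal gradient (FISTA-type) sketch for min_x f(x) + g(x),
    with f smooth (gradient grad_f, step <= 1/L for L the Lipschitz constant of
    grad_f) and g admitting an easy proximal operator prox_g(v, step)."""
    x = x_prev = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        v = x + ((t - 1.0) / t_next) * (x - x_prev)        # momentum extrapolation
        x_prev, x = x, prox_g(v - step * grad_f(v), step)  # proximal gradient step
        t = t_next
    return x

# Example use (assumed) for the LASSO: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
#   grad_f = lambda x: A.T @ (A @ x - b)
#   prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
#   step   = 1.0 / np.linalg.norm(A, 2) ** 2
```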