
    MAGMA: Multi-level accelerated gradient mirror descent algorithm for large-scale convex composite minimization

    Composite convex optimization models arise in several applications, and are especially prevalent in inverse problems with a sparsity-inducing norm and in general convex optimization with simple constraints. The most widely used algorithms for convex composite models are accelerated first-order methods; however, they can take a large number of iterations to compute an acceptable solution for large-scale problems. In this paper we propose to speed up first-order methods by taking advantage of the structure present in many applications, and in image processing in particular. Our method is based on multi-level optimization methods and exploits the fact that many applications that give rise to large-scale models can be modelled using varying degrees of fidelity. We use Nesterov's acceleration techniques together with the multi-level approach to achieve an $\mathcal{O}(1/\sqrt{\epsilon})$ convergence rate, where $\epsilon$ denotes the desired accuracy. The proposed method has a better convergence rate than any other existing multi-level method for convex problems and, in addition, has the same rate as accelerated methods, which is known to be optimal for first-order methods. Moreover, as our numerical experiments show, on large-scale face recognition problems our algorithm is several times faster than the state of the art.
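
    A minimal sketch of the accelerated proximal-gradient (FISTA-style) baseline that such composite solvers build on, here for a LASSO-type instance; the paper's multi-level coarse-model correction is not reproduced, and all function and parameter names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the sparsity-inducing norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """Accelerated proximal gradient for min 0.5||Ax - b||^2 + lam||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # Nesterov momentum
        x, t = x_new, t_new
    return x

# usage: sparse recovery from a random design
# x_hat = fista_lasso(np.random.randn(50, 100), np.random.randn(50), lam=0.1)
```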

    A stochastic minimum principle and an adaptive pathwise algorithm for stochastic optimal control

    We present a numerical method for finite-horizon stochastic optimal control models. We derive a stochastic minimum principle (SMP) and then develop a numerical method based on the direct solution of the SMP. The method combines Monte Carlo pathwise simulation and non-parametric interpolation methods. We present results from a standard linear-quadratic control model, and from a realistic case study that captures the stochastic dynamics of intermittent power generation in the context of optimal economic dispatch models.
    Funding: National Science Foundation (U.S.) (Grant 1128147); United States Dept. of Energy, Office of Science (Biological and Environmental Research Program Grants DE-SC0005171 and DE-SC0003906).
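
    A toy illustration of the pathwise idea only, not the paper's SMP-based algorithm: simulate controlled SDE trajectories by Monte Carlo, propagate the pathwise sensitivity of the state with respect to a scalar feedback gain, and descend on the sampled cost. The linear-quadratic setup, step sizes, and all names are assumptions for this sketch; the adjoint solution of the SMP and the non-parametric interpolation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def pathwise_cost_grad(k, a=0.5, sigma=0.3, q=1.0, r=0.1,
                       x0=1.0, T=1.0, n_steps=100, n_paths=2000):
    """Monte Carlo estimate of J(k) and dJ/dk for the feedback law u = -k x
    applied to dX = (a X + u) dt + sigma dW with quadratic running cost."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    s = np.zeros(n_paths)                 # pathwise sensitivity dX/dk
    cost, grad = 0.0, 0.0
    for _ in range(n_steps):
        u = -k * x
        cost += dt * np.mean(q * x**2 + r * u**2)
        # d/dk of the running cost, using dX/dk = s and du/dk = -(x + k s)
        grad += dt * np.mean(2*q*x*s + 2*r*u*(-(x + k*s)))
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        x, s = x + (a*x + u)*dt + sigma*dw, s + (a*s - x - k*s)*dt
    return cost, grad

# crude stochastic gradient descent on the scalar gain k
k = 0.0
for _ in range(30):
    J, dJ = pathwise_cost_grad(k)
    k -= 0.2 * dJ
```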

    Bounding Option Prices Using SDP With Change Of Numeraire

    Given the first few moments of the underlying price distribution, tight upper and lower bounds on no-arbitrage option prices can be obtained by solving semidefinite programming (SDP) or linear programming (LP) problems. In this paper, we compare SDP and LP formulations of the European-style option pricing problem and prefer the SDP formulations due to the simplicity of their moment constraints. We propose to employ the technique of change of numeraire when using SDP to bound European-style options. The problem can then be cast as a truncated Hausdorff moment problem, which has necessary and sufficient moment conditions expressed by positive semidefinite moment and localizing matrices. Using four moments, we show stable numerical results for bounding European call options and exchange options. Moreover, a hedging strategy is also identified by the dual formulation.
    Keywords: moments of measures, semidefinite programming, linear programming, option pricing, change of numeraire
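
    A small numeric sketch of the matrices the abstract refers to: for a measure on [0, B] with moments m_0, ..., m_4 (a degree-4 truncated Hausdorff problem), the conditions are that the Hankel moment matrix and the localizing matrix for the interval constraint x(B - x) >= 0 be positive semidefinite. The example moments (those of the uniform distribution on [0, 1]), B, and the tolerance are illustrative, not taken from the paper.

```python
import numpy as np

def hausdorff_matrices(m, B):
    """Moment and localizing matrices for moments m = [m0..m4] on [0, B]."""
    M = np.array([[m[0], m[1], m[2]],
                  [m[1], m[2], m[3]],
                  [m[2], m[3], m[4]]])              # Hankel moment matrix
    L = np.array([[B*m[1] - m[2], B*m[2] - m[3]],
                  [B*m[2] - m[3], B*m[3] - m[4]]])  # localizing matrix, x(B - x)
    return M, L

def is_valid_moment_vector(m, B, tol=1e-9):
    """PSD check of both matrices (the abstract's moment conditions)."""
    M, L = hausdorff_matrices(m, B)
    return (np.linalg.eigvalsh(M).min() >= -tol and
            np.linalg.eigvalsh(L).min() >= -tol)

# moments of the uniform distribution on [0, 1]: m_k = 1/(k + 1)
print(is_valid_moment_vector([1, 1/2, 1/3, 1/4, 1/5], B=1.0))   # True
```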

    Simba: A Scalable Bilevel Preconditioned Gradient Method for Fast Evasion of Flat Areas and Saddle Points

    The convergence behaviour of first-order methods can be severely slowed down when they are applied to high-dimensional non-convex functions, due to the presence of saddle points. If, additionally, the saddles are surrounded by large plateaus, it is highly likely that the first-order methods will converge to sub-optimal solutions. In machine learning applications, sub-optimal solutions mean poor generalization performance. They are also related to the issue of hyper-parameter tuning, since, in the pursuit of solutions that yield lower errors, a tremendous amount of time is spent selecting the hyper-parameters appropriately. A natural way to tackle the limitations of first-order methods is to employ Hessian information. However, methods that incorporate the Hessian do not scale or, if they do, they are very slow for modern applications. Here, we propose Simba, a scalable preconditioned gradient method, to address the main limitations of first-order methods. The method is very simple to implement. It maintains a single preconditioning matrix, constructed as the outer product of the moving average of the gradients. To significantly reduce the computational cost of forming and inverting the preconditioner, we draw links with multilevel optimization methods. These links enable us to construct preconditioners in a randomized manner. Our numerical experiments verify the scalability of Simba as well as its efficacy near saddles and flat areas. Further, we demonstrate that Simba offers satisfactory generalization performance on standard benchmark residual networks. We also analyze Simba and show its linear convergence rate for strongly convex functions.
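
    A minimal sketch, in the spirit described above, of a preconditioned gradient step: keep an exponential moving average m of the gradients, form the rank-one preconditioner m m^T plus damping, and solve against it. This is not Simba itself; the randomized multilevel construction that makes it scalable is omitted, and all hyper-parameters are illustrative.

```python
import numpy as np

def preconditioned_gd(grad_fn, x0, lr=0.01, beta=0.9, damping=1.0, n_iter=500):
    """Gradient descent preconditioned by the outer product of the moving
    average of the gradients, plus damping (a sketch, not Simba itself)."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)                     # moving average of gradients
    I = np.eye(x.size)
    for _ in range(n_iter):
        g = grad_fn(x)
        m = beta * m + (1 - beta) * g        # update the moving average
        P = np.outer(m, m) + damping * I     # rank-one preconditioner + damping
        x -= lr * np.linalg.solve(P, g)      # preconditioned step
    return x

# toy usage on a poorly scaled quadratic f(x) = 0.5 x' diag(1, 100) x
A = np.diag([1.0, 100.0])
x_hat = preconditioned_gd(lambda x: A @ x, x0=np.array([1.0, 1.0]))
```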

    Mean Variance Optimization of Non-Linear Systems and Worst-case Analysis

    In this paper, we consider expected value, variance and worst-case optimization of nonlinear models. We present algorithms, based on iterative Taylor expansions, for computing optimal expected values and variances. We establish convergence and consider the relative merits of policies based on expected value optimization and on worst-case robustness. The latter is a minimax strategy and ensures optimal cover against the worst-case scenario(s), while the former yields optimal expected performance in a stochastic setting. Both approaches are applied to a macroeconomic policy model to illustrate their relative performance, robustness and the trade-offs between the strategies.
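
    A sketch of the Taylor (delta-method) approximation such algorithms iterate on: for a nonlinear model y = f(x, w) with random w of mean mu and covariance Sigma, expand around mu to get E[y] ~ f(x, mu) + 0.5 tr(H Sigma) and Var[y] ~ g' Sigma g (g and H the gradient and Hessian of f in w), then optimize a mean-variance objective over the policy x. The quadratic test model, the unit variance weight, and all names are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def taylor_mean_var(f, x, mu, Sigma, h=1e-4):
    """Second-order Taylor approximation of E[f(x, w)] and first-order
    approximation of Var[f(x, w)] for w ~ (mu, Sigma) (delta method)."""
    n = len(mu)
    g = np.zeros(n)                      # gradient of f in w at (x, mu)
    H = np.zeros((n, n))                 # Hessian of f in w at (x, mu)
    f0 = f(x, mu)
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        g[i] = (f(x, mu + ei) - f(x, mu - ei)) / (2 * h)
        H[i, i] = (f(x, mu + ei) - 2 * f0 + f(x, mu - ei)) / h**2
        for j in range(i):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = H[j, i] = (f(x, mu+ei+ej) - f(x, mu+ei-ej)
                                 - f(x, mu-ei+ej) + f(x, mu-ei-ej)) / (4*h**2)
    mean = f0 + 0.5 * np.trace(H @ Sigma)
    var = g @ Sigma @ g
    return mean, var

# toy nonlinear model and a mean-plus-variance objective over the policy x
f = lambda x, w: (x[0] - w[0])**2 + np.exp(0.5 * x[1] * w[1])
mu, Sigma = np.zeros(2), 0.1 * np.eye(2)
obj = lambda x: sum(taylor_mean_var(f, x, mu, Sigma))
res = minimize(obj, x0=np.array([1.0, 1.0]), method='Nelder-Mead')
```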

    Decomposition-Based Method for Sparse Semidefinite Relaxations of Polynomial Optimization Problems

    We consider polynomial optimization problems that exhibit a structured sparsity pattern. It has been shown in [1, 2] that the optimal solution of a polynomial programming problem with structured sparsity can be computed by solving a series of semidefinite relaxations that possess the same kind of sparsity. We aim to solve these relaxations with a decomposition-based method, which partitions the relaxations according to their sparsity pattern. The decomposition-based method that we propose is an extension to semidefinite programming of the Benders decomposition for linear programs [3].
    Keywords: polynomial optimization, semidefinite programming, sparse SDP relaxations, Benders decomposition
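
    For reference, a compact sketch of the classical Benders decomposition for linear programs [3] that the proposed method extends to SDP. The small LP instance, the variable names, and the assumption that the subproblem is feasible for every master decision (so no feasibility cuts are needed) are all choices made for this illustration.

```python
import numpy as np
import cvxpy as cp

# Benders for  min f'y + c'x  s.t.  A x + B y >= b,  x >= 0,  0 <= y <= 10,
# assuming the subproblem in x is feasible for every y (no feasibility cuts).
f = np.array([2.0]); c = np.array([1.0, 1.0])
A = np.array([[1.0, 0.0], [0.0, 1.0]]); B = np.array([[1.0], [2.0]])
b = np.array([3.0, 4.0])

cuts, y_val = [], np.array([0.0])
for it in range(20):
    # subproblem: fix y, solve in x, and recover duals for the optimality cut
    x = cp.Variable(2, nonneg=True)
    cons = [A @ x >= b - B @ y_val]
    sub = cp.Problem(cp.Minimize(c @ x), cons)
    sub.solve()
    lam = cons[0].dual_value                  # optimal dual multipliers
    cuts.append((lam, lam @ b))               # cut: theta >= lam'(b - B y)

    # master: choose y and a lower bound theta on the subproblem cost
    y = cp.Variable(1); theta = cp.Variable()
    mcons = [y >= 0, y <= 10] + [theta >= rhs - (l @ B) @ y for l, rhs in cuts]
    master = cp.Problem(cp.Minimize(f @ y + theta), mcons)
    master.solve()
    if master.value >= f @ y_val + sub.value - 1e-6:   # bounds have met
        break
    y_val = y.value
```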