
    Hybrid Deterministic-Stochastic Methods for Data Fitting

    Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling a subset of the terms in the sum. These methods can make great progress initially, but often slow down as they approach a solution. In contrast, full-gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both approaches. Rate-of-convergence analysis shows that by controlling the sample size in an incremental gradient algorithm, it is possible to maintain the steady convergence rates of full-gradient methods. We detail a practical quasi-Newton implementation based on this approach. Numerical experiments illustrate its potential benefits.
    Comment: 26 pages. Revised proofs of Theorems 2.6 and 3.1; results unchanged.
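    The controlled-sample-size idea can be illustrated with a short sketch. What follows is a minimal gradient method with a geometrically growing batch, not the paper's quasi-Newton implementation; the function name, the growth factor, and the fixed step size are illustrative assumptions.

```python
import numpy as np

def hybrid_gradient_descent(grad_i, n_terms, x0, step=0.1, n_iters=100,
                            batch0=10, growth=1.1, rng=None):
    """Minimize (1/n) * sum_i f_i(x) with a geometrically growing sample size.

    grad_i(i, x) returns the gradient of the i-th term at x.  The batch
    starts small (cheap iterations, fast initial progress) and grows toward
    the full sum, recovering the steady convergence of a full-gradient method.
    """
    rng = np.random.default_rng(rng)
    x, batch = np.asarray(x0, dtype=float), float(batch0)
    for _ in range(n_iters):
        m = min(int(batch), n_terms)
        idx = rng.choice(n_terms, size=m, replace=False)
        g = sum(grad_i(i, x) for i in idx) / m   # subsampled gradient estimate
        x = x - step * g
        batch *= growth                          # enlarge the sample each iteration
    return x
```

    For a least-squares sum with rows `a[i]` and targets `b[i]`, one would pass `grad_i = lambda i, x: (a[i] @ x - b[i]) * a[i]`.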

    On Quasi-Newton Forward-Backward Splitting: Proximal Calculus and Convergence

    We introduce a framework for quasi-Newton forward-backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal ± rank-r symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric: using duality, we derive formulas that relate the proximal mapping in a rank-r modified metric to the proximal mapping in the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piecewise linear nature of the dual problem. We then apply these results to accelerate composite convex minimization, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared against a comprehensive list of alternatives from the literature; our quasi-Newton splitting algorithm with the prescribed metric compares favorably with the state of the art. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning, and classification.
    Comment: arXiv admin note: text overlap with arXiv:1206.115
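    As a concrete instance of a variable-metric forward-backward step, here is a minimal sketch for ℓ1-regularized least squares with a purely diagonal metric; the diagonal curvature estimate stands in for the paper's diagonal ± rank-r metrics, whose proximal mapping requires the dual calculus described above. All names and parameter choices below are assumptions for illustration.

```python
import numpy as np

def prox_l1_diag(y, lam, d):
    """Prox of lam*||x||_1 in the metric induced by D = diag(d):
    argmin_x lam*||x||_1 + 0.5*(x - y)^T D (x - y), solved componentwise."""
    return np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)

def fb_splitting_diag(A, b, lam, n_iters=200):
    """Forward-backward splitting for 0.5*||Ax - b||^2 + lam*||x||_1
    in a diagonal metric D (a simplified stand-in for diagonal +/- rank-r)."""
    d = np.sum(A * A, axis=0) + 1e-12            # per-coordinate curvature estimate
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                 # forward (gradient) step
        x = prox_l1_diag(x - grad / d, lam, d)   # backward (proximal) step in D
    return x
```

    The point of the diagonal metric is that the backward step stays componentwise, and hence as cheap as ordinary soft thresholding; the rank-r correction in the paper is what requires the dual formulas.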

    Robust Optimization of PDEs with Random Coefficients Using a Multilevel Monte Carlo Method

    This paper addresses optimization problems constrained by partial differential equations with uncertain coefficients. In particular, the robust control problem and the average control problem are considered for a tracking-type cost functional with an additional penalty on the variance of the state. The expressions for the gradient and Hessian of either problem contain expected-value operators. Because of the large number of uncertainties in our model, we propose evaluating these expectations using a multilevel Monte Carlo (MLMC) method. Under mild assumptions, we show that this yields the gradient and Hessian of the MLMC estimator of the original cost functional. Furthermore, we show that using certain correlated samples reduces the total number of samples required. Two optimization methods are investigated: the nonlinear conjugate gradient method and Newton's method. For both, a specific algorithm is provided that dynamically decides which and how many samples should be taken in each iteration. The cost of the optimization up to a specified tolerance τ is shown to be proportional to the cost of one gradient evaluation with requested root-mean-square error τ. The algorithms are tested on a model elliptic diffusion problem with a lognormal diffusion coefficient; an additional nonlinear term is also considered.
    Comment: This work was presented at the IMG 2016 conference (Dec 5-9, 2016), the Copper Mountain conference (Mar 26-30, 2017), and the FrontUQ conference (Sept 5-8, 2017).
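    The MLMC estimator underlying these gradient and Hessian evaluations telescopes E[P_L] = E[P_0] + Σ_{l=1}^{L} E[P_l − P_{l−1}]. The sketch below shows that structure with a generic sampler; reusing the same random draw on consecutive levels is what makes the correction terms small in variance and drives the sample savings. The `sampler(level, omega)` interface and the per-level sample counts are placeholders, not the paper's adaptive algorithm.

```python
import numpy as np

def mlmc_estimate(sampler, n_levels, n_samples, rng=None):
    """Multilevel Monte Carlo estimate of E[P_L] via the telescoping sum
    E[P_0] + sum_{l=1}^{L-1} E[P_l - P_{l-1}].

    sampler(level, omega) evaluates the quantity of interest (e.g. one
    entry of the cost-functional gradient) on the given discretization
    level for the random sample omega.  Coarse/fine pairs use the SAME
    omega, so the correction terms are strongly correlated.
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for level, n in enumerate(n_samples[:n_levels]):
        acc = 0.0
        for _ in range(n):
            omega = rng.standard_normal()    # placeholder random-coefficient draw
            fine = sampler(level, omega)
            coarse = sampler(level - 1, omega) if level > 0 else 0.0
            acc += fine - coarse             # level-l correction term
        total += acc / n
    return total
```

    Because the corrections shrink with the level, most samples can be taken on the coarse (cheap) levels, which is the source of the cost reduction the abstract refers to.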

    A variational approach to stable principal component pursuit

    We introduce a new convex formulation for stable principal component pursuit (SPCP) that decomposes noisy signals into low-rank and sparse components. To solve our SPCP formulation numerically, we first develop a convex variational framework and then accelerate it with quasi-Newton methods. Experiments on synthetic and real data show that our approach offers advantages over the classical SPCP formulations in scalability and practical parameter selection.
    Comment: 10 pages, 5 figures
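    A standard first-order baseline for this kind of low-rank-plus-sparse decomposition alternates singular-value thresholding and soft thresholding. The sketch below implements a smoothed SPCP-type model as a stand-in for the paper's variational formulation and quasi-Newton acceleration; the penalty weights and step size are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: prox of tau*||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft(X, tau):
    """Soft thresholding: prox of tau*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def spcp_prox_grad(M, lam_L=1.0, lam_S=0.1, step=0.5, n_iters=100):
    """Proximal gradient for a smoothed SPCP-type model
        min_{L,S} 0.5*||L + S - M||_F^2 + lam_L*||L||_* + lam_S*||S||_1,
    a first-order baseline, not the paper's accelerated method."""
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iters):
        R = L + S - M                          # gradient of the smooth coupling term
        L = svt(L - step * R, step * lam_L)    # low-rank proximal step
        S = soft(S - step * R, step * lam_S)   # sparse proximal step
    return L, S
```

    The step size 0.5 respects the Lipschitz constant 2 of the coupled smooth term; quasi-Newton acceleration, as in the paper, replaces this fixed scaling with a variable metric.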