4,623 research outputs found

    Quasi-Newton particle Metropolis-Hastings

    Particle Metropolis-Hastings enables Bayesian parameter inference in general nonlinear state space models (SSMs). However, many implementations use a random walk proposal, which can result in poor mixing if not tuned correctly through tedious pilot runs. We therefore consider a new proposal inspired by quasi-Newton algorithms that may achieve similar (or better) mixing with less tuning. An advantage over other Hessian-based proposals is that it only requires estimates of the gradient of the log-posterior. A possible application is parameter inference in the challenging class of SSMs with intractable likelihoods. We exemplify this application and the benefits of the new proposal by modelling log-returns of futures contracts on coffee with a stochastic volatility model with α-stable observations.
    Comment: 23 pages, 5 figures. Accepted for the 17th IFAC Symposium on System Identification (SYSID), Beijing, China, October 201
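
    Below is a minimal, hedged sketch (Python/NumPy) of a gradient-only, quasi-Newton Metropolis-Hastings proposal in the spirit of the abstract above: a BFGS-type approximation of the inverse negative Hessian, built from past parameter and gradient differences, preconditions a MALA-like proposal. A toy Gaussian posterior with an exact gradient stands in for the particle-filter gradient estimate of the SSM setting, and the adaptation scheme is an illustrative simplification rather than the authors' construction; all names (log_post, inv_neg_hessian, eps) are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)
        P = np.array([[1.0, 0.8], [0.8, 2.0]])            # toy posterior covariance
        P_inv = np.linalg.inv(P)
        log_post = lambda th: -0.5 * th @ P_inv @ th      # toy log-posterior (up to a constant)
        grad_log_post = lambda th: -P_inv @ th            # would be a particle-filter estimate for an SSM

        def inv_neg_hessian(s_list, y_list, dim):
            """BFGS-style approximation of the inverse negative Hessian of the
            log-posterior, built from parameter differences s and negated
            gradient differences y collected along the chain."""
            H = np.eye(dim)
            for s, y in zip(s_list, y_list):
                if s @ y > 1e-10:                         # curvature condition keeps H positive definite
                    rho = 1.0 / (s @ y)
                    V = np.eye(dim) - rho * np.outer(s, y)
                    H = V @ H @ V.T + rho * np.outer(s, s)
            return H

        def log_q(x_to, x_from, H, eps):
            """log-density of the proposal N(x_from + 0.5*eps^2*H*grad, eps^2*H)."""
            mean = x_from + 0.5 * eps**2 * H @ grad_log_post(x_from)
            C = eps**2 * H
            d = x_to - mean
            return -0.5 * d @ np.linalg.solve(C, d) - 0.5 * np.linalg.slogdet(C)[1]

        eps, dim, n_iter = 0.9, 2, 5000
        theta, chain, s_hist, y_hist = np.zeros(dim), [], [], []
        for _ in range(n_iter):
            H = inv_neg_hessian(s_hist[-20:], y_hist[-20:], dim)    # limited-memory approximation
            mean = theta + 0.5 * eps**2 * H @ grad_log_post(theta)  # gradient-based drift
            prop = rng.multivariate_normal(mean, eps**2 * H)
            log_alpha = (log_post(prop) - log_post(theta)
                         + log_q(theta, prop, H, eps) - log_q(prop, theta, H, eps))
            if np.log(rng.uniform()) < log_alpha:                   # Metropolis-Hastings accept/reject
                s_hist.append(prop - theta)
                y_hist.append(-(grad_log_post(prop) - grad_log_post(theta)))
                theta = prop
            chain.append(theta.copy())
        print("posterior mean estimate:", np.mean(chain[1000:], axis=0))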

    On Quasi-Newton Forward--Backward Splitting: Proximal Calculus and Convergence

    We introduce a framework for quasi-Newton forward--backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal ± rank-r symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. Using duality, we derive formulas that relate the proximal mapping in a rank-r modified metric to the proximal mapping in the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piecewise linear nature of the dual problem. We then apply these results to accelerate composite convex minimization problems, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared to a comprehensive list of alternatives in the literature. Our quasi-Newton splitting algorithm with the prescribed metric compares favorably against the state of the art. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning and classification, to name a few.
    Comment: arXiv admin note: text overlap with arXiv:1206.115
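
    The rank-1 case of this proximal calculus can be sketched concretely: the prox of the l1 norm in a metric diag(d) + u u^T reduces, via duality, to the diagonal-metric soft-thresholding plus a scalar root-finding problem. The Python/NumPy sketch below illustrates this and uses it inside a variable-metric forward-backward iteration on a toy l1-regularized least-squares problem; the metric, the problem data and the names (prox_l1_diag_plus_rank1, lam, ...) are illustrative assumptions, and the general rank-r and "minus" cases of the paper are not reproduced.

        import numpy as np
        from scipy.optimize import brentq

        def prox_l1_diag(y, lam, d):
            """prox of lam*||.||_1 in the diagonal metric diag(d):
            componentwise soft-thresholding with thresholds lam/d_i"""
            return np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)

        def prox_l1_diag_plus_rank1(x, lam, d, u):
            """prox of lam*||.||_1 in the metric V = diag(d) + u u^T, obtained from
            the diagonal prox plus the scalar dual equation alpha = u^T (z(alpha) - x),
            where z(alpha) = prox^{diag(d)}(x - alpha * u / d)"""
            ell = lambda a: a - u @ (prox_l1_diag(x - a * u / d, lam, d) - x)
            b = abs(ell(0.0)) + 1.0                         # ell is increasing with slope >= 1,
            a_star = brentq(ell, -b, b)                     # so its unique root lies in [-b, b]
            return prox_l1_diag(x - a_star * u / d, lam, d)

        # variable-metric forward-backward iteration for 0.5*||A z - y||^2 + lam*||z||_1
        rng = np.random.default_rng(0)
        A, y, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.5
        d = np.full(100, np.linalg.norm(A, 2) ** 2)         # diagonal part, >= Lipschitz constant of the gradient
        u = 0.1 * rng.standard_normal(100)                  # hand-picked rank-1 correction (illustrative only)
        Vfull = np.diag(d) + np.outer(u, u)                 # the metric V = diag(d) + u u^T
        z = np.zeros(100)
        for _ in range(200):
            grad = A.T @ (A @ z - y)                                                   # forward (gradient) step in V ...
            z = prox_l1_diag_plus_rank1(z - np.linalg.solve(Vfull, grad), lam, d, u)   # ... then backward (prox) step
        print("objective:", 0.5 * np.linalg.norm(A @ z - y) ** 2 + lam * np.abs(z).sum())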

    Preconditioning issues in the numerical solution of nonlinear equations and nonlinear least squares

    Second-order methods for optimization call for the solution of sequences of linear systems. In this survey we discuss several issues related to the preconditioning of such sequences. Covered topics include both techniques for building updates of factorized preconditioners and quasi-Newton approaches. We consider sequences of unsymmetric linear systems arising in Newton-Krylov methods, as well as symmetric positive definite sequences arising in the solution of nonlinear least-squares problems by truncated Gauss-Newton methods.
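
    As a concrete, much simplified instance of the setting surveyed above, the sketch below (Python/SciPy) solves a sequence of slowly changing symmetric positive definite systems with CG, reusing a preconditioner factorized once for the first matrix. Freezing the factorization is only the simplest of the strategies discussed in the survey, and the matrices, perturbations and sizes are toy choices.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        rng = np.random.default_rng(0)
        n = 400
        # base SPD matrix: a 1-D Laplacian plus a random diagonal shift
        A0 = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1], format="csc")
        A0 = A0 + sp.diags(0.1 * rng.random(n))

        ilu = spla.spilu(A0.tocsc(), drop_tol=1e-4)         # factorize a preconditioner for A0 once
        M = spla.LinearOperator((n, n), matvec=ilu.solve)   # ... and freeze it for the whole sequence

        for k in range(4):
            Ak = A0 + sp.diags(0.01 * k * rng.random(n))    # k-th, slowly changing system matrix
            b = rng.standard_normal(n)

            iters = {"no preconditioner": 0, "frozen ILU": 0}
            def counter(name):
                def cb(xk):                                 # CG calls this once per iteration
                    iters[name] += 1
                return cb

            spla.cg(Ak, b, callback=counter("no preconditioner"))
            spla.cg(Ak, b, M=M, callback=counter("frozen ILU"))
            print(f"system {k}: CG iterations {iters}")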

    A Class of Preconditioners for Large Indefinite Linear Systems, as by-product of Krylov subspace Methods: Part I

    We propose a class of preconditioners tailored for symmetric linear systems arising in linear algebra and nonconvex optimization. Our preconditioners are specifically suited for large linear systems and may be obtained as a by-product of Krylov subspace solvers. Each preconditioner in our class is identified by setting the values of a pair of parameters and a scaling matrix, which are user-dependent and may be chosen according to the structure of the problem at hand. We provide theoretical properties for our preconditioners. In particular, we show that our preconditioners shift some eigenvalues of the system matrix to controlled values and tend to reduce the modulus of most of the other eigenvalues. In a companion paper we study some structural properties of our class of preconditioners and report the results of extensive numerical experiments.
    Keywords: preconditioners; large indefinite linear systems; large scale nonconvex optimization; Krylov subspace methods
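
    A generic illustration of the idea of building a preconditioner from Krylov by-products is sketched below in Python/NumPy: a few Lanczos steps on a symmetric matrix yield Ritz pairs, and the (nearly) converged ones are used to shift the corresponding eigenvalues of the preconditioned matrix towards one. This is a spectral/deflation-style construction in the spirit described above, not the authors' M(a,d,D); the test matrix and all thresholds are toy assumptions.

        import numpy as np

        def lanczos(A, v0, k):
            """k steps of the Lanczos process with full reorthogonalization;
            returns the orthonormal basis Q and the tridiagonal T = Q^T A Q."""
            n = len(v0)
            Q, T = np.zeros((n, k)), np.zeros((k, k))
            q, q_prev, beta = v0 / np.linalg.norm(v0), np.zeros(n), 0.0
            for j in range(k):
                Q[:, j] = q
                w = A @ q - beta * q_prev
                alpha = q @ w
                w -= alpha * q
                w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)    # full reorthogonalization
                beta = np.linalg.norm(w)
                T[j, j] = alpha
                if j + 1 < k:
                    T[j, j + 1] = T[j + 1, j] = beta
                    q_prev, q = q, w / beta
            return Q, T

        rng = np.random.default_rng(0)
        n, k = 300, 30
        # toy symmetric matrix with a few small, well separated eigenvalues
        eigvals = np.concatenate([[1e-3, 1e-2, 1e-1], rng.uniform(1.0, 2.0, n - 3)])
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))
        A = 0.5 * ((U * eigvals) @ U.T + ((U * eigvals) @ U.T).T)

        Q, T = lanczos(A, rng.standard_normal(n), k)
        theta, S = np.linalg.eigh(T)                        # Ritz values/vectors: Krylov by-products
        V = Q @ S                                           # approximate eigenvectors of A
        res = np.linalg.norm(A @ V - V * theta, axis=0)     # Ritz residuals
        keep = res < 1e-4                                   # use only (nearly) converged pairs
        Vk, tk = V[:, keep], theta[keep]

        # preconditioner M = I + Vk diag(1/tk - 1) Vk^T: moves the captured eigenvalues of the
        # preconditioned matrix towards 1 and leaves the rest essentially untouched
        M = np.eye(n) + Vk @ np.diag(1.0 / tk - 1.0) @ Vk.T

        wA = np.linalg.eigvalsh(A)
        wMA = np.real(np.linalg.eigvals(M @ A))
        print("Ritz pairs used:", int(keep.sum()))
        print("eigenvalues of A   in [", wA.min(), ",", wA.max(), "]")
        print("eigenvalues of M A in [", wMA.min(), ",", wMA.max(), "]")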

    A memory-efficient MultiVector Quasi-Newton method for black-box Fluid-Structure Interaction coupling

    In this work we present a novel quasi-Newton technique for the black-box partitioned coupling of interface-coupled problems. The new RandomiZed Multi-Vector Quasi-Newton method stems from the combination of the original Multi-Vector Quasi-Newton technique with the randomized Singular Value Decomposition algorithm, thus avoiding any dense DOFs-sized square matrix operation. This results in a reduction from quadratic to linear complexity in terms of the number of DOFs. It also avoids the need to store the old inverse Jacobian: only two very “thin” matrices need to be saved, implying a much smaller memory footprint. Furthermore, our proposal can be used without any user-defined parameter. The article describes the application of the method to the FSI interface residual equations in both Interface Quasi-Newton and Interface Block Quasi-Newton forms. For the latter, we also derive a closed-form expression for the update by applying the Woodbury matrix identity to the inverse Jacobian decomposition matrices, thus avoiding the solution of any linear system of equations.
    This research is partly supported by the European High-Performance Computing Joint Undertaking (JU) through the project eFlows4HPC (grant agreement No 955558). The JU receives support from the European Union’s Horizon 2020 research and innovation program and Spain, Germany, France, Italy, Poland, Switzerland, Norway. The authors also acknowledge financial support from the Spanish Ministry of Economy and Competitiveness, through the “Severo Ochoa Programme for Centres of Excellence in R&D” (CEX2018-000797-S).
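
    To make the interface quasi-Newton setting concrete, the sketch below (Python/NumPy) accelerates a toy black-box fixed-point problem x = g(x) with a classical least-squares interface quasi-Newton update built from two "thin" matrices of residual and solver-output differences. It illustrates the family of methods the paper builds on, not the RandomiZed Multi-Vector Quasi-Newton itself: the randomized SVD compression, the block form and the Woodbury-based update are not reproduced, and g and all sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        m = 50                                              # number of interface DOFs (toy size)
        B = rng.standard_normal((m, m))
        B *= 0.8 / np.linalg.norm(B, 2)                     # keep the coupled map contractive
        c = rng.standard_normal(m)

        def g(x):
            """one 'black-box' coupling pass (flow solver followed by structure solver)"""
            return B @ x + 0.1 * np.tanh(x) + c

        x = np.zeros(m)
        xt = g(x)                                           # solver output
        r = xt - x                                          # interface residual
        V, W = [], []                                       # the two "thin" difference matrices
        x_new = x + 0.1 * r                                 # under-relaxed first step

        for k in range(25):
            xt_new = g(x_new)
            r_new = xt_new - x_new
            print(f"coupling iteration {k:2d}: ||r|| = {np.linalg.norm(r_new):.3e}")
            if np.linalg.norm(r_new) < 1e-10:
                break
            V.append(r_new - r)                             # residual differences
            W.append(xt_new - xt)                           # solver-output differences
            x, xt, r = x_new, xt_new, r_new
            Vm, Wm = np.column_stack(V), np.column_stack(W)
            coef, *_ = np.linalg.lstsq(Vm, -r, rcond=None)  # least-squares quasi-Newton coefficients
            x_new = xt + Wm @ coef                          # interface quasi-Newton update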

    A Class of Preconditioners for Large Indefinite Linear Systems, as by-product of Krylov subspace Methods: Part II

    In this paper we consider the parameter-dependent class of preconditioners M(a,d,D) defined in the companion paper (Part I). The latter is constructed using information from a Krylov subspace method adopted to solve the large symmetric linear system Ax = b. We first estimate the condition number of the preconditioned matrix. Our preconditioners, which are independent of the specific Krylov subspace method adopted, also prove effective when solving sequences of slowly changing linear systems, in unconstrained optimization and linear algebra frameworks. Numerical experiments are reported to give evidence of the performance of M(a,d,D).
    Keywords: preconditioners; large indefinite linear systems; large scale nonconvex optimization; Krylov subspace methods