
    CS Decomposition Based Bayesian Subspace Estimation

    In numerous applications, the principal subspace of the data must be estimated, possibly from a very limited number of samples. Moreover, rough prior knowledge about this subspace is often available and can be used to improve estimation accuracy. This is the problem addressed herein, and a Bayesian approach is proposed to solve it. The main idea is to use the CS decomposition of the semi-orthogonal matrix whose columns span the subspace of interest. This parametrization is intuitively appealing: it allows for non-informative prior distributions of the matrices involved in the CS decomposition, together with very mild assumptions about the angles between the actual subspace and the prior subspace. The posterior distributions are derived, and a Gibbs sampling scheme is presented to obtain the minimum mean-square distance (MMSD) estimator of the subspace of interest. Numerical simulations and an application to real hyperspectral data assess the validity and performance of the estimator.
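
    The cosines of the principal angles between two subspaces with orthonormal bases U and V are the singular values of U^T V, which is precisely the block that the CS decomposition exposes. A minimal NumPy illustration of this quantity follows; the dimensions and perturbation are illustrative and this is not the paper's estimator:

        import numpy as np

        def principal_angles(U, V):
            # Cosines of the principal angles between span(U) and span(V)
            # are the singular values of U^T V (for orthonormal-column U, V);
            # the CS decomposition exposes exactly this diagonal block.
            s = np.linalg.svd(U.T @ V, compute_uv=False)
            return np.arccos(np.clip(s, -1.0, 1.0))

        # Illustrative usage: a prior subspace and a nearby perturbed one.
        rng = np.random.default_rng(0)
        n, p = 20, 3
        U_prior, _ = np.linalg.qr(rng.standard_normal((n, p)))
        U_est, _ = np.linalg.qr(U_prior + 0.1 * rng.standard_normal((n, p)))
        print(principal_angles(U_prior, U_est))  # small angles: close subspaces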

    On dimension reduction in Gaussian filters

    A priori dimension reduction is a widely adopted technique for reducing the computational complexity of stationary inverse problems. In this setting, the solution of an inverse problem is parameterized by a low-dimensional basis that is often obtained from the truncated Karhunen-Loève expansion of the prior distribution. For high-dimensional inverse problems equipped with smoothing priors, this technique can lead to drastic reductions in parameter dimension and significant computational savings. In this paper, we extend the concept of a priori dimension reduction to non-stationary inverse problems, in which the goal is to sequentially infer the state of a dynamical system. Our approach proceeds in an offline-online fashion. We first identify a low-dimensional subspace in the state space before solving the inverse problem (the offline phase), using either the method of "snapshots" or regularized covariance estimation. This subspace is then used to reduce the computational complexity of various filtering algorithms, including the Kalman filter, extended Kalman filter, and ensemble Kalman filter, within a novel subspace-constrained Bayesian prediction-and-update procedure (the online phase). We demonstrate the performance of our new dimension reduction approach on various numerical examples. In some test cases, our approach reduces the dimensionality of the original problem by orders of magnitude and yields up to two orders of magnitude in computational savings.
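
    A minimal NumPy sketch of this offline-online split, assuming a linear-Gaussian model: the snapshot matrix, dynamics, and observation operator below are illustrative placeholders, and the online step shown is the standard Kalman update carried out in reduced coordinates, not the paper's exact subspace-constrained procedure.

        import numpy as np

        rng = np.random.default_rng(1)
        n, m, r, k = 200, 50, 10, 5        # state dim, snapshots, reduced dim, obs dim

        # Offline phase: method of "snapshots". The leading left singular
        # vectors of the snapshot matrix give a low-dimensional state basis.
        X = rng.standard_normal((n, m))    # columns are state snapshots
        V = np.linalg.svd(X, full_matrices=False)[0][:, :r]   # n x r basis

        # Online phase: Kalman filter in the reduced coordinates.
        A = 0.95 * np.eye(n)               # placeholder dynamics
        H = rng.standard_normal((k, n))    # placeholder observation operator
        Q, R = 0.01 * np.eye(r), 0.1 * np.eye(k)

        A_r = V.T @ A @ V                  # reduced dynamics (r x r)
        H_r = H @ V                        # reduced observation map (k x r)

        z, P = np.zeros(r), np.eye(r)      # reduced mean and covariance
        y = rng.standard_normal(k)         # one synthetic observation

        # Predict in the subspace: all matrix algebra is r x r, not n x n.
        z, P = A_r @ z, A_r @ P @ A_r.T + Q
        # Update with the standard Kalman gain, again at reduced cost.
        S = H_r @ P @ H_r.T + R
        K = P @ H_r.T @ np.linalg.inv(S)
        z = z + K @ (y - H_r @ z)
        P = (np.eye(r) - K @ H_r) @ P

        x_est = V @ z                      # lift back to the full state space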

    The Noisy Power Method: A Meta Algorithm with Applications

    We provide a new robust convergence analysis of the well-known power method for computing the dominant singular vectors of a matrix, in a setting we call the noisy power method. Our result characterizes the convergence behavior of the algorithm when a significant amount of noise is introduced after each matrix-vector multiplication. The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems, including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis. Our general analysis subsumes several existing ad hoc convergence bounds and resolves a number of open problems in multiple applications, including streaming PCA and privacy-preserving singular vector computation.
    Comment: NIPS 201
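
    A minimal NumPy sketch of the noisy power method template described above, assuming a symmetric input matrix: block power iteration with Gaussian noise injected after each multiplication, followed by QR re-orthonormalization. The noise scale and dimensions are illustrative.

        import numpy as np

        def noisy_power_method(B, p, iters, noise_scale, rng):
            # Block power iteration for the top-p eigenvectors of the
            # symmetric matrix B, with Gaussian noise G injected after
            # each multiplication and a QR step to re-orthonormalize.
            n = B.shape[0]
            X, _ = np.linalg.qr(rng.standard_normal((n, p)))
            for _ in range(iters):
                G = noise_scale * rng.standard_normal((n, p))
                Y = B @ X + G             # noise enters after each product
                X, _ = np.linalg.qr(Y)
            return X

        # Illustrative usage: top right singular vectors of A via B = A^T A.
        rng = np.random.default_rng(2)
        A = rng.standard_normal((100, 80))
        V_hat = noisy_power_method(A.T @ A, p=3, iters=50,
                                   noise_scale=1e-3, rng=rng)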