
    An Efficient Approach for Computing Optimal Low-Rank Regularized Inverse Matrices

    Standard regularization methods that are used to compute solutions to ill-posed inverse problems require knowledge of the forward model. In many real-life applications, the forward model is not known, but training data is readily available. In this paper, we develop a new framework that uses training data, as a substitute for knowledge of the forward model, to compute an optimal low-rank regularized inverse matrix directly, allowing for very fast computation of a regularized solution. We consider a statistical framework based on Bayes and empirical Bayes risk minimization to analyze theoretical properties of the problem. We propose an efficient rank-update approach for computing an optimal low-rank regularized inverse matrix for various error measures. Numerical experiments demonstrate the benefits and potential applications of our approach to problems in signal and image processing.
    Comment: 24 pages, 11 figures.
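
    The abstract does not give the authors' rank-update algorithm, so the sketch below only illustrates the underlying problem under the simplest error measure: given training pairs of true signals X and observations B (columns are samples), find a rank-k matrix Z minimizing ||Z B - X||_F, which has a closed form via two SVDs. The function name and the Frobenius-norm choice are assumptions made for illustration.

```python
import numpy as np

def low_rank_reconstruction_map(X, B, k):
    """Sketch: a rank-k matrix Z minimizing ||Z B - X||_F over training data.

    X : (n, t) true training signals (columns are samples) -- assumed setup
    B : (m, t) corresponding observed/measured data
    k : target rank of the learned regularized inverse Z (n x m)
    """
    # Thin SVD of the observations: B = U diag(s) Vt
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    # Drop numerically zero singular values so Sigma is invertible
    tol = s.max() * max(B.shape) * np.finfo(s.dtype).eps
    r = int((s > tol).sum())
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    # The objective splits as ||Z U Sigma - X V||_F^2 + const, so the
    # optimum is attained at the best rank-k approximation of M = X V.
    M = X @ Vt.T
    Um, sm, Vmt = np.linalg.svd(M, full_matrices=False)
    Mk = (Um[:, :k] * sm[:k]) @ Vmt[:k, :]
    # Map back: Z = M_k Sigma^{-1} U^T
    return (Mk / s) @ U.T
```

    Once Z is computed offline, each new regularized solution is a single matrix-vector product, Z @ b, which is the fast-application property the abstract emphasizes.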

    Optimal CUR Matrix Decompositions

    The CUR decomposition of an $m \times n$ matrix $A$ finds an $m \times c$ matrix $C$ with a subset of $c < n$ columns of $A$, together with an $r \times n$ matrix $R$ with a subset of $r < m$ rows of $A$, as well as a $c \times r$ low-rank matrix $U$ such that the matrix $CUR$ approximates the matrix $A$, that is, $\|A - CUR\|_F^2 \le (1+\epsilon) \|A - A_k\|_F^2$, where $\|\cdot\|_F$ denotes the Frobenius norm and $A_k$ is the best rank-$k$ approximation of $A$ constructed via the SVD. We present input-sparsity-time and deterministic algorithms for constructing such a CUR decomposition with $c = O(k/\epsilon)$, $r = O(k/\epsilon)$, and $\mathrm{rank}(U) = k$. Up to constant factors, our algorithms are simultaneously optimal in $c$, $r$, and $\mathrm{rank}(U)$.
    Comment: small revision in Lemma 4.
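
    The input-sparsity-time and deterministic constructions are not reproduced in the abstract, so the sketch below only illustrates the CUR form itself: sample columns and rows (here with squared-norm probabilities, an assumption), then set $U = C^+ A R^+$, the choice minimizing $\|A - CUR\|_F$ for fixed $C$ and $R$, heuristically truncated to rank $k$ to match the abstract's $\mathrm{rank}(U) = k$.

```python
import numpy as np

def simple_cur(A, c, r, k, seed=None):
    """Illustrative CUR factorization; NOT the paper's optimal algorithm.

    Samples c columns and r rows of A with squared-norm probabilities,
    then picks U so that C @ U @ R is a least-squares fit to A.
    """
    rng = np.random.default_rng(seed)
    pc = (A**2).sum(axis=0); pc = pc / pc.sum()   # column probabilities
    pr = (A**2).sum(axis=1); pr = pr / pr.sum()   # row probabilities
    cols = rng.choice(A.shape[1], size=c, replace=False, p=pc)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=pr)
    C, R = A[:, cols], A[rows, :]
    # U = C^+ A R^+ minimizes ||A - C U R||_F over all c-by-r U;
    # the rank-k truncation below is a heuristic to enforce rank(U) = k.
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    Uu, Us, Uvt = np.linalg.svd(U, full_matrices=False)
    return C, (Uu[:, :k] * Us[:k]) @ Uvt[:k, :], R
```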

    Generalized Brillinger-Like Transforms

    We propose novel transforms of stochastic vectors, called the generalized Brillinger transforms (GBT1 and GBT2), which are generalizations of the Brillinger transform (BT). The GBT1 extends the BT to the cases where the covariance matrix and the weighting matrix are singular and, moreover, the weighting matrix is not necessarily symmetric. We show that the GBT1 may be computationally preferable to another related optimal technique, the generic Karhunen–Loève transform (GKLT). The GBT2 generalizes the GBT1 to provide, under the condition we impose, better associated accuracy than that of the GBT1. This is achieved by increasing the number of parameters to optimize compared with the GBT1.
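
    The abstract does not spell out the GBT1 construction. As a hedged illustration of the singularity issue it addresses, the sketch below computes a generic best linear estimator of x from y in the least-squares sense, replacing the inverse of the (possibly singular) sample covariance with the Moore-Penrose pseudoinverse; the function is hypothetical and is not the authors' transform.

```python
import numpy as np

def pinv_linear_estimator(X, Y):
    """Best linear map F (in least squares) with x_hat = F @ y, using a
    pseudoinverse so that a singular covariance of y is handled.

    X : (n, t) samples of the target stochastic vector (columns)
    Y : (m, t) samples of the observed stochastic vector
    """
    t = X.shape[1]
    Cxy = X @ Y.T / t   # sample cross-covariance E[x y^T]
    Cyy = Y @ Y.T / t   # sample covariance E[y y^T], possibly singular
    return Cxy @ np.linalg.pinv(Cyy)   # pinv replaces the ordinary inverse
```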