    Randomized GCUR decompositions

    By exploiting random sampling techniques, this paper derives an efficient randomized algorithm for computing a generalized CUR (GCUR) decomposition, which provides low-rank approximations of a pair of matrices simultaneously in terms of some of their rows and columns. For large-scale data sets that are expensive to store and manipulate, a variant of the discrete empirical interpolation method known as L-DEIM, which has a much lower computational cost and delivers a significant acceleration in practice, is combined with the random sampling approach to further improve the efficiency of the algorithm. Moreover, by adopting the randomized algorithm to implement the truncation step of the restricted singular value decomposition (RSVD) and combining it with the L-DEIM procedure, we propose a fast algorithm for computing an RSVD-based CUR decomposition, which provides a coordinated low-rank approximation of three matrices simultaneously in a CUR-type format and offers advantages over the standard CUR approximation in some applications. We establish a detailed probabilistic error analysis for the algorithms and provide numerical results that show the promise of our approaches.
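    The abstract's GCUR acts on a matrix pair, but the core recipe (randomly sketch a basis, then pick concrete rows and columns by a deterministic index-selection rule) is easiest to see for a single matrix. Below is a minimal single-matrix sketch in which pivoted QR stands in for the paper's L-DEIM selection; the function randomized_cur and all parameter names are ours, not the paper's.

```python
import numpy as np
from scipy.linalg import qr

def randomized_cur(A, k, oversample=10, rng=None):
    """Randomized CUR sketch: build a randomized range basis for A,
    then select row/column indices by pivoted QR (a DEIM-style
    surrogate for the L-DEIM selection described in the abstract)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Randomized range finder: orthonormal basis for a sketch of A.
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)                # m x (k + oversample)
    # Column indices: pivoted QR on the projected rows of A.
    _, _, piv_c = qr(Q.T @ A, mode="economic", pivoting=True)
    cols = piv_c[:k]
    # Row indices: pivoted QR on the rows of the basis Q.
    _, _, piv_r = qr(Q.T, mode="economic", pivoting=True)
    rows = piv_r[:k]
    C, R = A[:, cols], A[rows, :]
    # Core factor minimizing ||A - C U R||_F for the chosen C and R.
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R, rows, cols

# Usage: an exactly rank-15 matrix is recovered from 15 rows/columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 150))
C, U, R, rows, cols = randomized_cur(A, k=15)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # ~1e-14
```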

    Optimal low-rank approximations of Bayesian linear inverse problems

    In the Bayesian approach to inverse problems, data are often informative, relative to the prior, only on a low-dimensional subspace of the parameter space. Significant computational savings can be achieved by using this subspace to characterize and approximate the posterior distribution of the parameters. We first investigate approximation of the posterior covariance matrix as a low-rank update of the prior covariance matrix. We prove optimality of a particular update, based on the leading eigendirections of the matrix pencil defined by the Hessian of the negative log-likelihood and the prior precision, for a broad class of loss functions. This class includes the Förstner metric for symmetric positive definite matrices, as well as the Kullback-Leibler divergence and the Hellinger distance between the associated distributions. We also propose two fast approximations of the posterior mean and prove their optimality with respect to a weighted Bayes risk under squared-error loss. These approximations are deployed in an offline-online manner, where a more costly but data-independent offline calculation is followed by fast online evaluations. As a result, these approximations are particularly useful when repeated posterior mean evaluations are required for multiple data sets. We demonstrate our theoretical results with several numerical examples, including high-dimensional X-ray tomography and an inverse heat conduction problem. In both of these examples, the intrinsic low-dimensional structure of the inference problem can be exploited while producing results that are essentially indistinguishable from solutions computed in the full space.
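    Read concretely, the covariance update has a short closed form: take the r leading eigenpairs of the pencil of the data-misfit Hessian and the prior precision, and subtract a rank-r correction from the prior. A dense-matrix sketch under that reading of the abstract (variable names are ours):

```python
import numpy as np

def lowrank_posterior_cov(H, G_pr, r):
    """Rank-r posterior covariance approximation G_pos ~ G_pr - V D V^T,
    from the r leading eigenpairs of the pencil (H, G_pr^{-1}), i.e. of
    the prior-preconditioned Hessian S^T H S where G_pr = S S^T."""
    S = np.linalg.cholesky(G_pr)            # symmetric factor of the prior
    lam, W = np.linalg.eigh(S.T @ H @ S)    # eigenvalues in ascending order
    lam, W = lam[::-1][:r], W[:, ::-1][:, :r]
    V = S @ W                               # generalized eigenvectors
    # Directions with large lam are data-dominated and are shrunk most;
    # directions with lam ~ 0 leave the prior untouched.
    return G_pr - (V * (lam / (1.0 + lam))) @ V.T
```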

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing to address large-scale linear and multilinear algebra problems that would be intractable by classical techniques. This survey gives an overview of the literature on current developments in this area, with an emphasis on function-related tensors.
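    As a concrete instance of the family of techniques surveyed, the truncated higher-order SVD yields a quasi-optimal low-rank Tucker approximation in a few lines. This sketch is illustrative only and is not drawn from the survey itself:

```python
import numpy as np

def truncated_hosvd(T, ranks):
    """Truncated HOSVD: low-rank Tucker approximation of a dense tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-k unfolding: rows indexed by mode k, columns by the rest.
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for U in factors:
        # Contract mode 0 with U^T; tensordot cycles the result's axes,
        # so after all modes the original axis order is restored.
        core = np.tensordot(core, U.T, axes=([0], [1]))
    return core, factors
```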

    Towards tensor-based methods for the numerical approximation of the Perron-Frobenius and Koopman operator

    The global behavior of dynamical systems can be studied by analyzing the eigenvalues and corresponding eigenfunctions of linear operators associated with the system. Two important operators which are frequently used to gain insight into the system's behavior are the Perron-Frobenius operator and the Koopman operator. Due to the curse of dimensionality, computing the eigenfunctions of high-dimensional systems is in general infeasible. We propose a tensor-based reformulation of two numerical methods for computing finite-dimensional approximations of the aforementioned infinite-dimensional operators, namely Ulam's method and Extended Dynamic Mode Decomposition (EDMD). The aim of the tensor formulation is to approximate the eigenfunctions by low-rank tensors, potentially resulting in a significant reduction of the time and memory required to solve the resulting eigenvalue problems, provided that such a low-rank tensor decomposition exists. Typically, not all variables of a high-dimensional dynamical system contribute equally to its behavior; often the dynamics can be decomposed into slow and fast processes, which is also reflected in the eigenfunctions. Thus, the weak coupling between different variables might be approximated by low-rank tensor cores. We illustrate the efficiency of the tensor-based formulation of Ulam's method and EDMD using simple stochastic differential equations.
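    Before any tensorization, plain EDMD already reduces the Koopman approximation to a least-squares problem over a dictionary of observables; the tensor reformulation then stores the resulting feature matrices in low-rank format. A minimal dense EDMD sketch, with our own names and no tensor structure:

```python
import numpy as np

def edmd(X, Y, psi):
    """Plain EDMD from snapshot pairs y_k = F(x_k); rows are samples.
    psi maps a state vector to a feature (dictionary) vector."""
    PsiX = np.array([psi(x) for x in X])
    PsiY = np.array([psi(y) for y in Y])
    # Least-squares Koopman matrix: K = G^+ A with G = PsiX^T PsiX,
    # A = PsiX^T PsiY, obtained here directly via lstsq.
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    mu, Xi = np.linalg.eig(K)
    # Approximate Koopman eigenfunctions: phi_j(x) = psi(x) @ Xi[:, j].
    return K, mu, Xi

# Usage on a linear toy map x -> 0.9 x with dictionary {1, x, x^2},
# whose Koopman eigenvalues are known to be 1, 0.9, 0.81.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 1))
Y = 0.9 * X
psi = lambda x: np.array([1.0, x[0], x[0] ** 2])
K, mu, Xi = edmd(X, Y, psi)
print(np.sort(mu.real))   # approximately [0.81, 0.9, 1.0]
```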

    Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems

    In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially Tensor Train (TT) decompositions, using novel mathematical and graphical representations. We discuss the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and the super-compression of data achieved via quantized tensor train (QTT) networks. The purpose of tensorization and quantization is to achieve, via low-rank tensor approximations, "super" compression and a meaningful, compact representation of structured data. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big-data optimization problems (that are far from tractable by classical numerical methods) by applying tensorization, performing all operations on relatively small matrices and tensors, and iteratively applying optimized, approximate tensor contractions. Keywords: tensor networks, tensor train (TT) decompositions, matrix product states (MPS), matrix product operators (MPO), basic tensor operations, tensorization, distributed representation of data, optimization problems for very large-scale problems: generalized eigenvalue decomposition (GEVD), PCA/SVD, canonical correlation analysis (CCA).
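    Tensorization followed by TT compression can be sketched briefly: reshape a length-2^d vector into a d-way 2 x ... x 2 tensor and sweep through it with truncated SVDs (the TT-SVD algorithm). The helper below is illustrative (names ours); note how a geometric sequence, which is exactly separable over the bits of its index, compresses to QTT ranks of 1.

```python
import numpy as np

def qtt_compress(vec, eps=1e-10):
    """Quantize a length-2^d vector into a 2 x ... x 2 tensor and
    compress it with sequential truncated SVDs (TT-SVD sweep)."""
    d = int(np.log2(vec.size))
    assert 2 ** d == vec.size, "vector length must be a power of two"
    T, cores, r = vec.reshape([2] * d), [], 1
    for _ in range(d - 1):
        M = T.reshape(r * 2, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rnew = max(1, int(np.sum(s > eps * s[0])))  # truncated TT rank
        cores.append(U[:, :rnew].reshape(r, 2, rnew))
        T = s[:rnew, None] * Vt[:rnew]              # carry the remainder
        r = rnew
    cores.append(T.reshape(r, 2, 1))
    return cores

# A geometric sequence q^i has QTT rank 1: ten 1x2x1 cores
# (20 numbers) replace the original 1024 entries.
x = 0.5 ** np.arange(2 ** 10)
cores = qtt_compress(x)
print([c.shape for c in cores])
```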