
    Randomized Dynamic Mode Decomposition

    This paper presents a randomized algorithm for computing the near-optimal low-rank dynamic mode decomposition (DMD). Randomized algorithms are emerging techniques that compute low-rank matrix approximations at a fraction of the cost of deterministic algorithms, easing the computational challenges arising in the area of 'big data'. The idea is to derive a small matrix from the high-dimensional data, which is then used to efficiently compute the dynamic modes and eigenvalues. The algorithm is presented in a modular probabilistic framework, and the approximation quality can be controlled via oversampling and power iterations. The effectiveness of the resulting randomized DMD algorithm is demonstrated on several benchmark examples of increasing complexity, providing an accurate and efficient approach for extracting spatiotemporal coherent structures from big data in a framework that scales with the intrinsic rank of the data rather than the ambient measurement dimension. For this work, we assume that the dynamics of the problem under consideration evolve on a low-dimensional subspace that is well characterized by a fast-decaying singular value spectrum.
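
    As a concrete illustration of the sketch-then-solve idea described above, here is a minimal randomized DMD in Python/NumPy. It is a sketch under stated assumptions, not the authors' reference implementation: X and Y are paired snapshot matrices, r is the target rank, and the illustrative parameters p and q control oversampling and power iterations.

```python
import numpy as np

def randomized_dmd(X, Y, r, p=10, q=2):
    """Sketch of randomized DMD. X, Y are n-by-m snapshot matrices with
    Y[:, k] the successor of X[:, k]; r is the target rank, p the
    oversampling, q the number of power iterations (illustrative names)."""
    # Draw a random test matrix and sketch the range of X.
    Omega = np.random.randn(X.shape[1], r + p)
    Z = X @ Omega
    for _ in range(q):
        # Power iterations sharpen the range estimate; for numerical
        # stability one would re-orthonormalize here (omitted for brevity).
        Z = X @ (X.T @ Z)
    Q, _ = np.linalg.qr(Z)                  # orthonormal basis for the sketched range
    # Project the data onto the low-dimensional subspace and run standard DMD there.
    Xs, Ys = Q.T @ X, Q.T @ Y
    U, s, Vh = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.T @ Ys @ Vh.T / s            # small linear operator (r x r)
    evals, W = np.linalg.eig(Atilde)        # DMD eigenvalues
    Phi = Q @ (Ys @ Vh.T / s) @ W           # lift DMD modes back to the ambient space
    return evals, Phi
```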

    Faster Linear Algebra for Distance Matrices

    The distance matrix of a dataset $X$ of $n$ points with respect to a distance function $f$ represents all pairwise distances between points in $X$ induced by $f$. Due to their wide applicability, distance matrices and related families of matrices have been the focus of many recent algorithmic works. We continue this line of research and take a broad view of algorithm design for distance matrices with the goal of designing fast algorithms, specifically tailored for distance matrices, for fundamental linear algebraic primitives. Our results include efficient algorithms for computing matrix-vector products for a wide class of distance matrices, such as the $\ell_1$ metric, for which we get a linear runtime, as well as an $\Omega(n^2)$ lower bound for any algorithm which computes a matrix-vector product for the $\ell_\infty$ case, showing a separation between the $\ell_1$ and $\ell_\infty$ metrics. Our upper bound results, in conjunction with recent works on the matrix-vector query model, have many further downstream applications, including the fastest algorithm for computing a relative error low-rank approximation for the distance matrix induced by the $\ell_1$ and $\ell_2^2$ functions and the fastest algorithm for computing an additive error low-rank approximation for the $\ell_2$ metric, in addition to applications for fast matrix multiplication, among others. We also give algorithms for constructing distance matrices and show that one can construct an approximate $\ell_2$ distance matrix in time faster than the bound implied by the Johnson-Lindenstrauss lemma. Comment: Selected as Oral for NeurIPS 2022.
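
    One standard way to realize a fast $\ell_1$ matrix-vector product of the kind described above is to exploit the coordinate-wise decomposition of the $\ell_1$ distance together with sorting and prefix sums. The following NumPy sketch is an illustration of that idea, not the paper's code (all names are hypothetical), and runs in near-linear, O(nd log n), time.

```python
import numpy as np

def l1_distance_matvec(P, v):
    """Compute u = D v where D[i, j] = ||P[i] - P[j]||_1, without forming
    the n-by-n matrix D. Per-coordinate sorting plus prefix sums gives
    O(n d log n) time for n points in d dimensions."""
    n, d = P.shape
    u = np.zeros(n)
    for k in range(d):
        x = P[:, k]
        order = np.argsort(x)
        xs, vs = x[order], v[order]
        cw = np.cumsum(vs)                 # prefix sums of v in sorted order
        cxw = np.cumsum(xs * vs)           # prefix sums of x*v in sorted order
        W, XW = cw[-1], cxw[-1]
        # sum_j |x_i - x_j| v_j, split into the parts below and above x_i
        contrib = xs * cw - cxw + (XW - cxw) - xs * (W - cw)
        u[order] += contrib                # scatter back to the original order
    return u

# Sanity check against the explicit distance matrix on a small instance.
P = np.random.randn(100, 3); v = np.random.randn(100)
D = np.abs(P[:, None, :] - P[None, :, :]).sum(axis=2)
assert np.allclose(D @ v, l1_distance_matvec(P, v))
```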

    Fast High-Dimensional Kernel Filtering

    The bilateral and nonlocal means filters are instances of kernel-based filters that are popularly used in image processing. It was recently shown that fast and accurate bilateral filtering of grayscale images can be performed using a low-rank approximation of the kernel matrix. More specifically, based on the eigendecomposition of the kernel matrix, the overall filtering was approximated using spatial convolutions, for which efficient algorithms are available. Unfortunately, this technique cannot be scaled to high-dimensional data such as color and hyperspectral images, simply because one would need to compute and store a large matrix and perform its eigendecomposition. We show how this problem can be solved using the Nyström method, which is generally used for approximating the eigendecomposition of large matrices. The resulting algorithm can also be used for nonlocal means filtering. We demonstrate the effectiveness of our proposal for bilateral and nonlocal means filtering of color and hyperspectral images. In particular, our method is shown to be competitive with state-of-the-art fast algorithms, and moreover it comes with a theoretical guarantee on the approximation error.
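
    For readers unfamiliar with it, the Nyström method approximates the eigendecomposition of a large kernel matrix from a small subset of its columns. The sketch below shows the generic technique on an illustrative Gaussian kernel; it is not the paper's specific filtering pipeline, and all names are illustrative.

```python
import numpy as np

def nystrom_eig(F, m, sigma=1.0, rng=None):
    """Approximate the top eigenpairs of the kernel matrix
    K[i, j] = exp(-||F[i] - F[j]||^2 / (2 sigma^2)) from m sampled columns,
    so that K is approximately V @ diag(evals) @ V.T."""
    rng = np.random.default_rng(rng)
    n = F.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # landmark points/pixels

    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    C = kernel(F, F[idx])                        # n x m slice of K
    lam, U = np.linalg.eigh(C[idx])              # eigendecomposition of the m x m core
    keep = lam > 1e-10 * lam.max()               # drop numerically zero eigenvalues
    lam, U = lam[keep], U[:, keep]
    # Nystrom extension of the core eigenvectors to all n points.
    V = np.sqrt(m / n) * (C @ U / lam)
    evals = (n / m) * lam
    return evals, V
```

    With V and evals in hand, the kernel filter K @ x can be applied as V @ (evals * (V.T @ x)) without ever forming the full n-by-n kernel matrix.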

    Weighted Low-Rank Approximation of Matrices: Some Analytical and Numerical Aspects

    This dissertation addresses some analytical and numerical aspects of the problem of weighted low-rank approximation of matrices. We propose and solve two different versions of the weighted low-rank approximation problem and demonstrate, in addition, how these formulations can be used to efficiently solve some classic problems in computer vision. We also present the superior performance of our algorithms over the existing state-of-the-art unweighted and weighted low-rank approximation algorithms. Classical principal component analysis (PCA) is constrained to have equal weighting on the elements of the matrix, which might lead to a degraded design in some problems. To address this fundamental flaw in PCA, Golub, Hoffman, and Stewart proposed and solved a constrained low-rank approximation problem: for a given matrix $A = (A_1\; A_2)$, find a low-rank matrix $X = (A_1\; X_2)$ such that $\mathrm{rank}(X)$ is at most $r$, a prescribed bound, and $\|A - X\|$ is small. Motivated by this formulation, we propose a weighted low-rank approximation problem that generalizes the constrained low-rank approximation problem of Golub, Hoffman, and Stewart. We study a general framework obtained by pointwise multiplication with a weight matrix and consider the following problem: for a given matrix $A \in \mathbb{R}^{m \times n}$, solve
    $$\min_{X} \|(A - X) \odot W\|_F^2 \quad \text{subject to} \quad \mathrm{rank}(X) \le r,$$
    where $\odot$ denotes pointwise multiplication and $\|\cdot\|_F$ is the Frobenius norm of a matrix.
    In the first part, we study a special version of the above general weighted low-rank approximation problem. Instead of using pointwise multiplication with the weight matrix, we use regular matrix multiplication, replace the rank constraint by its convex surrogate, the nuclear norm, and consider the problem
    $$\hat{X} = \arg\min_{X} \Big\{ \tfrac{1}{2} \|(A - X) W\|_F^2 + \tau \|X\|_* \Big\},$$
    where $\|\cdot\|_*$ denotes the nuclear norm of $X$. Given its resemblance to the classic singular value thresholding problem, we call it the weighted singular value thresholding (WSVT) problem. As expected, the WSVT problem has no closed-form analytical solution in general, and a numerical procedure is needed to solve it. We introduce auxiliary variables and apply a simple and fast alternating direction method to solve WSVT numerically. Moreover, we present a convergence analysis of the algorithm and propose a mechanism for estimating the weight from the data. We demonstrate the performance of WSVT on two computer vision applications: background estimation from video sequences and facial shadow removal. In both cases, WSVT shows superior performance to all other models traditionally used.
    In the second part, we study the general framework of the proposed problem. For a special case of weights, we study the limiting behavior of the solution to our problem, both analytically and numerically. In the limiting case, as $(W_1)_{ij} \to \infty$ with $W_2 = \mathbb{1}$, the matrix of all ones, we show that the solutions to our weighted problem converge, and the limit is the solution to the constrained low-rank approximation problem of Golub et al. Additionally, by an asymptotic analysis of the solution to our problem, we propose a rate of convergence. In doing so, we make explicit connections between a vast genre of weighted and unweighted low-rank approximation problems. We also devise a novel and efficient numerical algorithm based on the alternating direction method for the special case of weights and present a detailed convergence analysis. Our approach improves substantially over the existing weighted low-rank approximation algorithms proposed in the literature, and we explore its use on real-world problems in a variety of domains, such as computer vision and machine learning. Finally, for a special family of weights, we demonstrate an interesting property of the solution to the general weighted low-rank approximation problem, devise two accelerated algorithms that exploit this property, and present their effectiveness compared to the algorithm proposed in Chapter 4.
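
    As an illustration of how an alternating direction method can attack the WSVT problem above, the following ADMM-style sketch splits the variable as X = Y and alternates singular value thresholding with a weighted least-squares step. The splitting, the penalty parameter rho, and all names are illustrative assumptions; the dissertation's exact updates may differ.

```python
import numpy as np

def wsvt_admm(A, W, tau, rho=1.0, iters=200):
    """ADMM-style sketch for min_X 0.5*||(A - X) W||_F^2 + tau*||X||_*,
    via the splitting X = Y (illustrative, not the dissertation's exact
    scheme). A is m x n and W is an n x n weight matrix."""
    m, n = A.shape
    X = np.zeros((m, n)); Y = X.copy(); Z = X.copy()   # Z is the scaled dual variable
    WWt = W @ W.T
    # The Y-update solves (Y - A) W W^T + rho (Y - X - Z) = 0; precompute the inverse.
    inv = np.linalg.inv(WWt + rho * np.eye(n))
    for _ in range(iters):
        # X-update: singular value thresholding of Y - Z at level tau/rho.
        U, s, Vh = np.linalg.svd(Y - Z, full_matrices=False)
        X = (U * np.maximum(s - tau / rho, 0.0)) @ Vh
        # Y-update: weighted least-squares step from the normal equations above.
        Y = (A @ WWt + rho * (X + Z)) @ inv
        # Dual ascent step.
        Z = Z + X - Y
    return X
```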

    Compressed Absorbing Boundary Conditions for the Helmholtz Equation

    Absorbing layers are sometimes required to be impractically thick in order to offer an accurate approximation of an absorbing boundary condition for the Helmholtz equation in a heterogeneous medium. It is always possible to reduce an absorbing layer to an operator at the boundary by layer-stripping elimination of the exterior unknowns, but the linear algebra involved is costly. We propose to bypass the elimination procedure and directly fit the surface-to-surface operator in compressed form from a few exterior Helmholtz solves with random Dirichlet data. We obtain a concise description of the absorbing boundary condition, with a complexity that grows slowly (often logarithmically) in the frequency parameter. We then obtain a fast (nearly linear in the dimension of the matrix) algorithm for applying the absorbing boundary condition using partitioned low-rank matrices. The result, modulo a precomputation, is a fast and memory-efficient compression scheme for an absorbing boundary condition for the Helmholtz equation. Comment: PhD thesis.
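
    The "fit from random probes" step can be illustrated, in a simplified setting, by reconstructing an operator that is accessible only through matrix-vector products. The sketch below uses a generalized Nyström reconstruction on a synthetic dense low-rank matrix and assumes adjoint applies are also available; the paper's actual target is a partitioned low-rank format rather than a single global factorization, so this is only a stand-in for the idea.

```python
import numpy as np

def fit_from_probes(apply_D, apply_Dt, n, r, p=10, rng=None):
    """Generalized-Nystrom-style reconstruction of an operator known only
    through matrix-vector products: D approx Y (Psi^T Y)^+ (Psi^T D),
    where Y = D Omega for random Omega."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((n, r + p))          # random probe data
    Psi = rng.standard_normal((n, r + 2 * p))
    Y = apply_D(Omega)                               # sketches of the range of D
    Z = apply_Dt(Psi).T                              # Psi^T D, via adjoint applies
    M = np.linalg.lstsq(Psi.T @ Y, Z, rcond=None)[0] # least-squares core solve
    return Y @ M

# Synthetic test: a rank-15 "boundary operator" on n = 300 points.
n, r = 300, 15
G1, G2 = np.random.randn(n, r), np.random.randn(r, n)
D = G1 @ G2
D_hat = fit_from_probes(lambda V: D @ V, lambda V: D.T @ V, n, r)
print(np.linalg.norm(D - D_hat) / np.linalg.norm(D))  # small relative error
```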

    Robust regularized singular value decomposition with application to mortality data

    We develop a robust regularized singular value decomposition (RobRSVD) method for analyzing two-way functional data. The research is motivated by the application of modeling human mortality as a smooth two-way function of age group and year. The RobRSVD is formulated as a penalized loss minimization problem, where a robust loss function is used to measure the reconstruction error of a low-rank matrix approximation of the data, and an appropriately defined two-way roughness penalty function is used to ensure smoothness along each of the two functional domains. By viewing the minimization problem as two conditional regularized robust regressions, we develop a fast iteratively reweighted least squares algorithm to implement the method. Our implementation naturally incorporates missing values. Furthermore, our formulation allows rigorous derivation of leave-one-row/column-out cross-validation and generalized cross-validation criteria, which enable computationally efficient data-driven penalty parameter selection. The advantages of the new robust method over nonrobust ones are shown via extensive simulation studies and the mortality rate application. Comment: Published at http://dx.doi.org/10.1214/13-AOAS649 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
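
    The iteratively reweighted least squares idea can be seen in miniature on a robust rank-one approximation with Huber-type weights. The sketch below omits the roughness penalties and missing-value handling of the full RobRSVD and uses illustrative names throughout.

```python
import numpy as np

def robust_rank1_irls(A, c=1.345, iters=50):
    """Sketch of a robust rank-one fit min sum_ij rho(A_ij - u_i v_j) via
    iteratively reweighted least squares with Huber weights. The full
    RobRSVD additionally penalizes roughness of u and v and handles
    missing values; this loop only shows the IRLS mechanism."""
    m, n = A.shape
    u, v = A[:, 0].astype(float).copy(), np.ones(n)
    for _ in range(iters):
        R = A - np.outer(u, v)                        # residuals of the current fit
        scale = 1.4826 * np.median(np.abs(R)) + 1e-12 # robust scale estimate (MAD)
        W = np.minimum(1.0, c * scale / (np.abs(R) + 1e-12))  # Huber weights
        # Conditional weighted least squares: update u for fixed v, then v for fixed u.
        u = (W * A) @ v / (W @ v**2 + 1e-12)
        v = (W * A).T @ u / (W.T @ u**2 + 1e-12)
    return u, v
```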