    An Efficient Approach for Computing Optimal Low-Rank Regularized Inverse Matrices

    Standard regularization methods that are used to compute solutions to ill-posed inverse problems require knowledge of the forward model. In many real-life applications, the forward model is not known, but training data is readily available. In this paper, we develop a new framework that uses training data, as a substitute for knowledge of the forward model, to compute an optimal low-rank regularized inverse matrix directly, allowing for very fast computation of a regularized solution. We consider a statistical framework based on Bayes and empirical Bayes risk minimization to analyze theoretical properties of the problem. We propose an efficient rank-update approach for computing an optimal low-rank regularized inverse matrix for various error measures. Numerical experiments demonstrate the benefits and potential applications of our approach to problems in signal and image processing. (Comment: 24 pages, 11 figures)
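    The abstract leaves the optimization implicit, but the core problem — find a rank-r matrix Z minimizing ||Z B - X||_F, where the columns of B are training observations and the columns of X the corresponding true signals — has a direct solution via two SVDs (reduced-rank regression). Below is a minimal NumPy sketch of that direct solution, assuming a Frobenius-norm error measure; the function name is ours, and the paper's rank-update algorithm is a more efficient route to such a matrix, not reproduced here.

```python
import numpy as np

def optimal_low_rank_inverse(B, X, r, tol=1e-12):
    """Rank-r matrix Z minimizing ||Z B - X||_F (reduced-rank regression).
    B: m x N training observations (columns); X: n x N true signals."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))              # numerical rank of the data
    U, s, Vt = U[:, :k], s[:k], Vt[:k, :]
    M = X @ Vt.T                                 # targets in B's right singular basis
    Um, sm, Vmt = np.linalg.svd(M, full_matrices=False)
    M_r = (Um[:, :r] * sm[:r]) @ Vmt[:r, :]      # best rank-r approximation (Eckart-Young)
    return (M_r / s) @ U.T                       # Z = M_r Sigma^+ U^T,  n x m
```

    Once Z is computed offline, a regularized solution for a new observation b is the single product Z @ b, which is what makes the reconstruction very fast at solve time.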

    Implicit QR for Companion-like Pencils

    A fast implicit QR algorithm for eigenvalue computation of low-rank corrections of unitary matrices is adjusted to work with matrix pencils arising from polynomial zero-finding problems. The modified QZ algorithm computes the generalized eigenvalues of certain N×N rank-structured matrix pencils using O(N^2) operations and O(N) memory. Numerical experiments and comparisons confirm the effectiveness and stability of the proposed method.
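    For context, here is the standard dense route the paper accelerates: build a companion pencil (A, B) for p(z) = c_0 + c_1 z + ... + c_N z^N and hand it to an unstructured QZ solver, which costs O(N^3) operations and O(N^2) memory versus the O(N^2) / O(N) of the structured algorithm above. A hedged sketch (names are illustrative, and scipy's eig is generic QZ, not the paper's implicit variant):

```python
import numpy as np
from scipy.linalg import eig

def companion_pencil_roots(c):
    """Roots of p(z) = c[0] + c[1] z + ... + c[n] z^n as the generalized
    eigenvalues of a companion pencil (A, B); dense QZ, O(n^3) cost."""
    c = np.asarray(c, dtype=complex)
    n = len(c) - 1
    A = np.diag(np.ones(n - 1, dtype=complex), k=1)  # superdiagonal of ones
    A[-1, :] = -c[:n]                                # last row: -c_0 ... -c_{n-1}
    B = np.eye(n, dtype=complex)
    B[-1, -1] = c[n]                                 # leading coefficient sits in B
    return eig(A, B, right=False)                    # generalized eigenvalues only
```

    Keeping the leading coefficient in B rather than dividing it out leaves the pencil well defined even when the polynomial is nearly degenerate, which is one reason pencil formulations are preferred for zero-finding.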

    A distributed-memory package for dense Hierarchically Semi-Separable matrix computations using randomization

    We present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by matrices of low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, e.g., finite element methods, boundary element methods, etc. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm, and also present the parallelization of the structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. This work is part of a broader effort, the STRUMPACK (STRUctured Matrices PACKage) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step toward a distributed-memory sparse solver.
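    The adaptive sampling mechanism is the paper's own contribution, but the primitive underneath is the standard randomized range finder (Halko, Martinsson, Tropp): probe a low-rank block with random vectors, orthonormalize the samples, and compress. A minimal sketch with a fixed (non-adaptive) sample count, using only black-box matrix-vector products — the same access pattern that lets a compression routine avoid forming the matrix explicitly; the function name and oversampling default are ours:

```python
import numpy as np

def randomized_low_rank(matvec, rmatvec, m, n, k, p=10):
    """Rank-(k+p) factorization A ~= Q @ B from matrix-vector products only.
    matvec(X) = A @ X (A is m x n), rmatvec(X) = A.T @ X; p oversamples."""
    Omega = np.random.randn(n, k + p)     # random test matrix
    Q, _ = np.linalg.qr(matvec(Omega))    # orthonormal basis for range(A), m x (k+p)
    B = rmatvec(Q).T                      # B = Q.T @ A, (k+p) x n
    return Q, B
```

    In HSS compression this idea is applied, through samples of the whole matrix, to the off-diagonal blocks, whose numerical rank k is small by assumption; the adaptive variant described in the paper grows the sample count until an error estimate passes.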

    Fast computation of the matrix exponential for a Toeplitz matrix

    The computation of the matrix exponential is a ubiquitous operation in numerical mathematics, and for a general, unstructured n×n matrix it can be computed in O(n^3) operations. An interesting problem arises if the input matrix is a Toeplitz matrix, for example as the result of discretizing integral equations with a time-invariant kernel. In this case it is not obvious how to take advantage of the Toeplitz structure, as the exponential of a Toeplitz matrix is, in general, not a Toeplitz matrix itself. The main contributions of this work are fast algorithms for the computation of the Toeplitz matrix exponential. The algorithms have provable quadratic complexity if the spectrum is real, or sectorial, or, more generally, if the imaginary parts of the rightmost eigenvalues do not vary too much. They may be efficient even outside these spectral constraints. They are based on the scaling and squaring framework, and their analysis connects classical results from rational approximation theory to matrices of low displacement rank. As an example, the developed methods are applied to Merton's jump-diffusion model for option pricing.
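    To make the structural point concrete: even though exp(T) is not Toeplitz, T itself supports an O(n log n) matrix-vector product via circulant embedding and the FFT, and that is the primitive a quadratic-complexity exponential algorithm gets to lean on. The sketch below combines the fast matvec with plain scaling plus truncated Taylor series to approximate the action exp(T)b; it is a naive stand-in for, not an implementation of, the paper's rational-approximant algorithms, and the parameters s and m are illustrative (they must grow with the norm of T).

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """O(n log n) product T @ x, where T has first column c and first row r,
    by embedding T in a 2n x 2n circulant diagonalized by the FFT."""
    n = len(c)
    v = np.concatenate([c, [0.0], r[:0:-1]])        # circulant's first column
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(x, 2 * n))
    return y[:n].real

def expm_toeplitz_action(c, r, b, s=64, m=25):
    """Approximate exp(T) @ b as (exp(T/s))^s @ b, applying each factor with
    an m-term Taylor series; only FFT matvecs are used, O(s m n log n) total."""
    c, r, b = (np.asarray(a, dtype=float) for a in (c, r, b))
    for _ in range(s):
        term, y = b.copy(), b.copy()
        for j in range(1, m + 1):
            term = toeplitz_matvec(c, r, term) / (s * j)  # (T/s)^j b / j!
            y = y + term
        b = y
    return b
```

    The paper's methods instead build the full exponential in compressed form, exploiting the fact that rational functions of T retain low displacement rank even though exp(T) does not retain Toeplitz structure.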