
    Data adaptive rank-shaping methods for solving least squares problems

    There are two types of problems in the theory of least squares signal processing: parameter estimation and signal extraction. Parameter estimation is called "inversion" and signal extraction is called "filtering." In this paper, we present a unified theory of rank shaping for solving overdetermined and underdetermined versions of these problems. We develop several data-dependent rank-shaping methods and evaluate their performance. Our key result is a data-adaptive Wiener filter that automatically adjusts its gains to accommodate realizations that are a priori unlikely. The adaptive filter dramatically outperforms the Wiener filter on atypical realizations and only slightly underperforms it on typical realizations. This is the most one can hope for in a data-adaptive filter. Supported by the Office of Naval Research, Mathematics Division, under contract No. N00014-89-J-1070, and by Bonneville Power Administration under Contract #DEBI7990BPO7346
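
    The abstract does not spell out the adaptive gain rule, but the generic rank-shaping idea it builds on can be sketched: solve the least squares problem through the SVD and scale each component by a gain between 0 and 1. A minimal Python sketch, with a fixed Wiener-style gain standing in where the paper would use a data-adaptive one (the function names and the value of `lam` are illustrative, not from the paper):

```python
import numpy as np

def rank_shaped_lstsq(A, b, gains):
    """Least squares solution of Ax = b with per-component gain shaping.

    `gains` maps the singular values of A to weights in [0, 1]:
    all-ones recovers the pseudoinverse solution, a 0/1 step gives
    truncated SVD, and Wiener-style gains give smooth shrinkage.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = gains(s)
    # x = V diag(f_i / s_i) U^T b
    return Vt.T @ ((f / s) * (U.T @ b))

# Wiener-style gain rule; a data-adaptive scheme would pick `lam`
# from the realization itself, here it is a fixed illustrative value.
wiener = lambda s, lam=1e-2: s**2 / (s**2 + lam)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
x = rank_shaped_lstsq(A, b, wiener)
```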

    The covariance matrix for the solution vector of an equality-constrained least-squares problem

    Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book 'Solving Least Squares Problems'.
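
    As a point of reference, one standard way to obtain such a covariance matrix is the nullspace method: write the constrained solution as x = x_p + Zy with Z a basis for null(C), solve the reduced unconstrained problem for y, and propagate its covariance. The sketch below is this generic construction, not necessarily the book's algorithms, and it assumes i.i.d. errors of unknown variance:

```python
import numpy as np
from scipy.linalg import lstsq, null_space

def constrained_ls_covariance(A, b, C, d):
    """Solution and covariance for min ||Ax - b|| subject to Cx = d,
    via the nullspace method: x = x_p + Z y, Z a basis for null(C).
    Assumes i.i.d. errors of unknown variance on b.
    """
    x_p, *_ = lstsq(C, d)              # particular solution of Cx = d
    Z = null_space(C)                  # columns span null(C)
    AZ = A @ Z
    y, *_ = lstsq(AZ, b - A @ x_p)     # reduced unconstrained problem
    x = x_p + Z @ y
    r = b - A @ x
    m, k = A.shape[0], Z.shape[1]
    sigma2 = (r @ r) / (m - k)         # unbiased noise-variance estimate
    # Cov(x) = sigma^2 * Z ((AZ)^T AZ)^{-1} Z^T
    cov = sigma2 * Z @ np.linalg.inv(AZ.T @ AZ) @ Z.T
    return x, cov
```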

    Least squares problems involving generalized Kronecker products and application to bivariate polynomial regression

    A method for solving least squares problems (A ⊗ B_i)x = b whose coefficient matrices have generalized Kronecker product structure is presented. It is based on exploiting the block structure of the Moore-Penrose inverse and the reflexive minimum-norm g-inverse of the coefficient matrix, and on the QR method for solving least squares problems. First, the general case where A is a rectangular matrix is considered, and then the special case where A is square is analyzed. This special case is applied to the problem of bivariate polynomial regression, in which the matrices involved are structured matrices (Vandermonde or Bernstein-Vandermonde matrices). In this context, the advantage of using the Bernstein basis instead of the monomial basis is shown. Numerical experiments illustrating the good behavior of the proposed algorithm are included. Ministerio de Economía y Competitividad
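
    The structure being exploited is easiest to see in the plain Kronecker case: since (A ⊗ B)⁺ = A⁺ ⊗ B⁺ and (A ⊗ B) vec(X) = vec(B X Aᵀ), the minimum-norm least squares solution never requires forming the large product matrix. A sketch of this special case (the paper's generalized products (A ⊗ B_i) need the fuller block analysis it develops):

```python
import numpy as np

def kron_lstsq(A, B, b):
    """Minimum-norm least squares solution of (A kron B) x = b.

    Uses (A kron B)^+ = A^+ kron B^+ together with the vec identity
    (A kron B) vec(X) = vec(B X A^T), so the large Kronecker matrix
    is never formed.
    """
    p, q = A.shape
    m, n = B.shape
    # column-major reshape so that b = vec(Bmat) matches the vec identity
    Bmat = b.reshape(m, p, order='F')
    X = np.linalg.pinv(B) @ Bmat @ np.linalg.pinv(A).T
    return X.reshape(-1, order='F')    # x = vec(X), length q * n
```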

    Fast linear algebra is stable

    In an earlier paper, we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of $n$-by-$n$ matrices can be done by any algorithm in $O(n^{\omega + \eta})$ operations for any $\eta > 0$, then it can be done stably in $O(n^{\omega + \eta})$ operations for any $\eta > 0$. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems and the singular value decomposition, can also be done stably (in a normwise sense) in $O(n^{\omega + \eta})$ operations. Comment: 26 pages; final version; to appear in Numerische Mathematik

    SDLS: a Matlab package for solving conic least-squares problems

    This document is an introduction to the Matlab package SDLS (Semi-Definite Least-Squares) for solving least-squares problems over convex symmetric cones. The package is briefly presented through the addressed problem, a sketch of the implemented algorithm, the syntax and calling sequences, a simple numerical example, and some more advanced features. The implemented method consists of solving the dual problem with a quasi-Newton algorithm. We note that SDLS is not the most competitive implementation of this algorithm: efficient, robust, commercial implementations are available (contact the authors). Our main goal with this Matlab SDLS package is to provide simple, user-friendly software for solving and experimenting with semidefinite least-squares problems. To our knowledge, no such free software exists to date.
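
    The dual method alluded to hinges on repeatedly projecting onto the cone, and for the standard semidefinite cone that projection has a closed form via the eigendecomposition. A minimal sketch of this building block, in Python rather than the package's Matlab and independent of SDLS's actual code:

```python
import numpy as np

def proj_psd(C):
    """Frobenius-nearest positive semidefinite matrix to C:
    symmetrize, then clip negative eigenvalues at zero."""
    w, V = np.linalg.eigh((C + C.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

# eigenvalues of the input are -1 and 5; the projection keeps 0 and 5
X = proj_psd(np.array([[2.0, -3.0], [-3.0, 2.0]]))
```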

    Error analysis, perturbation theory and applications of the bidiagonal decomposition of rectangular totally-positive h-Bernstein-Vandermonde matrices

    A fast and accurate algorithm to compute the bidiagonal decomposition of rectangular totally positive h-Bernstein-Vandermonde matrices is presented. The error analysis of the algorithm and the perturbation theory for the bidiagonal decomposition of totally positive h-Bernstein-Vandermonde matrices are addressed. The computation of this bidiagonal decomposition is used as the first step for the accurate and efficient computation of the singular values of rectangular totally positive h-Bernstein-Vandermonde matrices, and for solving least squares problems with such coefficient matrices. Agencia Estatal de Investigación
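
    For context, an ordinary (h = 0) Bernstein-Vandermonde matrix can be written down directly. The sketch below builds one in plain floating point and solves a least squares problem with it; this naive route is precisely what the paper's bidiagonal-decomposition approach replaces in order to retain high relative accuracy (nodes and degree here are illustrative):

```python
import numpy as np
from math import comb

def bernstein_vandermonde(x, n):
    """Rectangular Bernstein-Vandermonde matrix: entry (i, j) is
    C(n, j) * x_i**j * (1 - x_i)**(n - j), the j-th Bernstein basis
    polynomial of degree n evaluated at node x_i."""
    x = np.asarray(x)[:, None]
    j = np.arange(n + 1)
    binom = np.array([comb(n, k) for k in j])
    return binom * x**j * (1 - x)**(n - j)

# least squares fit in the Bernstein basis at increasing nodes in (0, 1)
x = np.linspace(0.05, 0.95, 20)
A = bernstein_vandermonde(x, 5)                       # 20 x 6
coef, *_ = np.linalg.lstsq(A, np.exp(x), rcond=None)
```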

    Using the QR Factorization to swiftly update least squares problems

    In this paper we study how to update the solution of the linear system Ax = b after the matrix A is changed by the addition or deletion of rows or columns. Studying the QR factorization of the system, more specifically the factorization produced by the Householder reflection algorithm, we find that the algorithm can be split into two parts. The result of the first part is trivial to update and is the only dependency of the second part. We find that this not only saves a considerable amount of time when solving least squares problems, but the algorithm is also very easy to implement.
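
    The paper's specific split of the Householder algorithm is not reproduced here, but the payoff of updating rather than refactoring can be sketched with SciPy's stock QR-updating routines (the data is synthetic, and `qr_insert` is SciPy's updater, not the authors' method):

```python
import numpy as np
from scipy.linalg import qr, qr_insert, solve_triangular

def ls_from_qr(Q, R, b, n):
    """Solve min ||Ax - b|| given a full QR factorization A = QR."""
    return solve_triangular(R[:n], (Q.T @ b)[:n])

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 8))
b = rng.standard_normal(100)

Q, R = qr(A)                        # factor once, O(m n^2)
x = ls_from_qr(Q, R, b, A.shape[1])

# a new observation arrives: update the factorization instead of
# recomputing it from scratch
row = rng.standard_normal(8)
Q, R = qr_insert(Q, R, row, A.shape[0], which='row')
b = np.append(b, 1.0)
x_new = ls_from_qr(Q, R, b, A.shape[1])
```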

    Performance Gains in Conjugate Gradient Computation with Linearly Connected GPU Multiprocessors

    Conjugate gradient is an important iterative method used for solving least squares problems. It is compute-bound and generally involves only simple matrix computations. One would expect that such computation could be fully parallelized on the GPU architecture with multiple Streaming Multiprocessors (SMs), each consisting of many SIMD processing units. While implementing a conjugate gradient method for compressive sensing signal reconstruction, we noticed that a large speed-up due to parallel processing is actually infeasible because of the high I/O cost between SMs and GPU global memory. We have found that if SMs were linearly connected, we could gain a 15x speedup by loop unrolling. We conclude that adding these relatively inexpensive neighbor connections for SMs can significantly enhance the applicability of GPUs to a large class of similar matrix computations. Engineering and Applied Science
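
    For reference, the kernel in question can be written in a few lines. Below is a generic textbook CGLS variant (conjugate gradient applied to the normal equations), not the authors' GPU implementation; each iteration is dominated by the two matrix-vector products, which is where the memory-traffic bottleneck described above shows up:

```python
import numpy as np

def cgls(A, b, iters=100, tol=1e-10):
    """Conjugate gradient on the normal equations A^T A x = A^T b
    (CGLS); A is touched only through the products A @ p and A.T @ r,
    the matrix-vector kernels whose memory traffic dominates."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()         # residual b - A x for x = 0
    s = A.T @ r                        # normal-equations residual
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```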