
    Accurate solution of structured least squares problems via rank-revealing decompositions

    Least squares problems $\min_x \|b - Ax\|_2$, where the matrix $A \in \mathbb{C}^{m \times n}$ ($m \ge n$) has some particular structure, arise frequently in applications. Polynomial data fitting is a well-known instance of problems that yield highly structured matrices, but many other examples exist. Very often, structured matrices have huge condition numbers $\kappa_2(A) = \|A\|_2 \|A^\dagger\|_2$ (where $A^\dagger$ is the Moore-Penrose pseudoinverse of $A$), and therefore standard algorithms fail to compute accurate minimum 2-norm solutions of least squares problems. In this work, we introduce a framework that allows us to compute minimum 2-norm solutions of many classes of structured least squares problems accurately, i.e., with errors $\|\hat{x}_0 - x_0\|_2 / \|x_0\|_2 = O(u)$, where $u$ is the unit roundoff, independently of the magnitude of $\kappa_2(A)$ for most vectors $b$. The cost of these accurate computations is $O(n^2 m)$ flops, i.e., roughly the same cost as standard algorithms for least squares problems. The approach in this work relies on first computing an accurate rank-revealing decomposition of $A$, an idea that has been widely used in recent decades to compute, for structured ill-conditioned matrices, singular value decompositions, eigenvalues and eigenvectors in the Hermitian case, and solutions of linear systems with high relative accuracy. To prove that accurate solutions are computed, a new multiplicative perturbation theory of the least squares problem is needed. The results presented in this paper are valid for both full-rank and rank-deficient problems, and also for underdetermined linear systems ($m < n$).
    Among other types of matrices, the new method applies to rectangular Cauchy, Vandermonde, and graded matrices, and detailed numerical tests for Cauchy matrices are presented. This work was supported by the Ministerio de Economía y Competitividad of Spain through grants MTM-2009-09281, MTM-2012-32542 (Ceballos, Dopico, and Molera) and MTM2010-18057 (Castro-González).
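    As an illustration of the role a rank-revealing decomposition plays here, the following sketch (using the SVD as the rank-revealing decomposition and NumPy, not any code from the paper) computes the minimum 2-norm solution of a possibly rank-deficient least squares problem; the tolerance and the example matrix are arbitrary choices for the demonstration:

```python
import numpy as np

def min_norm_lstsq(A, b, tol=1e-12):
    """Minimum 2-norm least squares solution from a rank-revealing
    decomposition (here the SVD), valid for rank-deficient A as well."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))      # numerical rank read off the RRD
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])

# Rank-deficient example: the third column is the sum of the first two
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 0.0, 2.0]])
b = np.array([1.0, 2.0, 3.0, 3.0])
x0 = min_norm_lstsq(A, b)
```

    Truncating at the numerical rank is what makes the solution well defined in the rank-deficient case: the residual is orthogonal to the range of $A$, and $x_0$ lies in the row space, which characterizes the minimum 2-norm solution.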

    Computing the singular value decomposition with high relative accuracy

    We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a common perturbation theory and common algorithm. We provide these in this paper, and show that high relative accuracy is possible in many new cases as well. The briefest way to describe our results is that we can compute the SVD of $G$ to high relative accuracy provided we can accurately factor $G = XDY^T$, where $D$ is diagonal and $X$ and $Y$ are any well-conditioned matrices; furthermore, the LDU factorization frequently does the job. We provide many examples of matrix classes permitting such an LDU decomposition.
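    The recipe starts from an accurate factorization $G = XDY^T$. As a loosely related sketch (not the paper's procedure), the one-sided Jacobi method below is a classical SVD algorithm known to attain high relative accuracy for suitably scaled matrices; the synthetic matrix is built as $XDY^T$ with well-conditioned random factors and a graded diagonal, all arbitrary illustrative choices:

```python
import numpy as np

def jacobi_svd(A, tol=1e-14, max_sweeps=30):
    """One-sided Jacobi SVD: rotate pairs of columns until all columns are
    mutually orthogonal; the column norms are then the singular values."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                off = max(off, abs(gamma) / np.sqrt(alpha * beta))
                if gamma == 0.0:
                    continue
                # Rotation angle from t^2 + 2*zeta*t - 1 = 0 (small root)
                zeta = (beta - alpha) / (2.0 * gamma)
                t = 1.0 if zeta == 0.0 else np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                Ap, Aq = A[:, p].copy(), A[:, q].copy()
                A[:, p], A[:, q] = c * Ap - s * Aq, s * Ap + c * Aq
                Vp, Vq = V[:, p].copy(), V[:, q].copy()
                V[:, p], V[:, q] = c * Vp - s * Vq, s * Vp + c * Vq
        if off < tol:
            break
    sigma = np.linalg.norm(A, axis=0)
    return A / sigma, sigma, V.T

# Synthetic RRD G = X D Y^T: well-conditioned X, Y and a graded diagonal D
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
Y = rng.standard_normal((4, 4))
D = np.diag(np.logspace(0, -8, 4))
G = X @ D @ Y.T
U, sigma, Vt = jacobi_svd(G)
```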

    Characterizing and correcting for the effect of sensor noise in the dynamic mode decomposition

    Dynamic mode decomposition (DMD) provides a practical means of extracting insightful dynamical information from fluids datasets. Like any data processing technique, DMD's usefulness is limited by its ability to extract real and accurate dynamical features from noise-corrupted data. Here we show analytically that DMD is biased to sensor noise, and quantify how this bias depends on the size and noise level of the data. We present three modifications to DMD that can be used to remove this bias: (i) a direct correction of the identified bias using known noise properties, (ii) combining the results of performing DMD forwards and backwards in time, and (iii) a total least-squares-inspired algorithm. We discuss the relative merits of each algorithm, and demonstrate the performance of these modifications on a range of synthetic, numerical, and experimental datasets. We further compare our modified DMD algorithms with other variants proposed in recent literature.
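    A minimal sketch of exact DMD and the forward-backward debiasing idea (modification (ii)) on a synthetic two-dimensional linear system. Since the state dimension is tiny, the backward pass is formed with full pseudoinverses rather than a truncated subspace, a simplification of the published algorithm; the dynamics and snapshot count are arbitrary illustrative choices:

```python
import numpy as np

def dmd_eigs(X, Y, r):
    """Exact DMD: eigenvalues of the best-fit operator Y ~ A X, projected
    onto the r leading POD modes of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Atilde = (U[:, :r].conj().T @ Y @ Vt[:r].conj().T) / s[:r]
    return np.linalg.eigvals(Atilde)

def fbdmd_eigs(X, Y):
    """Forward-backward debiasing: combine the forward fit with the inverse
    of the backward (time-reversed) fit; the eigenvalues are the principal
    square roots of eig(A_f A_b^{-1}). Full-rank, low-dim simplification."""
    Af = Y @ np.linalg.pinv(X)
    Ab = X @ np.linalg.pinv(Y)
    ev = np.linalg.eigvals(Af @ np.linalg.inv(Ab)).astype(complex)
    return np.sqrt(ev)

# Synthetic data: a decaying oscillation x_{k+1} = A x_k
theta = 0.3
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
states = np.empty((2, 51))
states[:, 0] = [1.0, 0.5]
for k in range(50):
    states[:, k + 1] = A @ states[:, k]
X, Y = states[:, :-1], states[:, 1:]
lam_dmd = dmd_eigs(X, Y, r=2)
lam_fb = fbdmd_eigs(X, Y)
```

    On noise-free data both estimators recover the true eigenvalues $0.95\,e^{\pm 0.3i}$; on noisy data the forward-only estimate is biased toward the unit-circle interior, which the forward-backward combination largely cancels.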

    Theory and computation of covariant Lyapunov vectors

    Lyapunov exponents are well-known characteristic numbers that describe growth rates of perturbations applied to a trajectory of a dynamical system in different state space directions. Covariant (or characteristic) Lyapunov vectors indicate these directions. Though the concept of these vectors has been known for a long time, they became practically computable only recently, due to algorithms suggested by Ginelli et al. [Phys. Rev. Lett. 99, 2007, 130601] and by Wolfe and Samelson [Tellus 59A, 2007, 355]. In view of the great interest in covariant Lyapunov vectors and their wide range of potential applications, in this article we summarize the available information related to Lyapunov vectors and provide a detailed explanation of both the theoretical basics and numerical algorithms. We introduce the notion of adjoint covariant Lyapunov vectors. The angles between these vectors and the original covariant vectors are norm-independent and can be considered as characteristic numbers. Moreover, we present and study in detail an improved approach for computing covariant Lyapunov vectors. We also describe how one can test for hyperbolicity of chaotic dynamics without explicitly computing covariant vectors. Comment: 21 pages, 5 figures
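    The forward pass of the Ginelli et al. procedure pushes an orthonormal frame through the tangent dynamics with repeated QR factorizations; averaging the log-diagonals of the R factors already yields the Lyapunov exponents (the classical Benettin scheme). A sketch for the Hénon map, omitting the backward pass that produces the covariant vectors; the map, parameters, and iteration counts are illustrative choices:

```python
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])

def henon_jacobian(x, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x[0], 1.0],
                     [b, 0.0]])

def lyapunov_qr(x0, n_steps=20000, n_transient=1000):
    """Push an orthonormal frame through the tangent dynamics; the averaged
    log-diagonals of the R factors are the Lyapunov exponents."""
    x = x0
    for _ in range(n_transient):          # settle onto the attractor
        x = henon(x)
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(henon_jacobian(x) @ Q)
        sums += np.log(np.abs(np.diag(R)))
        x = henon(x)
    return sums / n_steps

exps = lyapunov_qr(np.array([0.1, 0.1]))
```

    A built-in consistency check: the Hénon Jacobian has constant determinant $-b$, so the two exponents must sum to $\ln b = \ln 0.3$ regardless of run length, while the leading exponent converges to the literature value $\approx 0.42$.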

    R-dimensional ESPRIT-type algorithms for strictly second-order non-circular sources and their performance analysis

    High-resolution parameter estimation algorithms designed to exploit the prior knowledge about incident signals from strictly second-order (SO) non-circular (NC) sources allow for a lower estimation error and can resolve twice as many sources. In this paper, we derive the R-D NC Standard ESPRIT and the R-D NC Unitary ESPRIT algorithms that provide a significantly better performance compared to their original versions for arbitrary source signals. They are applicable to shift-invariant R-D antenna arrays and do not require a centrosymmetric array structure. Moreover, we present a first-order asymptotic performance analysis of the proposed algorithms, which is based on the error in the signal subspace estimate arising from the noise perturbation. The derived expressions for the resulting parameter estimation error are explicit in the noise realizations and asymptotic in the effective signal-to-noise ratio (SNR), i.e., the results become exact for either high SNRs or a large sample size. We also provide mean squared error (MSE) expressions, where only the assumptions of a zero mean and finite SO moments of the noise are required, but no assumptions about its statistics are necessary. As a main result, we analytically prove that the asymptotic performance of both R-D NC ESPRIT-type algorithms is identical in the high effective SNR regime. Finally, a case study shows that no improvement from strictly non-circular sources can be achieved in the special case of a single source. Comment: accepted at IEEE Transactions on Signal Processing, 15 pages, 6 figures
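    For orientation, here is a sketch of plain 1-D Standard ESPRIT (the circular, R = 1 special case, without the NC preprocessing discussed in the paper) for a half-wavelength uniform linear array; the array size, source angles, and noise level are arbitrary illustrative choices:

```python
import numpy as np

def esprit_doa(X, d):
    """1-D Standard ESPRIT: estimate the signal subspace from the sample
    covariance, solve the shift-invariance equation by least squares, and
    map the spatial frequencies to arrival angles (degrees)."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    _, V = np.linalg.eigh(R)
    Us = V[:, -d:]                           # d dominant eigenvectors
    # Shift invariance: first M-1 rows of Us, shifted, match the last M-1
    Psi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    mu = np.angle(np.linalg.eigvals(Psi))    # spatial frequencies
    return np.degrees(np.arcsin(mu / np.pi))

rng = np.random.default_rng(1)
M, N = 8, 200                                # sensors, snapshots
angles_true = np.array([-10.0, 25.0])        # degrees
mu_true = np.pi * np.sin(np.radians(angles_true))
A = np.exp(1j * np.outer(np.arange(M), mu_true))   # steering matrix
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.01 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
est = np.sort(esprit_doa(X, d=2))
```

    The NC variants in the paper augment the measurements with their conjugates before this subspace step, which is what doubles the number of resolvable sources.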

    Perturbation splitting for more accurate eigenvalues

    Let $T$ be a symmetric tridiagonal matrix with entries and eigenvalues of different magnitudes. For some $T$, small entrywise relative perturbations induce small errors in the eigenvalues, independently of the size of the entries of the matrix; this is certainly true when the perturbed matrix can be written as $\widetilde{T} = X^T T X$ with small $\|X^T X - I\|$. Even if it is not possible to express the perturbations in every entry of $T$ in this way, much can be gained by doing so for as many as possible of the entries of larger magnitude. We propose a technique which consists of splitting multiplicative and additive perturbations to produce new error bounds which, for some matrices, are much sharper than the usual ones. Such bounds may be useful in the development of improved software for the tridiagonal eigenvalue problem, and we describe their role in the context of a mixed precision bisection-like procedure. Using the very same idea of splitting perturbations (multiplicative and additive), we show that when $T$ defines its eigenvalues well, the numerical values of the pivots in the usual decomposition $T - \lambda I = LDL^T$ may be used to compute approximations with high relative precision. Fundação para a Ciência e Tecnologia (FCT) - POCI 201
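    The closing remark, that the pivots of $T - \lambda I = LDL^T$ carry eigenvalue information, underlies the classical bisection method: by Sylvester's law of inertia, the number of negative pivots equals the number of eigenvalues below $\lambda$. A minimal sketch of that count and of plain bisection (not the paper's mixed-precision procedure; the zero-pivot safeguard is a crude illustrative choice):

```python
import numpy as np

def sturm_count(a_diag, b_off, lam):
    """Number of eigenvalues below lam of the symmetric tridiagonal matrix
    with diagonal a_diag and off-diagonal b_off, via the signs of the pivots
    d_i of T - lam*I = L D L^T (Sylvester's law of inertia)."""
    tiny = np.finfo(float).tiny
    count = 0
    d = a_diag[0] - lam
    if d == 0.0:
        d = -tiny                         # crude zero-pivot safeguard
    if d < 0.0:
        count += 1
    for i in range(1, len(a_diag)):
        d = a_diag[i] - lam - b_off[i - 1] ** 2 / d
        if d == 0.0:
            d = -tiny
        if d < 0.0:
            count += 1
    return count

def bisect_eigenvalue(a_diag, b_off, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue (k = 1, 2, ...) by bisection on the count."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(a_diag, b_off, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Second-difference matrix: eigenvalues are 2 - 2*cos(j*pi/7), j = 1..6
a_diag = np.full(6, 2.0)
b_off = np.full(5, -1.0)
lam1 = bisect_eigenvalue(a_diag, b_off, 1, 0.0, 4.0)
```

    Only the signs of the pivots enter the count, which is why the recurrence can be run cheaply in lower precision, the setting the abstract's mixed-precision procedure exploits.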