
    Fast linear algebra is stable

    In an earlier paper, we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of $n$-by-$n$ matrices can be done by any algorithm in $O(n^{\omega + \eta})$ operations for any $\eta > 0$, then it can be done stably in $O(n^{\omega + \eta})$ operations for any $\eta > 0$. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear equation solving, matrix inversion, solving least squares problems, (generalized) eigenvalue problems and the singular value decomposition, can also be done stably (in a normwise sense) in $O(n^{\omega + \eta})$ operations. Comment: 26 pages; final version; to appear in Numerische Mathematik.
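
    The class of fast recursive algorithms in question includes Strassen's method, which already beats $O(n^3)$. Below is a minimal sketch of the classical (unstabilized) Strassen recursion, assuming the dimension is a power of two; it is an illustration of the algorithm family, not the paper's stabilized construction.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursive matrix multiplication, O(n^{log2 7}) ~ O(n^{2.81}).

    Minimal illustration: assumes n is a power of two and falls back
    to NumPy's ordinary multiply below `cutoff`.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
print(np.linalg.norm(strassen(A, B) - A @ B))  # small normwise error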

    A condition number for the tensor rank decomposition

    The tensor rank decomposition problem consists of recovering the unique set of parameters representing a robustly identifiable low-rank tensor when the coordinate representation of the tensor is presented as input. A condition number for this problem measuring the sensitivity of the parameters to an infinitesimal change to the tensor is introduced and analyzed. It is demonstrated that the absolute condition number coincides with the inverse of the least singular value of Terracini's matrix. Several basic properties of this condition number are investigated. Comment: 45 pages, 4 figures.
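
    The stated characterization makes the absolute condition number a one-line computation once Terracini's matrix is in hand. A minimal NumPy sketch, assuming that matrix has already been assembled for the decomposition at hand (its construction is problem-specific and not shown here):

```python
import numpy as np

def absolute_condition_number(terracini_matrix):
    """Inverse of the least singular value of Terracini's matrix,
    per the characterization stated in the abstract. Assembling the
    matrix from a given rank decomposition is assumed done elsewhere."""
    sigma = np.linalg.svd(terracini_matrix, compute_uv=False)
    sigma_min = sigma[-1]                 # singular values come sorted descending
    return np.inf if sigma_min == 0.0 else 1.0 / sigma_min
```

    A decomposition approaching the boundary of identifiability drives the least singular value toward zero, so the reported condition number blows up accordingly.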

    A Backward Stable Algorithm for Computing the CS Decomposition via the Polar Decomposition

    We introduce a backward stable algorithm for computing the CS decomposition of a partitioned $2n \times n$ matrix with orthonormal columns, or a rank-deficient partial isometry. The algorithm computes two $n \times n$ polar decompositions (which can be carried out in parallel) followed by an eigendecomposition of a judiciously crafted $n \times n$ Hermitian matrix. We prove that the algorithm is backward stable whenever the aforementioned decompositions are computed in a backward stable way. Since the polar decomposition and the symmetric eigendecomposition are highly amenable to parallelization, the algorithm inherits this feature. We illustrate this fact by invoking recently developed algorithms for the polar decomposition and symmetric eigendecomposition that leverage Zolotarev's best rational approximations of the sign function. Numerical examples demonstrate that the resulting algorithm for computing the CS decomposition enjoys excellent numerical stability.
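
    Both building blocks are standard library calls. The following SciPy sketch shows the pipeline, but substitutes a plain eigendecomposition of the first Hermitian polar factor for the paper's judiciously crafted matrix (which handles clustered eigenvalues); it therefore assumes well-separated singular values.

```python
import numpy as np
from scipy.linalg import polar, eigh

# Random 2n x n matrix with orthonormal columns, partitioned into Q1, Q2.
n = 6
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((2 * n, n)))
Q1, Q2 = Q[:n], Q[n:]

# Step 1: two n x n polar decompositions (independent, so parallelizable).
W1, H1 = polar(Q1)   # Q1 = W1 @ H1, with H1 Hermitian positive semidefinite
W2, H2 = polar(Q2)   # Q2 = W2 @ H2

# Step 2: one Hermitian eigendecomposition. Since H1^2 + H2^2 = I, the
# factors H1 and H2 commute, so (for distinct eigenvalues) diagonalizing
# H1 diagonalizes H2 as well; the paper instead crafts a matrix from
# H1 and H2 that stays stable when eigenvalues cluster.
c, V = eigh(H1)
U1, U2 = W1 @ V, W2 @ V
s = np.diag(V.T @ H2 @ V)

# CS structure: Q1 = U1 diag(c) V^T, Q2 = U2 diag(s) V^T, c^2 + s^2 = 1.
assert np.allclose((U1 * c) @ V.T, Q1) and np.allclose(c**2 + s**2, 1.0)
```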

    Degenerate Kalman filter error covariances and their convergence onto the unstable subspace

    The characteristics of the model dynamics are critical in the performance of (ensemble) Kalman filters. In particular, as emphasized in the seminal work of Anna Trevisan and coauthors, the error covariance matrix is asymptotically supported by the unstable-neutral subspace only, i.e., it is spanned by the backward Lyapunov vectors with nonnegative exponents. This behavior is at the core of algorithms known as assimilation in the unstable subspace, although a formal proof was still missing. This paper provides the analytical proof of the convergence of the Kalman filter covariance matrix onto the unstable-neutral subspace when the dynamics and the observation operator are linear and when the dynamical model is error free, for any, possibly rank-deficient, initial error covariance matrix. The rate of convergence is provided as well. The derivation is based on an expression that explicitly relates the error covariances at an arbitrary time to the initial ones. It is also shown that if the unstable and neutral directions of the model are sufficiently observed and if the column space of the initial covariance matrix has a nonzero projection onto all of the forward Lyapunov vectors associated with the unstable and neutral directions of the dynamics, the covariance matrix of the Kalman filter collapses onto an asymptotic sequence which is independent of the initial covariances. Numerical results are also shown to illustrate and support the theoretical findings.
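
    The stated collapse is easy to observe numerically by iterating the linear Kalman filter covariance recursion. A minimal sketch, assuming a toy error-free diagonal model with one unstable, one neutral, and one stable direction and a fully observing operator (all parameter choices here are illustrative):

```python
import numpy as np

# Linear model with one unstable (|lambda|>1), one neutral (|lambda|=1),
# and one stable (|lambda|<1) direction; Lyapunov exponents are log|lambda|.
M = np.diag([1.5, 1.0, 0.5])
H = np.eye(3)               # observe all directions, incl. unstable/neutral
R = 0.1 * np.eye(3)         # observation error covariance
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = A @ A.T                 # full-rank initial error covariance

for _ in range(50):
    Pf = M @ P @ M.T                                # forecast (error-free model)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    P = (np.eye(3) - K @ H) @ Pf                    # analysis update

# The covariance collapses onto the unstable-neutral subspace: the
# component along the stable direction decays geometrically to zero.
print(np.linalg.eigvalsh(P))  # smallest eigenvalue ~0 (stable direction)
```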

    Minimizing Communication for Eigenproblems and the Singular Value Decomposition

    Algorithms have two costs: arithmetic and communication. The latter represents the cost of moving data, either between levels of a memory hierarchy, or between processors over a network. Communication often dominates arithmetic and represents a rapidly increasing proportion of the total cost, so we seek algorithms that minimize communication. In \cite{BDHS10} lower bounds were presented on the amount of communication required for essentially all $O(n^3)$-like algorithms for linear algebra, including eigenvalue problems and the SVD. Conventional algorithms, including those currently implemented in (Sca)LAPACK, perform asymptotically more communication than these lower bounds require. In this paper we present parallel and sequential eigenvalue algorithms (for pencils, nonsymmetric matrices, and symmetric matrices) and SVD algorithms that do attain these lower bounds, and analyze their convergence and communication costs. Comment: 43 pages, 11 figures.
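
    Eigensolvers in this family build on spectral divide-and-conquer. The sketch below splits a spectrum using the classical Newton iteration for the matrix sign function, purely to illustrate the divide step; it is not the paper's communication-optimal randomized variant.

```python
import numpy as np

def sign_newton(A, iters=40):
    """Matrix sign function via the Newton iteration
    X_{k+1} = (X_k + X_k^{-1}) / 2. Assumes no eigenvalues on (or
    very near) the imaginary axis, where the iterates become
    ill-conditioned."""
    X = A.copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

def split_spectrum(A):
    """Divide step: return an orthonormal basis Q1 of the invariant
    subspace for eigenvalues with positive real part, plus the
    compression of A onto it (recurse on the blocks to conquer)."""
    n = A.shape[0]
    S = sign_newton(A)
    P = 0.5 * (np.eye(n) + S)         # spectral projector onto Re(lambda) > 0
    r = int(round(np.trace(P).real))  # rank of the projector
    U, _, _ = np.linalg.svd(P)
    Q1 = U[:, :r]                     # orthonormal basis of range(P)
    return Q1, Q1.conj().T @ A @ Q1

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))       # generic real matrix
Q1, A1 = split_spectrum(A)
print(np.linalg.eigvals(A1))          # eigenvalues of A with Re(lambda) > 0
```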