38 research outputs found

    Computation- and Space-Efficient Implementation of SSA

    Full text link
    The computational complexity of the individual steps of basic SSA is discussed. It is shown that the use of general-purpose "black-box" routines (e.g. those found in packages like LAPACK) leads to a huge waste of time and memory, since the special Hankel structure of the trajectory matrix is not taken into account. We outline several state-of-the-art algorithms (for example, Lanczos-based truncated SVD) which can be modified to exploit the structure of the trajectory matrix. The key components here are the Hankel matrix-vector multiplication and the hankelization operator. We show that both can be computed efficiently by means of the Fast Fourier Transform. The use of these methods yields a reduction of the worst-case computational complexity from O(N^3) to O(k N log(N)), where N is the series length and k is the number of eigentriples desired. Comment: 27 pages, 8 figures
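The FFT-based Hankel matrix-vector product mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the paper's implementation; the trajectory-matrix convention H[i, j] = x[i + j] with window split L = N - K + 1 is an assumption for the demo:

```python
import numpy as np

def hankel_matvec(x, v):
    """Product of the L x K Hankel trajectory matrix H, H[i, j] = x[i + j],
    built from a series x of length N, with a vector v of length K,
    where L = N - K + 1. Costs O(N log N) instead of O(L * K)."""
    N, K = len(x), len(v)
    L = N - K + 1
    # y[i] = sum_j x[i + j] v[j] is a correlation, i.e. a linear convolution
    # of x with the reversed v; compute it via FFT and slice out the middle.
    n = N + K - 1                      # length of the full linear convolution
    c = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(v[::-1], n), n)
    return c[K - 1:K - 1 + L]
```

The hankelization operator (averaging along antidiagonals) admits a similar FFT-based formulation, which is what yields the overall O(k N log N) cost.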

    Deflation for the off-diagonal block in symmetric saddle point systems

    Full text link
    Deflation techniques are typically used to shift isolated clusters of small eigenvalues in order to obtain a tighter distribution and a smaller condition number. Such changes induce a positive effect on the convergence behavior of Krylov subspace methods, which are among the most popular iterative solvers for large sparse linear systems. We develop a deflation strategy for symmetric saddle point matrices by taking advantage of their underlying block structure. The vectors used for deflation come from an elliptic singular value decomposition relying on the generalized Golub-Kahan bidiagonalization process. The block targeted by deflation is the off-diagonal one, since it features a problematic singular value distribution for certain applications. One example is the Stokes flow in elongated channels, where the off-diagonal block has several small, isolated singular values, depending on the length of the channel. Applying deflation to specific parts of the saddle point system is important when using solvers such as CRAIG, which operates on individual blocks rather than the whole system. The theory is developed by extending the existing framework for deflating square matrices before applying a Krylov subspace method like MINRES. Numerical experiments confirm the merits of our strategy and lead to interesting questions about using approximate vectors for deflation. Comment: 26 pages, 12 figures
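As a generic illustration of the opening sentence (not the paper's saddle-point construction), the following sketch deflates the eigenvectors belonging to a few isolated small eigenvalues of a symmetric positive definite matrix and shows the resulting drop in the effective condition number; the sizes and the spectrum are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
# SPD test matrix with k isolated small eigenvalues and a benign bulk.
eigs = np.concatenate([[1e-6, 1e-5, 1e-4], rng.uniform(1.0, 2.0, n - k)])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * eigs) @ Q.T

# Deflation basis: eigenvectors of the k smallest eigenvalues.
w, V = np.linalg.eigh(A)
Z = V[:, :k]
P = np.eye(n) - Z @ Z.T                               # projector away from span(Z)

# Spectrum of the deflated operator P A P, restricted to range(P).
w_defl = np.sort(np.linalg.eigvalsh(P @ A @ P))[k:]   # discard the k zeros

cond_orig = w[-1] / w[0]                  # ~2e6: dominated by the small cluster
cond_defl = w_defl[-1] / w_defl[0]        # ~2: only the benign bulk remains
```

A Krylov solver applied to the deflated operator then converges at the rate dictated by the tight bulk rather than by the small outliers.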

    Numerické metody pro řešení diskrétních inverzních úloh (Numerical Methods in Discrete Inverse Problems)

    Get PDF
    Title: Numerical Methods in Discrete Inverse Problems
    Author: Marie Kubínová
    Department: Department of Numerical Mathematics
    Supervisor: RNDr. Iveta Hnětynková, Ph.D., Department of Numerical Mathematics
    Abstract: Inverse problems represent a broad class of problems of reconstructing unknown quantities from measured data. A common characteristic of these problems is the high sensitivity of the solution to perturbations in the data. The aim of numerical methods is to approximate the solution in a computationally efficient way while suppressing the influence of inaccuracies in the data, referred to as noise, which are always present. Properties of noise and its behavior in regularization methods play a crucial role in the design and analysis of these methods. The thesis focuses on several aspects of the solution of discrete inverse problems, in particular: the propagation of noise in iterative methods and its representation in the corresponding residuals, including the study of the influence of finite-precision computation; the estimation of the noise level; and the solution of problems with data polluted by noise from various sources.
    Keywords: discrete inverse problems, iterative solvers, noise estimation, mixed noise, finite-precision arithmetic
    Faculty of Mathematics and Physics, Department of Numerical Mathematics
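The high sensitivity to noise described in the abstract can be made concrete with a small textbook sketch (not taken from the thesis): naive inversion of an ill-conditioned model problem amplifies tiny data noise enormously, while a simple regularization, here truncated SVD, suppresses it. The model matrix, noise level, and truncation level are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# Severely ill-conditioned model operator (Hilbert-type matrix).
A = 1.0 / (np.add.outer(np.arange(n), np.arange(n)) + 1.0)
U, s, Vt = np.linalg.svd(A)

# A smooth exact solution lying in the dominant right singular subspace.
x_true = Vt[:3].sum(axis=0)
b_noisy = A @ x_true + 1e-8 * rng.standard_normal(n)   # data polluted by noise

x_naive = np.linalg.solve(A, b_noisy)    # naive inversion amplifies the noise
k = 5                                    # truncation level (regularization)
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b_noisy) / s[:k])

err_naive = np.linalg.norm(x_naive - x_true)   # huge
err_tsvd = np.linalg.norm(x_tsvd - x_true)     # small
```

The division by the small trailing singular values s[i] is exactly where the noise blows up in the naive solve, and discarding those components is what the truncation does.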

    The joint bidiagonalization of a matrix pair with inaccurate inner iterations

    Full text link
    The joint bidiagonalization (JBD) process iteratively reduces a matrix pair {A, L} to two bidiagonal forms simultaneously, which can be used for computing a partial generalized singular value decomposition (GSVD) of {A, L}. The process has a nested inner-outer iteration structure, where the inner iteration usually cannot be computed exactly. In this paper, we study the inaccurately computed inner iterations of JBD by first investigating the influence of the computational error of the inner iteration on the outer iteration, and then proposing a reorthogonalized JBD (rJBD) process to keep orthogonality of a part of the Lanczos vectors. An error analysis of the rJBD is carried out to build up connections with Lanczos bidiagonalizations. The results are then used to investigate the convergence and accuracy of the rJBD-based GSVD computation. It is shown that the accuracy of the computed GSVD components depends on the computing accuracy of the inner iterations and the condition number of (A^T, L^T)^T, while the convergence rate is not much affected. For practical JBD-based GSVD computations, our results provide a guideline for choosing a proper computing accuracy of the inner iterations in order to obtain approximate GSVD components with a desired accuracy. Numerical experiments are performed to confirm our theoretical results.

    Stability of Two Direct Methods for Bidiagonalization and Partial Least Squares

    Full text link

    A rounding error analysis of the joint bidiagonalization process with applications to the GSVD computation

    Full text link
    The joint bidiagonalization (JBD) process is a useful algorithm for approximating some extreme generalized singular values and vectors of a large sparse or structured matrix pair {A, L}. We present a rounding error analysis of the JBD process, which establishes connections between the JBD process and the two joint Lanczos bidiagonalizations. We investigate the loss of orthogonality of the computed Lanczos vectors. Based on the results of the rounding error analysis, we investigate the convergence and accuracy of the approximate generalized singular values and vectors of {A, L}. The results show that semiorthogonality of the Lanczos vectors is enough to guarantee the accuracy and convergence of the approximate generalized singular values, which provides guidance for designing an efficient semiorthogonalization strategy for the JBD process. We also investigate the residual norm appearing in the computation of the generalized singular value decomposition (GSVD), and show that its upper bound can be used as a stopping criterion. Comment: 28 pages, 9 figures
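The loss of orthogonality that motivates semiorthogonalization can be reproduced with a plain Golub-Kahan (Lanczos) bidiagonalization in a few lines; the test matrix and step count are arbitrary choices for the demo, and `reorth=True` applies full reorthogonalization rather than the paper's semiorthogonalization strategy:

```python
import numpy as np

def golub_kahan(A, b, k, reorth=False):
    """Run k steps of Golub-Kahan (Lanczos) bidiagonalization of A started
    from b; with reorth=True the Lanczos vectors are fully reorthogonalized."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        v = A.T @ U[:, j] - (beta * V[:, j - 1] if j > 0 else 0.0)
        if reorth:                         # one full Gram-Schmidt pass
            v -= V[:, :j] @ (V[:, :j].T @ v)
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        u = A @ V[:, j] - alpha * U[:, j]
        if reorth:
            u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
    return U, V

# A graded spectrum makes Ritz values converge early, which is exactly what
# destroys orthogonality of the unmodified Lanczos vectors in finite precision.
A = np.diag(np.logspace(0, -10, 100))
b = np.ones(100)
_, V_plain = golub_kahan(A, b, 40)
_, V_reo = golub_kahan(A, b, 40, reorth=True)
loss_plain = np.linalg.norm(V_plain.T @ V_plain - np.eye(40))   # O(1)
loss_reo = np.linalg.norm(V_reo.T @ V_reo - np.eye(40))         # ~machine eps
```

Full reorthogonalization restores orthogonality at O(k) extra vector operations per step; the point of a semiorthogonalization strategy is to pay that price only when the measured loss crosses a threshold.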

    Deterministic algorithms for the low rank approximation of matrices

    Get PDF
    Invited course given as part of the CNRS Action Nationale de Formation entitled "Réduction de la dimension dans la fouille de données massives : enjeux, méthodes et outils pour le calcul" (Dimension reduction in massive data mining: challenges, methods and tools for computation).
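As a point of reference for the course topic, the classical deterministic baseline for low-rank approximation is the truncated SVD, which is optimal in the spectral and Frobenius norms by the Eckart-Young theorem. A minimal NumPy sketch (sizes, planted rank, and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 60, 40, 5
# Matrix that is numerically rank r: a planted low-rank part plus tiny noise.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) \
    + 1e-6 * rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = r
A_k = (U[:, :k] * s[:k]) @ Vt[:k]      # best rank-k approximation

# Eckart-Young: the spectral-norm error equals the (k+1)-st singular value.
err = np.linalg.norm(A - A_k, 2)
```

Deterministic alternatives that avoid the full SVD, such as pivoted QR factorizations, trade a modest loss in approximation quality for a much lower cost.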