
    Optimally packed chains of bulges in multishift QR algorithms


    Analysis of a Classical Matrix Preconditioning Algorithm

    We study a classical iterative algorithm for balancing matrices in the $L_\infty$ norm via a scaling transformation. This algorithm, which goes back to Osborne and Parlett & Reinsch in the 1960s, is implemented as a standard preconditioner in many numerical linear algebra packages. Surprisingly, despite its widespread use over several decades, no bounds were known on its rate of convergence. In this paper we prove that, for any irreducible $n\times n$ (real or complex) input matrix $A$, a natural variant of the algorithm converges in $O(n^3\log(n\rho/\varepsilon))$ elementary balancing operations, where $\rho$ measures the initial imbalance of $A$ and $\varepsilon$ is the target imbalance of the output matrix. (The imbalance of $A$ is $\max_i |\log(a_i^{\text{out}}/a_i^{\text{in}})|$, where $a_i^{\text{out}}, a_i^{\text{in}}$ are the maximum entries in magnitude in the $i$th row and column, respectively.) This bound is tight up to the $\log n$ factor. A balancing operation scales the $i$th row and column so that their maximum entries are equal, and requires $O(m/n)$ arithmetic operations on average, where $m$ is the number of non-zero elements in $A$. Thus the running time of the iterative algorithm is $\tilde{O}(n^2 m)$. This is the first time bound of any kind on any variant of the Osborne-Parlett-Reinsch algorithm. We also prove a conjecture of Chen that characterizes those matrices for which the limit of the balancing process is independent of the order in which balancing operations are performed. Comment: The previous version (1) (see also STOC'15) handled UB ("unique balance") input matrices. In this version (2) we extend the work to handle all input matrices.
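    As a concrete illustration of the elementary balancing operation described above, here is a minimal NumPy sketch of a sweep-based variant. The function name, sweep order, and stopping rule are our own illustration, not the authors' analyzed variant; following the classical algorithm, the diagonal entry (which the similarity scaling leaves unchanged) is excluded from the row/column maxima.

```python
import numpy as np

def osborne_balance_inf(A, eps=1e-8, max_sweeps=1000):
    """Sketch of Osborne-Parlett-Reinsch balancing in the L_inf norm.

    Repeatedly applies diagonal similarity scalings so that, for each
    index i, the largest off-diagonal entries of row i and column i
    agree in magnitude.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    d = np.ones(n)                       # accumulated diagonal scaling
    for _ in range(max_sweeps):
        done = True
        for i in range(n):
            mask = np.arange(n) != i     # exclude the (invariant) diagonal entry
            r = np.max(np.abs(A[i, mask]))   # max |entry| in row i
            c = np.max(np.abs(A[mask, i]))   # max |entry| in column i
            if r == 0 or c == 0:
                continue                 # cannot happen for irreducible A, n >= 2
            if abs(np.log(r / c)) > eps:
                done = False             # imbalance at index i still too large
            s = np.sqrt(c / r)           # after scaling: s*r == c/s
            A[i, :] *= s                 # scale row i ...
            A[:, i] /= s                 # ... and column i (a similarity)
            d[i] *= s
        if done:
            break
    return A, d                          # balanced matrix and scaling diag(d)
```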

    Hierarchical adaptive low-rank format with applications to discretized partial differential equations

    A novel framework for hierarchical low-rank matrices is proposed that combines an adaptive hierarchical partitioning of the matrix with low-rank approximation. One typical application is the approximation of discretized functions on rectangular domains; the flexibility of the format makes it possible to deal with functions that feature singularities in small, localized regions. To deal with time evolution and relocation of singularities, the partitioning can be dynamically adjusted based on features of the underlying data. Our format can be leveraged to efficiently solve linear systems with Kronecker product structure, as they arise from discretized partial differential equations (PDEs). For this purpose, these linear systems are rephrased as linear matrix equations and a recursive solver is derived from low-rank updates of such equations. We demonstrate the effectiveness of our framework for stationary and time-dependent, linear and nonlinear PDEs, including the Burgers' and Allen-Cahn equations.
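    The reformulation mentioned above rests on the standard vec identity vec(A X B) = (B^T (x) A) vec(X): a linear system with Kronecker product structure is a linear matrix equation in disguise. A small sketch verifying this equivalence, with SciPy's generic dense Sylvester solver standing in for the paper's recursive low-rank solver:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

vec = lambda M: M.flatten(order="F")    # column-stacking vec operator

# Kronecker-structured linear system: (I (x) A + B^T (x) I) vec(X) = vec(C)
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(K, vec(C))

# Equivalent linear matrix (Sylvester) equation: A X + X B = C
X = solve_sylvester(A, B, C)

print(np.allclose(x, vec(X)))           # True: the two formulations agree
```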

    A block Newton method for nonlinear eigenvalue problems

    We consider matrix eigenvalue problems that are nonlinear in the eigenvalue parameter. One of the most fundamental differences from the linear case is that distinct eigenvalues may have linearly dependent eigenvectors or even share the same eigenvector. This has been a severe hindrance in the development of general numerical schemes for computing several eigenvalues of a nonlinear eigenvalue problem, either simultaneously or subsequently. The purpose of this work is to show that the concept of invariant pairs offers a way of representing eigenvalues and eigenvectors that is insensitive to this phenomenon. To demonstrate the use of this concept in the development of numerical methods, we have developed a novel block Newton method for computing such invariant pairs. Algorithmic aspects of this method are considered and a few academic examples demonstrate its viability.
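    The abstract does not spell out the definition; in the standard formulation for a polynomial eigenvalue problem T(lam) = sum_i lam^i A_i, a pair (X, S) is invariant if sum_i A_i X S^i = 0. Under that assumption, the sketch below constructs a problem in which two distinct eigenvalues share one eigenvector and checks that the pair still carries full information through the minimality of the stacked matrix [X; X S]:

```python
import numpy as np

# Quadratic eigenvalue problem T(lam) = A0 + lam*A1 + lam^2*A2, chosen so
# that lam = 1 and lam = 2 share every eigenvector:
# T(lam) = (lam^2 - 3*lam + 2) * I
n = 4
I = np.eye(n)
A0, A1, A2 = 2 * I, -3 * I, I

v = np.random.default_rng(1).standard_normal((n, 1))
X = np.hstack([v, v])          # same eigenvector twice -> X is rank deficient
S = np.diag([1.0, 2.0])        # the two distinct eigenvalues

# Invariant pair residual: A0 X + A1 X S + A2 X S^2 = 0
R = A0 @ X + A1 @ (X @ S) + A2 @ (X @ S @ S)
print(np.linalg.norm(R))       # ~0: (X, S) is an invariant pair

# Minimality: [X; X S] has full column rank even though X does not
stacked = np.vstack([X, X @ S])
print(np.linalg.matrix_rank(X), np.linalg.matrix_rank(stacked))   # 1, 2
```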

    Low-rank tensor approximation for Chebyshev interpolation in parametric option pricing

    Treating high dimensionality is one of the main challenges in the development of computational methods for solving problems arising in finance, where tasks such as pricing, calibration, and risk assessment need to be performed accurately and in real-time. Among the growing literature addressing this problem, Gass et al. [Finance Stoch., 22 (2018), pp. 701-731] propose a complexity reduction technique for parametric option pricing based on Chebyshev interpolation. As the number of parameters increases, however, this method is affected by the curse of dimensionality. In this article, we extend this approach to treat high-dimensional problems: exploiting low-rank structures allows us to consider parameter spaces of high dimension. The core of our method is to express the tensorized interpolation in the tensor train format and to develop an efficient way, based on tensor completion, to approximate the interpolation coefficients. We apply the new method to two model problems: American option pricing in the Heston model and European basket option pricing in the multidimensional Black-Scholes model. In these examples, we treat parameter spaces of dimensions up to 25. The numerical results confirm the low-rank structure of these problems and the effectiveness of our method compared to advanced techniques.
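    A one-parameter toy version conveys the offline/online split behind the approach: evaluate the expensive pricer at Chebyshev nodes once, then evaluate the cheap interpolant at new parameters. The sketch below interpolates a closed-form Black-Scholes call in the volatility parameter only; the paper's actual contribution (tensorized interpolation over up to 25 parameters, with coefficients compressed in tensor train format via completion) is not reproduced here, and all names and parameter ranges are our own.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.stats import norm

def bs_call(S0, K, T, r, sigma):
    """Closed-form Black-Scholes call price (the 'expensive' reference pricer)."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Offline phase: evaluate the pricer at Chebyshev nodes in the parameter sigma
a, b, deg = 0.05, 0.60, 12
nodes = np.cos(np.pi * np.arange(deg + 1) / deg)     # Chebyshev-Lobatto nodes on [-1, 1]
sigmas = 0.5 * (a + b) + 0.5 * (b - a) * nodes       # mapped to [a, b]
vals = bs_call(100.0, 100.0, 1.0, 0.02, sigmas)
coeffs = C.chebfit(nodes, vals, deg)                 # interpolation coefficients

# Online phase: cheap evaluation of the interpolant at a new parameter value
sig = 0.23
x = (2 * sig - (a + b)) / (b - a)                    # map back to [-1, 1]
print(C.chebval(x, coeffs), bs_call(100.0, 100.0, 1.0, 0.02, sig))  # near-identical
```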

    Compress-and-Restart Block Krylov Subspace Methods for Sylvester Matrix Equations

    Block Krylov subspace methods (KSMs) comprise building blocks in many state-of-the-art solvers for large-scale matrix equations as they arise, for example, from the discretization of partial differential equations. While extended and rational block Krylov subspace methods provide a major reduction in iteration counts over polynomial block KSMs, they also require reliable solvers for the coefficient matrices, and these solvers are often iterative methods themselves. It is not hard to devise scenarios in which the available memory, and consequently the dimension of the Krylov subspace, is limited. In such scenarios for linear systems and eigenvalue problems, restarting is a well-explored technique for mitigating memory constraints. In this work, such restarting techniques are applied to polynomial KSMs for matrix equations with a compression step to control the growing rank of the residual. An error analysis is also performed, leading to heuristics for dynamically adjusting the basis size in each restart cycle. A panel of numerical experiments demonstrates the effectiveness of the new method with respect to extended block KSMs.
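    The compression step that controls the residual rank can be pictured as the usual recompression of a low-rank product: orthogonalize the factors by QR, then truncate the SVD of the small core matrix. The sketch below shows this generic building block; it is our illustration of the principle, not the paper's exact procedure.

```python
import numpy as np

def compress_low_rank(U, V, tol=1e-10):
    """Recompress the product U @ V.T to its numerical rank within tol."""
    Qu, Ru = np.linalg.qr(U)                     # orthogonalize left factor
    Qv, Rv = np.linalg.qr(V)                     # orthogonalize right factor
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)          # SVD of the small core
    r = max(1, int(np.sum(s > tol * s[0])))      # truncation rank
    return Qu @ (W[:, :r] * s[:r]), Qv @ Zt[:r, :].T

# A rank-20 factorization whose numerical rank is only 5
rng = np.random.default_rng(2)
U = np.hstack([rng.standard_normal((100, 5)),
               1e-13 * rng.standard_normal((100, 15))])
V = rng.standard_normal((100, 20))

Uc, Vc = compress_low_rank(U, V)
print(Uc.shape[1])                               # 5: rank reduced
print(np.linalg.norm(U @ V.T - Uc @ Vc.T))       # tiny: accuracy preserved
```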

    Perturbation bounds for isotropic invariant subspaces of skew-Hamiltonian matrices

    We investigate the behavior of isotropic invariant subspaces of skew-Hamiltonian matrices under structured perturbations. It is shown that finding a nearby subspace is equivalent to solving a certain quadratic matrix equation. This connection is used to derive meaningful error bounds and condition numbers that can be used to judge the quality of invariant subspaces computed by strongly backward stable eigensolvers.
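    In the unstructured setting, the connection works as follows: the columns of [I; X] span an invariant subspace of the block matrix [[A11, A12], [A21, A22]] exactly when X solves the quadratic matrix equation X A11 - A22 X + X A12 X - A21 = 0. A hedged sketch solving this by fixed-point iteration on the Sylvester part; the skew-Hamiltonian structure and isotropy constraints studied in the paper are not modeled here.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
k, m = 3, 4
A11 = np.diag([1.0, 2.0, 3.0]) + 0.1 * rng.standard_normal((k, k))
A22 = np.diag([10.0, 11.0, 12.0, 13.0]) + 0.1 * rng.standard_normal((m, m))
A12 = rng.standard_normal((k, m))
A21 = 1e-3 * rng.standard_normal((m, k))   # small perturbation of block triangularity

# Columns of [I; X] span an invariant subspace iff
#   X A11 - A22 X + X A12 X - A21 = 0      (quadratic matrix equation)
# Fixed-point iteration: solve the Sylvester part, lag the quadratic term.
X = np.zeros((m, k))
for _ in range(20):
    X = solve_sylvester(-A22, A11, A21 - X @ A12 @ X)

res = X @ A11 - A22 @ X + X @ A12 @ X - A21
print(np.linalg.norm(res))                 # ~0: the quadratic equation is solved
```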

    A periodic Krylov-Schur algorithm for large matrix products

    Stewart's recently introduced Krylov-Schur algorithm is a modification of the implicitly restarted Arnoldi algorithm which employs reordered Schur decompositions to perform restarts and deflations in a numerically reliable manner. This paper describes a variant of the Krylov-Schur algorithm suitable for addressing eigenvalue problems associated with products of large and sparse matrices. It performs restarts and deflations via reordered periodic Schur decompositions and, by taking the product structure into account, it is capable of achieving qualitatively better approximations to eigenvalues of small magnitude.
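    The restart mechanism can be illustrated in the standard (non-periodic) setting that the paper generalizes: compute the Schur form of the Rayleigh quotient, reorder the wanted Ritz values to the front, and truncate the decomposition. A sketch (Arnoldi plus one Krylov-Schur truncation; the selection rule and dimensions are arbitrary choices for the demo, not the paper's):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(4)
n, m, k = 50, 10, 4
A = rng.standard_normal((n, n))

# Build an Arnoldi decomposition  A V = V H + h * v_next e_m^T
V = np.zeros((n, m))
Hfull = np.zeros((m + 1, m))
q0 = rng.standard_normal(n)
V[:, 0] = q0 / np.linalg.norm(q0)
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):                     # modified Gram-Schmidt
        Hfull[i, j] = V[:, i] @ w
        w -= Hfull[i, j] * V[:, i]
    Hfull[j + 1, j] = np.linalg.norm(w)
    if j + 1 < m:
        V[:, j + 1] = w / Hfull[j + 1, j]
    else:
        v_next = w / Hfull[j + 1, j]
H, h = Hfull[:m, :m], Hfull[m, m - 1]

# Restart: reorder the (complex) Schur form of H so that wanted Ritz values
# (here: those in the right half-plane) come first, then truncate to order k.
T, Q, sdim = schur(H, output='complex', sort='rhp')
Vk = V @ Q[:, :k]                              # truncated basis
Tk = T[:k, :k]                                 # truncated triangular Rayleigh quotient
bk = h * Q[m - 1, :k]

# The truncated quantities still satisfy a valid Krylov-Schur relation:
print(np.linalg.norm(A @ Vk - Vk @ Tk - np.outer(v_next, bk)))   # ~0
```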

    Block algorithms for orthogonal symplectic factorizations

    On the basis of a new WY-like representation, block algorithms for orthogonal symplectic matrix factorizations are presented. Special emphasis is placed on symplectic QR and URV factorizations. The block variants mainly use level 3 (matrix-matrix) operations that permit data reuse in the higher levels of a memory hierarchy. Timing results show that our new algorithms outperform standard algorithms by a factor of 3-4 for sufficiently large problems.
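    The classic WY representation such block algorithms build on writes a product of Householder reflectors H_j = I - 2 v_j v_j^T as Q = I + W Y^T, so that applying Q reduces to two matrix-matrix (level 3) products. A sketch of the accumulation; the paper's WY-like representation for orthogonal symplectic matrices is structured differently and is not reproduced here.

```python
import numpy as np

def wy_representation(vs):
    """Accumulate Q = H_1 H_2 ... H_k = I + W Y^T from unit Householder
    vectors v_j, where H_j = I - 2 v_j v_j^T (classic WY form)."""
    n = vs.shape[0]
    W = np.empty((n, 0))
    Y = np.empty((n, 0))
    for j in range(vs.shape[1]):
        v = vs[:, j:j + 1]
        Qv = v + W @ (Y.T @ v)          # apply current Q without forming it
        W = np.hstack([W, -2 * Qv])     # Q_new = Q - 2 (Q v) v^T
        Y = np.hstack([Y, v])
    return W, Y

rng = np.random.default_rng(5)
n, k = 8, 3
vs = rng.standard_normal((n, k))
vs /= np.linalg.norm(vs, axis=0)        # unit Householder vectors

W, Y = wy_representation(vs)
Q = np.eye(n) + W @ Y.T                 # one level-3 product reconstructs Q

# Reference: apply the reflectors one after another
Qref = np.eye(n)
for j in range(k):
    v = vs[:, j:j + 1]
    Qref = Qref @ (np.eye(n) - 2 * v @ v.T)

print(np.linalg.norm(Q - Qref))         # ~0: the representations agree
```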

    Deflation in Krylov subspace methods and distance to uncontrollability

    The task of extracting from a Krylov decomposition the approximation to an eigenpair that yields the smallest backward error can be phrased as finding the smallest perturbation which makes an associated matrix pair uncontrollable. Exploiting this relationship, we propose a new deflation criterion, which potentially admits earlier deflations than standard deflation criteria. Along these lines, a new deflation procedure for shift-and-invert Krylov methods is developed. Numerical experiments demonstrate the merits and limitations of this approach.
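    The distance to uncontrollability of a pair (A, b) admits Eising's classical characterization: the minimum over complex lambda of the smallest singular value of [A - lambda*I, b]. A naive sketch estimating this quantity on a grid of lambda values, as a stand-in for the structured computations behind the deflation criterion (function name and grid choice are our own):

```python
import numpy as np

def uncontrollability_distance_grid(A, b, grid):
    """Crude grid estimate of the distance to uncontrollability:
    d(A, b) = min over complex lambda of sigma_min([A - lambda*I, b]).
    A fine grid stands in for proper global optimization."""
    n = A.shape[0]
    best = np.inf
    for lam in grid:
        M = np.hstack([A - lam * np.eye(n), b.reshape(-1, 1)])
        best = min(best, np.linalg.svd(M, compute_uv=False)[-1])
    return best

rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Sample lambda on a small complex grid around the spectrum of A
xs = np.linspace(-3, 3, 61)
grid = [x + 1j * y for x in xs for y in xs]
print(uncontrollability_distance_grid(A, b, grid))
```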