    Manifold interpolation and model reduction

    One approach to parametric and adaptive model reduction is the interpolation of orthogonal bases, subspaces, or positive definite system matrices. In all these cases, the sampled inputs stem from matrix sets that carry a geometric structure and thus form so-called matrix manifolds. This work, to appear as a chapter in the upcoming Handbook on Model Order Reduction (P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W.H.A. Schilders, L.M. Silveira, eds., DE GRUYTER), reviews the numerical treatment of the most important matrix manifolds that arise in the context of model reduction. Moreover, the principal approaches to data interpolation and Taylor-like extrapolation on matrix manifolds are outlined and complemented by algorithms in pseudo-code.
    Comment: 37 pages, 4 figures; featured chapter of the upcoming "Handbook on Model Order Reduction".
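    As a small, hedged illustration of the kind of interpolation the chapter surveys (our own sketch, not code from the chapter; the function names are ours), the snippet below interpolates between two symmetric positive definite matrices along the affine-invariant geodesic, so every interpolant stays on the SPD manifold, which plain entrywise interpolation does not guarantee when extrapolating.

```python
import numpy as np

def sym_fun(A, f):
    """Apply a scalar function f to a symmetric matrix via its
    eigendecomposition (keeps everything real and symmetric)."""
    w, Q = np.linalg.eigh(A)
    return (Q * f(w)) @ Q.T

def spd_geodesic(A, B, t):
    """Point at parameter t on the affine-invariant geodesic from A to B:
    A^(1/2) exp(t log(A^(-1/2) B A^(-1/2))) A^(1/2).
    t = 0 gives A, t = 1 gives B, and every point is again SPD."""
    A_half = sym_fun(A, np.sqrt)
    A_ihalf = sym_fun(A, lambda w: 1.0 / np.sqrt(w))
    M = A_ihalf @ B @ A_ihalf
    M = 0.5 * (M + M.T)            # re-symmetrize against round-off
    return A_half @ sym_fun(t * sym_fun(M, np.log), np.exp) @ A_half

# usage: the midpoint of two random SPD matrices is still SPD
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A, B = X @ X.T + np.eye(4), Y @ Y.T + np.eye(4)
print(np.linalg.eigvalsh(spd_geodesic(A, B, 0.5)))  # all positive
```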

    UTV Expansion Pack - Special-Purpose Rank Revealing Algorithms (version 1.0 for Matlab 6.5)


    Rational minimax approximation via adaptive barycentric representations

    Computing rational minimax approximations can be very challenging when there are singularities on or near the interval of approximation - precisely the case where rational functions outperform polynomials by a landslide. We show that far more robust algorithms than previously available can be developed by making use of rational barycentric representations whose support points are chosen adaptively as the approximant is computed. Three variants of this barycentric strategy are all shown to be powerful: (1) a classical Remez algorithm, (2) an "AAA-Lawson" method of iteratively reweighted least-squares, and (3) a differential correction algorithm. Our preferred combination, implemented in the Chebfun MINIMAX code, is to use (2) in an initial phase and then switch to (1) for generically quadratic convergence. By such methods we can calculate approximations up to type (80, 80) of |x| on [-1, 1] in standard 16-digit floating point arithmetic, a problem for which Varga, Ruttan, and Carpenter required 200-digit extended precision.
    Comment: 29 pages, 11 figures
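    To make the barycentric representation concrete (a minimal sketch under our own naming, not the Chebfun MINIMAX code), a rational function with support points x_j, values f_j, and weights w_j is evaluated as r(x) = (sum_j w_j f_j / (x - x_j)) / (sum_j w_j / (x - x_j)):

```python
import numpy as np

def bary_eval(x, support, values, weights):
    """Evaluate the barycentric rational interpolant
    r(x) = sum_j w_j f_j / (x - x_j) / sum_j w_j / (x - x_j)
    at the points x, returning f_j exactly when x hits a support point."""
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - support[None, :]     # x_i - x_j for all pairs
    exact = diff == 0.0                      # x coincides with a support point
    diff[exact] = 1.0                        # avoid division by zero below
    C = weights / diff
    r = (C @ values) / C.sum(axis=1)
    hit_row, hit_col = np.nonzero(exact)     # restore exact values at hits
    r[hit_row] = values[hit_col]
    return r

# usage: interpolate f(x) = |x| at 5 Chebyshev points with the classical
# polynomial-case barycentric weights (+-1 with halved endpoints)
xs = np.cos(np.pi * np.arange(5) / 4)        # Chebyshev points on [-1, 1]
fs = np.abs(xs)
ws = np.array([0.5, -1.0, 1.0, -1.0, 0.5])
print(bary_eval(np.linspace(-1, 1, 5), xs, fs, ws))
```

    With these Chebyshev weights the formula reproduces the polynomial interpolant; the minimax algorithms in the paper instead choose the support points and weights adaptively to represent a rational approximant.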

    Accurate and Efficient Expression Evaluation and Linear Algebra

    We survey and unify recent results on the existence of accurate algorithms for evaluating multivariate polynomials, and more generally for accurate numerical linear algebra with structured matrices. By "accurate" we mean that the computed answer has relative error less than 1, i.e., has some correct leading digits. We also address efficiency, by which we mean algorithms that run in polynomial time in the size of the input. Our results depend strongly on the model of arithmetic: most of them use the so-called Traditional Model (TM). We give a set of necessary and sufficient conditions to decide whether a high accuracy algorithm exists in the TM, and describe progress toward a decision procedure that will take any problem and provide either a high accuracy algorithm or a proof that none exists. When no accurate algorithm exists in the TM, it is natural to extend the set of available accurate operations by a library of additional operations, such as x + y + z, dot products, or indeed any enumerable set, which could then be used to build further accurate algorithms. We show how our accurate algorithms and the decision procedure for finding them extend to this case. Finally, we address other models of arithmetic, and the relationship between (im)possibility in the TM and (in)efficient algorithms operating on numbers represented as bit strings.
    Comment: 49 pages, 6 figures, 1 table
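    As a concrete taste of such an extended accurate operation (our illustration, not an algorithm from the survey), Knuth's TwoSum returns a floating-point sum together with its exact rounding error, from which an accurate x + y + z can be assembled:

```python
def two_sum(a, b):
    """Knuth's TwoSum: return (s, e) with s = fl(a + b) and
    a + b = s + e exactly in IEEE floating point."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    b_round = b - b_virtual
    a_round = a - a_virtual
    return s, a_round + b_round

def accurate_three_sum(x, y, z):
    """x + y + z with the two rounding errors folded back in."""
    s, e1 = two_sum(x, y)
    t, e2 = two_sum(s, z)
    return t + (e1 + e2)

print((1e16 + 1.0) - 1e16)                   # 0.0: naive sum loses the 1.0
print(accurate_three_sum(1e16, 1.0, -1e16))  # 1.0: compensated sum keeps it
```

    TwoSum is exact in IEEE arithmetic (which Python floats use), so the error terms e1 and e2 recover exactly what the naive sum discards.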

    A Jacobi-Davidson type method for the product eigenvalue problem


    Computing the singular value decomposition with high relative accuracy

    We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values of large enough magnitude. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a common perturbation theory and a common algorithm. We provide these in this paper, and show that high relative accuracy is possible in many new cases as well. The briefest way to describe our results is that we can compute the SVD of G to high relative accuracy provided we can accurately factor G = XDY^T, where D is diagonal and X and Y are any well-conditioned matrices; furthermore, the LDU factorization frequently does the job. We provide many examples of matrix classes permitting such an LDU decomposition.
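    For a concrete flavor of algorithms in this area, here is a sketch of the classical one-sided Jacobi SVD, a standard building block when G is a well-conditioned matrix times a diagonal scaling; it is our own sketch, not the paper's full G = XDY^T procedure, and the tolerances and names are ours:

```python
import numpy as np

def one_sided_jacobi_svd(G, tol=1e-14, max_sweeps=30):
    """One-sided Jacobi SVD of an m x n matrix G (m >= n, nonzero columns).

    Plane rotations applied on the right make the columns of U mutually
    orthogonal; then G = U @ np.diag(sigma) @ V.T with sigma the column
    norms. Working on G directly avoids forming G.T @ G, whose round-off
    would destroy the relative accuracy of tiny singular values."""
    U = np.array(G, dtype=float)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0                          # largest remaining column "angle"
        for p in range(n - 1):
            for q in range(p + 1, n):
                app = U[:, p] @ U[:, p]
                aqq = U[:, q] @ U[:, q]
                apq = U[:, p] @ U[:, q]
                if apq == 0.0:
                    continue
                off = max(off, abs(apq) / np.sqrt(app * aqq))
                # rotation zeroing the (p, q) entry of U.T @ U
                zeta = (aqq - app) / (2.0 * apq)
                t = 1.0 / (abs(zeta) + np.hypot(1.0, zeta))
                t = t if zeta >= 0.0 else -t
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                for M in (U, V):
                    Mp = c * M[:, p] - s * M[:, q]
                    M[:, q] = s * M[:, p] + c * M[:, q]
                    M[:, p] = Mp
        if off < tol:                      # columns numerically orthogonal
            break
    sigma = np.linalg.norm(U, axis=0)
    order = np.argsort(sigma)[::-1]        # sort descending
    return U[:, order] / sigma[order], sigma[order], V[:, order]

# usage: a graded matrix whose tiny singular value (~1e-10) is wanted
# with correct leading digits
G = np.array([[1.0, 1e-10], [0.0, 1e-10]])
U, s, V = one_sided_jacobi_svd(G)
print(s)
```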