
    A Backward Stable Algorithm for Computing the CS Decomposition via the Polar Decomposition

    We introduce a backward stable algorithm for computing the CS decomposition of a partitioned $2n \times n$ matrix with orthonormal columns, or a rank-deficient partial isometry. The algorithm computes two $n \times n$ polar decompositions (which can be carried out in parallel) followed by an eigendecomposition of a judiciously crafted $n \times n$ Hermitian matrix. We prove that the algorithm is backward stable whenever the aforementioned decompositions are computed in a backward stable way. Since the polar decomposition and the symmetric eigendecomposition are highly amenable to parallelization, the algorithm inherits this feature. We illustrate this fact by invoking recently developed algorithms for the polar decomposition and symmetric eigendecomposition that leverage Zolotarev's best rational approximations of the sign function. Numerical examples demonstrate that the resulting algorithm for computing the CS decomposition enjoys excellent numerical stability.
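    The first step of the approach, two independent polar decompositions of the blocks, can be sketched as follows. This is a minimal illustration, not the paper's full algorithm: the construction of the crafted Hermitian matrix is not reproduced, and the check below only verifies the identity $H_1^2 + H_2^2 = I$ that underlies the cosine-sine structure of the two Hermitian polar factors.

```python
import numpy as np
from scipy.linalg import polar, qr

# Build a 2n x n matrix with orthonormal columns (a random example).
n = 4
rng = np.random.default_rng(0)
Q, _ = qr(rng.standard_normal((2 * n, n)), mode="economic")
X1, X2 = Q[:n], Q[n:]

# Polar decompositions of the two n x n blocks (these could run in parallel):
# X1 = U1 @ H1 and X2 = U2 @ H2, with H1, H2 Hermitian positive semidefinite.
U1, H1 = polar(X1)
U2, H2 = polar(X2)

# Since X1*X1 + X2*X2 = I, the Hermitian factors satisfy H1^2 + H2^2 = I,
# so their eigenvalues play the roles of cos(theta) and sin(theta).
assert np.allclose(H1 @ H1 + H2 @ H2, np.eye(n))
```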

    Maxwell's Theory of Solid Angle and the Construction of Knotted Fields

    We provide a systematic description of the solid angle function as a means of constructing a knotted field for any curve or link in $\mathbb{R}^3$. This is a purely geometric construction in which all of the properties of the entire knotted field derive from the geometry of the curve, and from projective and spherical geometry. We emphasise a fundamental homotopy formula as unifying different formulae for computing the solid angle. The solid angle induces a natural framing of the curve, which we show is related to its writhe and use to characterise the local structure in a neighborhood of the knot. Finally, we discuss computational implementation of the formulae derived, with C code provided, and give illustrations for how the solid angle may be used to give explicit constructions of knotted scroll waves in excitable media and knotted director fields around disclination lines in nematic liquid crystals.
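    Solid angles of triangulated surfaces can be accumulated triangle by triangle. The sketch below uses the standard Van Oosterom-Strackee formula for the solid angle of a single triangle, which is an illustrative stand-in and not the homotopy formula or the C code of the paper itself.

```python
import numpy as np

def triangle_solid_angle(r1, r2, r3):
    """Signed solid angle subtended at the origin by the triangle (r1, r2, r3),
    computed with the Van Oosterom--Strackee formula."""
    n1, n2, n3 = np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)
    numer = np.dot(r1, np.cross(r2, r3))
    denom = (n1 * n2 * n3 + np.dot(r1, r2) * n3
             + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1)
    return 2.0 * np.arctan2(numer, denom)

# Sanity check: seen from the centre of a cube, each face subtends 4*pi/6.
face = [np.array([-1, -1, 1]), np.array([1, -1, 1]),
        np.array([1, 1, 1]), np.array([-1, 1, 1])]
omega = (triangle_solid_angle(face[0], face[1], face[2])
         + triangle_solid_angle(face[0], face[2], face[3]))
assert np.isclose(abs(omega), 4 * np.pi / 6)
```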

    QR-Factorization Algorithm for Computed Tomography (CT): Comparison With FDK and Conjugate Gradient (CG) Algorithms

    Even though QR-factorization of the system matrix for tomographic devices has already been used for medical imaging, to date, no satisfactory solution has been found for solving large linear systems such as those used in computed tomography (CT) (on the order of $10^6$ equations). In CT, the Feldkamp, Davis, and Kress back-projection algorithm (FDK) and iterative methods like conjugate gradient (CG) are the standard methods used for image reconstruction. As the image reconstruction problem can be modeled by a large linear system of equations, QR-factorization of the system matrix could be used to solve this system. Current advances in computer science enable the use of direct methods for solving such a large linear system. The QR-factorization is a numerically stable direct method for solving linear systems of equations, which is emerging as an alternative that retains the strengths of traditional methods. QR-factorization was chosen because the core of the algorithm, from the computational-cost point of view, is precalculated and stored only once for a given CT system, and from then on, each image reconstruction only involves a backward substitution process and the product of a vector by a matrix. Image quality assessment was performed by comparing contrast-to-noise ratio and noise power spectrum; performance regarding sharpness was evaluated by the reconstruction of small structures using data measured from a small-animal 3-D CT. Comparisons of QR-factorization with the FDK and CG methods show that QR-factorization is able to reconstruct more detailed images for a fixed voxel size. This work was supported by the Spanish Government under Grant TEC2016-79884-C2 and Grant RTC-2016-5186-1.
    Rodríguez-Álvarez, M.; Sánchez, F.; Soriano Asensi, A.; Moliner Martínez, L.; Sánchez Góez, S.; Benlloch Baviera, J. M. (2018). QR-Factorization Algorithm for Computed Tomography (CT): Comparison With FDK and Conjugate Gradient (CG) Algorithms. IEEE Transactions on Radiation and Plasma Medical Sciences, 2(5):459-469. https://doi.org/10.1109/TRPMS.2018.2843803
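    The computational split described in the abstract, an expensive one-time factorization followed by a cheap matrix-vector product and back substitution per reconstruction, can be illustrated on a toy linear system. The random matrix below is a small stand-in, not an actual CT system matrix.

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
m, n = 60, 20                       # toy stand-in for a (much larger) CT system
A = rng.standard_normal((m, n))

# One-time, expensive step: factorize and store the system matrix.
Q, R = np.linalg.qr(A)              # A = Q @ R, with R upper triangular

# Per-reconstruction, cheap step: matrix-vector product + back substitution.
x_true = rng.standard_normal(n)
b = A @ x_true                      # simulated projection data
x = solve_triangular(R, Q.T @ b)    # back substitution

assert np.allclose(x, x_true)
```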

    Pseudospectral Shattering, the Sign Function, and Diagonalization in Nearly Matrix Multiplication Time

    We exhibit a randomized algorithm which, given a square $n \times n$ complex matrix $A$ with $\|A\| \le 1$ and $\delta > 0$, computes with high probability an invertible $V$ and diagonal $D$ such that $\|A - VDV^{-1}\| \le \delta$ and $\|V\|\,\|V^{-1}\| \le O(n^{2.5}/\delta)$ in $O(T_{MM}(n)\log^2(n/\delta))$ arithmetic operations on a floating point machine with $O(\log^4(n/\delta)\log n)$ bits of precision. Here $T_{MM}(n)$ is the number of arithmetic operations required to multiply two $n \times n$ complex matrices numerically stably, with $T_{MM}(n) = O(n^{\omega+\eta})$ for every $\eta > 0$, where $\omega$ is the exponent of matrix multiplication. The algorithm is a variant of the spectral bisection algorithm in numerical linear algebra (Beavers and Denman, 1974). This running time is optimal up to polylogarithmic factors, in the sense that verifying that a given similarity diagonalizes a matrix requires at least matrix multiplication time. It significantly improves the best previously provable running times of $O(n^{10}/\delta^2)$ arithmetic operations for diagonalization of general matrices (Armentano et al., 2018) and, with respect to the dependence on $n$, $O(n^3)$ arithmetic operations for Hermitian matrices (Parlett, 1998). The proof rests on two new ingredients. (1) We show that adding a small complex Gaussian perturbation to any matrix splits its pseudospectrum into $n$ small well-separated components. This implies that the eigenvalues of the perturbation have a large minimum gap, a property of independent interest in random matrix theory. (2) We rigorously analyze Roberts' Newton iteration method for computing the matrix sign function in finite arithmetic, itself an open problem in numerical analysis since at least 1986. This is achieved by controlling the evolution of the iterates' pseudospectra using a carefully chosen sequence of shrinking contour integrals in the complex plane.
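    Roberts' Newton iteration, the object of ingredient (2), is in exact arithmetic just $X_{k+1} = \tfrac{1}{2}(X_k + X_k^{-1})$, which converges to the matrix sign function when $A$ has no eigenvalues on the imaginary axis. A minimal sketch, without the paper's shattering perturbation or finite-arithmetic analysis:

```python
import numpy as np

def matrix_sign(A, iters=50):
    """Roberts' Newton iteration X_{k+1} = (X_k + X_k^{-1}) / 2,
    converging to sign(A) when A has no eigenvalues on the imaginary axis."""
    X = A.astype(complex)
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

# Example: a non-diagonal matrix with eigenvalues 2, -3, 0.5.
P = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
A = P @ np.diag([2.0, -3.0, 0.5]) @ np.linalg.inv(P)
S = matrix_sign(A)

# sign(A) maps each eigenvalue to the sign of its real part: 1, -1, 1.
expected = P @ np.diag([1.0, -1.0, 1.0]) @ np.linalg.inv(P)
assert np.allclose(S, expected)
```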

    Refresher course in maths and a project on numerical modeling done in pairs

    These lecture notes accompany a refresher course in applied mathematics with a focus on numerical concepts (Part I), numerical linear algebra (Part II), numerical analysis, Fourier series and Fourier transforms (Part III), and differential equations (Part IV). Several numerical projects for group work are provided in Part V. In these projects, the tasks are threefold: mathematical modeling, algorithmic design, and implementation. Therein, it is important to draw interpretations of the obtained results and to provide means (Parts I-IV) of building confidence in numerical findings, such as intuition, error analysis, convergence analysis, and comparison to manufactured solutions. Both authors have jointly taught this class over several years and bring in a unique mixture of their respective teaching and research fields.
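    One of the confidence-building measures mentioned above, comparison to a manufactured solution together with a convergence analysis, can be sketched in a few lines. The choice $u(x) = \sin x$ and the central-difference stencil below are illustrative assumptions, not taken from the notes.

```python
import numpy as np

# Manufactured solution u(x) = sin(x), whose second derivative is known exactly,
# used to check that the central-difference stencil converges at second order.
u = np.sin
u_xx_exact = lambda x: -np.sin(x)

x, hs, errors = 1.0, [0.1, 0.05, 0.025], []
for h in hs:
    approx = (u(x - h) - 2 * u(x) + u(x + h)) / h**2
    errors.append(abs(approx - u_xx_exact(x)))

# Observed order log2(e(h) / e(h/2)) should be close to 2.
orders = [np.log2(errors[i] / errors[i + 1]) for i in range(2)]
assert all(abs(p - 2.0) < 0.1 for p in orders)
```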