21 research outputs found

    Randomized block Gram-Schmidt process for solution of linear systems and eigenvalue problems

    Full text link
    We propose a block version of the randomized Gram-Schmidt process for computing a QR factorization of a matrix. Our algorithm inherits the major properties of its single-vector analogue from [Balabanov and Grigori, 2020], such as higher efficiency than the classical Gram-Schmidt algorithm and the stability of the modified Gram-Schmidt algorithm, which can be refined even further by using multi-precision arithmetic. As in [Balabanov and Grigori, 2020], our algorithm performs the standard high-dimensional operations that dominate the overall computational cost with a unit roundoff independent of the dominant dimension of the matrix. This unique feature makes the methodology especially useful for large-scale problems computed on low-precision arithmetic architectures. Block algorithms are advantageous in terms of performance because they rely mainly on cache-friendly matrix-wise operations and can reduce communication cost in high-performance computing. Block Gram-Schmidt orthogonalization is the key element in the block Arnoldi procedure for the construction of a Krylov basis, which in turn is used in GMRES and Rayleigh-Ritz methods for the solution of linear systems and clustered eigenvalue problems. In this article, we develop randomized versions of these methods, based on the proposed randomized Gram-Schmidt algorithm, and validate them on nontrivial numerical examples.
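To make the idea concrete, here is a minimal single-vector sketch of the randomized Gram-Schmidt process, not the paper's block algorithm: the projection coefficients and norms are computed from small sketches `S @ q` instead of the full high-dimensional vectors. The Gaussian sketching matrix and the sketch size `k` are illustrative choices, not prescribed by the abstract.

```python
import numpy as np

def randomized_gram_schmidt(W, k, rng=np.random.default_rng(0)):
    """QR-like factorization W = Q R where the Gram-Schmidt projection
    step works on k-dimensional sketches rather than full columns."""
    n, m = W.shape
    S = rng.standard_normal((k, n)) / np.sqrt(k)   # Gaussian sketching matrix
    Q = np.zeros((n, m))
    SQ = np.zeros((k, m))                          # sketches of the Q columns
    R = np.zeros((m, m))
    for j in range(m):
        w, sw = W[:, j].copy(), S @ W[:, j]
        if j:
            # least-squares solve in the sketch space approximates Q[:, :j].T @ w
            c, *_ = np.linalg.lstsq(SQ[:, :j], sw, rcond=None)
            w -= Q[:, :j] @ c
            R[:j, j] = c
        R[j, j] = np.linalg.norm(S @ w)            # sketched norm
        Q[:, j] = w / R[j, j]
        SQ[:, j] = (S @ w) / R[j, j]
    return Q, R
```

The factorization W = QR is exact by construction; the sketched columns `SQ` are exactly orthonormal, which keeps Q well-conditioned rather than strictly orthonormal.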

    Randomized Orthogonal Projection Methods for Krylov Subspace Solvers

    Full text link
    Randomized orthogonal projection methods (ROPMs) can be used to speed up the computation of Krylov subspace methods in various contexts. Through a theoretical and numerical investigation, we establish that these methods produce quasi-optimal approximations over the Krylov subspace. Our numerical experiments show that ROPMs converge for all matrices in our test set, with occasional spikes but an overall convergence rate similar to that of standard OPMs.
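The quasi-optimality claim can be illustrated on a toy minimal-residual projection (a generic sketch, not the paper's experiments): minimizing the sketched residual over a fixed trial subspace yields a true residual within a modest factor of the optimal one. All dimensions and the Gaussian sketch are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 500, 20, 200
A = np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
V, _ = np.linalg.qr(rng.standard_normal((n, m)))      # a trial basis

# standard minimal-residual projection over range(V)
y_full, *_ = np.linalg.lstsq(A @ V, b, rcond=None)

# randomized projection: minimize the *sketched* residual instead
S = rng.standard_normal((k, n)) / np.sqrt(k)
y_sk, *_ = np.linalg.lstsq(S @ (A @ V), S @ b, rcond=None)

r_full = np.linalg.norm(b - A @ V @ y_full)           # optimal residual
r_sk = np.linalg.norm(b - A @ V @ y_sk)               # quasi-optimal residual
assert r_sk <= 2.0 * r_full
```

The sketched solve touches only k-dimensional quantities, which is the source of the speedup when n is large.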

    CholeskyQR with Randomization and Pivoting for Tall Matrices (CQRRPT)

    Full text link
    This paper develops and analyzes a new algorithm for QR decomposition with column pivoting (QRCP) of rectangular matrices with large row counts. The algorithm combines methods from randomized numerical linear algebra in a particularly careful way to accelerate both the pivot decisions for the input matrix and the process of decomposing the pivoted matrix into QR form. The latter acceleration comes from the use of randomized preconditioning and CholeskyQR. A comprehensive analysis is provided in both exact and finite-precision arithmetic to characterize the algorithm's rank-revealing properties and its numerical stability under probabilistic assumptions on the sketching operator. An implementation of the proposed algorithm is described and made available in the open-source RandLAPACK library, which itself relies on RandBLAS, also available in open-source form. Experiments with this implementation on an Intel Xeon Gold 6248R CPU demonstrate order-of-magnitude speedups relative to LAPACK's standard function for QRCP, and performance comparable to a specialized algorithm for unpivoted QR of tall matrices, which lacks the strong rank-revealing properties of the proposed method.
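The "randomized preconditioning plus CholeskyQR" ingredient can be sketched as follows. This is a simplified, unpivoted illustration of that one ingredient, not the CQRRPT algorithm itself: the R-factor of a small sketch preconditions the tall matrix, after which plain CholeskyQR is numerically safe.

```python
import numpy as np

def sketched_cholesky_qr(M, k, rng=np.random.default_rng(0)):
    """QR of a tall matrix M: precondition with the R-factor of a sketch
    S @ M, then orthogonalize the well-conditioned result via CholeskyQR.
    Simplified illustration; column pivoting is omitted."""
    n, m = M.shape
    S = rng.standard_normal((k, n)) / np.sqrt(k)
    _, R_pre = np.linalg.qr(S @ M)                 # cheap QR of the small sketch
    M_pre = np.linalg.solve(R_pre.T, M.T).T        # M @ inv(R_pre), well-conditioned
    C = np.linalg.cholesky(M_pre.T @ M_pre)        # Gram matrix factorization
    Q = np.linalg.solve(C, M_pre.T).T              # M_pre @ inv(C.T)
    return Q, C.T @ R_pre                          # M = Q @ (C.T @ R_pre)
```

CholeskyQR alone loses orthogonality like cond(M)^2; preconditioning by the sketch's R-factor pushes cond(M_pre) close to 1, so a single pass suffices.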

    Block subsampled randomized Hadamard transform for low-rank approximation on distributed architectures

    Get PDF
    This article introduces a novel structured random matrix composed blockwise of subsampled randomized Hadamard transforms (SRHTs). The block SRHT is expected to outperform well-known dimension reduction maps, including the SRHT and Gaussian matrices, on distributed architectures whose core counts are not too large compared to the dimension. We prove that a block SRHT with enough rows is an oblivious subspace embedding, i.e., an approximate isometry for an arbitrary low-dimensional subspace with high probability. Our estimate of the required number of rows is similar to that of the standard SRHT, which suggests that the two transforms should provide the same accuracy of approximation in the algorithms. The block SRHT can be readily incorporated into randomized methods, for instance to compute a low-rank approximation of a large-scale matrix. For completeness, we revisit some common randomized approaches for this problem, such as the randomized singular value decomposition and Nyström approximation, with a discussion of their accuracy and implementation on distributed architectures.
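A simplified variant of the blockwise construction can be sketched as follows (an assumption-laden illustration, not the paper's exact transform): an independent SRHT is applied to each row block and the partial sketches are summed, so each block can be sketched on its own node with only small k-by-m partial results communicated.

```python
import numpy as np

def fwht(X):
    """Fast Walsh-Hadamard transform along axis 0 (row count a power of 2)."""
    X = X.copy()
    h, n = 1, X.shape[0]
    while h < n:
        for i in range(0, n, 2 * h):
            a = X[i:i + h].copy()
            X[i:i + h] += X[i + h:i + 2 * h]
            X[i + h:i + 2 * h] = a - X[i + h:i + 2 * h]
        h *= 2
    return X / np.sqrt(n)

def srht(X, k, rng):
    """Standard SRHT: random row signs, Hadamard mix, uniform row subsampling."""
    n = X.shape[0]
    signs = rng.choice([-1.0, 1.0], size=n)
    mixed = fwht(signs[:, None] * X)
    rows = rng.choice(n, size=k, replace=False)
    return np.sqrt(n / k) * mixed[rows]

def block_srht(X, k, n_blocks, rng):
    """Simplified block SRHT: independent SRHT per row block, partial
    sketches summed (a distributed-friendly reduction)."""
    return sum(srht(B, k, rng) for B in np.split(X, n_blocks, axis=0))
```

Each per-block sketch is an approximate isometry in expectation and the cross-block terms have zero mean, so the summed sketch approximately preserves norms.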

    A review on magneto-optical ceramics for Faraday isolators

    Get PDF
    As a promising magneto-optical (MO) material for Faraday isolators, magneto-optical ceramics possess excellent comprehensive properties and have attracted much attention in recent years. Herein, we review the fabrication and properties of magneto-optical ceramics, including garnet, sesquioxide, and A2B2O7 ceramics. Some of these ceramics have been shown to possess performance suitable for applications, while further studies are still needed for most of them. With the application to isolators in mind, the research status, existing problems, and development trends of magneto-optical ceramics are presented and discussed in this review.

    Randomized linear algebra for model order reduction

    Get PDF
    Solutions to high-dimensional parameter-dependent problems are in great demand in contemporary applied science and engineering. Standard approximation methods for parametric equations can require computational resources that are exponential in the dimension of the parameter space, which is typically referred to as the curse of dimensionality. To break the curse of dimensionality one has to appeal to nonlinear methods that exploit the structure of the solution map, such as projection-based model order reduction methods. This thesis proposes novel methods based on randomized linear algebra to enhance the efficiency and stability of projection-based model order reduction methods for solving parameter-dependent equations. Our methodology relies on random projections (or random sketching). Instead of operating with high-dimensional vectors, we first efficiently project them into a low-dimensional space. The reduced model is then constructed efficiently and in a numerically stable way from the projections of the reduced approximation space and the spaces of the associated residuals. Our approach allows drastic computational savings in essentially any modern computational architecture. For instance, it can reduce the number of flops and the memory consumption and improve the efficiency of the data flow (characterized by scalability or communication costs). It can be employed to improve the efficiency and numerical stability of classical Galerkin and minimal residual methods. It can also be used for efficient estimation of the error and for post-processing of the solution of the reduced order model. Furthermore, random sketching makes computationally feasible a dictionary-based approximation method, where for each parameter value the solution is approximated in a subspace with a basis selected from a dictionary of vectors. 
    We also address the efficient construction (using random sketching) of parameter-dependent preconditioners that can be used to improve the quality of Galerkin projections or for effective error certification for problems with ill-conditioned operators. For all proposed methods we provide precise conditions on the random sketch to guarantee accurate and stable estimations with a user-specified probability of success. A priori estimates for determining the sizes of the random matrices are provided, as well as a more effective adaptive procedure based on a posteriori estimates.
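One of the recurring ideas, sketched error estimation for a reduced model, can be illustrated in a few lines. This is a generic toy example (the operator, the reduced space, and the Gaussian sketch are all illustrative assumptions, not the thesis's setup): the residual norm of a Galerkin reduced solution is estimated from a small sketch of the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 2000, 10, 100
A = np.diag(np.linspace(1.0, 5.0, n))             # a simple SPD operator
b = rng.standard_normal(n)
V, _ = np.linalg.qr(rng.standard_normal((n, m)))  # reduced approximation space

# Galerkin reduced model: solve the small system V.T A V y = V.T b
y = np.linalg.solve(V.T @ A @ V, V.T @ b)
u_r = V @ y

# sketched a posteriori error indicator: only a k-dimensional residual
# sketch is needed, which is cheap to update across parameter values
S = rng.standard_normal((k, n)) / np.sqrt(k)
est = np.linalg.norm(S @ (b - A @ u_r))
true = np.linalg.norm(b - A @ u_r)
```

For a parameter-dependent A the sketches of the residual's components can be precomputed offline, so each online error estimate costs only small-dimensional operations.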

    Randomized Gram-Schmidt process with application to GMRES

    No full text
    The original manuscript with supplementary material; submitted version. A randomized Gram-Schmidt algorithm is developed for the orthonormalization of high-dimensional vectors or QR factorization. The proposed process can be less computationally expensive than the classical Gram-Schmidt process while being at least as numerically stable as the modified Gram-Schmidt process. Our approach is based on random sketching, a dimension reduction technique that estimates inner products of high-dimensional vectors by inner products of their small, efficiently computable random projections, so-called sketches. This makes it possible to perform the projection step of the Gram-Schmidt process on sketches rather than on high-dimensional vectors, at a minor computational cost. It also provides the ability to efficiently certify the output. The proposed Gram-Schmidt algorithm can provide a computational cost reduction in any architecture, and the benefit of random sketching can be amplified by exploiting multi-precision arithmetic. We provide a stability analysis for a multi-precision model with coarse unit roundoff for the standard high-dimensional operations. Numerical stability is proven for a unit roundoff independent of the (high) dimension of the problem. The proposed Gram-Schmidt process can be applied to the Arnoldi iteration, resulting in new Krylov subspace methods for solving high-dimensional systems of equations or eigenvalue problems. Among them, we choose the randomized GMRES method as a practical application of the methodology.
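The GMRES application can be sketched end to end in a few lines. The following is a simplified illustration of the approach, not the paper's exact algorithm: the Arnoldi basis is built with sketch-based Gram-Schmidt, and the final least-squares problem is solved on sketched quantities. Sizes and the Gaussian sketch are illustrative assumptions.

```python
import numpy as np

def randomized_gmres(A, b, m, k, rng=np.random.default_rng(0)):
    """m-step GMRES variant: Arnoldi with sketched Gram-Schmidt, then
    minimization of the *sketched* residual over the Krylov basis."""
    n = A.shape[0]
    S = rng.standard_normal((k, n)) / np.sqrt(k)
    Q = np.zeros((n, m + 1))
    SQ = np.zeros((k, m + 1))
    beta = np.linalg.norm(S @ b)
    Q[:, 0] = b / beta
    SQ[:, 0] = S @ b / beta
    for j in range(m):
        w = A @ Q[:, j]
        # projection coefficients from sketches, not full vectors
        c, *_ = np.linalg.lstsq(SQ[:, :j + 1], S @ w, rcond=None)
        w -= Q[:, :j + 1] @ c
        nrm = np.linalg.norm(S @ w)                 # sketched norm
        Q[:, j + 1] = w / nrm
        SQ[:, j + 1] = S @ w / nrm
    # solve min_y ||S (A Q y - b)|| and lift back to the full space
    y, *_ = np.linalg.lstsq(S @ (A @ Q[:, :m]), S @ b, rcond=None)
    return Q[:, :m] @ y
```

Because the sketch is an approximate isometry on the Krylov residual space, minimizing the sketched residual yields a true residual within a modest factor of the GMRES optimum.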