15 research outputs found

    Improving multifrontal solvers by means of algebraic block low-rank representations

    We consider the solution of large sparse linear systems by means of direct factorization based on a multifrontal approach. Although numerically robust and easy to use (they only need algebraic information: the input matrix A and a right-hand side b, even though they can also exploit preprocessing strategies based on geometric information), direct factorization methods are computationally intensive in terms of both memory and operations, which limits their use on very large problems (matrices with up to a few hundred million equations). This work focuses on exploiting low-rank approximations in multifrontal direct methods to reduce both the memory footprint and the operation count, in sequential and distributed-memory environments, on a wide class of problems. We first survey the low-rank formats that have previously been developed to represent dense matrices efficiently and have been widely used to design fast solvers for partial differential equations, integral equations and eigenvalue problems. These formats are hierarchical (H and Hierarchically Semiseparable matrices are the most common ones) and have been shown, both theoretically and practically, to substantially decrease the memory and operation requirements of linear algebra computations. However, they impose many structural constraints which can limit their scope and efficiency, especially in the context of general-purpose multifrontal solvers. We propose a flat format called Block Low-Rank (BLR) based on a natural blocking of the matrices and explain why it provides all the flexibility needed by a general-purpose multifrontal solver in terms of numerical pivoting for stability and of parallelism. We compare the BLR format with other formats and show that BLR sacrifices little of the memory and operation improvements achieved through low-rank approximations. A stability study shows that the approximations are well controlled by an explicit numerical parameter called the low-rank threshold, which is critical for solving the sparse linear system accurately. We then give details on how Block Low-Rank factorizations can be efficiently implemented within multifrontal solvers, and propose several Block Low-Rank factorization algorithms which allow for different types of gains. The proposed algorithms have been implemented within the MUMPS (MUltifrontal Massively Parallel Solver) solver. We first report experiments on standard problems based on partial differential equations to analyse the main features of our BLR algorithms and to show the potential and flexibility of the approach; a comparison with a Hierarchically Semiseparable code is also given. Block Low-Rank formats are then tested on large (up to a hundred million unknowns) and varied problems coming from several industrial applications. We finally illustrate the use of our approach as a preconditioner for the Conjugate Gradient method.
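
    To make the role of the low-rank threshold concrete, here is a minimal Python sketch (not the MUMPS implementation) that compresses a single dense block with a truncated SVD: singular values below a threshold eps are discarded, which reduces storage from m*n entries to k*(m+n) while keeping the approximation error under control. The helper compress_block and the kernel used to build the block are illustrative assumptions.

    # Minimal sketch (not MUMPS code): compress a dense block B into a
    # low-rank product X @ Y by truncating its SVD at a threshold eps.
    import numpy as np

    def compress_block(B, eps):
        """Return (X, Y) with B ~ X @ Y, keeping singular values > eps."""
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        k = int(np.sum(s > eps))              # numerical rank at threshold eps
        return U[:, :k] * s[:k], Vt[:k, :]

    m = n = 256
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    # Smooth kernel between two well-separated index sets: numerically
    # low-rank, like the off-diagonal blocks arising from elliptic PDEs.
    B = 1.0 / (1.0 + np.abs(i - j + n))

    X, Y = compress_block(B, eps=1e-8)
    k = X.shape[1]
    err = np.linalg.norm(B - X @ Y) / np.linalg.norm(B)
    print(f"rank {k}, storage ratio {k * (m + n) / (m * n):.3f}, rel. error {err:.1e}")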

    Shared memory parallelism and low-rank approximation techniques applied to direct solvers in FEM simulation

    In this paper, the performance of a parallel sparse direct solver on a shared-memory multicore system is presented. Large test matrices arising from finite element simulations of industrial induction heating applications are used to evaluate the performance improvements due to low-rank representations and multicore parallelization.

    Recent advances in sparse direct solvers

    Direct methods for the solution of sparse systems of linear equations of the form Ax = b are used in a wide range of numerical simulation applications. Such methods are based on the decomposition of the matrix into a product of triangular factors (e.g., A = LU), followed by triangular solves. They are known for their numerical accuracy and robustness, but are also characterized by high memory consumption and a large amount of computation. Here we survey some research directions that are being investigated by the sparse direct solver community to alleviate these issues: memory-aware scheduling techniques, low-rank approximations, and distributed/shared-memory hybrid programming.
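
    As a minimal illustration of the factorize-then-solve workflow described above (a sketch using SciPy's sparse LU interface, with a 2D Poisson matrix as an arbitrary test problem, not tied to any particular solver mentioned here):

    # Sparse direct solve of A x = b: factorize A once into triangular
    # factors, then reuse the factorization for the triangular solves.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 50                                        # grid is n x n
    I = sp.identity(n)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()   # 5-point Laplacian

    lu = splu(A)                                  # factorization phase
    b = np.ones(A.shape[0])
    x = lu.solve(b)                               # forward/backward substitution
    print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))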

    Cost-effective alternative to aminocaproic acid syrup


    Alternative solution for Opticrom


    Stability of mitomycin for ophthalmic use


    Improving Multifrontal methods by means of Low-Rank Approximations techniques

    Matrices coming from elliptic Partial Differential Equations (PDEs) have been shown to have a low-rank property: well-defined off-diagonal blocks of their Schur complements can be approximated by low-rank products. Given a suitable ordering of the matrix which gives the blocks a geometrical meaning, such approximations can be computed using an SVD or a rank-revealing QR factorization. The resulting representation offers a substantial reduction of the memory requirement and gives efficient ways to perform many of the basic dense linear algebra operations. Several strategies have been proposed to exploit this property. We propose a low-rank format called Block Low-Rank (BLR) and explain how it can be used to reduce the memory footprint and the complexity of direct solvers for sparse matrices based on the multifrontal method. We present experimental results showing that the BLR format delivers gains comparable to those obtained with hierarchical formats such as Hierarchical (H) matrices and Hierarchically Semi-Separable (HSS) matrices, but provides much greater flexibility and ease of use, which are essential in the context of a general-purpose, algebraic solver.
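
    The following Python sketch illustrates the BLR idea on a toy dense matrix (it is not the authors' solver code): the matrix is partitioned into blocks, diagonal blocks are kept dense, and off-diagonal blocks are compressed with a truncated column-pivoted (rank-revealing) QR. The block size, threshold and test kernel are assumptions chosen for illustration.

    # Toy BLR compression: flat blocking, dense diagonal blocks,
    # rank-revealing QR compression of the off-diagonal blocks.
    import numpy as np
    from scipy.linalg import qr

    def rrqr_compress(B, eps):
        """Approximate B by a low-rank product using column-pivoted QR."""
        Q, R, piv = qr(B, mode='economic', pivoting=True)
        d = np.abs(np.diag(R))
        k = int(np.sum(d > eps * d[0])) if d[0] > 0 else 0
        inv = np.empty_like(piv)
        inv[piv] = np.arange(len(piv))            # undo the column pivoting
        return Q[:, :k], R[:k, :][:, inv]         # B ~ Q_k @ R_k

    n, b, eps = 512, 64, 1e-8
    x = np.linspace(0.0, 1.0, n)
    A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # smooth test kernel

    dense_entries, blr_entries = 0, 0
    for i in range(0, n, b):
        for j in range(0, n, b):
            blk = A[i:i+b, j:j+b]
            dense_entries += blk.size
            if i == j:
                blr_entries += blk.size           # diagonal blocks stay full
            else:
                Qk, Rk = rrqr_compress(blk, eps)
                blr_entries += Qk.size + Rk.size
    print(f"BLR storage / dense storage = {blr_entries / dense_entries:.2f}")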

    New Methods to Speed-up the Boundary Element Method in LS-DYNA

    LS-DYNA is a general-purpose explicit and implicit finite element program used to analyse the nonlinear dynamic response of three-dimensional solids and fluids. It is developed by Livermore Software Technology Corporation (LSTC). An electromagnetism (EM) module has been added to LS-DYNA for coupled mechanical/thermal/electromagnetic simulations, which have been extensively performed and benchmarked against experimental results for Magnetic Metal Forming (MMF) and Welding (MMW) applications. These simulations use a Finite Element Method (FEM) for the conductors coupled with a Boundary Element Method (BEM) for the surrounding air. The BEM has the advantage that it does not require an air mesh, which can be difficult to build when the gaps between conductors are very small, and difficult to adapt when the conductors are moving, with contact possibly arising. Moreover, the BEM does not require the introduction of infinite boundary conditions, which are somewhat artificial and can create discrepancies. On the other hand, it generates dense matrices which take time to assemble and solve, and require a lot of memory. In LS-DYNA, the memory issue is handled by using low-rank approximations of the off-diagonal sub-blocks of the BEM matrices, creating a so-called Block Low-Rank (BLR) matrix structure. The issue of the assembly and solve time is now being studied, and we present the so-called “Multi-Center” (MC) method, in which the cost of computing the far-field submatrices is greatly reduced and the solve time is somewhat reduced. We will present the method as well as some first results.
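
    As a toy numerical check of the far-field low-rank property mentioned above (the actual LS-DYNA EM kernels and the Multi-Center method are not reproduced here), the following Python sketch builds the 1/r interaction block between two well-separated point clusters and reports its numerical rank.

    # The interaction block of a 1/r kernel between two well-separated
    # clusters has rapidly decaying singular values, i.e. it is low-rank.
    import numpy as np

    rng = np.random.default_rng(1)
    src = rng.random((400, 3))                              # cluster near the origin
    tgt = rng.random((400, 3)) + np.array([5.0, 0.0, 0.0])  # well-separated cluster

    d = np.linalg.norm(tgt[:, None, :] - src[None, :, :], axis=-1)
    K = 1.0 / d                                             # far-field block

    s = np.linalg.svd(K, compute_uv=False)
    rank = int(np.sum(s > 1e-8 * s[0]))
    print(f"400 x 400 far-field block, numerical rank at 1e-8: {rank}")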

    3D frequency-domain seismic modeling with a Block Low-Rank algebraic multifrontal direct solver

    Three-dimensional frequency-domain full-waveform inversion (FWI) of fixed-spread data can be performed efficiently in the viscoacoustic approximation when seismic modeling is based on a sparse direct solver. We present a new algebraic Block Low-Rank (BLR) multifrontal solver which provides an approximate solution of the time-harmonic wave equation with a reduced operation count, memory demand and volume of communication relative to the full-rank solver. We show preliminary simulations in the 3D SEG/EAGE overthrust model that give insight into the memory and time complexities of the low-rank solver for frequencies of interest in FWI applications.
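
    As a rough illustration of why a sparse direct solver fits frequency-domain modeling (this is not the discretization or solver used in the paper), the Python sketch below assembles a small 2D constant-velocity Helmholtz operator with a 5-point stencil and Dirichlet boundaries, factorizes it once, and reuses the factors for several point sources; all parameters are illustrative.

    # Toy 2D frequency-domain modeling: discretize (-Laplacian - k^2),
    # factorize once, reuse the factors for several right-hand sides.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n, h = 100, 10.0                       # n x n grid, 10 m spacing
    freq, vel = 5.0, 1500.0                # 5 Hz, constant velocity (m/s)
    k = 2.0 * np.pi * freq / vel

    I = sp.identity(n)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
    L = sp.kron(I, T) + sp.kron(T, I)                 # discrete -Laplacian
    A = (L - k**2 * sp.identity(n * n)).tocsc().astype(np.complex128)

    lu = splu(A)                                      # one factorization...
    for src in (n * n // 2, n * n // 2 + n // 4):     # ...reused per source
        rhs = np.zeros(n * n, dtype=np.complex128)
        rhs[src] = 1.0 / h**2
        u = lu.solve(rhs)
        print(f"source {src}: max |u| = {np.abs(u).max():.3e}")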