
    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Study of the processor and memory power consumption of coupled sparse/dense solvers

    In the aeronautical industry, aeroacoustics is used to model the propagation of acoustic waves in the air flows enveloping an aircraft in flight. This for instance allows one to simulate the noise produced at ground level by an aircraft during the takeoff and landing phases, in order to validate that regulatory environmental standards are met. Unlike most other complex physics simulations, the method resorts to solving coupled sparse/dense systems. In a previous study, we proposed two classes of algorithms, based on compression techniques, for solving such large systems on a relatively small workstation (one or a few multicore nodes). The objective of this study is to assess whether the positive impact of the proposed algorithms on time to solution and memory usage translates to energy consumption as well. Because the problem couples dense and sparse matrices, and the underlying solution methods include dense, sparse direct and compression steps, this yields an interesting processor and memory power profile which we aim to analyse in detail.

    Hybrid direct and iterative solvers for sparse indefinite and overdetermined systems on future exascale architectures

    In scientific computing, the numerical simulation of systems is crucial to gain a deep understanding of the physics underlying real-world applications. The models used in simulation are often based on partial differential equations (PDEs) which, after fine discretisation, give rise to huge sparse systems of equations to solve. Historically, two classes of methods were designed for the solution of such systems: direct methods, robust but expensive in both computation and memory; and iterative methods, cheap but with very problem-dependent convergence properties. In the context of high performance computing, hybrid direct-iterative methods were then introduced in order to combine the advantages of both, while making efficient use of the increasingly large and fast supercomputing facilities. In this thesis, we focus on the latter type of methods along two complementary research axes.

    In the first chapter, we detail the mechanisms behind the efficient implementation of multigrid methods. These use several levels of increasingly refined grids to solve linear systems with a combination of fine-grid smoothing and coarse-grid corrections. The efficient parallel implementation of such a scheme is a difficult task. We focus on the solution of the problem on the coarse grid, whose scalability is often observed to be limiting at very large scales. We propose an agglomeration technique that gathers the data of the coarse-grid problem on a subset of the computing resources in order to minimise the execution time of a direct solver. Combined with a relaxation of the solution accuracy, we demonstrate an increased overall scalability of the multigrid scheme when using our approach compared to classical iterative methods on numerically difficult problems. At extreme scale, this study is carried out in the HHG framework (Hierarchical Hybrid Grids) for the solution of a Stokes problem with jumping coefficients, inspired by Earth's mantle convection simulation.
    The direct solver used on the coarse grid is MUMPS, combined with block low-rank approximation and single precision arithmetic.

    In the following chapters, we study hybrid methods derived from the classical row-projection method block Cimmino, interpreted as domain decomposition methods. These methods are based on the partitioning of the matrix into blocks of rows. Due to its known slow convergence, the original iterative scheme is accelerated with a stabilised block version of the conjugate gradient algorithm. While an optimal choice of block size improves the efficiency of this approach, the convergence remains problem dependent. An alternative solution is then introduced which enforces convergence in one iteration by embedding the linear system into a carefully augmented space. These two approaches are extended in order to compute the minimum-norm solution of indefinite systems and the solution of least-squares problems; the latter require a partitioning into blocks of columns. We show how to improve the numerical properties of the iterative and pseudo-direct methods with scaling, partitioning and better augmentation methods. Both methods are implemented in the parallel ABCD-Solver (Augmented Block Cimmino Distributed solver), whose parallelisation we improve through a combination of load balancing and communication-minimising techniques.

    Finally, for the solution of discretised PDE problems, we propose a new approach which augments the linear system using a coarse representation of the space. The size of the augmentation is controlled by the choice of a more or less refined mesh. We obtain an iterative method with fast linear convergence, demonstrated on Helmholtz and convection-diffusion problems. The central point of the approach is the iterative construction and solution of a Schur complement.
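    The basic (unaccelerated) block Cimmino iteration mentioned above can be sketched in a few lines. This is a hypothetical minimal sketch, not the ABCD-Solver implementation: the matrix partitions A into blocks of rows A_i and adds up the projections of the residual onto their row spaces, x <- x + omega * sum_i A_i^+ (b_i - A_i x).

```python
import numpy as np

def block_cimmino(A, b, row_blocks, omega, iters):
    """Unaccelerated block Cimmino row-projection iteration."""
    pinvs = [np.linalg.pinv(A[rows]) for rows in row_blocks]  # A_i^+
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Sum the projections of the residual onto each block's row space.
        x = x + omega * sum(Ai_pinv @ (b[rows] - A[rows] @ x)
                            for rows, Ai_pinv in zip(row_blocks, pinvs))
    return x

A = np.array([[4., 1., 0., 0.],
              [1., 3., 1., 0.],
              [0., 1., 3., 1.],
              [0., 0., 1., 4.]])
b = np.array([1., 2., 0., 1.])
# Two row blocks; omega = 1/p for p blocks is a safe relaxation weight,
# since the sum of the p orthogonal projections is then positive definite
# for a nonsingular A, giving an iteration matrix with spectral radius < 1.
x = block_cimmino(A, b, [slice(0, 2), slice(2, 4)], omega=0.5, iters=2000)
```

    The slow convergence visible here (thousands of cheap sweeps even for a tiny system) is exactly why the thesis accelerates the scheme with a stabilised block conjugate gradient, or bypasses iteration entirely via the augmented pseudo-direct variant.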

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications" (cHiPSet) project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction is raised to give a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    SPICA: revealing the hearts of galaxies and forming planetary systems: approach and US contributions

    How did the diversity of galaxies we see in the modern Universe come to be? When and where did stars within them forge the heavy elements that give rise to the complex chemistry of life? How do planetary systems, the Universe's home for life, emerge from interstellar material? Answering these questions requires techniques that penetrate dust to reveal the detailed contents and processes in obscured regions. The ESA-JAXA Space Infrared Telescope for Cosmology and Astrophysics (SPICA) mission is designed for this, with a focus on sensitive spectroscopy in the 12 to 230 micron range. SPICA offers massive sensitivity improvements with its 2.5-meter primary mirror actively cooled to below 8 K. SPICA is one of three candidates for ESA's Cosmic Vision M5 mission, and JAXA is committed to its portion of the collaboration. ESA will provide the silicon-carbide telescope, science instrument assembly, satellite integration and testing, and the spacecraft bus. JAXA will provide the passive and active cooling system.

    The Apertif Surveys: The First Six Months

    Apertif is a new phased-array feed for the Westerbork Synthesis Radio Telescope (WSRT), greatly increasing its field of view and turning it into a natural survey instrument. In July 2019, the Apertif legacy surveys commenced; these are a time-domain survey and a two-tiered imaging survey, with a shallow and medium-deep component. The time-domain survey searches for new (millisecond) pulsars and fast radio bursts (FRBs). The imaging surveys provide neutral hydrogen (HI), radio continuum and polarization data products. With a bandwidth of 300 MHz, Apertif can detect HI out to a redshift of 0.26. The key science goals to be accomplished by Apertif include localization of FRBs (including real-time public alerts), the role of environment and interaction on galaxy properties and gas removal, finding the smallest galaxies, connecting cold gas to AGN, understanding the faint radio population, and studying magnetic fields in galaxies. After a proprietary period, survey data products will be publicly available through the Apertif Long Term Archive (ALTA, https://alta.astron.nl). I will review the progress of the surveys and present the first results from the Apertif surveys, including a highlight of the currently available public data.
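    The quoted redshift limit of 0.26 follows directly from the 21-cm rest frequency and the lower edge of the observed band. A worked check, assuming a band reaching down to roughly 1130 MHz (300 MHz below an assumed 1430 MHz top edge; the exact Apertif band edges are not given in the abstract):

```python
# z = f_rest / f_obs - 1 for a redshifted spectral line.
f_rest = 1420.406   # MHz, 21-cm hyperfine line of neutral hydrogen (HI)
f_low = 1130.0      # MHz, assumed lower edge of the observed band
z_max = f_rest / f_low - 1
print(f"maximum HI redshift = {z_max:.2f}")   # about 0.26
```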