
    The Living Application: a Self-Organising System for Complex Grid Tasks

    We present the living application, a method to autonomously manage applications on the grid. During its execution on the grid, the living application makes choices about the resources to use in order to complete its tasks. These choices can be based on its internal state or on knowledge acquired autonomously from external sensors. By granting limited user capabilities to a living application, the application is able to port itself from one resource topology to another. The application performs these actions at run-time without depending on users or external workflow tools. We demonstrate this new concept in a special case of a living application: the living simulation. Today, many simulations require a wide range of numerical solvers and run most efficiently if specialized nodes are matched to the solvers. The idea of the living simulation is that it decides itself which grid machines to use based on the numerical solver currently in use. In this paper we apply the living simulation to modelling the collision between two galaxies in a test setup with two specialized computers. This simulation switches at run-time between a GPU-enabled computer in the Netherlands and a GRAPE-enabled machine in the United States, using an oct-tree N-body code whenever it runs in the Netherlands and a direct N-body solver in the United States.
    Comment: 26 pages, 3 figures, accepted by IJHPC
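    The solver-to-machine matching described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's actual implementation; the function and host names are invented for the example:

```python
def choose_resource(solver: str) -> str:
    """Map the active numerical solver to the best-matched machine.

    Assumed mapping, following the galaxy-collision test setup:
    tree code -> GPU machine (NL), direct N-body -> GRAPE machine (US).
    Host names are hypothetical placeholders.
    """
    mapping = {
        "oct-tree": "gpu-node.nl.example.org",
        "direct": "grape-node.us.example.org",
    }
    return mapping[solver]


def migrate_if_needed(current_host: str, solver: str) -> str:
    """Port the application to the matched resource at run-time.

    A real living application would checkpoint its state, transfer it,
    and restart remotely; this sketch only reports the decision.
    """
    target = choose_resource(solver)
    if target != current_host:
        print(f"migrating from {current_host} to {target}")
    return target
```

At each solver switch the application would call `migrate_if_needed` with its current host, so the placement decision is made by the application itself rather than by an external workflow tool.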

    Towards Distributed Petascale Computing

    In this chapter we argue that studying such multi-scale multi-science systems gives rise to inherently hybrid models containing many different algorithms best serviced by different types of computing environments (ranging from massively parallel computers, via large-scale special-purpose machines, to clusters of PCs) whose total integrated computing capacity can easily reach the PFlop/s scale. Such hybrid models, in combination with the by now inherently distributed nature of the data on which the models `feed', suggest a distributed computing model, where parts of the multi-scale multi-science model are executed on the most suitable computing environment, and/or where the computations are carried out close to the required data (i.e. bring the computations to the data instead of the other way around). We present an estimate of the compute requirements to simulate the Galaxy as a typical example of a multi-scale multi-physics application, requiring distributed Petaflop/s computational power.
    Comment: To appear in D. Bader (Ed.), Petascale Computing: Algorithms and Applications, Chapman & Hall / CRC Press, Taylor and Francis Group
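    A back-of-the-envelope calculation makes it plausible that a per-star model of the Galaxy lands in the Petaflop/s regime. The numbers below are our own illustrative order-of-magnitude assumptions, not the chapter's actual estimate:

```python
import math

# Assumed inputs (order-of-magnitude illustration only):
N = 1e11              # stars in the Galaxy
flops_per_pair = 30   # assumed cost of one pairwise force evaluation

# Direct summation costs O(N^2) force evaluations per timestep,
# which is hopeless at this N:
direct_flops_per_step = flops_per_pair * N**2

# A tree code reduces the per-step cost to O(N log N):
tree_flops_per_step = flops_per_pair * N * math.log2(N)

print(f"direct: {direct_flops_per_step:.1e} flops/step")
print(f"tree:   {tree_flops_per_step:.1e} flops/step")
```

Even the tree-code figure, multiplied by the many thousands of timesteps needed within a realistic wall-clock budget, pushes the sustained requirement toward the PFlop/s scale, consistent with the chapter's thesis.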

    Algorithmic comparisons of decaying, isothermal, supersonic turbulence

    Contradictory results have been reported in the literature with respect to the performance of the numerical techniques employed for the study of supersonic turbulence. We aim at characterising the performance of different particle-based and grid-based techniques for modelling decaying supersonic turbulence. Four different grid codes (ENZO, FLASH, TVD, ZEUS) and three different SPH codes (GADGET, PHANTOM, VINE) are compared. We additionally analysed two calculations, denoted PHANTOM A and PHANTOM B, using two different implementations of artificial viscosity. Our analysis indicates that grid codes tend to be less dissipative than SPH codes, though details of the techniques used can make large differences in both cases. For example, the Morris & Monaghan viscosity implementation for SPH results in less dissipation (PHANTOM B and VINE versus GADGET and PHANTOM A). For grid codes, using a smaller diffusion parameter leads to less dissipation, but results in a larger bottleneck effect (our ENZO versus FLASH runs). As a general result, we find that using a similar number of resolution elements N in each spatial direction means that all codes (both grid-based and particle-based) show encouraging similarity of all statistical quantities for isotropic supersonic turbulence on spatial scales k < N/32 (all scales resolved by more than 32 grid cells), while scales smaller than that are significantly affected by the specific implementation of the algorithm for solving the equations of hydrodynamics. At comparable numerical resolution, the SPH runs were on average about ten times more computationally intensive than the grid runs, although with variations of up to a factor of ten between the different SPH runs and between the different grid runs. (abridged)
    Comment: accepted by A&A, 22 pages, 14 figures
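    The resolution criterion quoted above (agreement on wavenumbers k < N/32, i.e. scales resolved by more than 32 cells) is easy to turn into a rule of thumb. A small helper, written by us for illustration rather than taken from the paper:

```python
def max_trusted_wavenumber(n_per_dim: int, cells_per_scale: int = 32) -> float:
    """Largest wavenumber at which the compared codes agree:
    k < N / cells_per_scale for N resolution elements per dimension,
    following the k < N/32 criterion quoted in the comparison."""
    return n_per_dim / cells_per_scale


# Example: in a 512^3 run, turbulence statistics on k < 16 should be
# robust across codes; smaller scales depend on the hydro scheme.
print(max_trusted_wavenumber(512))  # -> 16.0
```

The practical reading: doubling the resolution per dimension doubles the range of wavenumbers on which grid and SPH codes can be expected to agree.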

    Multiphysics simulations: challenges and opportunities


    Large Scale Computing and Storage Requirements for High Energy Physics


    AREPO-RT: Radiation hydrodynamics on a moving mesh

    We introduce AREPO-RT, a novel radiation hydrodynamics (RHD) solver for the unstructured moving-mesh code AREPO. Our method solves the moment-based radiative transfer equations using the M1 closure relation. We achieve second-order convergence by using a slope-limited linear spatial extrapolation and a first-order time prediction step to obtain the values of the primitive variables on both sides of the cell interface. A Harten-Lax-van Leer flux function, suitably modified for moving meshes, is then used to solve the Riemann problem at the interface. The implementation is fully conservative and compatible with the individual timestepping scheme of AREPO. It incorporates atomic hydrogen (H) and helium (He) thermochemistry, which is used to couple the ultraviolet (UV) radiation field to the gas. Additionally, infrared radiation is coupled to the gas under the assumption of local thermodynamic equilibrium between the gas and the dust. We successfully apply our code to a large number of test problems, including applications such as the expansion of HII regions, radiation-pressure-driven outflows, and the levitation of optically thick layers of gas by trapped IR radiation. The new implementation is suitable for studying various important astrophysical phenomena, such as the effect of radiative feedback in driving galactic-scale outflows, radiation-driven dusty winds in high-redshift quasars, or simulating the reionisation history of the Universe in a self-consistent manner.
    Comment: v2, accepted for publication in MNRAS, changed to a Strang split scheme to achieve second order convergence
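    The M1 closure mentioned above replaces the full angular dependence of the radiation field with an Eddington factor that interpolates between the optically thick and free-streaming limits. The sketch below uses the standard Levermore form of this closure; AREPO-RT's actual implementation details may differ:

```python
import math


def eddington_factor(f: float) -> float:
    """Standard (Levermore) M1 Eddington factor.

    f = |F| / (c E) is the reduced flux, with 0 <= f <= 1.
    chi = 1/3 in the diffusion limit (f = 0) and chi = 1 in the
    free-streaming limit (f = 1); the pressure tensor is then built
    from chi and the unit flux direction.
    """
    return (3.0 + 4.0 * f * f) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * f * f))


# The two limiting cases:
print(eddington_factor(0.0))  # -> 1/3 (isotropic, optically thick)
print(eddington_factor(1.0))  # -> 1   (free streaming)
```

Because the closure is purely local (it needs only E and F in each cell), it fits naturally into a finite-volume update with an HLL-type flux, which is what makes moment methods attractive for moving-mesh codes.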

    A Gas Kinetic Scheme Approach to the Modelling and Simulation of Fire on Massively Parallel Hardware

    This work presents a simulation approach based on a Gas Kinetic Scheme (GKS) for the simulation of fire, implemented on massively parallel hardware in the form of Graphics Processing Units (GPUs) in the framework of General-Purpose computing on Graphics Processing Units (GPGPU). Gas kinetic schemes belong to the class of kinetic methods because their governing equation is the mesoscopic Boltzmann equation rather than the macroscopic Navier-Stokes equations. Formally, kinetic methods have the advantage of a linear advection term, which simplifies discretization. GKS inherently contains the full energy equation, which is required for compressible flows. GKS provides a flux formulation derived from kinetic theory and is usually implemented as a finite volume method on cell-centered grids. In this work, we consider an implementation on nested Cartesian grids. To that end, a coupling algorithm for uniform grids with varying resolution was developed and is presented in this work. The limitation to locally uniform Cartesian grids allows an efficient implementation on GPUs, which belong to the class of many-core processors, i.e. massively parallel hardware. Multi-GPU support is also implemented, and efficiency is enhanced by communication hiding. The fluid solver is validated for several two- and three-dimensional test cases, including natural convection, turbulent natural convection, and turbulent decay. It is subsequently applied to a study of boundary-layer stability of natural convection in a cavity with differentially heated walls and large temperature differences. The fluid solver is further augmented by a simple combustion model for non-premixed flames. It is validated by comparison to experimental data for two different fire plumes. The results are further compared to the industry standard for fire simulation, the Fire Dynamics Simulator (FDS).
While the accuracy of GKS appears slightly reduced compared to FDS, a substantial speedup in terms of time to solution is found. Finally, GKS is applied to the simulation of a compartment fire. This work shows that GKS has large potential for efficient high-performance fire simulations.
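    The GKS flux formulation plugs into an otherwise conventional finite-volume update. The skeleton below illustrates that update on a 1D periodic grid with a trivial upwind flux standing in for the kinetic flux; it is our illustration, not the thesis's GPU implementation:

```python
def fv_step(u, flux, dt, dx):
    """One explicit finite-volume step on a 1D periodic grid:

        u_i <- u_i - dt/dx * (F_{i+1/2} - F_{i-1/2})

    `flux(uL, uR)` returns the numerical flux at an interface; in a
    GKS this flux would be derived from the kinetic (BGK) theory.
    """
    n = len(u)
    # F[i] is the flux through the interface between cells i and i+1:
    F = [flux(u[i], u[(i + 1) % n]) for i in range(n)]
    # F[i-1] with i=0 wraps to the last interface (periodic boundary):
    return [u[i] - dt / dx * (F[i] - F[i - 1]) for i in range(n)]


# Example: simple upwind flux for linear advection with speed a > 0.
a = 1.0
upwind = lambda uL, uR: a * uL

u = [0.0, 1.0, 0.0, 0.0]
u = fv_step(u, upwind, dt=0.5, dx=1.0)
print(u)  # -> [0.0, 0.5, 0.5, 0.0]; total mass is conserved
```

Because the update touches only nearest-neighbour interfaces on a locally uniform grid, each cell can be processed by one GPU thread independently, which is precisely what makes this class of schemes attractive for many-core hardware.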

    Turbulent Transport in Global Models of Magnetized Accretion Disks

    The modern theory of accretion disks is dominated by the discovery of the magnetorotational instability (MRI). While hydrodynamic disks satisfy Rayleigh's criterion and there exists no known unambiguous route to turbulence in such disks, a weakly magnetized disk of plasma is subject to the MRI and will become turbulent. This MRI-driven magnetohydrodynamic turbulence generates a strong anisotropic correlation between the radial and azimuthal magnetic fields which drives angular momentum outwards. Accretion disks perform two vital functions in various astrophysical systems: as an intermediate step in the gravitational collapse of a rotating gas, where the disk transfers angular momentum outwards and allows material to fall inwards; and as a power source, where the gravitational potential energy of infalling matter can be converted to luminosity. Accretion disks are important in astrophysical processes at all scales in the universe. Studying accretion from first principles is difficult, as analytic treatments of turbulent systems have proven quite limited. As such, computer simulations are at the forefront of studying systems this far into the non-linear regime. While computational work is necessary to study accretion disks, it is no panacea. Fully three-dimensional simulations of turbulent astrophysical systems require an enormous amount of computational power that is inaccessible even to sophisticated modern supercomputers. These limitations have necessitated the use of local models, in which a small spatial region of the full disk is simulated, and constrain numerical resolution to what is feasible. These compromises, while necessary, have the potential to introduce numerical artifacts in the resulting simulations. Understanding how to disentangle these artifacts from genuine physical phenomena and how to minimize their effect is vital to constructing simulations that can make reliable astrophysical predictions, and is the primary concern of the work presented here.
The use of local models is predicated on the assumption that these models accurately capture the dynamics of a small patch of a global astrophysical disk. This assumption is tested in detail through the study of local regions of global simulations. To reach resolutions comparable to those used in local simulations, an orbital advection algorithm, a semi-Lagrangian reformulation of the fluid equations, is used, which allows an order-of-magnitude increase in computational efficiency. It is found that the turbulence in global simulations agrees at intermediate and small scales with local models, and that the presence of magnetic flux stimulates angular momentum transport in global simulations in a manner similar to that observed for local ones. However, the importance of this flux-stress connection is shown to cast doubt on the validity of local models due to their inability to accurately capture the temporal evolution of the magnetic flux seen in global simulations. The use of orbital advection makes it possible to probe previously inaccessible resolutions in global simulations and is the basis for a rigorous resolution study presented here. Included are the results of a study utilizing a series of global simulations of varying resolutions and initial magnetic field topologies, in which a collection of proposed metrics of numerical convergence are explored. The resolution constraints necessary to establish numerical convergence of astrophysically important measurements are presented, along with evidence suggesting that proper azimuthal resolution, while computationally demanding, is vital to achieving convergence. The majority of the proposed metrics are found to be useful diagnostics of MRI-driven turbulence; however, they suffer as metrics of convergence due to their dependence on the initial magnetic field topology.
In contrast, the magnetic tilt angle, a measure of the planar anisotropy of the magnetic field, is found to be a powerful tool for diagnosing convergence independent of the initial magnetic field topology.
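    The two diagnostics discussed above can be sketched from simulation data as follows. The tilt-angle convention used here is one common choice in the MRI literature; the thesis summarised above may normalise it differently, so treat this as an assumption:

```python
import math


def maxwell_stress(Br, Bphi):
    """Volume-averaged r-phi Maxwell stress, -<B_r B_phi>
    (Gaussian units, dropping the 1/4pi prefactor). Positive values
    transport angular momentum outwards."""
    return -sum(br * bp for br, bp in zip(Br, Bphi)) / len(Br)


def tilt_angle_deg(Br, Bphi, B2):
    """Magnetic tilt angle in degrees, using one common convention
    (an assumption here, not necessarily the thesis's definition):

        theta = 0.5 * arcsin( <-2 B_r B_phi> / <B^2> )

    It measures the in-plane anisotropy of the turbulent field.
    """
    num = sum(-2.0 * br * bp for br, bp in zip(Br, Bphi)) / len(Br)
    den = sum(B2) / len(B2)
    return 0.5 * math.degrees(math.asin(num / den))
```

Because the tilt angle is a ratio of averaged field correlations, it factors out the overall field strength, which is consistent with the observation above that it diagnoses convergence independently of the initial field topology.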

    Supercomputing in Aerospace

    Topics addressed include: numerical aerodynamic simulation; computational mechanics; supercomputers; aerospace propulsion systems; computational modeling in ballistics; turbulence modeling; computational chemistry; computational fluid dynamics; and computational astrophysics.