
    Inertial Coupling Method for particles in an incompressible fluctuating fluid

    We develop an inertial coupling method for modeling the dynamics of point-like 'blob' particles immersed in an incompressible fluid, generalizing previous work for compressible fluids. The coupling consistently includes excess (positive or negative) inertia of the particles relative to the displaced fluid, and accounts for thermal fluctuations in the fluid momentum equation. The coupling between the fluid and the blob is based on a no-slip constraint equating the particle velocity with the local average of the fluid velocity, and conserves momentum and energy. We demonstrate that the formulation obeys a fluctuation-dissipation balance, owing to the non-dissipative nature of the no-slip coupling. We develop a spatio-temporal discretization that preserves, as well as possible, these properties of the continuum formulation. In the spatial discretization, the local averaging and spreading operations are accomplished using compact kernels commonly used in immersed boundary methods. We find that the special properties of these kernels make the discrete blob a particle with surprisingly physically consistent volume, mass, and hydrodynamic properties. We develop a second-order semi-implicit temporal integrator that maintains discrete fluctuation-dissipation balance and is not limited in stability by viscosity. Furthermore, the temporal scheme requires only constant-coefficient Poisson and Helmholtz linear solvers, enabling a very efficient and simple FFT-based implementation on GPUs. We numerically investigate the performance of the method on several standard test problems...
    Comment: Contains a number of corrections and an additional Figure 7 (and associated discussion) relative to the published version.
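
    As a rough illustration of the local averaging and spreading operations the abstract mentions, the following is a minimal sketch using Peskin's 4-point immersed boundary kernel on a 1D periodic grid. The function names, the 1D setting, and the grid handling are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (not the paper's code): local averaging (J) and spreading (S)
# operators built from Peskin's 4-point immersed boundary kernel, on a 1D
# periodic grid with spacing h. All names here are illustrative.
import numpy as np

def peskin4(r):
    """Peskin's 4-point kernel phi(r); support |r| < 2, weights sum to 1."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def average_J(u, x, h):
    """Kernel-weighted local average of the grid velocity u at position x."""
    n, i0 = u.size, int(np.floor(x / h))
    return sum(peskin4((x - i * h) / h) * u[i % n]   # periodic wrap
               for i in range(i0 - 2, i0 + 3))       # kernel support

def spread_S(f, x, h, n):
    """Spread a point force f onto the grid; discrete adjoint of average_J."""
    F = np.zeros(n)
    i0 = int(np.floor(x / h))
    for i in range(i0 - 2, i0 + 3):
        F[i % n] += peskin4((x - i * h) / h) * f / h  # 1/h: force density
    return F
```

    Under a no-slip coupling of the kind the abstract describes, the particle velocity would be set to average_J(u, x, h) and the reaction force on the fluid applied via spread_S; using adjoint averaging and spreading operators is what keeps such a coupling non-dissipative and momentum-conserving.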

    Hybrid smoothed particle hydrodynamics

    We present a new algorithm for enforcing incompressibility in Smoothed Particle Hydrodynamics (SPH) by preserving uniform density across the domain. We propose a hybrid method that uses a Poisson solve on a coarse grid to enforce a divergence-free velocity field, followed by a local density correction of the particles. This avoids typical grid artifacts and maintains the Lagrangian nature of SPH by directly transferring pressures onto particles. Our method can be easily integrated with existing SPH techniques, such as the incompressible PCISPH method as well as weakly compressible SPH, by adding an additional force term. We show that this hybrid method accelerates convergence towards uniform density and permits a significantly larger time step than earlier approaches while producing similar results. We demonstrate our approach in a variety of scenarios with significant pressure gradients, such as splashing liquids.
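
    To make the coarse-grid step concrete, here is a hedged sketch of one way to project a grid velocity field onto its divergence-free part with FFTs, assuming particle velocities have already been rasterized onto a periodic 2D grid. The periodic, fully spectral discretization is an illustrative assumption, not necessarily the paper's scheme, and the subsequent density-correction step is omitted.

```python
# Hedged sketch: FFT-based projection of a periodic 2D velocity field (u, v)
# onto its divergence-free part, standing in for the coarse-grid Poisson
# solve described in the abstract. Names and discretization are assumptions.
import numpy as np

def project_divergence_free(u, v, h):
    ny, nx = u.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=h)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=h)
    KX, KY = np.meshgrid(kx, ky)                 # KX varies along axis 1 (x)
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * KX * u_hat + 1j * KY * v_hat  # spectral divergence
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                               # mean pressure is arbitrary
    p_hat = div_hat / (-k2)                      # solve  lap p = div(u, v)
    p_hat[0, 0] = 0.0
    # Subtract grad p; the corrected field has zero spectral divergence.
    u = np.real(np.fft.ifft2(u_hat - 1j * KX * p_hat))
    v = np.real(np.fft.ifft2(v_hat - 1j * KY * p_hat))
    return u, v
```

    In the hybrid scheme the abstract outlines, the resulting grid pressures would then be transferred back onto the particles, with the local density correction removing any remaining particle-scale compression.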

    Radiation-Induced Error Criticality in Modern HPC Parallel Accelerators

    In this paper, we evaluate the criticality of radiation-induced errors on modern High-Performance Computing (HPC) accelerators (Intel Xeon Phi and NVIDIA K40) through a dedicated set of metrics. We show that, as far as imprecise computing is concerned, simple mismatch detection is not sufficient to evaluate and compare the radiation sensitivity of HPC devices and algorithms. Our analysis quantifies and qualifies radiation effects on applications' output, correlating the number of corrupted elements with their spatial locality. We also provide the mean relative error (dataset-wise) to evaluate radiation-induced error magnitude. We apply the selected metrics to experimental results obtained in various radiation test campaigns, for a total of more than 400 hours of beam time per device. The amount of data we gathered allows us to evaluate the error criticality of a representative set of algorithms from HPC suites. Additionally, based on the characteristics of the tested algorithms, we draw generic reliability conclusions for broader classes of codes. We show that arithmetic operations are less critical for the K40, while the Xeon Phi is more reliable when executing particle interactions solved through Finite Difference Methods. Finally, iterative stencil operations seem the most reliable on both architectures.
    This work was supported by the STIC-AmSud/CAPES scientific cooperation program under the EnergySFE research project grant 99999.007556/2015-02, the EU H2020 Programme, and MCTI/RNP-Brazil under the HPC4E Project, grant agreement n° 689772. Tested K40 boards were donated thanks to Steve Keckler, Timothy Tsai, and Siva Hari from NVIDIA.
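
    The metrics the abstract names can be pictured with a small sketch: given a fault-free "golden" output and an output observed under beam, count corrupted elements, compute the dataset-wise mean relative error, and estimate their spatial locality. The tolerance, the 2D layout, and the 4-neighbour locality measure are illustrative assumptions, not the paper's exact definitions.

```python
# Hedged sketch of output-comparison metrics for radiation tests: corrupted
# element count, dataset-wise mean relative error, and a simple spatial
# locality score. Thresholds and definitions are illustrative assumptions.
import numpy as np

def error_metrics(golden, observed, tol=0.0):
    """Compare a beam-run 2D output array against a fault-free golden copy."""
    diff = np.abs(observed - golden)
    corrupted = diff > tol
    n_corrupted = int(corrupted.sum())
    # Mean relative error over corrupted elements (guard zero references).
    denom = np.where(golden == 0, 1.0, np.abs(golden))
    mre = float((diff / denom)[corrupted].mean()) if n_corrupted else 0.0
    # Locality: fraction of corrupted elements with a corrupted 4-neighbour,
    # separating isolated single-element flips from clustered corruption.
    neighbours = np.zeros_like(corrupted)
    for axis in (0, 1):
        for shift in (-1, 1):
            neighbours |= np.roll(corrupted, shift, axis=axis)
    locality = float((corrupted & neighbours).sum()) / max(n_corrupted, 1)
    return n_corrupted, mre, locality
```

    A bare mismatch count would treat an isolated flipped element and a corrupted block of a matrix identically; metrics along these lines are what permit the cross-device, cross-algorithm criticality comparison described above.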

    Future Simulations of Tidal Disruption Events

    Tidal disruption events involve numerous physical processes (fluid dynamics, magnetohydrodynamics, radiation transport, self-gravity, general relativistic dynamics) coupled in highly nonlinear ways and, because TDEs are by definition transients, frequently in non-equilibrium states. For these reasons, numerical solution of the relevant equations can be an essential tool for studying these events. In this chapter, we present a summary of the key problems of the field for which simulations offer the greatest promise and identify the capabilities required to make progress on them. We then discuss what has been done, and what cannot be done, with existing numerical methods. We close with an overview of what methods now under development may do to expand our ability to understand these events.
    Comment: A chapter in the ISSI review book "The Tidal Disruption of Stars by Massive Black Holes", to be published in Space Science Reviews.

    Investigating applications portability with the Uintah DAG-based runtime system on PetaScale supercomputers

    Current trends in high-performance computing pose formidable challenges for application code: multicore nodes, possibly with accelerators and/or co-processors, and reduced memory, all while scalability must still be attained. Software frameworks that execute machine-independent application code using a runtime system that shields users from architectural complexities offer a possible solution. The Uintah framework, for example, solves a broad class of large-scale problems on structured adaptive grids using fluid-flow solvers coupled with particle-based solids methods. Uintah executes directed acyclic graphs (DAGs) of computational tasks with a scalable, asynchronous, and dynamic runtime system for the CPU cores and/or accelerators/co-processors on a node. Uintah's clear separation between application and runtime code has led to scalability increases of 1000x without significant changes to application code. This methodology is tested on three leading Top500 machines, OLCF Titan, TACC Stampede, and ALCF Mira, using three diverse and challenging application problems. This investigation of scalability with regard to the performance of the different processors and interconnects leads to the overall conclusion that the adaptive DAG-based approach provides a very powerful abstraction for solving challenging multi-scale, multi-physics engineering problems on some of the largest and most powerful computers available today.
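
    The core abstraction the abstract describes, executing a DAG of tasks as their dependencies are satisfied rather than in a fixed order, can be illustrated with a toy scheduler. This is a hedged sketch, not Uintah's API; task granularity, data warehouses, and MPI communication are all omitted, and every name is illustrative.

```python
# Toy DAG executor (not Uintah's runtime): run each task as soon as all of
# its prerequisites have completed, using a shared thread pool.
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def run_dag(tasks, deps, workers=4):
    """tasks: name -> callable(); deps: name -> iterable of prerequisites."""
    remaining = {name: set(deps.get(name, ())) for name in tasks}
    done, futures = set(), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(tasks):
            # Launch every task whose prerequisites have all completed.
            for name, fn in tasks.items():
                if name not in done and name not in futures and not remaining[name]:
                    futures[name] = pool.submit(fn)
            if not futures:
                raise ValueError("dependency cycle: no task is runnable")
            finished, _ = wait(futures.values(), return_when=FIRST_COMPLETED)
            for name in [n for n, f in futures.items() if f in finished]:
                futures.pop(name).result()        # re-raise task exceptions
                done.add(name)
                for unmet in remaining.values():  # unblock dependents
                    unmet.discard(name)

# Example: run_dag({"a": f, "b": g, "c": h}, {"b": {"a"}, "c": {"a", "b"}})
```

    The separation the paper emphasizes is visible even in this toy: the task definitions carry no knowledge of how, where, or in what order they run, which is what lets the runtime be retargeted across machines without touching application code.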