729 research outputs found

    Real-time 3D rendering of water using CUDA

    This thesis addresses the real-time simulation of 3D water on both the CPU and the GPU. The stable fluids method is extended to 3D and implemented on both platforms; the GPU version uses the NVIDIA Compute Unified Device Architecture (CUDA) API. Because the stable fluids method requires an iterative sparse linear system solver, three solvers were implemented on both CPU and GPU: Jacobi, Gauss-Seidel, and Conjugate Gradient. The water (or its velocity field), the moving obstacles, the static obstacles, and the surrounding world are rendered using Vertex Buffer Objects (VBOs). The CPU-based version uses standard OpenGL VBOs, while the GPU-based version combines OpenGL-CUDA interoperability VBOs with standard OpenGL VBOs.
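    Of the three solvers, Jacobi is the simplest to parallelize, since every unknown is updated independently from the previous iterate. As a rough illustration (plain CPU-side Python, not the thesis's CUDA implementation), here is Jacobi iteration applied to a small 1D Poisson system of the kind the stable fluids projection step produces:

```python
def jacobi_poisson_1d(b, iterations=2000):
    """Solve -x[i-1] + 2*x[i] - x[i+1] = b[i] (1D Poisson, zero Dirichlet
    ends) with Jacobi iteration: every entry is updated from the previous
    iterate only, so all updates could run in parallel."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i < n - 1 else 0.0
            x_new[i] = (b[i] + left + right) / 2.0  # diagonal entry is 2
        x = x_new
    return x
```

    On a GPU, each `i` would map to one thread; Gauss-Seidel converges faster per sweep but, unlike Jacobi, has sequential dependencies between updates.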

    Particle Modeling of Fuel Plate Melting during Coolant Flow Blockage in HFIR

    Cooling channel inlet flow blockage has damaged fuel in plate-fueled reactors and contributes significantly to the probability of fuel damage based on Probabilistic Risk Assessment. A Smoothed Particle Hydrodynamics (SPH) model of fuel melt from inlet flow blockage is developed for the High Flux Isotope Reactor (HFIR). The model is coded for high-throughput graphics processing unit (GPU) calculations. This modeling approach allows movement toward quantifying the uncertainty in fuel coolant flow blockage consequence assessment. The SPH approach is convenient for following the movement of fuel and coolant during melt progression and provides a tool for capturing the interactions of fuel melting into the coolant. The development of this new model is presented, and its implementation for GPU simulation is described. The model is compared against analytical solutions, and a scaled fuel melt progression is simulated for different conditions, showing the sensitivity of the melting fuel to conditions in the coolant channel.
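    For context, the core of any SPH scheme is a smoothing kernel used to estimate field quantities from neighbouring particles. The sketch below shows a standard 1D cubic spline kernel and the resulting density summation; it is a generic textbook formulation, not the HFIR model's actual code:

```python
import math

def w_cubic_spline_1d(r, h):
    """Standard 1D cubic spline SPH kernel with normalization 2/(3h),
    compactly supported on |r| < 2h."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density(positions, masses, h):
    """SPH density estimate: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    return [sum(m_j * w_cubic_spline_1d(x_i - x_j, h)
                for x_j, m_j in zip(positions, masses))
            for x_i in positions]
```

    With particles spaced `h` apart and mass `rho0 * h` each, the interior density estimate recovers `rho0`, which is the usual sanity check for an SPH implementation.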

    Fast, Scalable, and Interactive Software for Landau-de Gennes Numerical Modeling of Nematic Topological Defects

    Numerical modeling of nematic liquid crystals using the tensorial Landau-de Gennes (LdG) theory provides detailed insights into the structure and energetics of the enormous variety of possible topological defect configurations that may arise when the liquid crystal is in contact with colloidal inclusions or structured boundaries. However, these methods can be computationally expensive, making it challenging to predict (meta)stable configurations involving several colloidal particles, and they are often restricted to system sizes well below the experimental scale. Here we present an open-source software package that exploits the embarrassingly parallel structure of the lattice discretization of the LdG approach. Our implementation, combining CUDA/C++ and OpenMPI, allows users to accelerate simulations using both CPU and GPU resources in either single- or multi-core configurations. We make use of an efficient minimization algorithm, the Fast Inertial Relaxation Engine (FIRE) method, that is well suited to large-scale parallelization, requiring little additional memory or computational cost while offering performance competitive with other commonly used methods. In multi-core operation we are able to scale simulations up to supra-micron length scales of experimental relevance, and in single-core operation the simulation package includes a user-friendly GUI environment for rapid prototyping of interfacial features and the multifarious defect states they can promote. To demonstrate this software package, we examine in detail the competition between curvilinear disclinations and point-like hedgehog defects as size scale, material properties, and geometric features are varied. We also study the effects of an interface patterned with an array of topological point defects.
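    For readers unfamiliar with FIRE, the sketch below shows the algorithm's core loop (power-based velocity mixing with an adaptive timestep) applied to a toy quadratic objective in plain Python. The parameter values are common defaults from the literature; none of this is the package's CUDA/MPI implementation:

```python
import math

def fire_minimize(x0, grad, dt=0.05, dt_max=0.5, n_steps=2000):
    """Minimal FIRE (Fast Inertial Relaxation Engine) sketch: damped MD
    where velocity is mixed toward the force direction while moving
    downhill, and zeroed (with a smaller timestep) on uphill steps."""
    alpha, alpha0 = 0.1, 0.1
    f_inc, f_dec, f_alpha, n_min = 1.1, 0.5, 0.99, 5
    x = list(x0)
    v = [0.0] * len(x)
    steps_since_reset = 0
    for _ in range(n_steps):
        f = [-g for g in grad(x)]                        # force = -gradient
        power = sum(fi * vi for fi, vi in zip(f, v))
        if power > 0.0:                                  # moving downhill
            steps_since_reset += 1
            vn = math.sqrt(sum(vi * vi for vi in v))
            fn = math.sqrt(sum(fi * fi for fi in f)) or 1.0
            v = [(1 - alpha) * vi + alpha * vn * fi / fn
                 for vi, fi in zip(v, f)]                # steer v toward f
            if steps_since_reset > n_min:
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:                                            # uphill: restart
            v = [0.0] * len(x)
            dt *= f_dec
            alpha = alpha0
            steps_since_reset = 0
        v = [vi + dt * fi for vi, fi in zip(v, f)]       # semi-implicit Euler
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x
```

    The appeal for lattice LdG models is that the force evaluation and the vector updates are all local, so the loop parallelizes with little extra memory.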

    Efficient algorithms for the realistic simulation of fluids

    Nowadays there is great demand for realistic simulations in computer graphics. Physically-based animations are commonly used, and fluid simulation is one of the more complex problems in this field, even more so if real-time applications are the goal. Videogames, in particular, resort to techniques that represent fluids by simulating the consequence rather than the cause, using procedural or parametric methods and often disregarding the physical solution. This need motivates the present thesis: the interactive simulation of free-surface flows, usually liquids, which are the feature of interest in most common applications. Given the complexity of fluid simulation, achieving real-time framerates requires the high parallelism provided by current consumer-level GPUs. The simulation algorithm, the Lattice Boltzmann Method (LBM), was chosen accordingly: it is efficient and, because its operations are local, maps directly to the hardware architecture. We have created two free-surface simulations on the GPU: one fully in 3D, and another restricted to the upper surface of a large bulk of fluid, limiting the simulation domain to 2D. We have extended the latter to track dry regions and coupled it with obstacles in a geometry-independent fashion. As it is restricted to 2D, the simulation cannot capture vertical separation of the fluid; to account for this, we couple the surface simulation to a generic particle system triggered by breaking-wave conditions. The two simulations are fully independent, and only the coupling binds the LBM with the chosen particle system. Furthermore, both systems are visualized realistically at interactive framerates: raycasting techniques provide the expected light-transport effects such as refraction, reflection, and caustics, and further techniques add overall detail, such as small-scale ripples and surface foam.
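    The locality the thesis exploits is visible in the BGK collision step of the Lattice Boltzmann Method: each cell relaxes its distributions toward a local equilibrium using only its own data. The sketch below shows a single-cell D2Q9 BGK collision in plain Python, using the standard textbook lattice constants; it is illustrative only, not the thesis's GPU code:

```python
# D2Q9 lattice: standard weights and discrete velocity directions
W = [4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36]
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for density rho, velocity u."""
    usq = ux * ux + uy * uy
    feq = []
    for w, (cx, cy) in zip(W, C):
        cu = cx * ux + cy * uy
        feq.append(w * rho * (1.0 + 3.0 * cu + 4.5 * cu * cu - 1.5 * usq))
    return feq

def bgk_collide(f, tau=0.6):
    """Single-cell BGK collision: relax f toward equilibrium at rate 1/tau.
    Density and momentum are conserved by construction."""
    rho = sum(f)
    ux = sum(fi * cx for fi, (cx, cy) in zip(f, C)) / rho
    uy = sum(fi * cy for fi, (cx, cy) in zip(f, C)) / rho
    return [fi + (fe - fi) / tau
            for fi, fe in zip(f, equilibrium(rho, ux, uy))]
```

    Because this step needs no neighbour data, one GPU thread per cell suffices; only the subsequent streaming step touches adjacent cells.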

    Toward a GPU-Accelerated Immersed Boundary Method for Wind Forecasting Over Complex Terrain

    A short-term wind power forecasting capability can be a valuable tool in the renewable energy industry to address load-balancing issues that arise from intermittent wind fields. Although numerical weather prediction models have been used to forecast winds, their applicability to micro-scale atmospheric boundary-layer flows, and their ability to predict wind speeds at turbine hub height with the desired accuracy, is not clear. To address this issue, we develop a multi-GPU parallel flow solver to forecast winds over complex terrain at the micro-scale, where the computational domain size can range from meters to several kilometers. In the solver, we adopt the immersed boundary method and the Lagrangian dynamic large-eddy simulation model and extend them to atmospheric flows. The computations are accelerated on GPU clusters with a dual-level parallel implementation that interleaves MPI with CUDA. We evaluate the flow solver components against test problems and obtain preliminary results of flow over Bolund Hill, a coastal hill in Denmark.
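    The immersed boundary idea can be conveyed with a minimal direct-forcing sketch: the terrain is represented by flagged cells on a regular grid, and after each fluid update the velocity in those cells is forced back to the boundary value. The 1D Python example below is purely illustrative and far simpler than the paper's solver:

```python
def diffuse_with_ib(u, solid, nu=0.1, u_solid=0.0):
    """One explicit diffusion step on a 1D velocity field, followed by
    direct forcing: cells flagged as solid (terrain) are reset to the
    boundary velocity. Illustrative sketch of the immersed boundary idea."""
    n = len(u)
    out = []
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        out.append(u[i] + nu * (left - 2 * u[i] + right))  # fluid update
    for i in range(n):
        if solid[i]:
            out[i] = u_solid            # direct forcing inside the terrain
    return out
```

    The advantage over terrain-fitted meshes is that the grid never changes; arbitrarily complex terrain only changes which cells are flagged.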

    A Smoothed Particle Hydrodynamics Method for the Simulation of Centralized Sloshing Experiments

    The Smoothed Particle Hydrodynamics (SPH) method is proposed for studying hydrodynamic processes related to nuclear engineering problems. The problem of possible recriticality due to sloshing motions of a molten reactor core is studied with the SPH method. The accuracy of the numerical solution obtained in this study with the SPH method is significantly higher than that obtained with the SIMMER-III/IV reactor safety analysis code.

    Accelerating Eulerian Fluid Simulation With Convolutional Networks

    Efficient simulation of the Navier-Stokes equations for fluid flow is a long-standing problem in applied mathematics, for which state-of-the-art methods require large compute resources. In this work, we propose a data-driven approach that combines the approximation power of deep learning with the precision of standard solvers to obtain fast and highly realistic simulations. Our method solves the incompressible Euler equations using the standard operator-splitting method, in which a large sparse linear system with many free parameters must be solved. We use a convolutional network with a highly tailored architecture, trained using a novel unsupervised learning framework, to solve the linear system. We present real-time 2D and 3D simulations that outperform recently proposed data-driven methods; the obtained results are realistic and show good generalization properties.
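    The linear system in question arises in the pressure-projection step of operator splitting. The sketch below shows that step on a small periodic MAC grid in plain Python, with an ordinary damped-Jacobi solve standing in where the paper substitutes a trained convolutional network; it is a generic illustration, not the authors' code:

```python
def project(u, v, n_iters=400, w=0.66):
    """Pressure projection on an n-by-n periodic MAC grid (unit spacing):
    compute the velocity divergence, solve the Poisson equation
    Lap(p) = div with damped Jacobi, then subtract the pressure gradient.
    u[i][j] / v[i][j] are face velocities; Python's negative indexing
    gives the periodic wrap for the -1 direction."""
    n = len(u)
    div = [[u[i][j] - u[i][j - 1] + v[i][j] - v[i - 1][j]
            for j in range(n)] for i in range(n)]
    p = [[0.0] * n for _ in range(n)]
    for _ in range(n_iters):
        p = [[(1 - w) * p[i][j] + w * 0.25 *
              (p[i][(j + 1) % n] + p[i][j - 1]
               + p[(i + 1) % n][j] + p[i - 1][j] - div[i][j])
              for j in range(n)] for i in range(n)]
    u2 = [[u[i][j] - (p[i][(j + 1) % n] - p[i][j]) for j in range(n)]
          for i in range(n)]
    v2 = [[v[i][j] - (p[(i + 1) % n][j] - p[i][j]) for j in range(n)]
          for i in range(n)]
    return u2, v2
```

    The iterative solve dominates the cost of each frame, which is why replacing it with a single learned network evaluation, as the paper proposes, is attractive.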