14 research outputs found

    An interpolating particle method for the Vlasov-Poisson equation

    In this paper we present a novel particle method for the Vlasov-Poisson equation. Unlike in conventional particle methods, the particles are not interpreted as point charges, but as point values of the distribution function. In between the particles, the distribution function is reconstructed using mesh-free interpolation. Our numerical experiments confirm that this approach results in significantly increased accuracy and noise reduction. At the same time, many benefits of the conventional schemes are preserved.
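
    The idea described above, particles carrying point values of the distribution function f(x, v) with a mesh-free reconstruction in between, can be sketched as follows. The radial basis function interpolant used here is an illustrative assumption and not necessarily the reconstruction used in the paper; all names are hypothetical.

```python
# Illustrative sketch only: particles carry point values of f(x, v) and a
# mesh-free (here: radial basis function) interpolant reconstructs f between
# them. The RBF choice and all names are assumptions, not the paper's method.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform([-np.pi, -4.0], [np.pi, 4.0], size=(400, 2))   # particle (x, v) locations

def f0(x, v):
    """A perturbed Maxwellian as an example distribution function."""
    return np.exp(-0.5 * v**2) * (1.0 + 0.1 * np.cos(x)) / np.sqrt(2.0 * np.pi)

f_vals = f0(pts[:, 0], pts[:, 1])               # point values of f carried by the particles
f_rec = RBFInterpolator(pts, f_vals, kernel="thin_plate_spline")

query = np.array([[0.5, 1.0], [1.0, -2.0]])     # arbitrary phase-space points
print(f_rec(query))                             # reconstructed f between the particles
print(f0(query[:, 0], query[:, 1]))             # exact values, for comparison
```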

    Eulerian ISPH Method for Simulating Internal Flows

    This article studies the possibility of using an Eulerian approach within the conventional ISPH method for simulating internal fluid flows. The Eulerian approach makes it possible to use non-uniform particle distributions to increase the resolution in sensitive parts of the domain, boundary conditions can be imposed more freely, and particle penetration into solid walls and tensile instability no longer require elaborate remedies. The governing equations are solved in an Eulerian framework that retains both the temporal and convective derivatives, which makes the momentum equations non-linear; special treatment and smaller time steps are required to handle this non-linearity. In this study, a projection method is used to enforce incompressibility: an intermediate velocity is evaluated and then projected onto the divergence-free space. The method is applied to internal flows in a shear-driven cavity, Couette flow, flow in a duct of variable area, and flow around a circular cylinder in a constant-area duct. The results are compared with those of Lagrangian ISPH and WCSPH methods as well as finite-volume and lattice Boltzmann grid-based schemes. The studied scheme matches the accuracy of the ISPH and WCSPH methods for the velocity field and is more accurate for the pressure distribution. Non-uniform particle distributions are also studied to check the applicability of the method, and good agreement is observed between uniform and non-uniform particle distributions.
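
    The projection step described in this abstract (evaluate an intermediate velocity, then project it onto the divergence-free space) can be illustrated with a minimal sketch. The Python code below is an illustrative assumption, not the paper's ISPH formulation: it performs a Chorin-type projection on a doubly periodic grid with an FFT Poisson solve, and all function and variable names are hypothetical.

```python
# Illustrative sketch only (not the paper's ISPH solver): a Chorin-type
# projection on a doubly periodic grid. The intermediate velocity (u_star,
# v_star) is projected onto the divergence-free space by solving a Poisson
# equation for a scalar potential with FFTs; all names are hypothetical.
import numpy as np

def project(u_star, v_star, dx):
    """Return the divergence-free part of the intermediate velocity field."""
    n = u_star.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid 0/0 for the mean (k = 0) mode

    u_hat, v_hat = np.fft.fft2(u_star), np.fft.fft2(v_star)
    div_hat = 1j * kx * u_hat + 1j * ky * v_hat  # divergence of the intermediate velocity
    phi_hat = -div_hat / k2                      # solve  laplacian(phi) = div(u*)
    phi_hat[0, 0] = 0.0

    u_hat -= 1j * kx * phi_hat                   # u = u* - grad(phi) is divergence-free
    v_hat -= 1j * ky * phi_hat
    return np.real(np.fft.ifft2(u_hat)), np.real(np.fft.ifft2(v_hat))
```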

    A hybrid fluid simulation on the Graphics Processing Unit (GPU)

    This thesis presents a method to implement a hybrid particle/grid fluid simulation on graphics hardware. The goal is to speed up the simulation by exploiting the parallelism of the graphics processing unit, or GPU. The Fluid Implicit Particle method is adapted to the programming style of the GPU, and the methods were implemented on a current-generation graphics card. The GPU-based program exhibited a small speedup over its CPU-based counterpart.
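
    The Fluid Implicit Particle (FLIP) method adapted in this thesis alternates between particles and a grid: particle state is scattered to the grid, forces are applied on the grid, and the change in grid velocity is gathered back to the particles. The following 1D Python sketch is an illustrative assumption (not the thesis code, and without the GPU mapping); gravity stands in for the grid force update and all names are hypothetical.

```python
# Illustrative 1D sketch of the Fluid Implicit Particle (FLIP) transfers, not
# the thesis code and without the GPU mapping. Gravity stands in for the grid
# force update; particles are assumed to stay strictly inside the grid.
import numpy as np

def flip_step(xp, vp, dx, nx, dt, g=-9.81):
    """Scatter particles to the grid, update grid velocity, gather the change back."""
    mass = np.zeros(nx)
    mom = np.zeros(nx)
    for x, v in zip(xp, vp):                     # particle-to-grid scatter
        i = int(x / dx)                          # (linear weights, unit-mass particles)
        w = x / dx - i
        mass[i] += 1.0 - w;  mom[i] += (1.0 - w) * v
        mass[i + 1] += w;    mom[i + 1] += w * v
    v_old = np.divide(mom, mass, out=np.zeros(nx), where=mass > 0)
    v_new = v_old + dt * g                       # grid update (forces/pressure solve go here)
    dv = v_new - v_old
    for k in range(len(xp)):                     # grid-to-particle: FLIP adds the change
        i = int(xp[k] / dx)                      # in grid velocity, which keeps particle
        w = xp[k] / dx - i                       # velocities from being over-smoothed
        vp[k] += (1.0 - w) * dv[i] + w * dv[i + 1]
        xp[k] += dt * vp[k]
    return xp, vp
```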

    CFD STUDY OF DECAY FUNCTION OF WALL SHEAR STRESS WITH SCOUR AROUND COMPLEX-SHAPE BRIDGE PIER

    The pier scour problem is closely related to the safety of bridges, and the Federal Highway Administration of the United States has a great interest in studying the relation between hydraulic loadings and scour depth. The difficulty of directly measuring hydraulic loadings on a dynamic bed in physical experiments, together with the limited capacity of CFD methods to simulate real scour processes, inspired the development of a hybrid approach that combines the two to study pier scour. This research focuses specifically on the CFD part of the hybrid method. A series of three-dimensional (3-D) CFD models were developed with unsteady Reynolds-averaged Navier-Stokes equations and the k- turbulence model to calculate wall shear stress distributions around piers under different flow conditions. These CFD models were verified and calibrated by comparing wall shear stress distributions with other CFD simulations that use a DES turbulence model. The CFD simulation results were used to develop a decay function of dimensionless wall shear stress with relative scour depth. Combined with previous physical experimental data, the decay function was updated to an envelope function to describe the decay trend more accurately. The results of CFD modeling of water flow around a rectangular pier at a 30° angle of attack were used to verify the decay function and the envelope function. With surveyed full-scale bathymetries of the Feather River Bridge, this hybrid approach and the decay function were applied to study the pier scour problem, and the decay trend and the envelope decay function were verified against the results of CFD modeling for the full-scale models. Using the soil composition of each layer, the application of the decay function was preliminarily developed. Advisor: Junke Gu

    Adding Semi-Structured Automated Grid Generation and the Menter-Shear Stress Turbulence Transport Model for Internal Combustion Engine Simulations to Novel FEM LANL Combustion Codes

    The addition of GridPro semi-structured, automated grid generation for complex moving boundaries in combustion engine applications and of the Menter Shear Stress Transport (SST) turbulence model is being developed by Los Alamos National Laboratory. The software is called Fast, Easy, Accurate, and Robust Continuum Engineering (FEARCE). In addition to reducing the time and effort required to build complex grid geometry for turbulent, reactive, multi-phase flow in internal combustion engines, the SST turbulence model has been programmed into the Predictor Corrector Fractional-Step (PCS) Finite Element Method (FEM) for reactive flow, and validation in the turbulent incompressible flow regime is performed. The Reynolds-averaged Navier-Stokes finite-element solver without h-adaption is used to validate the SST turbulence model on two benchmark problems in the subsonic flow regime: (1) 2-D duct channel flow, and (2) a 2-D backward-facing step with a constant heat flux applied on the bottom surface downstream of the single-sided sudden expansion of the step. For the 2-D backward-facing step, the newly installed SST model in FEARCE yielded a reattachment length of X_re = 6.655H, compared with the value of X_re_vogel = 6.66667H determined experimentally by Vogel and Eaton '85.

    Continuum mechanics and implicit material point method to underpin the modelling of drag anchors for cable risk assessment

    Recent years have seen an extraordinary expansion of the offshore wind industry towards new markets. With this development, wind farms are being pushed further offshore, leading to new challenges in maintaining their infrastructure. Data have revealed that among the assets most susceptible to risk are the power cables transporting the electricity generated offshore to the onshore transmission system. Whenever there is no alternative to shielding the cables, these are buried in the seabed to ward off the threat of drag anchors. Understanding the embedment process of drag anchors deployed by ships for mooring purposes is critical to determining the appropriate burial depth of cables, and this protection strategy must in turn account for the influence of seabed conditions. Historically, studies have focused on laboratory and field tests, whose results have recently been called into question. The lack of an appropriate scientific tool to investigate the anchor-cable interaction has recently fostered the examination of this process via numerical methods. Among these, the Material Point Method (MPM) is well placed to handle large-deformation mechanics without mesh distortion while retaining all the advantages of a Lagrangian method. The MPM is the main object of investigation of this thesis, whose ultimate goals are to: 1. include inertia forces in the context of finite-strain elasto-plasticity for solid materials; 2. extend the above to handle the presence of water in the porous seabed; and 3. model the friction between the anchor and the surrounding soil. This work has dedicated particular attention to the compliance of the discretised MPM algorithms with the underlying continuum formulation. In this sense, the work's primary contributions comprise: • a conservation-law-consistent MPM algorithm for solid mechanics; • a constitutive relationship for porous materials that respects solid mass conservation; and • a rigorous assessment of the frictional contact formulations available in the MPM literature.

    Lagrangian-Eulerian methods for multiphase flows

    This review article aims to provide a comprehensive and understandable account of the theoretical foundation, modeling issues, and numerical implementation of the Lagrangian-Eulerian (LE) approach for multiphase flows. The LE approach is based on a statistical description of the dispersed phase in terms of a stochastic point process that is coupled with an Eulerian statistical representation of the carrier fluid phase. A modeled transport equation for the particle distribution function, also known as Williams' spray equation in the case of sprays, is indirectly solved using a Lagrangian particle method. Interphase transfer of mass, momentum, and energy is represented by coupling terms that appear in the Eulerian conservation equations for the fluid phase. This theoretical foundation is then used to develop LE sub-models for interphase interactions such as momentum transfer. Every LE model implies a corresponding closure in the Eulerian-Eulerian two-fluid theory, and these moment equations are derived. Approaches to incorporate multiscale interactions between particles (or spray droplets) and turbulent eddies in the carrier gas, which result in better predictions of particle (or droplet) dispersion, are described. Numerical convergence of LE implementations is shown to be crucial to the success of the LE modeling approach, and it is shown how numerical convergence and accuracy of an LE implementation can be established using grid-free estimators and computational particle number density control algorithms. This review of recent advances establishes that LE methods can be used to solve multiphase flow problems of practical interest, provided sub-models are implemented using numerically convergent algorithms. These insights also provide the foundation for further development of Lagrangian methods for multiphase flows. Extensions to the LE method that can account for neighbor particle interactions and preferential concentration of particles in turbulence are outlined.
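
    The modeled transport equation for the particle distribution function mentioned above, known as Williams' spray equation in the case of sprays, is commonly written in a form similar to the following (a representative form from the general literature, not necessarily the notation used in this article):

```latex
\frac{\partial f}{\partial t}
  + \nabla_{\mathbf{x}} \cdot \left( \mathbf{v}\, f \right)
  + \nabla_{\mathbf{v}} \cdot \left( \mathbf{A}\, f \right)
  + \frac{\partial}{\partial r} \left( \dot{r}\, f \right)
  = \dot{f}_{\mathrm{coll}},
```

    where f(x, v, r, t) is the droplet (or particle) distribution function, A is the modeled droplet acceleration, the rate of change of radius accounts for effects such as evaporation, and the right-hand side collects collision and breakup source terms.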

    Modeling Extended Fluid Objects in General Relativity

    The purpose of this dissertation is to introduce and explore the notion of modeling extended fluid objects in numerical general relativity. These extended fluid objects, called Fat Particles, are proxies for compact hydrodynamic objects. Unlike full hydrodynamic models, we make the approximation that the details of the matter distribution are not as important as the gross motion of the Fat Particle's center of mass and its contribution to the gravitational field. Thus we provide a semi-analytic model of matter for numerical simulations of Einstein's equations, which may help in modeling gravitational radiation from candidate sources. Our approach to carrying out these investigations is to begin with a continuum variational principle, which yields the desired hydrodynamic and gravitational equations for ideal fluids. Following our analysis of the related numerical technique, Smoothed Particle Hydrodynamics (SPH), we apply a set of discretization and smoothing rules to obtain a discrete action; subsequent variations yield the Fat Particle equations. Our analysis of a classical ideal fluid demonstrated that a Newtonian Fat Particle is capable of remaining at rest while generating its own gravitational field. We then developed analogous principles for describing relativistic ideal fluids in both covariant and ADM 3+1 forms. Using these principles, we developed analytic and numerical results from relativistic Fat Particle theory. We began with the Subscribe Only model, in which a Fat Particle of negligible mass moves in a fixed background metric; corrections to its motion due to the extended nature of the Fat Particle are obtained by summing metric contributions over its volume. We find a universal scaling law describing the phase shift, relative to a test particle, that is independent of the Fat Particle's size, shape, and distribution. We then show that finite-size effects eventually dominate radiation-damping effects in describing the motion of a white dwarf around a more massive black hole. Finally, we derive the Publish and Subscribe model, which comprises a fully back-reacting system. Comparison of the Fat Particle equations for a static, symmetric spacetime with their continuum analogs shows that the system supports a consistent density definition and holds promise for future development.
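
    For context, the discretization and smoothing rules mentioned above are, in standard non-relativistic SPH, kernel-weighted sums over particles. A representative example (standard SPH, not taken from the dissertation) is the smoothed density estimate:

```latex
\rho(\mathbf{x}_i) \approx \sum_j m_j \, W\!\left(\mathbf{x}_i - \mathbf{x}_j,\, h\right),
```

    where m_j are the particle masses, W is a smoothing kernel, and h is the smoothing length; applying rules of this kind to a continuum action yields a discrete action whose variations give the particle equations of motion.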

    PaVo, an adaptive parallel sort

    Gamers rush to the latest graphics cards to play immersive games whose precision, realism, and interactivity keep increasing over time. With general-purpose processing on graphics processing units, scientists now make heavy use of graphics cards as well. We first examine the interest of these architectures for large-scale physics simulations. Drawing on this experience, we highlight a particular bottleneck in simulation performance. Consider a typical situation: cracks in complex reinforced-concrete structures such as dams are modelled by many particles, and interactions between particles simulate the cohesion of the material. In memory, each particle is represented by a set of physical parameters that must be read for every force calculation between two particles. To speed up the computations, data from particles that are close in space should therefore be close in memory; otherwise the number of cache misses rises and the memory bandwidth limit may be reached, especially in parallel environments, bounding overall performance. The challenge is to maintain this data organization throughout the simulation despite particle movement. Classical sorting algorithms do not suit such situations because they systematically sort all the elements; moreover, they work on dense structures, which implies many memory transfers. We propose PaVo, an adaptive sort, meaning that it benefits from presortedness in the sequence. In addition, to reduce the number of memory transfers required, PaVo spreads gaps inside the data structure. We present a large experimental study and compare the results with reputed sorting algorithms. Reducing memory accesses matters even more for large-scale simulations on parallel architectures. We detail a parallel version of PaVo and evaluate its interest. To cope with application irregularity, load is balanced dynamically by work-stealing, and we take advantage of hierarchical architectures by automatically distributing data in memory. Tasks are pre-assigned to cores according to this distribution, and the scheduler is adapted to favour stealing tasks that work on data close in memory.
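
    The central idea of the abstract above, keeping gaps spread through the data structure so that a nearly sorted sequence can be repaired with short, local moves instead of long shifts, can be sketched as follows. This toy Python code is an illustrative assumption and not PaVo itself; all names are hypothetical.

```python
# Illustrative toy, not PaVo: a gapped sorted array where insertions move data
# only until the nearest empty slot, so nearly sorted input needs little work.
GAP = None  # sentinel marking an empty slot

def with_gaps(values):
    """Interleave one gap after every element so later moves stay local."""
    out = []
    for v in values:
        out.extend([v, GAP])
    return out

def insert_sorted(slots, v):
    """Insert v so occupied slots stay ordered, shifting only up to the nearest gap."""
    i = 0
    while i < len(slots) and (slots[i] is GAP or slots[i] <= v):
        i += 1                        # i = first occupied slot with a larger value (or end)
    if i > 0 and slots[i - 1] is GAP:
        slots[i - 1] = v              # a free gap right before the spot: no shifting at all
        return slots
    while v is not GAP:               # otherwise shift right until a gap (or the end) absorbs it
        if i == len(slots):
            slots.append(GAP)
        slots[i], v = v, slots[i]
        i += 1
    return slots

# Toy usage: nearly sorted data needs only short, local moves.
slots = with_gaps([1, 2, 4, 8])
for v in [3, 7, 5]:
    insert_sorted(slots, v)
print([v for v in slots if v is not GAP])   # [1, 2, 3, 4, 5, 7, 8]
```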