
    Dynamic Smagorinsky Modeled Large-Eddy Simulations of Turbulence Using Tetrahedral Meshes

    Eddy-resolving numerical computations of turbulent flows are emerging as viable alternatives to Reynolds-Averaged Navier-Stokes (RANS) calculations for flows with an intrinsically steady mean state, owing to advances in large-scale parallel computing. In these computations, medium to large turbulent eddies are resolved by the numerics, while the smaller, subgrid scales are either modeled or handled by the inherent numerical dissipation. To advance the state of the art of unstructured-mesh turbulence simulation capabilities, large-eddy simulations (LES) using the dynamic Smagorinsky model (DSM) on tetrahedral meshes are carried out with the space-time conservation element and solution element (CESE) method. In contrast to what has been reported in the literature, the present implementation of the dynamic model allows for active backscattering without any ad hoc limiting of the eddy viscosity calculated from the subgrid-scale model. For benchmark problems involving compressible isotropic turbulence decay as well as shock/turbulent-boundary-layer interaction, no numerical instability associated with kinetic-energy growth is observed, and the backscattering portion accounts for about 38-40% of the simulation domain by volume. A slip-wall model in conjunction with the implemented DSM is used to simulate a relatively high-Reynolds-number Mach 2.85 turbulent boundary layer over a 30° ramp with several tetrahedral meshes and a wall-normal spacing of either y+ = 10 or y+ = 20. The computed mean wall-pressure distribution, separation-region size, mean velocity profiles, and Reynolds stresses agree reasonably well with experimental data.

    Parallel load balancing strategy for Volume-of-Fluid methods on 3-D unstructured meshes

    © 2016. This version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/ Volume-of-Fluid (VOF) is one of the methods of choice for reproducing interface motion in simulations of multi-fluid flows. One of its main strengths is its accuracy in capturing sharp interface geometries, although this requires a number of geometric calculations. Under these circumstances, achieving parallel performance on current supercomputers is a must. The main obstacle to parallelization is that the computing costs are concentrated in the discrete elements that lie on the interface between fluids. Consequently, if the interface is not homogeneously distributed throughout the domain, standard domain decomposition (DD) strategies lead to imbalanced workload distributions. In this paper, we present a new parallelization strategy for general unstructured VOF solvers, based on a dynamic load-balancing process complementary to the underlying DD. Its parallel efficiency has been analyzed and compared to that of the standard DD using up to 1024 CPU cores on an Intel Sandy Bridge-based supercomputer. The results obtained on several artificially generated test cases show a speedup of up to ~12x with respect to the standard DD, depending on the interface size, the initial distribution, and the number of parallel processes engaged. Moreover, the new parallelization strategy is general purpose; it could therefore be used to parallelize any VOF solver without requiring changes to the coupled flow solver. Finally, note that although designed for the VOF method, our approach could easily be adapted to other interface-capturing methods, such as the Level-Set method, which may present similar workload imbalances. (C) 2014 Elsevier Inc. All rights reserved.
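The abstract does not spell out the balancing algorithm, but the core idea, redistributing interface-cell work across ranks on top of the fixed domain decomposition, can be sketched. The following toy Python function (names and the greedy strategy are illustrative assumptions, not the paper's method) builds a transfer plan that moves work from overloaded to underloaded ranks until every rank is near the average load:

```python
def balance_interface_load(loads):
    """Greedy transfer plan for per-rank interface-cell workloads.

    loads: list of interface-cell counts, one per MPI rank.
    Returns (transfers, new_loads), where transfers is a list of
    (donor_rank, receiver_rank, amount) tuples and new_loads is the
    resulting per-rank workload, close to the global average.
    """
    n = len(loads)
    avg = sum(loads) / n
    new = list(loads)
    # Heaviest donors and lightest receivers first.
    donors = sorted((i for i in range(n) if new[i] > avg), key=lambda i: -new[i])
    receivers = sorted((i for i in range(n) if new[i] < avg), key=lambda i: new[i])
    transfers = []
    d, r = 0, 0
    while d < len(donors) and r < len(receivers):
        i, j = donors[d], receivers[r]
        amount = min(new[i] - avg, avg - new[j])
        if amount > 0:
            transfers.append((i, j, amount))
            new[i] -= amount
            new[j] += amount
        if new[i] <= avg:
            d += 1  # donor drained to the average
        if new[j] >= avg:
            r += 1  # receiver filled to the average
    return transfers, new
```

For example, with all 100 interface cells on one of four ranks, the plan moves 25 cells to each of the other three, cutting the load imbalance (max/avg) from 4x to 1x; that ratio is roughly the kind of speedup over plain DD the abstract reports when the interface is concentrated in one subdomain.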