
    Inertial range turbulence in kinetic plasmas

    The transfer of turbulent energy through an inertial range from the driving scale to dissipative scales in a kinetic plasma, followed by the conversion of this energy into heat, is a fundamental plasma physics process. A theoretical foundation for the study of this process is constructed, but the details of the kinetic cascade are not well understood. Several important properties are identified: (a) the conservation of a generalized energy by the cascade; (b) the need for collisions to increase entropy and realize irreversible plasma heating; and (c) the key role played by the entropy cascade--a dual cascade of energy to small scales in both physical and velocity space--in ultimately converting the turbulent energy into heat. A strategy for nonlinear numerical simulations of kinetic turbulence is outlined. Initial numerical results are consistent with the operation of the entropy cascade. Inertial range turbulence arises in a broad range of space and astrophysical plasmas and may play an important role in the thermalization of fusion energy in burning plasmas.
    Comment: 11 pages, 2 figures, submitted to Physics of Plasmas, DPP Meeting Special Issue
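    As a minimal, hedged illustration of what an inertial range looks like in practice, the sketch below fits the log-log slope of a synthetic energy spectrum between the driving and dissipation scales. The Kolmogorov -5/3 exponent, the cutoff wavenumber, and the noise level are illustrative assumptions, not values from the paper (kinetic cascades can have different exponents).

```python
import numpy as np

# Synthetic 1D energy spectrum: power-law inertial range with an
# exponential dissipative cutoff (all parameters illustrative).
rng = np.random.default_rng(0)
k = np.logspace(0, 3, 200)                        # wavenumber
E = k ** (-5.0 / 3.0) * np.exp(-k / 500.0)        # Kolmogorov-like spectrum
E *= np.exp(0.05 * rng.standard_normal(k.size))   # mild multiplicative noise

# Fit the log-log slope well inside the inertial range, away from both
# the driving scale (k ~ 1) and the dissipation scale (k ~ 500).
mask = (k > 3) & (k < 100)
slope, _ = np.polyfit(np.log(k[mask]), np.log(E[mask]), 1)
print(f"fitted inertial-range slope: {slope:.2f}")
```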

    The cosmological simulation code GADGET-2

    We discuss the cosmological simulation code GADGET-2, a new massively parallel TreeSPH code, capable of following a collisionless fluid with the N-body method, and an ideal gas by means of smoothed particle hydrodynamics (SPH). Our implementation of SPH manifestly conserves energy and entropy in regions free of dissipation, while allowing for fully adaptive smoothing lengths. Gravitational forces are computed with a hierarchical multipole expansion, which can optionally be applied in the form of a TreePM algorithm, where only short-range forces are computed with the 'tree' method while long-range forces are determined with Fourier techniques. Time integration is based on a quasi-symplectic scheme where long-range and short-range forces can be integrated with different timesteps. Individual and adaptive short-range timesteps may also be employed. The domain decomposition used in the parallelisation algorithm is based on a space-filling curve, resulting in high flexibility and tree force errors that do not depend on the way the domains are cut. The code is efficient in terms of memory consumption and required communication bandwidth. It has been used to compute the first cosmological N-body simulation with more than 10^10 dark matter particles, reaching a homogeneous spatial dynamic range of 10^5 per dimension in a 3D box. It has also been used to carry out very large cosmological SPH simulations that account for radiative cooling and star formation, reaching total particle numbers of more than 250 million. We present the algorithms used by the code and discuss their accuracy and performance using a number of test problems. GADGET-2 is publicly released to the research community.
    Comment: submitted to MNRAS, 31 pages, 20 figures (reduced resolution), code available at http://www.mpa-garching.mpg.de/gadge
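    The TreePM short-range/long-range split can be sketched with the standard Ewald-style decomposition of the 1/r^2 force into a piece handled by the tree and a smooth remainder handled in Fourier space. A minimal sketch, assuming the common erfc-based splitting kernel and an illustrative splitting scale r_s; this is not GADGET-2's actual code.

```python
from math import erfc, exp, sqrt, pi

# Split the pairwise gravitational force F = G*m/r^2 into short- and
# long-range parts using the erfc-based kernel common to TreePM schemes.
def f_total(r, G=1.0, m=1.0):
    return G * m / r ** 2

def f_short(r, r_s, G=1.0, m=1.0):
    # force derived from the short-range potential -G*m*erfc(r/(2*r_s))/r
    x = r / (2.0 * r_s)
    return G * m / r ** 2 * (erfc(x) + (2.0 * x / sqrt(pi)) * exp(-x * x))

def f_long(r, r_s, G=1.0, m=1.0):
    # smooth remainder, well resolved on the particle-mesh grid
    return f_total(r, G, m) - f_short(r, r_s, G, m)

r_s = 1.0
for r in (0.1, 1.0, 10.0):
    print(f"r={r:5.1f}  short={f_short(r, r_s):.3e}  long={f_long(r, r_s):.3e}")
# The short-range part carries essentially all the force at r << r_s and
# vanishes at r >> r_s, so the tree walk can be truncated at a few r_s.
```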

    Evolutionary Neural Network Based Energy Consumption Forecast for Cloud Computing

    The success of Hadoop, an open-source framework for massively parallel and distributed computing, is expected to drive the energy consumption of cloud data centers to new highs as service providers continue to add new infrastructure, services and capabilities to meet market demands. While current research on data center airflow management, HVAC (Heating, Ventilation and Air Conditioning) system design, workload distribution and optimization, and energy-efficient computing hardware and software is improving energy efficiency, energy forecasting in cloud computing remains a challenge. This paper reports an evolutionary computation based modeling and forecasting approach to this problem. In particular, an evolutionary neural network is developed and structurally optimized to forecast the energy load of a cloud data center. The results, both in terms of forecasting speed and accuracy, suggest that the evolutionary neural network approach to energy consumption forecasting for cloud computing is highly promising.
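    As a hedged sketch of the idea (not the paper's actual model or data), the following evolves a small population of one-hidden-layer networks of different sizes by mutation and elitist selection to forecast a synthetic daily-cyclic load series. The series, the candidate network sizes, and the mutation scheme are all illustrative assumptions; the paper's structural optimization is reduced here to selection among fixed hidden-layer sizes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily-cyclic "energy load" series (illustrative, not real data).
t = np.arange(400)
load = np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.05, t.size)

lag = 3  # forecast load[i] from the 3 previous values
X = np.stack([load[i - lag:i] for i in range(lag, t.size)])
y = load[lag:]

def init(hidden):
    # one hidden layer; 'hidden' is the structural gene varied across the population
    return {"W1": rng.normal(0, 0.5, (lag, hidden)), "b1": np.zeros(hidden),
            "W2": rng.normal(0, 0.5, hidden), "b2": np.zeros(())}

def predict(net, X):
    return np.tanh(X @ net["W1"] + net["b1"]) @ net["W2"] + net["b2"]

def mse(net):
    return float(np.mean((predict(net, X) - y) ** 2))

def mutate(net):
    return {k: v + rng.normal(0, 0.05, np.shape(v)) for k, v in net.items()}

pop = [init(h) for h in (2, 4, 8, 4, 8)]  # structural variety in the population
mse_start = min(mse(n) for n in pop)
for _ in range(400):
    pop.sort(key=mse)                                       # rank by forecast error
    pop = pop[:2] + [mutate(pop[i % 2]) for i in range(3)]  # elitism + mutation
best = min(pop, key=mse)
print(f"best MSE: {mse_start:.3f} -> {mse(best):.3f}")
```

    Elitism guarantees the best fitness never degrades across generations, which is why even this crude hill-climbing variant makes forecast error decrease monotonically.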

    Towards a more realistic sink particle algorithm for the RAMSES code

    We present a new sink particle algorithm developed for the Adaptive Mesh Refinement code RAMSES. Our main addition is the use of a clump finder to identify density peaks and their associated regions (the peak patches). This allows us to unambiguously define a discrete set of dense molecular cores as potential sites for sink particle formation. Furthermore, we develop a new scheme to decide whether the gas in which a sink could potentially form is indeed gravitationally bound and rapidly collapsing. This is achieved using a general integral form of the virial theorem, where we use the curvature in the gravitational potential to correctly account for the background potential. We detail all the necessary steps to follow the evolution of sink particles in turbulent molecular cloud simulations, such as sink production, their trajectory integration, sink merging and finally the gas accretion rate onto an existing sink. We compare our new recipe for sink formation to other popular implementations. Statistical properties such as the sink mass function, the average sink mass and the sink multiplicity function are used to evaluate the impact that our new scheme has on accurately predicting fundamental quantities such as the stellar initial mass function or the stellar multiplicity function.
    Comment: submitted to MNRAS, 24 pages, 19 figures, 5 tables
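    As a minimal, hedged illustration of the peak-patch idea (in 1D, with a toy energy estimate standing in for the paper's integral virial analysis), the sketch below assigns each cell to the density maximum reached by steepest ascent and then tests each patch for boundedness. The density field, velocity field, and the M^2/R self-gravity proxy are illustrative assumptions.

```python
import numpy as np

def peak_patches(rho):
    """Label each cell with the index of the local maximum reached by hill climbing."""
    labels = np.empty(rho.size, dtype=int)
    for i in range(rho.size):
        j = i
        while True:
            nbrs = [j]
            if j > 0:
                nbrs.append(j - 1)
            if j < rho.size - 1:
                nbrs.append(j + 1)
            k = max(nbrs, key=lambda n: rho[n])
            if k == j:          # reached a local maximum
                break
            j = k
        labels[i] = j
    return labels

# Two Gaussian "cores" on a uniform background (illustrative field).
x = np.linspace(0.0, 10.0, 200)
rho = np.exp(-(x - 3.0) ** 2) + 2.0 * np.exp(-(x - 7.0) ** 2 / 0.5) + 0.01
v = 0.05 * np.sin(x)            # small internal motions (illustrative)

labels = peak_patches(rho)
peaks = sorted(set(labels))
dx = x[1] - x[0]

# Crude per-patch boundedness check: self-gravity estimate M^2/R (G = 1)
# against twice the kinetic energy, a stand-in for the virial criterion.
results = []
for p in peaks:
    cells = labels == p
    M = rho[cells].sum() * dx
    R = 0.5 * cells.sum() * dx
    e_kin = 0.5 * (rho[cells] * v[cells] ** 2).sum() * dx
    results.append((x[p], M, M ** 2 / R > 2.0 * e_kin))
    print(f"patch at x={x[p]:.1f}: M={M:.2f}, bound={results[-1][-1]}")
```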

    An error indicator-based adaptive reduced order model for nonlinear structural mechanics -- application to high-pressure turbine blades

    The industrial application motivating this work is the fatigue computation of aircraft engines' high-pressure turbine blades. The material model involves nonlinear elastoviscoplastic behavior laws, for which the parameters depend on the temperature. For this application, the temperature loading is not accurately known and can reach values relatively close to the creep temperature: important nonlinear effects occur and the solution strongly depends on the thermal loading used. We consider a nonlinear reduced order model able to compute, in the exploitation phase, the behavior of the blade for a new temperature field loading. The sensitivity of the solution to the temperature makes the classical unenriched proper orthogonal decomposition method fail. In this work, we propose a new error indicator that quantifies the error made by the reduced order model, at a computational complexity independent of the size of the high-fidelity reference model. In our framework, when the error indicator becomes larger than a given tolerance, the reduced order model is updated using one time-step solution of the high-fidelity reference model. The approach is illustrated on a series of academic test cases and applied to a setting of industrial complexity involving 5 million degrees of freedom, where the whole procedure is computed in parallel with distributed memory.
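    The update-on-indicator loop can be sketched in miniature. As a hedged stand-in for the nonlinear elastoviscoplastic problem, the example below uses a parametric linear system A(mu) x = b (mu mimicking the temperature dependence) and the cheap relative residual norm as the error indicator: whenever it exceeds a tolerance, one high-fidelity solve is performed and its solution enriches the reduced basis. All sizes and tolerances are illustrative.

```python
import numpy as np

n = 200
# Tridiagonal stiffness matrix plus a parameter-dependent perturbation.
K0 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1))
K1 = np.diag(np.linspace(0.0, 1.0, n))
b = np.ones(n)

def A(mu):                      # mu plays the role of the temperature loading
    return K0 + mu * K1

V = np.zeros((n, 0))            # reduced basis, grown adaptively
hf_solves = 0
for mu in np.linspace(0.0, 5.0, 40):
    if V.shape[1] > 0:
        # cheap reduced solve, then residual-based error indicator
        xr = V @ np.linalg.solve(V.T @ A(mu) @ V, V.T @ b)
        indicator = np.linalg.norm(b - A(mu) @ xr) / np.linalg.norm(b)
    else:
        indicator = np.inf      # no basis yet: ROM cannot be trusted
    if indicator > 1e-4:        # ROM update: one high-fidelity solve
        x = np.linalg.solve(A(mu), b)
        hf_solves += 1
        V, _ = np.linalg.qr(np.column_stack([V, x]))  # enrich, re-orthonormalize
    # else: xr is accepted as the solution for this parameter value

print("high-fidelity solves:", hf_solves, "of 40 parameter values")
```

    Because the solution varies smoothly with mu, only a handful of high-fidelity solves are triggered; the remaining parameter values are served by the small reduced system.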

    A momentum-conserving, consistent, Volume-of-Fluid method for incompressible flow on staggered grids

    The computation of flows with large density contrasts is notoriously difficult. To alleviate the difficulty we consider a consistent mass and momentum-conserving discretization of the Navier-Stokes equation. Incompressible flow with capillary forces is modelled and the discretization is performed on a staggered grid of Marker and Cell type. The Volume-of-Fluid method is used to track the interface and a Height-Function method is used to compute surface tension. The advection of the volume fraction is performed using either the Lagrangian-Explicit / CIAM (Calcul d'Interface Affine par Morceaux) method or the Weymouth and Yue (WY) Eulerian-Implicit method. The WY method conserves fluid mass to machine accuracy provided incompressibility is satisfied, which leads to a method that is both momentum and mass-conserving. To improve the stability of these methods, momentum fluxes are advected in a manner "consistent" with the volume-fraction fluxes, that is, a discontinuity of the momentum is advected at the same speed as a discontinuity of the density. To find the density on the staggered cells on which the velocity is centered, an auxiliary reconstruction of the density is performed. The method is tested for a droplet without surface tension in uniform flow, for a droplet suddenly accelerated in a carrying gas at rest at very large density ratio without viscosity or surface tension, for the Kelvin-Helmholtz instability, for a falling raindrop and for an atomizing flow in air-water conditions.
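    The "consistent" advection idea can be shown in a hedged 1D sketch (constant advection speed, first-order upwind fluxes, a simplification of the CIAM/WY schemes above): when the momentum flux reuses the upwinded mass flux, a 1000:1 density jump is advected without generating spurious velocities, while a mismatched (centered) momentum flux produces large velocity errors at the interface. The density ratio is illustrative of the air-water regime.

```python
import numpy as np

n, a = 100, 1.0                 # cells, advection speed
dx = 1.0 / n
dt = 0.5 * dx / a               # CFL = 0.5
rho = np.where(np.arange(n) < n // 2, 1000.0, 1.0)  # heavy "liquid" / light gas
u = np.full(n, a)               # uniform velocity: exact solution keeps u = a
mom = rho * u

# Consistent scheme: momentum flux = (upwind mass flux) * (upwind velocity),
# so density and momentum discontinuities move at the same speed.
for _ in range(50):
    f_mass = a * rho                        # upwind flux at face i+1/2 (a > 0)
    f_mom = f_mass * u
    rho = rho - dt / dx * (f_mass - np.roll(f_mass, 1))
    mom = mom - dt / dx * (f_mom - np.roll(f_mom, 1))
    u = mom / rho
print("consistent   max |u - a| =", np.abs(u - a).max())

# Inconsistent variant: same upwind mass flux, but a centered momentum flux.
rho2 = np.where(np.arange(n) < n // 2, 1000.0, 1.0)
mom2 = rho2 * a
for _ in range(50):
    f_mass = a * rho2
    f_mom = a * 0.5 * (mom2 + np.roll(mom2, -1))   # centered: NOT consistent
    rho2 = rho2 - dt / dx * (f_mass - np.roll(f_mass, 1))
    mom2 = mom2 - dt / dx * (f_mom - np.roll(f_mom, 1))
print("inconsistent max |u - a| =", np.abs(mom2 / rho2 - a).max())
```

    With the consistent fluxes the velocity stays exactly uniform to machine precision; the mismatched fluxes generate order-unity velocity errors at the density jump, which is precisely the instability the consistent transport is designed to avoid.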