The Inertial Range of Turbulence in the Inner Heliosheath and in the Local Interstellar Medium
The governing mechanisms of magnetic field annihilation in the outer heliosphere are an intriguing topic. Turbulent fluctuations are currently believed to pervade the inner heliosheath (IHS) and the Local Interstellar Medium (LISM). Turbulence, magnetic reconnection, or their reciprocal link may be responsible for magnetic energy conversion in the IHS.
 As 1-day averaged data are typically used, the present literature mainly concerns large-scale analysis and does not describe the inertial-cascade dynamics of turbulence in the IHS. Moreover, the lack of spectral analysis has left IHS dynamics critically understudied. Our group showed that 48-s MAG data from the Voyager mission are appropriate for a power spectral analysis over a frequency range of five decades, from 5e-8 Hz to 1e-2 Hz [Gallana et al., JGR 121 (2016)]. Special spectral estimation techniques are used to deal with the large fraction of missing data (70%). We provide the first clear evidence of an inertial-cascade range of turbulence (spectral index between -2 and -1.5). A spectral break at about 1e-5 Hz is found to separate the inertial range from the energy-injection range (1/f energy decay). Instrumental noise bounds our investigation to frequencies lower than 5e-4 Hz. By considering several consecutive periods after 2009 at both V1 and V2, we show that the extension and the spectral energy decay of these two regimes may be indicators of IHS regions governed by different physical processes. We describe the fluctuation regimes in terms of spectral energy density, anisotropy, compressibility, and statistical analysis of intermittency.
 In the LISM, it has been theorized that pristine interstellar turbulence may coexist with waves from the IHS; however, this is still debated. We observe that the fluctuating magnetic energy cascades as a power law with spectral index in the range [-1.35, -1.65] over the whole range of frequencies unaffected by noise. No spectral break is observed, nor decaying turbulence.
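The gap-tolerant spectral estimation described above can be illustrated with a toy calculation: synthesize a time series with a known power-law spectrum, discard roughly 70% of the samples, fill the gaps by linear interpolation, and recover the spectral index from a log-log fit. This is a minimal NumPy sketch, not the spectral techniques of Gallana et al.; record length, gap fraction, and target index are all illustrative.

```python
import numpy as np

# Toy power-law spectral-index estimation on a gappy time series.
rng = np.random.default_rng(42)
n = 2 ** 16
beta = 5.0 / 3.0                          # target spectral index

# Synthesize a signal whose power spectrum decays as f^(-beta).
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-beta / 2.0)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
signal = np.fft.irfft(amp * np.exp(1j * phases), n=n)

# Drop ~70% of the samples at random (comparable to the Voyager gap
# fraction), then fill the gaps by linear interpolation.
keep = rng.random(n) < 0.30
t = np.arange(n)
filled = np.interp(t, t[keep], signal[keep])

# Periodogram and least-squares power-law fit in log-log space, restricted
# to the low-frequency part of the cascade, away from interpolation artifacts.
psd = np.abs(np.fft.rfft(filled)) ** 2 / n
band = (freqs > 2e-5) & (freqs < 1e-3)
slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
print(f"fitted spectral index: {slope:.2f}")  # should recover roughly -beta
```

The same log-log fit, applied piecewise above and below a candidate break frequency, is the standard way to locate the spectral break separating the 1/f energy-injection range from the inertial cascade.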
Efficient kinetic Lattice Boltzmann simulation of three-dimensional Hall-MHD Turbulence
Simulating plasmas in the Hall-MagnetoHydroDynamics (Hall-MHD) regime
represents a valuable approach for the investigation of complex non-linear
dynamics developing in astrophysical frameworks and fusion machines. Taking
into account the Hall electric field is computationally very challenging, as
it involves the integration of an additional term, proportional to $\nabla
\times ((\nabla\times\mathbf{B})\times \mathbf{B})$, in Faraday's induction
law. The latter feeds back on the magnetic field at small
scales (between the ion and electron inertial scales), requiring very high
resolution in both space and time to properly describe its
dynamics. The computational advantage provided by the kinetic Lattice
\textbf{\textsc{F}}ast \textbf{\textsc{L}}attice-Boltzmann
\textbf{\textsc{A}}lgorithm for \textbf{\textsc{M}}hd
\textbf{\textsc{E}}xperiments (\textsc{flame}). The \textsc{flame} code
integrates the plasma dynamics in lattice units coupling two kinetic schemes,
one for the fluid protons (including the Lorentz force), the other to solve the
induction equation describing the evolution of the magnetic field. Here, the
newly developed algorithm is tested against an analytical wave-solution of the
dissipative Hall-MHD equations, pointing out its stability and second-order
convergence, over a wide range of the control parameters. Spectral properties
of the simulated plasma are finally compared with those obtained from numerical
solutions from the well-established pseudo-spectral code \textsc{ghost}.
Furthermore, the LB simulations we present, varying the Hall parameter,
highlight the transition from the MHD to the Hall-MHD regime, in excellent
agreement with the magnetic field spectra measured in the solar wind.
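The Hall contribution to the induction equation discussed above can be evaluated directly on a periodic grid. The sketch below computes $\nabla \times ((\nabla\times\mathbf{B})\times \mathbf{B})$ with spectral derivatives and checks it on a force-free field ($\nabla\times\mathbf{B} = \mathbf{B}$), for which the term vanishes identically. This is a NumPy illustration of the operator only, not the FLAME lattice Boltzmann scheme.

```python
import numpy as np

# Spectral evaluation of the Hall term curl((curl B) x B) on a periodic box,
# checked against a force-free field for which it vanishes identically.

n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X = np.meshgrid(x, x, x, indexing="ij")[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=2.0 * np.pi / n)   # integer wavenumbers

def ddx(f, axis):
    """Spectral derivative of a periodic scalar field along one axis."""
    shape = [1, 1, 1]
    shape[axis] = n
    return np.real(np.fft.ifft(1j * k.reshape(shape) * np.fft.fft(f, axis=axis),
                               axis=axis))

def curl(F):
    Fx, Fy, Fz = F
    return np.array([ddx(Fz, 1) - ddx(Fy, 2),
                     ddx(Fx, 2) - ddx(Fz, 0),
                     ddx(Fy, 0) - ddx(Fx, 1)])

def hall_term(B):
    J = curl(B)                                   # current (constants folded in)
    JxB = np.cross(J, B, axisa=0, axisb=0)        # shape (n, n, n, 3)
    return curl(JxB.transpose(3, 0, 1, 2))

# Force-free field: curl B = B, hence (curl B) x B = 0 and the Hall term is zero.
B = np.array([np.zeros_like(X), np.sin(X), np.cos(X)])
print(np.max(np.abs(hall_term(B))))               # machine-precision residual
```

Because the Hall term is quadratic in the wavenumber content of B, any grid-based integrator must resolve the sub-ion scales it excites, which is the resolution burden the abstract refers to.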
The Astrophysical Multipurpose Software Environment
We present the open source Astrophysical Multi-purpose Software Environment
(AMUSE, www.amusecode.org), a component library for performing astrophysical
simulations involving different physical domains and scales. It couples
existing codes within a Python framework based on a communication layer using
MPI. The interfaces are standardized for each domain and their implementation
based on MPI guarantees that the whole framework is well-suited for distributed
computation. It includes facilities for unit handling and data storage.
Currently it includes codes for gravitational dynamics, stellar evolution,
hydrodynamics and radiative transfer. Within each domain the interfaces to the
codes are as similar as possible. We describe the design and implementation of
AMUSE, as well as the main components and community codes currently supported
and we discuss the code interactions facilitated by the framework.
Additionally, we demonstrate how AMUSE can be used to resolve complex
astrophysical problems by presenting example applications.
Comment: 23 pages, 25 figures, accepted for A&
Coupled Kinetic-Fluid Simulations of Ganymede's Magnetosphere and Hybrid Parallelization of the Magnetohydrodynamics Model
The largest moon in the solar system, Ganymede, is the only moon known to possess a strong intrinsic magnetic field.
The interaction between the Jovian plasma and Ganymede's magnetic field creates a mini-magnetosphere with periodically varying upstream conditions, which creates a perfect laboratory in nature for studying magnetic reconnection and magnetospheric physics.
Using the latest version of the Space Weather Modeling Framework (SWMF), we study the upstream plasma interactions and dynamics in this subsonic, sub-Alfvénic system.
We have developed a coupled fluid-kinetic Hall Magnetohydrodynamics with embedded Particle-in-Cell (MHD-EPIC) model of Ganymede's magnetosphere, with a self-consistently coupled resistive body representing the electrical properties of the moon's interior, improved inner boundary conditions, and a high-resolution, charge- and energy-conserving PIC scheme.
I reimplemented the boundary condition setup in SWMF for more versatile control and functionalities, and developed a new user module for Ganymede's simulation.
Results from the models are validated with Galileo magnetometer data of all close encounters and compared with Plasma Subsystem (PLS) data.
The energy fluxes associated with the upstream reconnection in the model are estimated to be about 10^-7 W/cm^2, which accounts for about 40% of the total peak auroral emissions observed by the Hubble Space Telescope.
We find that under steady upstream conditions, magnetopause reconnection in our fluid-kinetic simulations occurs in a non-steady manner.
Flux ropes with lengths comparable to Ganymede's radius form on the magnetopause at a rate of about 3 per minute and create spatiotemporal variations in plasma and field properties.
Upon reaching proper grid resolutions, the MHD-EPIC model can resolve both electron and ion kinetics at the magnetopause, showing localized crescent-shaped distributions in both ion and electron phase space and non-gyrotropic, non-isotropic behavior inside the diffusion regions.
The estimated global reconnection rate from the models is about 80 kV with 60% efficiency.
There is weak evidence of minute periodicity in the temporal variations of the reconnection rate due to the dynamic reconnection process.
The requirement of high fidelity results promotes the development of hybrid parallelized numerical model strategy and faster data processing techniques.
The state-of-the-art finite volume/difference MHD code Block Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) was originally designed with pure MPI parallelization.
The maximum problem size achievable was limited by the storage requirements of the block tree structure.
To mitigate this limitation, we have added multithreaded OpenMP parallelization to the previous pure MPI implementation.
We opt for a coarse-grained approach, making the loops over grid blocks multithreaded, and have succeeded in making BATS-R-US an efficient hybrid parallel code with modest changes to the source code while preserving performance.
Good weak scaling up to 500,000 and 250,000 cores is achieved for the explicit and implicit time stepping schemes, respectively.
This parallelization strategy greatly extends the possible simulation scale by an order of magnitude, and paves the way for future GPU-portable code development.
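The coarse-grained strategy of threading the loop over grid blocks works because each block update touches only its own data. The following is a minimal Python stand-in for that pattern (illustrative only; BATS-R-US itself is Fortran with MPI+OpenMP, and the block update here is a toy diffusion step):

```python
import concurrent.futures as cf
import numpy as np

# Coarse-grained parallelism over grid blocks: each task advances one whole
# block, mirroring OpenMP threading of the block loop.

def advance_block(block, dt=0.1):
    """Toy per-block update: one explicit diffusion step on the block's cells."""
    out = block.copy()
    out[1:-1] += dt * (block[2:] - 2.0 * block[1:-1] + block[:-2])
    return out

blocks = [np.random.default_rng(i).random(64) for i in range(16)]

# One task per block; no shared mutable state between tasks.
with cf.ThreadPoolExecutor(max_workers=4) as pool:
    new_blocks = list(pool.map(advance_block, blocks))

# Because blocks are independent, the parallel result is bit-identical
# to a serial loop over the same blocks.
assert all(np.array_equal(p, advance_block(b)) for p, b in zip(new_blocks, blocks))
print("parallel and serial block sweeps agree")
```

The memory saving in the real code comes from sharing the block tree across threads instead of replicating it per MPI rank; the sketch only shows why the block loop is a safe parallel unit.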
To improve visualization and data processing, I have developed a whole new data processing workflow with the Julia programming language for efficient data analysis and visualization.
As a summary:
1. I built a single-fluid Hall MHD-EPIC model of Ganymede's magnetosphere;
2. I performed a detailed analysis of the upstream reconnection;
3. I developed an MPI+OpenMP parallel MHD model with BATS-R-US;
4. I wrote a package for data analysis and visualization.
PhD, Climate and Space Sciences and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163032/1/hyzhou_1.pd
A GPU-Accelerated Modern Fortran Version of the ECHO Code for Relativistic Magnetohydrodynamics
The numerical study of relativistic magnetohydrodynamics (MHD) plays a
crucial role in high-energy astrophysics, but unfortunately is computationally
demanding, given the complex physics involved (high Lorentz factor flows,
extreme magnetization, curved spacetimes near compact objects) and the large
variety of spatial scales needed to resolve turbulent motions. A great benefit
comes from the porting of existing codes running on standard processors to
GPU-based platforms. However, this usually requires a drastic rewriting of the
original code, the use of specific languages like CUDA, and a complex analysis
of data management and optimization of parallel processes. Here we describe the
porting of the ECHO code for special and general relativistic MHD to
accelerated devices, simply based on native Fortran language built-in
constructs, especially 'do concurrent' loops, few OpenACC directives, and the
straightforward data management provided by the Unified Memory option of NVIDIA
compilers. Thanks to these very minor modifications to the original code, the
new version of ECHO runs at least 16 times faster on GPU platforms compared to
CPU-based ones. The chosen benchmark is the 3D propagation of a relativistic
MHD Alfv\'en wave, for which strong and weak scaling tests performed on the
LEONARDO pre-exascale supercomputer at CINECA are provided (using up to 256
nodes corresponding to 1024 GPUs, and over 14 billion cells). Finally, an
example of high-resolution relativistic MHD Alfv\'enic turbulence simulation is
shown, demonstrating the potential for astrophysical plasmas of the new
GPU-based version of ECHO.
Comment: Accepted for publication in Fluids, MDPI, 17 page
HYPERS simulations of solar wind interactions with the Earth's magnetosphere and the Moon
Hybrid simulations, where the ions are treated kinetically and the electrons as a fluid, seek to describe ion microphysics with maximum physical fidelity. The hybrid approach addresses the fundamental need for space plasma models to incorporate physics beyond magnetohydrodynamics. Global hybrid simulations must account for a wide range of both kinetic ion and whistler/Alfvén wave spatio-temporal scales in strongly inhomogeneous plasmas. We present results from two three-dimensional hybrid simulations performed with a novel asynchronous code, HYPERS, designed to overcome computational bottlenecks that typically arise in such multiscale simulations. First, we demonstrate an excellent match between simulated lunar wake profiles and observations. We also compare our simulations with two other simulations performed with conventional (time-stepped) hybrid codes. Second, we investigate the interaction of the solar wind with the Earth's dayside magnetosphere under conditions when the orientation of the interplanetary magnetic field is quasi-radial. In this high-resolution simulation we highlight three-dimensional properties of foreshock perturbations formed by the backstreaming ions.
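The kinetic-ion ingredient of hybrid codes is a particle push for the ions; the standard textbook choice is the Boris scheme, which splits the electric kick around an exact-norm magnetic rotation. The sketch below is a generic illustration of that scheme, not the HYPERS asynchronous integrator.

```python
import numpy as np

# Boris pusher: the standard kinetic-ion velocity update in hybrid/PIC codes.

def boris_push(x, v, E, B, dt, qm=1.0):
    """Advance one particle by dt with charge-to-mass ratio qm."""
    v_minus = v + 0.5 * qm * dt * E               # half electric kick
    t = 0.5 * qm * dt * B                         # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)       # norm-preserving rotation
    v_new = v_plus + 0.5 * qm * dt * E            # second half kick
    return x + dt * v_new, v_new

# Gyration test: uniform B, no E. The rotation conserves speed exactly.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, dt=0.05)
print(np.linalg.norm(v))                          # stays 1.0 to round-off
```

In a hybrid code this push is coupled to a fluid electron closure that supplies E from the ion moments and B; resolving both the ion gyration above and the much faster whistler dynamics is what drives the multiscale time-stepping problem the abstract describes.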
Simulating the Common Envelope Phase Using Moving-Mesh Hydrodynamics
Common envelope evolution (CEE) is a phase in the evolution of a binary system where a giant star and a smaller companion share a gaseous envelope, and is responsible for the formation of many systems of astrophysical interest. Despite its importance, CEE is not well understood due to the diverse physics involved. Astronomers have roughly modeled CEE using conserved quantities such as energy, but progress has been limited by uncertainties in the contributions of various energy sources. Thus, 3-D numerical simulations must be brought to bear. Here two methodologies are commonly employed, each of which comes with its own set of advantages: smoothed-particle hydrodynamics and Eulerian grid codes. A hybrid of these methods known as the moving-mesh code has been developed in an attempt to capture the best characteristics of each. We use the moving-mesh solver MANGA, which has recently been improved with the inclusion of physics modules relevant to CEE.
We begin this work with an introduction to CEE in Chapter 1. We go through a step-by-step description of its four stages and summarize observations of transients that are thought to result from binary interactions. We then present an overview of simulation techniques in Chapter 2, showing how aspects of smoothed-particle hydrodynamics and Eulerian methods are implemented into moving-mesh schemes. We begin our numerical studies of CEE using MANGA in Chapter 3 and show that the ejection of the envelope is aided by the inclusion of hydrogen recombination and tidal forces.
CEE simulations to date have neglected hydrodynamic interactions at the surface of the companion. As such, we discuss our development of moving boundary conditions in Chapter 4 and show how they can be used to model the companion object. We show that the orbital eccentricity is affected by the size of the companion through hydrodynamic torques. Finally, we describe our implementation of magnetohydrodynamics in Chapter 5. We find rapid amplification of a toroidal magnetic field at the onset of CEE, which is thought to assist in the formation of nebulae.
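The energy-budget modeling of CEE mentioned above is usually the alpha-lambda formalism: a fraction alpha of the released orbital energy goes into unbinding the envelope, whose binding energy is parameterized by lambda. A sketch with illustrative numbers follows (the masses, radii, and the alpha and lambda defaults are assumptions for the example, not values from this thesis):

```python
G = 6.674e-8                      # gravitational constant, cgs
Msun, Rsun = 1.989e33, 6.957e10   # solar mass [g] and radius [cm]

def final_separation(M_donor, M_core, M_comp, R_donor, a_i, alpha=1.0, lam=0.5):
    """Post-CEE orbital separation from the alpha-lambda energy budget.

    alpha: fraction of released orbital energy used to unbind the envelope.
    lam:   envelope structure parameter (both defaults are illustrative).
    """
    M_env = M_donor - M_core
    E_bind = G * M_donor * M_env / (lam * R_donor)
    # Energy budget: alpha * (E_orb,i - E_orb,f) = E_bind,
    # with E_orb = -G m1 m2 / (2 a), so
    # G M_core M_comp / (2 a_f) = E_bind/alpha + G M_donor M_comp / (2 a_i).
    inv_af = 2.0 * (E_bind / alpha + G * M_donor * M_comp / (2.0 * a_i)) \
             / (G * M_core * M_comp)
    return 1.0 / inv_af

# Example: a 2 Msun giant (0.5 Msun core, 100 Rsun) spirals in with a
# 1 Msun companion from an initial separation of 200 Rsun.
a_f = final_separation(2 * Msun, 0.5 * Msun, 1 * Msun, 100 * Rsun, 200 * Rsun)
print(f"final separation: {a_f / Rsun:.1f} Rsun")  # a few Rsun: drastic shrinkage
```

The uncertainty in alpha and lambda is exactly the limitation the abstract points to, and the reason 3-D simulations with recombination, tides, and MHD are brought to bear.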
Gaussian Process Modeling for Upsampling Algorithms With Applications in Computer Vision and Computational Fluid Dynamics
Across a variety of fields, interpolation algorithms have been used to upsample low-resolution or coarse data fields. In this work, novel Gaussian Process based methods are employed to solve a variety of upsampling problems. Specifically, three applications are explored: coarse data prolongation in Adaptive Mesh Refinement (AMR) in the field of Computational Fluid Dynamics, accurate document image upsampling to enhance Optical Character Recognition (OCR) accuracy, and fast and accurate Single Image Super Resolution (SISR). For AMR, a new, efficient, and “3rd order accurate” algorithm called GP-AMR is presented. Next, a novel, non-zero mean, windowed GP model is generated to upsample low-resolution document images to achieve higher OCR accuracy when compared to the industry standard. Finally, a hybrid GP convolutional neural network algorithm is used to generate a computationally efficient and high-quality SISR model.
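The core of all three GP-based upsamplers is Gaussian-process regression: condition a GP prior on the coarse samples and evaluate the posterior mean on the fine grid. A minimal 1-D sketch with a squared-exponential kernel follows; the kernel choice, length scale, and test function are illustrative, whereas the thesis uses more elaborate windowed and non-zero-mean models.

```python
import numpy as np

# 1-D GP-regression upsampling: condition on coarse samples, evaluate the
# posterior mean on a fine grid (squared-exponential kernel, zero prior mean).

def se_kernel(a, b, ell=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

x_coarse = np.linspace(0.0, 2.0 * np.pi, 9)      # low-resolution samples
y_coarse = np.sin(x_coarse)

x_fine = np.linspace(0.0, 2.0 * np.pi, 65)       # 8x upsampled grid
K = se_kernel(x_coarse, x_coarse) + 1e-10 * np.eye(x_coarse.size)  # jitter
y_fine = se_kernel(x_fine, x_coarse) @ np.linalg.solve(K, y_coarse)

err = np.max(np.abs(y_fine - np.sin(x_fine)))
print(f"max upsampling error: {err:.2e}")
```

In the AMR prolongation setting the same posterior-mean formula is applied over small local windows of coarse cells, which keeps the linear solve tiny and the cost per fine cell constant.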