An Application of Gaussian Process Modeling for High-order Accurate Adaptive Mesh Refinement Prolongation
We present a new polynomial-free prolongation scheme for Adaptive Mesh
Refinement (AMR) simulations of compressible and incompressible flows in
computational fluid dynamics. The new method is constructed using a multi-dimensional
kernel-based Gaussian Process (GP) prolongation model. The formulation for this
scheme was inspired by the GP methods introduced by A. Reyes et al. (A New
Class of High-Order Methods for Fluid Dynamics Simulation using Gaussian
Process Modeling, Journal of Scientific Computing, 76 (2017), 443-480; A
variable high-order shock-capturing finite difference method with GP-WENO,
Journal of Computational Physics, 381 (2019), 189-217). In this paper, we
extend the previous GP interpolations and reconstructions to a new GP-based AMR
prolongation method that delivers a high-order accurate prolongation of data
from coarse to fine grids on AMR grid hierarchies. In compressible flow
simulations, special care is necessary to handle shocks and discontinuities in a
stable manner. To this end, we utilize the shock-handling strategy based on the
GP-based smoothness indicators developed in the previous GP work by A. Reyes et
al. We demonstrate the efficacy of the GP-AMR method in a series of test-suite
problems using the AMReX library, in which the GP-AMR method has been
implemented.
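The core of such a GP prolongation is the posterior-mean interpolation f* = k*ᵀ K⁻¹ y built from a covariance kernel over the coarse samples. A minimal sketch only (pointwise values with a squared-exponential kernel and an illustrative length scale `ell`, rather than the cell-averaged, integrated kernels of the actual GP-AMR scheme):

```python
import numpy as np

def se_kernel(x1, x2, ell):
    # Squared-exponential covariance between two point sets.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_prolong(xc, yc, xf, ell, jitter=1e-10):
    # GP posterior mean at fine-grid points xf given coarse data (xc, yc).
    K = se_kernel(xc, xc, ell) + jitter * np.eye(len(xc))
    return se_kernel(xf, xc, ell) @ np.linalg.solve(K, yc)

# Prolong coarse samples of a smooth profile to a twice-finer grid.
xc = np.linspace(0.0, 1.0, 9)
yc = np.sin(2 * np.pi * xc)
xf = np.linspace(0.0, 1.0, 17)
yf = gp_prolong(xc, yc, xf, ell=0.25)
print(np.max(np.abs(yf - np.sin(2 * np.pi * xf))))  # small interpolation error
```

In GP-AMR itself the kernel is integrated over coarse and fine cell volumes so that cell averages, not point values, are prolonged, and the GP-based smoothness indicators modify the model near discontinuities.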
Numerical Relativity: A review
Computer simulations are enabling researchers to investigate systems which
are extremely difficult to handle analytically. In the particular case of
General Relativity, numerical models have proved extremely valuable for
investigations of strong-field scenarios and have been crucial in revealing unexpected
phenomena. Considerable effort is being spent to simulate astrophysically
relevant scenarios, understand different aspects of the theory and even
provide insights in the search for a quantum theory of gravity. In this
article I review the current status of the field of Numerical Relativity,
describe the techniques most commonly used and discuss open problems and (some)
future prospects.
Comment: 2 references added; 1 corrected. 67 pages. To appear in Classical and Quantum Gravity. (Uses iopart.cls.)
A multiresolution space-time adaptive scheme for the bidomain model in electrocardiology
This work deals with the numerical solution of the monodomain and bidomain
models of electrical activity of myocardial tissue. The bidomain model is a
system consisting of a possibly degenerate parabolic PDE coupled with an
elliptic PDE for the transmembrane and extracellular potentials, respectively.
This system of two scalar PDEs is supplemented by a time-dependent ODE modeling
the evolution of the so-called gating variable. In the simpler sub-case of the
monodomain model, the elliptic PDE reduces to an algebraic equation. Two simple
models for the membrane and ionic currents are considered, the
Mitchell-Schaeffer model and the simpler FitzHugh-Nagumo model. Since typical
solutions of the bidomain and monodomain models exhibit wavefronts with steep
gradients, we propose a finite volume scheme enriched by a fully adaptive
multiresolution method, whose basic purpose is to concentrate computational
effort on zones of strong variation of the solution. Time adaptivity is
achieved by two alternative devices, namely locally varying time stepping and a
Runge-Kutta-Fehlberg-type adaptive time integration. A series of numerical
examples demonstrates that these methods are efficient and sufficiently accurate
to simulate the electrical activity in myocardial tissue with affordable
effort. In addition, an optimal threshold for discarding non-significant
information in the multiresolution representation of the solution is derived,
and the numerical efficiency and accuracy of the method are measured in terms of
CPU time speed-up, memory compression, and errors in different norms.
Comment: 25 pages, 41 figures.
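The thresholding idea behind such multiresolution schemes can be illustrated with a Haar decomposition: detail coefficients below a tolerance `eps` are discarded, so storage concentrates near steep fronts. A minimal sketch (not the paper's scheme; the Haar wavelet and the value of `eps` are illustrative stand-ins):

```python
import numpy as np

def haar_decompose(u):
    # One multiresolution level: coarse cell averages and detail coefficients.
    coarse = 0.5 * (u[0::2] + u[1::2])
    detail = 0.5 * (u[0::2] - u[1::2])
    return coarse, detail

def compress(u, eps, levels):
    # Threshold details level by level; count significant coefficients kept.
    kept = 0
    for _ in range(levels):
        u, d = haar_decompose(u)
        kept += int(np.sum(np.abs(d) >= eps))
    return kept, u

# A steep front: details are negligible away from the wavefront.
x = np.linspace(0.0, 1.0, 1024)
u = np.tanh(200 * (x - 0.5))
kept, coarse = compress(u, eps=1e-3, levels=5)
print(kept, u.size)  # far fewer significant details than samples
```

Harten-style analysis relates such a discarding threshold to the discretization error, which is the basis of the optimal threshold derived in the paper.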
Task-based adaptive multiresolution for time-space multi-scale reaction-diffusion systems on multi-core architectures
A new solver featuring time-space adaptation and error control has been
recently introduced to tackle the numerical solution of stiff
reaction-diffusion systems. Based on operator splitting, finite volume adaptive
multiresolution and high order time integrators with specific stability
properties for each operator, this strategy yields high computational
efficiency for large multidimensional computations on standard architectures
such as powerful workstations. However, the data structure of the original
implementation, based on trees of pointers, provides limited opportunities for
efficiency enhancements, while posing serious challenges in terms of parallel
programming and load balancing. The present contribution proposes a new
implementation of the whole set of numerical methods including Radau5 and
ROCK4, relying on a fully different data structure together with the use of a
specific library, TBB, for shared-memory, task-based parallelism with
work-stealing. The performance of our implementation is assessed in a series of
test-cases of increasing difficulty in two and three dimensions on multi-core
and many-core architectures, demonstrating high scalability.
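The task-based pattern described above (decompose the mesh into blocks and hand each block's update to a scheduler) can be sketched with Python's thread pool standing in for TBB's work-stealing runtime; the block decomposition and the explicit diffusion update are illustrative, not the paper's operator-splitting solver:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def diffuse_block(u, lo, hi, nu=0.4):
    # One explicit diffusion step on cells [lo, hi); neighbours read from u.
    left, right = u[lo - 1:hi - 1], u[lo + 1:hi + 1]
    return u[lo:hi] + nu * (left - 2 * u[lo:hi] + right)

def diffuse_step(u, n_blocks=4):
    # Split the interior into independent tasks; the pool schedules them
    # across worker threads (a stand-in for TBB's work-stealing scheduler).
    n = len(u)
    edges = np.linspace(1, n - 1, n_blocks + 1).astype(int)
    out = u.copy()
    with ThreadPoolExecutor() as pool:
        futs = [(lo, hi, pool.submit(diffuse_block, u, lo, hi))
                for lo, hi in zip(edges[:-1], edges[1:])]
        for lo, hi, f in futs:
            out[lo:hi] = f.result()
    return out

u = np.zeros(64)
u[32] = 1.0
for _ in range(10):
    u = diffuse_step(u)
print(u.sum())  # total mass is conserved across block boundaries
```

Because each task reads only the old array and writes a disjoint slice of the new one, no locking is needed; the same property is what makes the tree-free data structure in the paper amenable to work-stealing.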
Gaussian Process Modeling for Upsampling Algorithms With Applications in Computer Vision and Computational Fluid Dynamics
Across a variety of fields, interpolation algorithms have been used to upsample low-resolution or coarse data fields. In this work, novel Gaussian Process based methods are employed to solve a variety of upsampling problems. Specifically, three applications are explored: coarse data prolongation in Adaptive Mesh Refinement (AMR) in the field of Computational Fluid Dynamics, accurate document image upsampling to enhance Optical Character Recognition (OCR) accuracy, and fast and accurate Single Image Super Resolution (SISR). For AMR, a new, efficient, and “3rd order accurate” algorithm called GP-AMR is presented. Next, a novel, non-zero mean, windowed GP model is generated to upsample low-resolution document images to achieve higher OCR accuracy than the industry standard. Finally, a hybrid GP convolutional neural network algorithm is used to generate a computationally efficient and high-quality SISR model.
Scalable parallel simulation of variably saturated flow
In this thesis we develop highly accurate simulation tools for variably saturated flow through porous media, able to take advantage of the latest supercomputing resources. Hence, we aim for parallel scalability to very large compute resources of over 10^5 CPU cores. Our starting point is the parallel subsurface flow simulator ParFlow. This library is in widespread use in the hydrology community and known to have excellent parallel scalability up to 16k processes. We first investigate the numerical tools this library implements in order to perform the simulations it was designed for. ParFlow solves the governing equation for subsurface flow with a cell-centered finite difference (FD) method. The code targets high performance computing (HPC) systems by means of distributed memory parallelism. We propose to reorganize ParFlow's mesh subsystem by using fast partitioning algorithms provided by the parallel adaptive mesh refinement (AMR) library p4est. We realize this in a minimally invasive manner by modifying selected parts of the code to reinterpret the existing mesh data structures. Furthermore, we evaluate the scaling performance of the modified version of ParFlow, demonstrating excellent weak and strong scaling up to 458k cores of the JUQUEEN supercomputer at the Jülich Supercomputing Centre. The above-mentioned results were obtained for uniform meshes and hence without explicitly exploiting the AMR capabilities of the p4est library. A natural extension of our work is to activate such functionality and make ParFlow a true AMR application. Enabling ParFlow to use AMR is challenging for several reasons: it may be based on assumptions on the parallel partition that cannot be maintained with AMR, it may use mesh-related metadata that is replicated on all CPUs, and it may assume uniform meshes in the construction of mathematical operators. Additionally, the use of locally refined meshes will certainly change the spectral properties of these operators.
In this work, we develop an algorithmic approach to activate the usage of locally refined grids in ParFlow. AMR allows meshes where elements of different size neighbor each other. In this case, ParFlow may incur erroneous results when it attempts to communicate data across inter-element boundaries. We propose and discuss two solutions to this issue operating at two different levels: the first manipulates the indices of the degrees of freedom, while the second operates directly on the degrees of freedom. Both approaches aim to introduce minimal changes to the original ParFlow code. In an AMR framework, the FD method used by ParFlow requires modifications to correctly deal with elements of different size. Mixed finite element (MFE) methods, on the other hand, are better suited to AMR. It is known that the cell-centered FD method used in ParFlow may be reinterpreted as an MFE discretization using Raviart-Thomas elements of lowest order. We conclude this thesis by presenting a block preconditioner for saddle point problems arising from an MFE discretization on locally refined meshes. We evaluate its robustness with respect to various classes of coefficients for uniform and locally refined meshes.
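The block-preconditioner idea mentioned at the end can be sketched on a generic saddle-point system: preconditioning MINRES with diag(A, S), where S is the Schur complement, makes the iteration converge in a handful of steps. The matrices below are random stand-ins, not a mixed discretization of the flow problem:

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(0)
n, m = 20, 8
A = np.diag(rng.uniform(1.0, 3.0, n))            # SPD velocity block
B = rng.standard_normal((m, n))                  # divergence-like constraint
K = np.block([[A, B.T], [B, np.zeros((m, m))]])  # indefinite saddle-point matrix

S = B @ np.linalg.solve(A, B.T)                  # Schur complement

def apply_prec(v):
    # Block-diagonal preconditioner diag(A, S): with the exact Schur
    # complement, the preconditioned operator has three eigenvalue
    # clusters, so MINRES converges in very few iterations.
    return np.concatenate([np.linalg.solve(A, v[:n]),
                           np.linalg.solve(S, v[n:])])

b = rng.standard_normal(n + m)
M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, b, M=M)
print(np.linalg.norm(K @ x - b))  # small residual
```

In practice S is never formed exactly; the robustness study in the thesis concerns how well cheap approximations of A and S behave across coefficient classes and local refinement.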
Accurate macroscale modelling of spatial dynamics in multiple dimensions
Developments in dynamical systems theory provide new support for the
macroscale modelling of PDEs and other microscale systems such as Lattice
Boltzmann, Monte Carlo or Molecular Dynamics simulators. By systematically
resolving subgrid microscale dynamics the dynamical systems approach constructs
accurate closures of macroscale discretisations of the microscale system. Here
we specifically explore reaction-diffusion problems in two spatial dimensions
as a prototype of generic systems in multiple dimensions. Our approach unifies
into one framework the modelling of systems by a type of finite elements, and
the `equation-free' macroscale modelling of microscale simulators efficiently
executing only on small patches of the spatial domain. Centre manifold theory
ensures that a closed model exists on the macroscale grid, is emergent, and is
systematically approximated. Dividing space either into overlapping finite
elements or into spatially separated small patches, the specially crafted
inter-element/patch coupling also ensures that the constructed discretisations
are consistent with the microscale system/PDE to as high an order as desired.
Computer algebra handles the considerable algebraic details as seen in the
specific application to the Ginzburg--Landau PDE. However, higher order models
in multiple dimensions require a mixed numerical and algebraic approach that is
also developed. The modelling here may be straightforwardly adapted to a wide
class of reaction-diffusion PDEs and lattice equations in multiple space
dimensions. When applied to patches of microscopic simulations our coupling
conditions promise efficient macroscale simulation.
Comment: Some figures with 3D interaction when viewed in Acrobat Reader. arXiv admin note: substantial text overlap with arXiv:0904.085.
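The patch coupling described above can be sketched for the diffusion PDE u_t = u_xx: each small patch is advanced by the microscale scheme, and its edge values are set by quadratic interpolation of neighbouring patch-centre values, which makes the patch centres track the macroscale dynamics. All grid parameters below are illustrative:

```python
import numpy as np

n_patch, n_micro = 16, 7          # patches on a periodic domain; micro points each
H = 1.0 / n_patch                 # macro patch spacing
h = H / 20                        # micro grid spacing (patches cover a small fraction)
mid = n_micro // 2
centres = H * np.arange(n_patch)
x_micro = (np.arange(n_micro) - mid) * h
u = np.sin(2 * np.pi * (centres[:, None] + x_micro[None, :]))

def micro_step(u, dt):
    un = u.copy()
    # Interior micro points advance under the microscale PDE u_t = u_xx.
    un[:, 1:-1] = u[:, 1:-1] + dt / h**2 * (u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2])
    # Patch-edge values from quadratic interpolation of neighbouring
    # patch-centre values: the inter-patch coupling condition.
    Uc = un[:, mid]
    Up, Um = np.roll(Uc, -1), np.roll(Uc, 1)
    dU = (Up - Um) / (2 * H)
    d2U = (Up - 2 * Uc + Um) / H**2
    for k, xe in ((0, x_micro[0]), (n_micro - 1, x_micro[-1])):
        un[:, k] = Uc + xe * dU + 0.5 * xe**2 * d2U
    return un

dt = 0.2 * h * h
for _ in range(400):
    u = micro_step(u, dt)

# Patch centres should track the exact macroscale solution exp(-4 pi^2 t) sin(2 pi x).
t = 400 * dt
exact = np.exp(-4 * np.pi**2 * t) * np.sin(2 * np.pi * centres)
err = np.max(np.abs(u[:, mid] - exact))
print(err)
```

With this quadratic coupling the patch centres obey the standard second-difference discretisation of diffusion; the paper's craft lies in choosing the coupling so that consistency holds to as high an order as desired.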
A novel numerical framework for simulation of multiscale spatio-temporally non-linear systems in additive manufacturing processes.
New computationally efficient numerical techniques have been formulated for multi-scale analysis in order to bridge mesoscopic and macroscopic scales of thermal and mechanical responses of a material. These numerical techniques will reduce the computational effort required to simulate metal-based Additive Manufacturing (AM) processes. Considering the availability of physics-based constitutive models for response at mesoscopic scales, these techniques will help in the evaluation of the thermal response and mechanical properties during layer-by-layer processing in AM. Two classes of numerical techniques have been explored. The first class of numerical techniques has been developed for evaluating the periodic spatiotemporal thermal response involving multiple time and spatial scales at the continuum level. The second class of numerical techniques is targeted at modeling multi-scale multi-energy dissipative phenomena during the solid state Ultrasonic Consolidation process. This includes bridging the mesoscopic response of a crystal plasticity finite element framework at inter- and intragranular scales and a point at the macroscopic scale. This response has been used to develop an energy dissipative constitutive model for a multi-surface interface at the macroscopic scale. An adaptive dynamic meshing strategy, as part of the first class of numerical techniques, has been developed which reduces computational cost by efficient node-element renumbering and assembly of stiffness matrices. This strategy has been able to reduce the computational cost for solving thermal simulation of the Selective Laser Melting process by ~100 times. This method is not limited to SLM processes and can be extended to any other fusion-based additive manufacturing process and, more generally, to any moving energy source finite element problem. Novel FEM-based beam theories have been formulated which are more general than traditional beam theories for solid deformation.
These theories are the first to simulate thermal problems within a solid-beam analysis approach and are capable of simulating beams of general cross-section, with the ability to match results of a complete three-dimensional analysis. In addition, a traditional Cholesky decomposition algorithm has been modified to reduce the computational cost of solving the simultaneous equations involved in FEM simulations. Solid state processes have been simulated with crystal plasticity based nonlinear finite element algorithms. This algorithm has been further sped up by the introduction of an interfacial contact constitutive model formulation. This framework has been supported by a novel methodology to solve contact problems without the additional computational overhead of incorporating constraint equations, avoiding the use of penalty springs.
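One standard way a Cholesky factorization cuts FEM solve costs (a generic illustration, not the modified algorithm of the thesis) is to factor the SPD stiffness matrix once and reuse the factor across many right-hand sides, e.g. successive load or time steps:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(1)
n = 50
G = rng.standard_normal((n, n))
K = G @ G.T + n * np.eye(n)     # stand-in SPD "stiffness" matrix

c, low = cho_factor(K)          # O(n^3) factorization, performed once
for step in range(3):           # each subsequent solve is only O(n^2)
    b = rng.standard_normal(n)  # stand-in load vector for this step
    x = cho_solve((c, low), b)
    print(step, np.linalg.norm(K @ x - b))  # residual near machine precision
```

Sparse FEM stiffness matrices allow further savings via fill-reducing orderings, which is the kind of structural exploitation a modified Cholesky scheme targets.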