On Validating an Astrophysical Simulation Code
We present a case study of validating an astrophysical simulation code. Our
study focuses on validating FLASH, a parallel, adaptive-mesh hydrodynamics code
for studying the compressible, reactive flows found in many astrophysical
environments. We describe the astrophysics problems of interest and the
challenges associated with simulating these problems. We describe methodology
and discuss solutions to difficulties encountered in verification and
validation. We describe verification tests regularly administered to the code,
present the results of new verification tests, and outline a method for testing
general equations of state. We present the results of two validation tests in
which we compared simulations to experimental data. The first is of a
laser-driven shock propagating through a multi-layer target, a configuration
subject to both Rayleigh-Taylor and Richtmyer-Meshkov instabilities. The second
test is a classic Rayleigh-Taylor instability, where a heavy fluid is supported
against the force of gravity by a light fluid. Our simulations of the
multi-layer target experiments showed good agreement with the experimental
results, but our simulations of the Rayleigh-Taylor instability did not agree
well with the experimental results. We discuss our findings and present results
of additional simulations undertaken to further investigate the Rayleigh-Taylor
instability.

Comment: 76 pages, 26 figures (3 color); accepted for publication in the ApJ.
Equilibrium configurations of two charged masses in General Relativity
An asymptotically flat static solution of Einstein-Maxwell equations which
describes the field of two non-extreme Reissner-Nordström sources in
equilibrium is presented. It is expressed in terms of physical parameters of
the sources (their masses, charges and separating distance). Very simple
analytical forms were found for the solution as well as for the equilibrium
condition which guarantees the absence of any struts on the symmetry axis. This
condition shows that the equilibrium is not possible for two black holes or for
two naked singularities. However, in the case when one of the sources is a
black hole and another one is a naked singularity, the equilibrium is possible
at some distance separating the sources. It is interesting that for
appropriately chosen parameters even a Schwarzschild black hole together with a
naked singularity can be "suspended" freely in the superposition of their
fields.

Comment: 4 pages; accepted for publication in Phys. Rev.
Sensitivity and uncertainty quantification techniques applied to systems of conservation laws
Uncertainty quantification techniques are increasingly important in the interpretation of data and numerical simulations. Such
techniques are typically employed either on data with poorly characterized underlying dynamics or on values from highly
idealized model evaluations. We examine the application of these techniques to an intermediate case, in which data are generated
from coupled, nonlinear partial differential equations (conservation laws) that admit discontinuous solutions. The values we
analyze are generated from the numerical solution of the PDEs, in which we systematically vary both (i) fundamental modeling
parameters and (ii) the underlying numerical algorithms. We perform a number of sensitivity tests to assess the relative importance of these different types of uncertainty, draw preliminary conclusions, and speculate on the implications for more complex simulations.
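The kind of study described above can be illustrated with a hypothetical sketch (not the authors' actual code or problem setup): a first-order Godunov solve of the inviscid Burgers equation, where the left Riemann state plays the role of a fundamental modeling parameter and the shock position at a fixed time is the output whose sensitivity is examined.

```python
import numpy as np

def shock_position(uL, uR=0.0, nx=400, T=0.5):
    """First-order Godunov solve of the inviscid Burgers equation
    u_t + (u^2/2)_x = 0 on [0, 1], with Riemann data jumping at
    x = 0.25; returns the shock position at time T."""
    dx = 1.0 / nx
    x = (np.arange(nx) + 0.5) * dx
    u = np.where(x < 0.25, uL, uR)
    t = 0.0
    while t < T:
        dt = min(0.9 * dx / max(np.abs(u).max(), 1e-12), T - t)
        ul, ur = u, np.roll(u, -1)            # states left/right of each interface
        s = 0.5 * (ul + ur)                   # Rankine-Hugoniot shock speed
        flux = np.where(s >= 0, 0.5 * ul**2, 0.5 * ur**2)
        flux = np.where((ul < 0) & (ur > 0), 0.0, flux)  # transonic rarefaction
        inflow = np.roll(flux, 1)
        inflow[0] = 0.5 * uL**2               # constant inflow at the left boundary
        u = u - dt / dx * (flux - inflow)
        t += dt
    return x[np.argmin(np.diff(u))]           # steepest drop marks the shock

# Finite-difference sensitivity of the output to the modeling parameter uL:
# the exact shock speed is (uL + uR)/2, so d(position)/d(uL) = T/2 = 0.25.
dpos_duL = (shock_position(1.2) - shock_position(1.0)) / 0.2
```

Varying `nx` or swapping the numerical flux would play the role of perturbing the underlying numerical algorithm, the second source of uncertainty the abstract distinguishes.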
Sensitivity analysis techniques applied to a system of hyperbolic conservation laws
Sensitivity analysis comprises techniques to quantify the effects of input variables on a set of outputs. In particular, sensitivity indices can be used to infer which input parameters most significantly affect the results of a computational model. With continually increasing computing power, sensitivity analysis has become an important technique for understanding the behavior of large-scale computer simulations. Many sensitivity analysis methods rely on sampling from distributions of the inputs. Such sampling-based methods can be computationally expensive, requiring many evaluations of the simulation; in this case, the Sobol' method provides an easy and accurate way to compute variance-based measures, provided a sufficient number of model evaluations are available. As an alternative, meta-modeling approaches have been devised to approximate the response surface and estimate various measures of sensitivity. In this work, we consider a variety of sensitivity analysis methods, including different sampling strategies, different meta-models, and different ways of evaluating variance-based sensitivity indices. The problem we consider is the 1-D Riemann problem. By a careful choice of inputs, discontinuous solutions are obtained, leading to discontinuous response surfaces; such surfaces can be particularly problematic for meta-modeling approaches. The goal of this study is to compare the estimated sensitivity indices with exact values and to evaluate the convergence of these estimates with increasing sample sizes and under an increasing number of meta-model evaluations.
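The variance-based Sobol' indices mentioned above can be sketched on a toy discontinuous response, a hypothetical stand-in for the Riemann-problem outputs rather than the paper's actual model. The step function below has known first-order indices (S1 = 0.75, S2 = 0.25), so the sampling estimate can be checked against exact values, mirroring the goal of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Discontinuous response: a step in x1 plus a linear term in x2.
    # Var = 0.25 + 1/12, so the exact first-order Sobol' indices are
    # S1 = 0.25 / (1/3) = 0.75 and S2 = (1/12) / (1/3) = 0.25.
    return (x[:, 0] > 0.5).astype(float) + x[:, 1]

N, d = 200_000, 2
A = rng.uniform(size=(N, d))          # two independent sample matrices
B = rng.uniform(size=(N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # A with column i taken from B
    fABi = model(ABi)
    # Saltelli-style first-order estimator: V_i ~ E[f_B (f_ABi - f_A)]
    S[i] = np.mean(fB * (fABi - fA)) / var
```

Even with the discontinuity in `x1`, the sampling-based estimate converges; it is meta-models fit to such a step-like response surface that tend to struggle, which is the regime the abstract highlights.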
Enhanced verification test suite for physics simulation codes
This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.
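A typical verification exercise of the kind described compares computed solutions against an exact continuum solution and checks the observed order of accuracy. A minimal illustrative sketch, assuming a simple linear advection problem and a first-order upwind scheme (not one of the actual suite problems):

```python
import numpy as np

def advect_error(nx, T=0.5, a=1.0):
    """First-order upwind solve of u_t + a u_x = 0 on a periodic
    unit domain with smooth initial data; returns the L1 error
    against the exact translated solution at time T."""
    dx = 1.0 / nx
    x = (np.arange(nx) + 0.5) * dx
    u = np.sin(2 * np.pi * x)
    dt = 0.5 * dx / a                   # CFL number 0.5
    steps = int(round(T / dt))
    dt = T / steps                      # land exactly on T
    for _ in range(steps):
        u = u - a * dt / dx * (u - np.roll(u, 1))
    exact = np.sin(2 * np.pi * (x - a * T))
    return dx * np.abs(u - exact).sum()

# Observed order of accuracy from two grid levels: errors should
# halve under grid refinement for a first-order scheme.
e1, e2 = advect_error(100), advect_error(200)
order = np.log2(e1 / e2)
```

An observed order near the scheme's formal order of 1 is evidence that the discretization converges to the correct continuum solution; a verification suite applies this kind of check, with exact or manufactured solutions, across many problems and codes.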