Closed-Loop Statistical Verification of Stochastic Nonlinear Systems Subject to Parametric Uncertainties
This paper proposes a statistical verification framework using Gaussian
processes (GPs) for simulation-based verification of stochastic nonlinear
systems with parametric uncertainties. Given a small number of stochastic
simulations, the proposed framework constructs a GP regression model and
predicts the system's performance over the entire set of possible
uncertainties. Included in the framework is a new metric to estimate the
confidence in those predictions based on the variance of the GP's cumulative
distribution function. This variance-based metric forms the basis of active
sampling algorithms that aim to minimize prediction error through careful
selection of simulations. In three case studies, the new active sampling
algorithms reduce prediction error by up to 35% relative to other
approaches and are able to correctly identify regions with low prediction
confidence through the variance metric.
Comment: 8 pages, submitted to ACC 201
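The abstract's core loop — fit a GP to a few simulations, then place the next simulation where predictive uncertainty is largest — can be sketched as follows. This is an illustrative, hedged reconstruction, not the paper's implementation: the `simulate` function, the candidate grid, and the use of the predictive standard deviation (rather than the paper's CDF-variance metric) are all stand-in assumptions.

```python
# Illustrative sketch of GP-based active sampling over a parametric
# uncertainty. The simulator and acquisition rule are assumptions,
# not the paper's actual framework.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def simulate(theta):
    # Stand-in for one stochastic closed-loop simulation: returns a
    # noisy performance measure at uncertain parameter value theta.
    return np.sin(3.0 * theta) + 0.1 * rng.standard_normal()

# Candidate uncertain-parameter values and a small initial design.
candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
X = candidates[::50].copy()                     # 4 initial samples
y = np.array([simulate(t[0]) for t in X])

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # Active sampling: run the next simulation where the GP's
    # predictive uncertainty is largest.
    i = int(np.argmax(std))
    X = np.vstack([X, candidates[i]])
    y = np.append(y, simulate(candidates[i, 0]))

gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X, y)
mean, std = gp.predict(candidates, return_std=True)
print(X.shape[0], float(std.max()))
```

The same loop structure accommodates other acquisition rules; the paper's variance-of-the-CDF metric would replace the plain `argmax(std)` step.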
1992 NASA Life Support Systems Analysis workshop
The 1992 Life Support Systems Analysis Workshop was sponsored by NASA's Office of Aeronautics and Space Technology (OAST) to integrate the inputs from, disseminate information to, and foster communication among NASA, industry, and academic specialists. The workshop continued discussion and definition of key issues identified in the 1991 workshop, including: (1) modeling and experimental validation; (2) definition of systems analysis evaluation criteria; (3) integration of modeling at multiple levels; and (4) assessment of process control modeling approaches. Through both the 1991 and 1992 workshops, NASA has continued to seek input from industry and university chemical process modeling and analysis experts, and to introduce and apply new systems analysis approaches to life support systems. The workshop included technical presentations, discussions, and interactive planning, with sufficient time allocated for discussion of both technology status and technology development recommendations. Key personnel currently involved with life support technology developments from NASA, industry, and academia provided input to the status and priorities of current and future systems analysis methods and requirements.
On Validating an Astrophysical Simulation Code
We present a case study of validating an astrophysical simulation code. Our
study focuses on validating FLASH, a parallel, adaptive-mesh hydrodynamics code
for studying the compressible, reactive flows found in many astrophysical
environments. We describe the astrophysics problems of interest and the
challenges associated with simulating these problems. We describe methodology
and discuss solutions to difficulties encountered in verification and
validation. We describe verification tests regularly administered to the code,
present the results of new verification tests, and outline a method for testing
general equations of state. We present the results of two validation tests in
which we compared simulations to experimental data. The first is of a
laser-driven shock propagating through a multi-layer target, a configuration
subject to both Rayleigh-Taylor and Richtmyer-Meshkov instabilities. The second
test is a classic Rayleigh-Taylor instability, where a heavy fluid is supported
against the force of gravity by a light fluid. Our simulations of the
multi-layer target experiments showed good agreement with the experimental
results, but our simulations of the Rayleigh-Taylor instability did not agree
well with the experimental results. We discuss our findings and present results
of additional simulations undertaken to further investigate the Rayleigh-Taylor
instability.
Comment: 76 pages, 26 figures (3 color), Accepted for publication in the ApJ
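For the classic Rayleigh-Taylor configuration the abstract describes (a heavy fluid supported against gravity by a light fluid), linear inviscid, incompressible theory gives a single-mode growth rate of gamma = sqrt(A g k), where A is the Atwood number and k the perturbation wavenumber. A small numeric check of that textbook relation, independent of FLASH itself, looks like this; the density and wavelength values are arbitrary illustrative inputs.

```python
# Classical linear Rayleigh-Taylor growth rate (inviscid, incompressible
# theory): gamma = sqrt(A * g * k), with Atwood number
# A = (rho_heavy - rho_light) / (rho_heavy + rho_light).
# Input values below are illustrative, not from the paper.
import math

def atwood(rho_heavy, rho_light):
    # Dimensionless density contrast between the two fluids.
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rt_growth_rate(rho_heavy, rho_light, g, wavelength):
    k = 2.0 * math.pi / wavelength      # perturbation wavenumber
    return math.sqrt(atwood(rho_heavy, rho_light) * g * k)

A = atwood(3.0, 1.0)                    # density ratio 3:1 gives A = 0.5
gamma = rt_growth_rate(3.0, 1.0, 9.81, 0.1)
print(A, gamma)
```

Verification tests of the kind the abstract mentions typically compare simulated early-time perturbation growth against such closed-form rates before moving on to validation against experiment.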
Validating Predictions of Unobserved Quantities
The ultimate purpose of most computational models is to make predictions,
commonly in support of some decision-making process (e.g., for design or
operation of some system). The quantities that need to be predicted (the
quantities of interest or QoIs) are generally not experimentally observable
before the prediction, since otherwise no prediction would be needed. Assessing
the validity of such extrapolative predictions, which is critical to informed
decision-making, is challenging. In classical approaches to validation, model
outputs for observed quantities are compared to observations to determine if
they are consistent. By itself, this consistency only ensures that the model
can predict the observed quantities under the conditions of the observations.
This limitation dramatically reduces the utility of the validation effort for
decision making because it implies nothing about predictions of unobserved QoIs
or for scenarios outside of the range of observations. However, there is no
agreement in the scientific community today regarding best practices for
validation of extrapolative predictions made using computational models. The
purpose of this paper is to propose and explore a validation and predictive
assessment process that supports extrapolative predictions for models with
known sources of error. The process includes stochastic modeling, calibration,
validation, and predictive assessment phases where representations of known
sources of uncertainty and error are built, informed, and tested. The proposed
methodology is applied to an illustrative extrapolation problem involving a
misspecified nonlinear oscillator.
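The danger the abstract highlights — a model that matches observations yet fails for extrapolated quantities of interest — can be made concrete with a toy misspecified system. This sketch is a hedged illustration under assumptions of my own (a Duffing-style cubic restoring force as "truth", a linear candidate model, least-squares calibration); it is not the paper's oscillator problem.

```python
# Illustrative (not the paper's setup): calibrate a deliberately
# misspecified model and assess its extrapolation error. The "true"
# restoring force is F(x) = k*x + b*x**3; the candidate model omits
# the cubic term, so calibration at small displacements cannot
# capture behavior at large ones.
import numpy as np

rng = np.random.default_rng(1)
k_true, b_true = 4.0, 2.0

def force_true(x):
    return k_true * x + b_true * x**3

# Calibration data: small displacements, where the cubic term is weak.
x_cal = np.linspace(0.0, 0.3, 20)
f_obs = force_true(x_cal) + 0.01 * rng.standard_normal(x_cal.size)

# Misspecified linear model F(x) = k*x, calibrated by least squares.
k_hat = float(np.sum(x_cal * f_obs) / np.sum(x_cal**2))

# Predictive assessment: extrapolate to a large displacement (a QoI
# outside the range of the calibration data) and measure the relative
# error, which the calibration fit alone gives no hint of.
x_qoi = 1.5
err = abs(k_hat * x_qoi - force_true(x_qoi)) / force_true(x_qoi)
print(k_hat, err)
```

The calibrated model fits the small-displacement data closely, yet its relative error at the extrapolated QoI is large — exactly the gap between classical validation and extrapolative predictive assessment that the paper addresses.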