Methodology for computational fluid dynamics code verification/validation
The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing attention in the research literature and in engineering practice. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability, and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool into a discipline that impacts engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions have proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and by their solution. Boundary conditions for both the partial differential equations and the discretized equations are discussed. Contrasts are drawn between the assumptions and the actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and by discretization of the partial differential equations.
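To make the discretization-error discussion concrete, here is a minimal sketch of one widely used estimator: computing an observed order of accuracy from solutions on three systematically refined grids and forming a Richardson-extrapolation error estimate. The grid values and refinement ratio below are invented for illustration; this is not code from the paper.

```python
import numpy as np

# Invented solution values of some integral quantity on three grids.
f1, f2, f3 = 0.9713, 0.9682, 0.9571   # fine, medium, coarse
r = 2.0                                # constant grid refinement ratio

# Observed order of accuracy from the three-grid ratio.
p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)

# Richardson extrapolation to a zero-grid-spacing estimate, and the
# resulting discretization-error estimate on the fine grid.
f_exact = f1 + (f1 - f2) / (r**p - 1.0)
err = abs(f1 - f_exact)

print(f"observed order p = {p:.2f}, fine-grid error estimate = {err:.2e}")
```

If the observed order p agrees with the scheme's formal order, the extrapolated error estimate is usually credible; a mismatch is itself a useful diagnostic.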
Estimation of Uncertainties for a Supersonic Retro-Propulsion Model Validation Experiment in a Wind Tunnel
A high-quality model validation experiment was performed in the NASA Langley Research Center Unitary Plan Wind Tunnel to assess the predictive accuracy of computational fluid dynamics (CFD) models for a blunt-body supersonic retro-propulsion configuration at Mach numbers from 2.4 to 4.6. Static and fluctuating surface pressure data were acquired on a 5-inch-diameter test article with a forebody composed of a spherically-blunted, 70-degree half-angle cone and a cylindrical aft body. One non-powered configuration with a smooth outer mold line was tested, as well as three different powered, forward-firing nozzle configurations: a centerline nozzle, three nozzles equally spaced around the forebody, and a combination with all four nozzles. A key objective of the experiment was the determination of experimental uncertainties from a range of sources such as random measurement error, flowfield non-uniformity, and model/instrumentation asymmetries. This paper discusses the design of the experiment toward capturing these uncertainties for the baseline non-powered configuration, the methodology utilized in quantifying the various sources of uncertainty, and examples of the uncertainties applied to non-powered and powered experimental results. The analysis showed that flowfield non-uniformity was the dominant contributor to the overall uncertainty, a finding in agreement with other experiments that have quantified various sources of uncertainty.
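As a rough illustration of how independent uncertainty sources of this kind are often combined, the following sketch forms a root-sum-square combined standard uncertainty. The source names follow the abstract, but the numerical values are invented and the RSS combination is a common convention assumed here, not necessarily the paper's exact methodology.

```python
import numpy as np

# Invented standard uncertainties for a pressure coefficient.
sources = {
    "random measurement error": 0.004,
    "flowfield non-uniformity": 0.012,   # typically the dominant term
    "model/instrumentation asymmetry": 0.003,
}

# Root-sum-square combination, appropriate for independent sources.
u_combined = np.sqrt(sum(u**2 for u in sources.values()))
print(f"combined standard uncertainty: {u_combined:.4f}")
```

Because the sources add in quadrature, the largest contributor dominates the total, which is consistent with the paper's finding about flowfield non-uniformity.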
Verification and validation benchmarks.
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of achievement in V&V activities, how closely related the V&V benchmarks are to the actual application of interest, and the quantification of uncertainties related to the application of interest.
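The method of manufactured solutions mentioned above can be sketched in a few lines: choose an exact solution, push it through the differential operator symbolically to obtain a source term, and then verify that a solver recovers the chosen solution at its formal order of accuracy. The operator and manufactured solution below are illustrative assumptions, not benchmarks from the paper.

```python
import sympy as sp

x = sp.symbols("x")

u_m = sp.sin(sp.pi * x) * sp.exp(x)   # manufactured solution (chosen freely)
k = 1 + x**2                          # variable diffusivity (assumed)

# Derive the source term so that u_m exactly solves -(k u')' = f.
f = sp.simplify(-sp.diff(k * sp.diff(u_m, x), x))
print(f)
```

A solver for -(k u')' = f with boundary data taken from u_m can then be run on a sequence of refined grids; if the code is correct, the discrete error against u_m should shrink at the scheme's formal order.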
Semantic representation of reported measurements in radiology
Background
In radiology, a vast amount of diverse data is generated, and unstructured reporting is standard. Hence, much useful information is trapped in free-text form, and often lost in translation and transmission. One relevant source of free-text data consists of reports covering the assessment of changes in tumor burden, which are needed for the evaluation of cancer treatment success. Any change of lesion size is a critical factor in follow-up examinations. It is difficult to retrieve specific information from unstructured reports and to compare them over time. Therefore, a prototype was implemented that demonstrates the structured representation of findings, allowing selective review in consecutive examinations and thus more efficient comparison over time.
Methods
We developed a semantic Model for Clinical Information (MCI) based on existing ontologies from the Open Biological and Biomedical Ontologies (OBO) library. MCI is used for the integrated representation of measured image findings and medical knowledge about the normal size of anatomical entities. An integrated view of the radiology findings is realized by a prototype implementation of a ReportViewer. Further, RECIST (Response Evaluation Criteria In Solid Tumors) guidelines are implemented by SPARQL queries on MCI. The evaluation is based on two data sets of German radiology reports: an oncologic data set consisting of 2584 reports on 377 lymphoma patients and a mixed data set consisting of 6007 reports on diverse medical and surgical patients. All measurement findings were automatically classified as abnormal/normal using formalized medical background knowledge, i.e., knowledge that has been encoded into an ontology. A radiologist evaluated 813 classifications as correct or incorrect. All unclassified findings were evaluated as incorrect.
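To illustrate the kind of ontology-backed classification described here, the following sketch queries an RDF graph with SPARQL via rdflib, flagging measurements that exceed an anatomical reference size. The file name, namespace, and property IRIs are hypothetical stand-ins, not the actual MCI schema.

```python
from rdflib import Graph

g = Graph()
g.parse("mci_findings.ttl", format="turtle")   # hypothetical MCI export

# Abnormal finding: measured size exceeds the upper normal size
# recorded for the anatomical entity it is located in.
query = """
PREFIX ex: <http://example.org/mci#>
SELECT ?finding ?value ?upperNormal
WHERE {
  ?finding ex:measuredValueMm ?value ;
           ex:locatedIn ?anatomy .
  ?anatomy ex:upperNormalSizeMm ?upperNormal .
  FILTER (?value > ?upperNormal)
}
"""
for row in g.query(query):
    print(f"abnormal finding {row.finding}: {row.value} mm "
          f"(normal up to {row.upperNormal} mm)")
```

RECIST-style rules (e.g., selecting target lesions or computing response categories) can be layered on in the same way, as additional SPARQL queries over the same graph.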
Results
The proposed approach allows the automatic classification of findings with an accuracy of 96.4% for oncologic reports and 92.9% for mixed reports. The ReportViewer permits efficient comparison of measured findings from consecutive examinations. The implementation of RECIST guidelines with SPARQL enhances the quality of the selection and comparison of target lesions as well as the corresponding treatment response evaluation.
Conclusions
The developed MCI enables an accurate integrated representation of reported measurements and medical knowledge. Thus, measurements can be automatically classified and integrated into different decision processes. The structured representation is suitable for improved integration of clinical findings during decision-making. The proposed ReportViewer provides a longitudinal overview of the measurements.
Stable Generalized Finite Element Method (SGFEM)
The Generalized Finite Element Method (GFEM) is a Partition of Unity Method (PUM) in which the trial space of the standard Finite Element Method (FEM) is augmented with non-polynomial shape functions of compact support. These shape functions, also known as enrichments, mimic the local behavior of the unknown solution of the underlying variational problem. GFEM has been used successfully to solve a variety of problems with complicated features and microstructure. However, the stiffness matrix of GFEM is badly conditioned (much worse than that of the standard FEM), and there can be a severe loss of accuracy in the computed solution of the associated linear system. In this paper, we address this issue and propose a modification of the GFEM, referred to as the Stable GFEM (SGFEM). We show that the conditioning of the stiffness matrix of SGFEM is not worse than that of the standard FEM. Moreover, SGFEM is very robust with respect to the parameters of the enrichments. We demonstrate these features of SGFEM on several examples.
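The core SGFEM modification is to replace each enrichment by the enrichment minus its piecewise-linear interpolant on the mesh, so the modified enrichment vanishes at the nodes. The following minimal 1D sketch, assuming a Poisson model problem, linear hat functions, and a smooth arctan enrichment (all assumptions made for illustration, not the paper's experiments), assembles the GFEM and SGFEM stiffness matrices and compares their diagonally scaled condition numbers.

```python
import numpy as np

n_el = 20
nodes = np.linspace(0.0, 1.0, n_el + 1)
eps = 0.05
psi = lambda x: np.arctan((x - 0.5) / eps)                 # enrichment
dpsi = lambda x: (1.0 / eps) / (1.0 + ((x - 0.5) / eps) ** 2)

gp, gw = np.polynomial.legendre.leggauss(8)   # quadrature on [-1, 1]

def stiffness(stable):
    ndof = 2 * (n_el + 1)            # [standard dofs | enriched dofs]
    K = np.zeros((ndof, ndof))
    for e in range(n_el):
        xl, xr = nodes[e], nodes[e + 1]
        h = xr - xl
        x = 0.5 * (xl + xr) + 0.5 * h * gp     # mapped quadrature points
        w = 0.5 * h * gw
        N = np.vstack([(xr - x) / h, (x - xl) / h])          # hat values
        dN = np.vstack([np.full_like(x, -1 / h), np.full_like(x, 1 / h)])
        if stable:
            # SGFEM: subtract the interpolant; enrichment vanishes at nodes
            p = psi(x) - (psi(xl) * (xr - x) + psi(xr) * (x - xl)) / h
            dp = dpsi(x) - (psi(xr) - psi(xl)) / h
        else:
            p, dp = psi(x), dpsi(x)            # plain GFEM enrichment
        dB = np.vstack([dN, dN * p + N * dp])  # derivatives of all 4 shapes
        dofs = [e, e + 1, n_el + 1 + e, n_el + 2 + e]
        for a in range(4):
            for b in range(4):
                K[dofs[a], dofs[b]] += np.sum(w * dB[a] * dB[b])
    return K

# Homogeneous Dirichlet: drop standard and enriched dofs at both ends.
free = list(range(1, n_el)) + list(range(n_el + 2, 2 * n_el + 1))
for name, stable in [("GFEM", False), ("SGFEM", True)]:
    Kf = stiffness(stable)[np.ix_(free, free)]
    d = 1.0 / np.sqrt(np.diag(Kf))
    cond = np.linalg.cond(d[:, None] * Kf * d[None, :])  # Jacobi-scaled
    print(f"{name}: scaled condition number = {cond:.3e}")
```

On this toy problem, the raw GFEM enrichment is nearly linearly dependent on the hat functions away from the steep front, which inflates the condition number; the SGFEM variant stays close to standard-FEM conditioning.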
Assessment of Computational Fluid Dynamics (CFD) Models for Shock Boundary-Layer Interaction
A workshop on the computational fluid dynamics (CFD) prediction of shock boundary-layer interactions (SBLIs) was held at the 48th AIAA Aerospace Sciences Meeting. As part of the workshop, numerous CFD analysts submitted solutions for four experimentally measured SBLIs. This paper describes the assessment of the CFD predictions. The assessment includes an uncertainty analysis of the experimental data, the definition of an error metric, and the application of that metric to the CFD solutions. The CFD solutions exhibited very similar levels of error, and in general it was difficult to discern clear trends in the data. For the Reynolds-averaged Navier-Stokes (RANS) methods, the choice of turbulence model appeared to be the largest factor in solution accuracy. Large-eddy simulation methods produced error levels similar to those of RANS methods but provided superior predictions of the normal stresses.
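An error metric of the kind referenced here typically normalizes the CFD-experiment discrepancy by the experimental uncertainty, so that values near or below one mean the prediction lies within the error bars. The sketch below is a hypothetical illustration of that idea, not the workshop's actual metric definition.

```python
import numpy as np

def error_metric(cfd, exp, u_exp):
    """Mean absolute CFD-experiment discrepancy, in units of the
    experimental standard uncertainty at each measurement station."""
    cfd, exp, u_exp = map(np.asarray, (cfd, exp, u_exp))
    return np.mean(np.abs(cfd - exp) / u_exp)

# Usage with invented profile data (e.g., wall pressures at two stations).
print(error_metric([1.02, 0.97], [1.00, 1.00], [0.03, 0.04]))
```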
On the role of code comparisons in verification and validation.
This report presents a perspective on the role of code comparison activities in verification and validation. We formally define the act of code comparison as the Code Comparison Principle (CCP) and investigate its application in both verification and validation. One of our primary conclusions is that the use of code comparisons for validation is improper and dangerous. We also conclude that while code comparisons may be argued to provide a beneficial component in code verification activities, there are higher quality code verification tasks that should take precedence. Finally, we provide a process for application of the CCP that we believe is minimal for achieving benefit in verification processes.
A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
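A minimal Dempster-Shafer-style sketch of such a sampling-based propagation is given below, assuming interval-box focal elements on two inputs and a cheap placeholder model (the real setting targets expensive simulations, which is where more sophisticated sampling than this uniform scheme earns its keep). Each output focal element is estimated by sampling the input box and taking the min/max of the model response; belief and plausibility are then read off the output focal elements.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Placeholder for an expensive simulation.
    return x[0] ** 2 + np.sin(3.0 * x[1])

# Evidence structure on the inputs: (interval box, basic probability mass).
focal = [
    (np.array([[0.0, 0.5], [0.0, 1.0]]), 0.4),
    (np.array([[0.3, 1.0], [0.5, 2.0]]), 0.6),
]

def propagate(model, focal, n_samples=1000):
    out = []
    for box, mass in focal:
        # Uniform sampling over the box to estimate the image interval.
        u = rng.uniform(box[:, 0], box[:, 1], size=(n_samples, box.shape[0]))
        y = np.array([model(x) for x in u])
        out.append(((y.min(), y.max()), mass))
    return out

def belief(t, out_focal):
    # Bel(Y <= t): mass of output focal elements entirely below t.
    return sum(m for (lo, hi), m in out_focal if hi <= t)

def plausibility(t, out_focal):
    # Pl(Y <= t): mass of output focal elements intersecting (-inf, t].
    return sum(m for (lo, hi), m in out_focal if lo <= t)

out = propagate(model, focal)
print(belief(0.8, out), plausibility(0.8, out))
```

The gap between belief and plausibility is the hallmark of the evidence-theory representation: it quantifies what the available knowledge cannot pin down, rather than forcing a single probability.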