Camino Real: A Directing Process in Sixteen Blocks
An analysis of the author's experience directing the Tennessee Williams play Camino Real.
ASC Predictive Science Academic Alliance Program Verification and Validation Whitepaper
The purpose of this whitepaper is to provide a framework for understanding the role that verification and validation (V&V) are expected to play in successful ASC Predictive Science Academic Alliance (PSAA) Centers and projects. V&V have been emphasized in the recent specification of the PSAA (NNSA, 2006): (1) The resulting simulation models lend themselves to practical verification and validation methodologies and strategies that should include the integrated use of experimental and/or observational data as a key part of model and sub-model validation, as well as demonstrations of numerical convergence and accuracy for code verification. (2) Verification, validation and prediction methodologies and results must be much more strongly emphasized as research topics and demonstrated via the proposed simulations. (3) It is mandatory that proposals address the following two topics: (a) Predictability in science & engineering; and (b) Verification & validation strategies for large-scale simulations, including quantification of uncertainty and numerical convergence. We especially call attention to the explicit coupling of computational predictability and V&V in item (3) above. In this whitepaper we emphasize this coupling and provide concentrated guidance for addressing item (2). The whitepaper has two main components. First, we provide a brief and high-level tutorial on V&V that emphasizes critical elements of the program. Second, we state a set of V&V-related requirements that successful PSAA proposals must address.
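As an aside on what the numerical-convergence demonstrations called for in item (1) typically look like in practice, the sketch below solves a simple model problem with a known exact solution on successively refined grids and reports the observed order of accuracy. The model problem and solver are illustrative stand-ins and are not taken from the whitepaper.

```python
import numpy as np

def solve_poisson_1d(n):
    """Stand-in solver: second-order finite differences for
    -u''(x) = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0.
    Exact solution: u(x) = sin(pi x)."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    # Standard tridiagonal system for the interior nodes.
    main = 2.0 * np.ones(n - 1)
    off = -1.0 * np.ones(n - 2)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    f = np.pi**2 * np.sin(np.pi * x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return x, u

def discretization_error(n):
    x, u = solve_poisson_1d(n)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# Halve the grid spacing repeatedly and report the observed order
# p = log(e_coarse / e_fine) / log(2); a second-order scheme should give p ~ 2.
errors = {n: discretization_error(n) for n in (16, 32, 64, 128)}
ns = sorted(errors)
for coarse, fine in zip(ns, ns[1:]):
    p = np.log(errors[coarse] / errors[fine]) / np.log(2.0)
    print(f"n={fine:4d}  error={errors[fine]:.3e}  observed order={p:.2f}")
```

A scheme whose observed order stalls or wanders away from its formal order is the usual symptom that a coding or convergence problem needs attention.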
Verification and validation benchmarks.
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that an understanding of the predictive capability of a computational model is built on the level of achievement in V&V activities, how closely related the V&V benchmarks are to the actual application of interest, and the quantification of uncertainties related to the application of interest.
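The paper's recommendation of manufactured solutions for code verification can be illustrated with a short sketch: choose a smooth manufactured solution, symbolically derive the source term that makes it satisfy the governing operator, and supply that forcing (plus boundary values of the manufactured solution) to the code under test, which should then converge to the manufactured solution at the scheme's formal order. The operator, coefficients, and manufactured solution below are arbitrary illustrative choices, not benchmarks from the paper.

```python
import sympy as sp

# Method of manufactured solutions for a 1D steady advection-diffusion
# operator L(u) = a*u' - nu*u''.  The manufactured solution and the
# coefficients are arbitrary choices for illustration.
x = sp.symbols('x')
a, nu = sp.Rational(3, 2), sp.Rational(1, 10)

u_m = sp.exp(-x) * sp.sin(2 * sp.pi * x)                    # manufactured solution
source = sp.simplify(a * sp.diff(u_m, x) - nu * sp.diff(u_m, x, 2))

print("manufactured solution u_m(x) =", u_m)
print("required source term  s(x)  =", source)

# The source term (and u_m evaluated on the boundaries) are then supplied to
# the code under test; the discrete solution should converge to u_m at the
# scheme's formal order as the mesh is refined.
s_func = sp.lambdify(x, source, "numpy")   # callable forcing for the solver
```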
The role of dynamic experimentation for computational analysis
In this paper, a brief description of the dynamic experimental techniques commonly available for material property studies is presented. For many impact applications, the material generally experiences a complex loading path. In most cases, the initial loading conditions can be represented by the shocked state commonly referred to as the Hugoniot state. Subsequent loading or release structure, i.e., off-Hugoniot states, would however depend on the physical processes dominating the material behavior. The credibility of the material model is tested by the accuracy of its predictions of off-Hugoniot states. Experimental techniques commonly used to determine off-Hugoniot states are discussed in this survey.
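For the initial shock state referred to above, the Hugoniot is commonly parameterized by the Rankine-Hugoniot jump conditions together with a linear shock-velocity/particle-velocity fit, Us = c0 + s*up. The sketch below does that bookkeeping; the material constants are rough handbook-style values for aluminum and are assumptions for illustration, not data from this survey.

```python
import numpy as np

# Principal Hugoniot from the Rankine-Hugoniot jump conditions with the
# common linear fit Us = c0 + s * up.  The constants below are roughly
# representative of aluminum and are assumptions for illustration only.
rho0 = 2703.0   # initial density, kg/m^3
c0   = 5350.0   # bulk sound speed, m/s
s    = 1.34     # slope of the Us-up fit, dimensionless

def hugoniot_state(up):
    """Return shock velocity, pressure, and compressed density for a
    given particle velocity up (m/s) behind the shock."""
    us = c0 + s * up                 # linear Us-up relation
    p = rho0 * us * up               # momentum jump: P - P0 = rho0 * Us * up
    rho = rho0 * us / (us - up)      # mass jump: rho0 * Us = rho * (Us - up)
    return us, p, rho

for up in (500.0, 1000.0, 2000.0):
    us, p, rho = hugoniot_state(up)
    print(f"up={up:6.0f} m/s  Us={us:6.0f} m/s  "
          f"P={p/1e9:6.2f} GPa  rho={rho:7.1f} kg/m^3")
```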
Axial focusing of impact energy in the Earth's interior: Proof-of-principle tests of a new hypothesis
A causal link between major impact events and global processes would probably require a significant change in the thermal state of the Earth's interior, presumably brought about by coupling of impact energy. One possible mechanism for such energy coupling from the surface to the deep interior would be through focusing due to axial symmetry. Antipodal focusing of surface and body waves from earthquakes is a well-known phenomenon which has previously been exploited by seismologists in studies of the Earth's deep interior. Antipodal focusing from impacts on the Moon, Mercury, and icy satellites has also been invoked by planetary scientists to explain unusual surface features opposite some of the large impact structures on these bodies. For example, 'disrupted' terrains have been observed antipodal to the Caloris impact basin on Mercury and the Imbrium Basin on the Moon. Very recently there have been speculations that antipodal focusing of impact energy within the mantle may lead to flood basalt and hotspot activity, but there has not yet been an attempt at a rigorous model. A new hypothesis was proposed, and preliminary proof-of-principle tests for the coupling of energy from major impacts to the mantle by axial focusing of seismic waves were performed. Because of the axial symmetry of the explosive source, the phases and amplitudes are dependent only on ray parameter (or takeoff angle) and are independent of azimuthal angle. For a symmetric and homogeneous Earth, all the seismic energy radiated by the impact at a given takeoff angle will be refocused (minus attenuation) on the axis of symmetry, regardless of the number of reflections and refractions it has experienced. Mantle material near the axis of symmetry will experience more strain cycles with much greater amplitude than elsewhere and will therefore experience more irreversible heating. The situation is very different from that for a giant earthquake, which, in addition to having less energy, has an asymmetric focal mechanism and a larger area. Two independent proof-of-principle approaches were used. The first makes use of seismic simulations, which are being performed with a realistic Earth model to determine the degree of focusing along the axis and to estimate the volume of material, if any, that experiences significant irreversible heating. The second involves two-dimensional hydrodynamic code simulations to determine the stress history, internal energy, and temperature rise as a function of radius along the axis.
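The plausibility of strong axial focusing can be previewed with a back-of-the-envelope geometric argument: energy conservation around the shrinking wavefront ring on a sphere gives an amplitude factor proportional to 1/sqrt(sin(delta)), which grows as the wavefront converges on the antipode (delta approaching 180 degrees). The toy sketch below evaluates only this geometric factor; it ignores attenuation, dispersion, and realistic Earth structure, which are exactly what the seismic and hydrocode simulations described above are meant to capture.

```python
import numpy as np

# Toy illustration of antipodal focusing: on a sphere, wave energy spread
# around a ring of circumference 2*pi*R*sin(delta) gives a geometric
# amplitude factor ~ 1/sqrt(sin(delta)), which grows without bound as the
# wavefront converges on the antipode.  Attenuation and realistic Earth
# structure are ignored here.
def relative_amplitude(delta_deg):
    delta = np.radians(delta_deg)
    return 1.0 / np.sqrt(np.sin(delta))

for delta_deg in (90.0, 150.0, 170.0, 178.0, 179.5):
    amp = relative_amplitude(delta_deg) / relative_amplitude(90.0)
    print(f"distance {delta_deg:6.1f} deg from source: "
          f"amplitude x{amp:5.1f} relative to 90 deg")
```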
Ideas underlying quantification of margins and uncertainties (QMU): a white paper.
This report describes key ideas underlying the application of Quantification of Margins and Uncertainties (QMU) to nuclear weapons stockpile lifecycle decisions at Sandia National Laboratories. While QMU is a broad process and methodology for generating critical technical information to be used in stockpile management, this paper emphasizes one component, which is information produced by computational modeling and simulation. In particular, we discuss the key principles of developing QMU information in the form of Best Estimate Plus Uncertainty, the need to separate aleatory and epistemic uncertainty in QMU, and the risk-informed decision making that is best suited for decisive application of QMU. The paper is written at a high level, but provides a systematic bibliography of useful papers for the interested reader to deepen their understanding of these ideas.
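One way to make the Best Estimate Plus Uncertainty idea and the aleatory/epistemic separation concrete is a two-loop (nested) Monte Carlo sketch: epistemic parameters are sampled in an outer loop, aleatory variability in an inner loop, and the margin to a requirement is then expressed in units of the resulting uncertainty. Everything below, including the surrogate response, the distributions, and the requirement value, is invented for illustration and is not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy QMU illustration: a performance quantity must stay below a requirement.
# Epistemic uncertainty (lack of knowledge, e.g. an uncertain model parameter)
# is sampled in the outer loop; aleatory variability (irreducible, e.g.
# unit-to-unit scatter) is sampled in the inner loop.  All numbers are invented.
REQUIREMENT = 100.0          # allowed upper limit on the response

def response(theta, x):
    """Hypothetical simulation surrogate: epistemic parameter theta, aleatory input x."""
    return 60.0 + 8.0 * theta + x

outer, inner = 200, 2000
p99_per_theta = []
for _ in range(outer):
    theta = rng.uniform(1.0, 3.0)                      # epistemic: interval, no preferred value
    x = rng.normal(loc=0.0, scale=5.0, size=inner)     # aleatory: unit-to-unit variability
    p99_per_theta.append(np.percentile(response(theta, x), 99))

# Best estimate: median 99th percentile over the epistemic ensemble.
# Uncertainty: spread from the best estimate up to the worst credible case.
best_estimate = np.median(p99_per_theta)
worst_case = np.max(p99_per_theta)
margin = REQUIREMENT - best_estimate
uncertainty = worst_case - best_estimate
print(f"best estimate {best_estimate:.1f}, margin {margin:.1f}, "
      f"uncertainty {uncertainty:.1f}, M/U = {margin / uncertainty:.2f}")
```

A margin-to-uncertainty ratio comfortably above one is the kind of evidence QMU is meant to supply; the nested structure keeps the reducible (epistemic) and irreducible (aleatory) contributions from being blended into a single distribution.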
On the role of code comparisons in verification and validation.
This report presents a perspective on the role of code comparison activities in verification and validation. We formally define the act of code comparison as the Code Comparison Principle (CCP) and investigate its application in both verification and validation. One of our primary conclusions is that the use of code comparisons for validation is improper and dangerous. We also conclude that while code comparisons may be argued to provide a beneficial component in code verification activities, there are higher-quality code verification tasks that should take precedence. Finally, we provide a process for application of the CCP that we believe is minimal for achieving benefit in verification processes.
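A toy example of why code-to-code agreement is weak evidence by itself: two independently written routines that happen to share the same first-order quadrature rule agree with each other to machine precision, yet both carry the same discretization error relative to the exact answer. The example is illustrative only and is not drawn from the report.

```python
import numpy as np

# Two "independent" codes that share the same left-endpoint Riemann sum.
# They agree with each other far better than either agrees with the exact
# answer, illustrating why code-to-code agreement alone is weak evidence;
# comparison against an exact (or manufactured) solution is stronger.
def code_a(f, a, b, n=200):
    x = np.linspace(a, b, n, endpoint=False)            # left-endpoint rule
    return float(np.sum(f(x)) * (b - a) / n)

def code_b(f, a, b, n=200):
    h = (b - a) / n
    return sum(f(a + i * h) * h for i in range(n))      # same rule, separate implementation

exact = np.e - 1.0                                      # integral of exp(x) on [0, 1]
res_a, res_b = code_a(np.exp, 0.0, 1.0), code_b(np.exp, 0.0, 1.0)
print(f"|code A - code B| = {abs(res_a - res_b):.2e}   (codes 'agree')")
print(f"|code A - exact | = {abs(res_a - exact):.2e}   (shared discretization error)")
```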
Modeling and simulation technology readiness levels.
This report summarizes the results of an effort to establish a framework for assigning and communicating technology readiness levels (TRLs) for the modeling and simulation (ModSim) capabilities at Sandia National Laboratories. This effort was undertaken as a special assignment for the Weapon Simulation and Computing (WSC) program office led by Art Hale, and lasted from January to September 2006. This report summarizes the results, conclusions, and recommendations, and is intended to help guide the program office in their decisions about the future direction of this work. The work was broken out into several distinct phases, starting with establishing the scope and definition of the assignment. These are characterized in a set of key assertions provided in the body of this report. Fundamentally, the assignment involved establishing an intellectual framework for TRL assignments to Sandia's modeling and simulation capabilities, including the development and testing of a process to conduct the assignments. To that end, we proposed a methodology for both assigning and understanding the TRLs, and outlined some of the restrictions that need to be placed on this process and the expected use of the result. One of the first assumptions we overturned was the notion of a "static" TRL; rather, we concluded that problem context was essential in any TRL assignment, and that this leads to dynamic results (i.e., a ModSim tool's readiness level depends on how it is used, and by whom). While we leveraged the classic TRL results from NASA, DoD, and Sandia's NW program, we came up with a substantially revised version of the TRL definitions, maintaining consistency with the classic level definitions and the Predictive Capability Maturity Model (PCMM) approach. In fact, we substantially leveraged the foundation the PCMM team provided, and augmented that as needed. Given the modeling and simulation TRL definitions and our proposed assignment methodology, we conducted four "field trials" to examine how this would work in practice. The results varied substantially, but did indicate that establishing the capability dependencies and making the TRL assignments was manageable and not particularly time consuming. The key differences arose in perceptions of how this information might be used, and what value it would have (opinions ranged from negative to positive value). The use cases and field trial results are included in this report. Taken together, the results suggest that we can make reasonably reliable TRL assignments, but that using those assignments without the context of the information that led to them (i.e., examining the measures suggested by the PCMM table, and extended for ModSim TRL purposes) produces an oversimplified result; that is, you cannot really boil things down to just a scalar value without losing critical information.
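The report's "dynamic TRL" conclusion, that a readiness level is meaningful only together with its use context and the PCMM-style evidence behind it, can be sketched as a record that carries that information alongside the scalar level. The field names and example values below are invented for illustration and are not the report's actual assessment schema.

```python
from dataclasses import dataclass, field

# Sketch of a context-dependent readiness record: the scalar TRL is kept,
# but only alongside the use context and the evidence that produced it.
# Field names and the example values are invented for illustration.
@dataclass
class ModSimTRL:
    tool: str
    use_context: str              # intended application and user community
    level: int                    # the scalar TRL (1-9)
    evidence: dict = field(default_factory=dict)   # PCMM-style measures backing the level

assessment = ModSimTRL(
    tool="thermal_solver",
    use_context="abnormal-environment qualification analysis by trained analysts",
    level=5,
    evidence={
        "code_verification": "regression + manufactured-solution suite",
        "solution_verification": "mesh convergence on application geometry",
        "validation": "separate-effects experiments only",
        "uncertainty_quantification": "deterministic sensitivities, no full UQ",
    },
)

# The same tool in a different context would get a different record (and
# possibly a different level), which is the report's "dynamic TRL" point.
print(assessment.tool, "TRL", assessment.level, "for:", assessment.use_context)
```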
2-Imino-3-(2-nitrophenyl)-1,3-thiazolidin-4-one
In the title compound, C9H7N3O3S, the nitro and thiazolidinone moieties are inclined with respect to the aromatic ring at dihedral angles of 9.57 (16) and 78.42 (4)°, respectively. In the crystal, N—H⋯O hydrogen bonding connects the molecules along the c and a axes to form a two-dimensional polymeric network. A weak S⋯O interaction [3.2443 (11) Å] and phenyl ring to phenyl ring offset π⋯π stacking [with centroid–centroid separation of 3.6890 (7) Å and ring slippage of 1.479 Å] link the polymeric chains along the b and a axes, respectively.