
    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e., the rapidly scaling computational burden produced by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g., 3-D field-solver discretizations and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g., full-chip routing/placement and circuit sizing), or extensive process variations (e.g., variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for both storing and efficiently solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be of advantage.
    Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems
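    To make the storage argument concrete: a function of d variables sampled at n points per axis needs n^d numbers as a full tensor, but only d*n*R numbers in a rank-R CP (canonical polyadic) representation. The NumPy sketch below illustrates the general idea only (it is not code from the paper): it builds an exactly rank-1 3-way tensor from a separable function and compares the two storage costs.

    import numpy as np

    # Grid of n points per axis; the full 3-way tensor has n**3 entries.
    n = 64
    x = np.linspace(0.0, 1.0, n)

    # A separable function f(x, y, z) = sin(x) * cos(y) * exp(z) is exactly
    # a rank-1 tensor: it factors into one vector per dimension (CP rank 1).
    u, v, w = np.sin(x), np.cos(x), np.exp(x)
    T = np.einsum('i,j,k->ijk', u, v, w)               # full tensor

    print("full storage :", T.size)                    # 64**3 = 262144 numbers
    print("CP-1 storage :", u.size + v.size + w.size)  # 3*64 = 192 numbers

    # Reconstruction from the rank-1 factors is exact to round-off.
    err = np.abs(T - np.einsum('i,j,k->ijk', u, v, w)).max()
    print("max reconstruction error:", err)

    For non-separable data the same idea applies with rank R > 1, trading a controlled approximation error for the exponential storage saving.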

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade.
    Comment: Major revision, to appear in SIAM Review

    Uncertainty Quantification and Sensitivity Analysis of Multiphysics Environments for Application in Pressurized Water Reactor Design

    The most common design among U.S. nuclear power plants is the pressurized water reactor (PWR). The three primary design disciplines of these plants are system analysis (which includes thermal hydraulics), neutronics, and fuel performance. The nuclear industry has developed a variety of codes over the course of forty years, each with an emphasis within a specific discipline. Perhaps the greatest difficulty in mathematically modeling a nuclear reactor is choosing which specific phenomena need to be modeled, and to what detail. A multiphysics computational environment provides a means of advancing simulations of nuclear plants. Put simply, users are able to combine various physical models which have commonly been treated as separate in the past. The focus of this work is a specific multiphysics environment currently under development at Idaho National Laboratory (INL) known as the LOCA Toolkit for US light water reactors (LOTUS). The ability of LOTUS to use uncertainty quantification (UQ) and sensitivity analysis (SA) tools within a multiphysics environment allows for a number of unique analyses which, to the best of our knowledge, have yet to be performed. These include the first known integration of the neutronics and thermal hydraulics code VERA-CS, currently under development by CASL, with the well-established fuel performance code FRAPCON, developed by PNNL. The integration was used to model a fuel depletion case. The outputs of interest for this integration were the minimum departure from nucleate boiling ratio (MDNBR) (a thermal hydraulic parameter indicating how close a heat flux is to causing a dangerous form of boiling in which an insulating layer of coolant vapour is formed), the maximum fuel centerline temperature (MFCT) of the uranium rod, and the gap conductance at peak power (GCPP). GCPP refers to the thermal conductance of the gas-filled gap between fuel and cladding at the axial location with the highest local power generation. UQ and SA were performed on MDNBR, MFCT, and GCPP at a variety of times throughout the fuel depletion. Results showed the MDNBR to behave linearly and consistently throughout the depletion, with the most impactful input uncertainties being coolant outlet pressure and inlet temperature as well as core power. MFCT also behaves linearly, but with a shift in SA measures. Initially MFCT is sensitive to fuel thermal conductivity and gap dimensions. However, later in the fuel cycle, nearly all uncertainty stems from fuel thermal conductivity, with minor contributions coming from core power and initial fuel density. GCPP uncertainty exhibits nonlinear, time-dependent behaviour which requires higher-order SA measures to properly analyze. GCPP begins with a dependence on gap dimensions, but in later states shifts to a dependence on the biases of a variety of specific calculations such as fuel swelling and cladding creep and oxidation. LOTUS was also used to perform the first higher-order SA of an integration of VERA-CS with the BISON fuel performance code currently under development at INL. The same problem and outputs were studied as in the VERA-CS and FRAPCON integration. Results for MDNBR and MFCT were relatively consistent. GCPP results contained notable differences, specifically a large dependence on fuel and cladding surface roughness in later states. However, this difference is due to the surface roughness not being perturbed in the first integration. SA of later states also showed an increased sensitivity to fission gas release coefficients.
    Lastly, a loss-of-coolant accident (LOCA) was investigated with an integration of FRAPCON with the INL neutronics code PHISICS and system analysis code RELAP5-3D. The outputs of interest were the ratios of the peak cladding temperature (PCT, the highest temperature encountered by the cladding during the LOCA) and the equivalent cladding reacted (ECR, the percentage of cladding oxidized) to their cladding hydrogen content-based limits. This work contains the first known UQ of these ratios within the aforementioned integration. Results showed the PCT ratio to be relatively well behaved. The ECR ratio behaves as a threshold variable, which is to say it abruptly shifts to radically higher values under specific conditions. This threshold behaviour establishes the importance of performing UQ so as to see the full spectrum of possible values for an output of interest. The SA capabilities of LOTUS provide a path forward for developers to increase code fidelity for specific outputs. Performing UQ within a multiphysics environment may provide improved estimates of safety metrics in nuclear reactors. These improved estimates may allow plants to operate at higher power, thereby increasing profits. Lastly, LOTUS will be of particular use in the development of newly proposed nuclear fuel designs.
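    The variance-based UQ/SA workflow this abstract describes can be illustrated generically. The sketch below is a minimal Saltelli-type Monte Carlo estimator of first-order Sobol indices; the toy model and the uniform input ranges are hypothetical stand-ins, not LOTUS, VERA-CS, or FRAPCON.

    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):
        # Hypothetical stand-in for a coupled-code output such as MDNBR:
        # mostly linear in two inputs, plus a mild interaction term.
        return 2.0 * x[:, 0] + 0.5 * x[:, 1] + 0.3 * x[:, 0] * x[:, 2]

    d, N = 3, 100_000
    A = rng.uniform(0.0, 1.0, (N, d))   # two independent sample blocks
    B = rng.uniform(0.0, 1.0, (N, d))

    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))

    # Saltelli-style estimator: swap one column of A for B's column at a time.
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        Si = np.mean(fB * (model(ABi) - fA)) / var
        print(f"first-order Sobol index S_{i + 1}: {Si:.3f}")

    # UQ summary: mean and a 95% coverage interval for the output itself.
    print("mean:", fA.mean(), "95% interval:", np.percentile(fA, [2.5, 97.5]))

    Higher-order (e.g., total-effect) indices, of the kind the abstract says GCPP requires, use the same sample blocks with a different estimator.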

    Aircraft System Noise Prediction Uncertainty Quantification for a Hybrid Wing Body Subsonic Transport Concept

    Aircraft system level noise prediction for advanced, unconventional concepts has undergone significant improvement over the past two decades. The prediction modeling uncertainty must be quantified so that potential benefits of unconventional configurations, which are outside of the range of empirical models, can be reliably assessed. This paper builds on previous work in an effort to improve estimates of element prediction uncertainties where the prediction methodology has been improved, or new experimental validation data are available, to provide an estimate of the system level uncertainty in the prediction process. In general, the uncertainty of the prediction will be strongly dependent on the aircraft configuration as well as on which technologies are integrated. While the quantitative uncertainty values contained here are specific to the hybrid wing body design presented, the underlying process is the same regardless of configuration. A refined process for determining the uncertainty for each element of the noise prediction is detailed in this paper. The system level uncertainty in the prediction of the aircraft noise is determined at the three certification points, using a Monte Carlo method. Comparisons with previous work show a reduction of 1 EPNdB in the 95% coverage interval of the cumulative noise level. The largest impediment to continued reduction in uncertainty for the hybrid wing body concept is the need for improved modeling and validation experiments for fan noise, propulsion airframe aeroacoustic effects, and the Krueger flap, which comprise the bulk of the uncertainty in the cumulative certification noise level.
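    For reference, the Monte Carlo step this abstract refers to can be sketched generically: each element-level prediction is perturbed within its assumed uncertainty, the samples are combined into a system level, and a 95% coverage interval is read off. All element names, levels, and sigmas below are made-up placeholders, not the paper's values, and combining levels on an energy basis is a simplification of a full EPNL calculation.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical element noise levels (dB) and 1-sigma prediction
    # uncertainties; placeholders only, not data from the paper.
    elements = {"fan": (92.0, 2.0), "jet": (88.0, 1.0),
                "airframe": (85.0, 1.5), "krueger_flap": (80.0, 2.5)}

    N = 200_000
    total_energy = np.zeros(N)
    for level, sigma in elements.values():
        samples = rng.normal(level, sigma, N)     # perturb each element
        total_energy += 10.0 ** (samples / 10.0)  # combine on an energy basis
    system_level = 10.0 * np.log10(total_energy)  # back to decibels

    lo, hi = np.percentile(system_level, [2.5, 97.5])
    print(f"system level {system_level.mean():.1f} dB, "
          f"95% coverage interval [{lo:.1f}, {hi:.1f}] dB, width {hi - lo:.1f} dB")

    In such a propagation, the elements with the largest sigmas (here the fan and the Krueger flap) dominate the interval width, which is why the abstract singles them out as the main impediments.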

    Sensitivity analysis methods for uncertainty budgeting in system design

    Quantification and management of uncertainty are critical in the design of engineering systems, especially in the early stages of conceptual design. This paper presents an approach to defining budgets on the acceptable levels of uncertainty in design quantities of interest, such as the allowable risk in not meeting a critical design constraint and the allowable deviation in a system performance metric. A sensitivity-based method analyzes the effects of design decisions on satisfying those budgets, and a multi-objective optimization formulation permits the designer to explore the tradespace of uncertainty reduction activities while also accounting for a cost budget. For models that are computationally costly to evaluate, a surrogate modeling approach based on high dimensional model representation (HDMR) achieves efficient computation of the sensitivities. An example problem in aircraft conceptual design illustrates the approach.
    United States. National Aeronautics and Space Administration. Leading Edge Aeronautics Research Program (Grant NNX14AC73A)
    United States. Department of Energy. Applied Mathematics Program (Award DE-FG02-08ER2585)
    United States. Department of Energy. Applied Mathematics Program (Award DE-SC0009297)
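    As a rough illustration of how an HDMR surrogate cheapens sensitivity computation, the sketch below builds a first-order cut-HDMR approximation of a stand-in model: the full model is sampled only along one coordinate axis at a time through an anchor point, and the surrogate sums the resulting 1-D component functions. The model, anchor, and grid here are hypothetical; the paper's actual HDMR construction may differ.

    import numpy as np

    def expensive_model(x):
        # Stand-in for a costly design analysis code.
        return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[1]

    d = 2
    anchor = np.zeros(d)              # cut point through which each axis is sampled
    f0 = expensive_model(anchor)
    grid = np.linspace(-1.0, 1.0, 9)  # only 9 model runs per input dimension

    # First-order components f_i(x_i) = f(anchor with coordinate i varied) - f0.
    components = []
    for i in range(d):
        vals = []
        for g in grid:
            xi = anchor.copy()
            xi[i] = g
            vals.append(expensive_model(xi) - f0)
        components.append(np.array(vals))

    def hdmr_surrogate(x):
        # f(x) ~ f0 + sum_i f_i(x_i), each f_i read off by 1-D interpolation.
        return f0 + sum(np.interp(x[i], grid, components[i]) for i in range(d))

    x_test = np.array([0.3, -0.7])
    print("model    :", expensive_model(x_test))
    print("surrogate:", hdmr_surrogate(x_test))  # agrees up to the x0*x1 interaction

    The cost grows linearly in the number of inputs rather than exponentially, which is what makes sensitivity sweeps over the surrogate affordable.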

    IGA-based Multi-Index Stochastic Collocation for random PDEs on arbitrary domains

    This paper proposes an extension of the Multi-Index Stochastic Collocation (MISC) method for forward uncertainty quantification (UQ) problems in computational domains of shape other than a square or cube, by exploiting isogeometric analysis (IGA) techniques. Introducing IGA solvers to the MISC algorithm is very natural since they are tensor-based PDE solvers, which are precisely what is required by the MISC machinery. Moreover, the combination-technique formulation of MISC allows the straightforward reuse of existing implementations of IGA solvers. We present numerical results to showcase the effectiveness of the proposed approach.
    Comment: version 3, version after revision
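    The combination-technique formulation mentioned above can be shown in miniature. The sketch below applies the classic combination technique to plain 2-D tensor-product quadrature rather than to IGA solves of a random PDE: each multi-index selects a tensor grid, and the coefficients c_a = sum over binary shifts e with a + e in the index set of (-1)^|e| cancel redundant work. This is a simplification of MISC for illustration, not the paper's algorithm.

    import numpy as np
    from itertools import product

    def tensor_quad(f, level):
        # Tensor-product trapezoid rule with 2**l + 1 points in each dimension.
        grids = [np.linspace(0.0, 1.0, 2 ** l + 1) for l in level]
        X, Y = np.meshgrid(*grids, indexing="ij")
        return np.trapz(np.trapz(f(X, Y), grids[1], axis=1), grids[0])

    f = lambda x, y: np.exp(x * y)    # smooth test integrand

    # Total-degree index set {(i, j) : i + j <= L} and combination coefficients.
    L = 5
    idx_set = {(i, j) for i in range(L + 1) for j in range(L + 1) if i + j <= L}
    estimate = 0.0
    for a in idx_set:
        c = sum((-1) ** (e[0] + e[1])
                for e in product((0, 1), repeat=2)
                if (a[0] + e[0], a[1] + e[1]) in idx_set)
        if c != 0:                    # only indices near the set boundary contribute
            estimate += c * tensor_quad(f, a)

    exact = 1.3179022                 # int_0^1 int_0^1 exp(x*y) dy dx, to 7 digits
    print("combination estimate:", estimate, "error:", abs(estimate - exact))

    Because each term reuses an unmodified tensor-product solver, swapping trapezoid quadrature for an existing IGA solver is exactly the kind of straightforward reuse the abstract claims.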