Enabling High-Dimensional Hierarchical Uncertainty Quantification by ANOVA and Tensor-Train Decomposition
Hierarchical uncertainty quantification can reduce the computational cost of
stochastic circuit simulation by employing spectral methods at different
levels. This paper presents an efficient framework to simulate hierarchically
some challenging stochastic circuits/systems that include high-dimensional
subsystems. Due to the high parameter dimensionality, it is challenging both to
extract surrogate models at the low level of the design hierarchy and to handle
them in the high-level simulation. In this paper, we develop an efficient
ANOVA-based stochastic circuit/MEMS simulator to efficiently extract the
surrogate models at the low level. In order to avoid the curse of
dimensionality, we employ tensor-train decomposition at the high level to
construct the basis functions and Gauss quadrature points. As a demonstration,
we verify our algorithm on a stochastic oscillator with four MEMS capacitors
and 184 random parameters. This challenging example is simulated efficiently by
our simulator at the cost of only 10 minutes in MATLAB on a regular personal
computer.
Comment: 14 pages (IEEE double column), 11 figures; accepted by IEEE Trans. CAD of Integrated Circuits and Systems.
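The tensor-train construction used at the high level can be sketched in a few lines of NumPy. This is an illustrative TT-SVD, not the authors' simulator; the tolerance-based rank truncation is an assumed choice.

```python
import numpy as np

def tt_svd(tensor, tol=1e-10):
    """Decompose a full tensor into tensor-train (TT) cores by sequential SVD.

    Storage falls from prod(shape) entries to a sum of small core sizes,
    which is what makes high-dimensional bases tractable.
    """
    shape = tensor.shape
    cores, r = [], 1
    C = tensor
    for k in range(tensor.ndim - 1):
        C = C.reshape(r * shape[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(S > tol * S[0])))  # truncated TT rank
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        C = S[:rk, None] * Vt[:rk, :]
        r = rk
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor (for verification)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(-1, 0))
    return out.reshape([c.shape[1] for c in cores])
```

For a rank-1 tensor the TT ranks collapse to 1, so the cores store 4+5+6 numbers instead of 120; the same compression effect is what keeps 184 random parameters tractable in the paper's setting.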
Stochastic Testing Simulator for Integrated Circuits and MEMS: Hierarchical and Sparse Techniques
Process variations are a major concern in today's chip design since they can
significantly degrade chip performance. To predict such degradation, existing
circuit and MEMS simulators rely on Monte Carlo algorithms, which are typically
too slow. Therefore, novel fast stochastic simulators are highly desired. This
paper first reviews our recently developed stochastic testing simulator that
can achieve speedup factors of hundreds to thousands over Monte Carlo. Then, we
develop a fast hierarchical stochastic spectral simulator to simulate a complex
circuit or system consisting of several blocks. We further present a fast
simulation approach based on anchored ANOVA (analysis of variance) for some
design problems with many process variations. This approach can reduce the
simulation cost and can identify which variation sources have strong impacts on
the circuit's performance. The simulation results of some circuit and MEMS
examples are reported to show the effectiveness of our simulator.
Comment: Accepted to IEEE Custom Integrated Circuits Conference in June 2014. arXiv admin note: text overlap with arXiv:1407.302
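The anchored-ANOVA screening idea can be illustrated with a minimal sketch (not the paper's algorithm): perturb one parameter at a time around an anchor point, and keep only the variation sources whose first-order variance contribution is non-negligible. The unit-variance perturbation and the 1% threshold are assumptions for the example.

```python
import numpy as np

def screen_parameters(f, anchor, samples_per_dim=16, threshold=0.01, seed=0):
    """Estimate each parameter's first-order anchored-ANOVA contribution.

    f      : black-box model mapping a parameter vector to a scalar
    anchor : nominal parameter vector (the ANOVA anchor point)
    The univariate term f(anchor with x_i varied) - f(anchor) is sampled,
    and its variance approximates parameter i's solo contribution.
    """
    rng = np.random.default_rng(seed)
    anchor = np.asarray(anchor, dtype=float)
    f0 = f(anchor)
    var = np.empty(anchor.size)
    for i in range(anchor.size):
        vals = []
        for _ in range(samples_per_dim):
            x = anchor.copy()
            x[i] += rng.standard_normal()   # assumed unit-variance variation
            vals.append(f(x) - f0)
        var[i] = np.var(vals)
    important = np.flatnonzero(var > threshold * var.sum())
    return var, important
```

Only the parameters flagged as important would then be carried into the expensive spectral simulation, which is how the screening reduces cost.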
Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluids simulation
The polynomial dimensional decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate structure between the PDD and the analysis of variance (ANOVA) approach, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the polynomial chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e., surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
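A concrete, heavily simplified illustration of the regression-based sparse expansion: with uniform inputs on [-1, 1], an orthonormal Legendre basis truncated by total degree, and plain coefficient thresholding standing in for the stepwise-regression adaptivity, first-order Sobol' indices fall directly out of the squared coefficients. All numerical choices below are assumptions for the sketch.

```python
import numpy as np
from itertools import product

def legendre(n, x):
    # Legendre polynomial of degree n, normalized to unit variance on [-1, 1]
    return np.polynomial.legendre.Legendre.basis(n)(x) * np.sqrt(2 * n + 1)

def sparse_pdd_sobol(f, dim, order=3, n_samples=200, tol=1e-8, seed=0):
    """Fit a sparse polynomial surrogate by least-squares regression and
    return first-order Sobol' indices read off the squared coefficients."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, dim))
    y = np.array([f(x) for x in X])
    # total-degree truncation of the multi-index set (constant term excluded)
    idx = [a for a in product(range(order + 1), repeat=dim) if 0 < sum(a) <= order]
    cols = [np.prod([legendre(a[j], X[:, j]) for j in range(dim)], axis=0) for a in idx]
    A = np.column_stack([np.ones(n_samples)] + cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    c = coef[1:]
    keep = np.abs(c) > tol           # crude stand-in for stepwise regression
    variance = np.sum(c[keep] ** 2)  # orthonormal basis: variance = sum of squares
    sobol = np.zeros(dim)
    for k, a in enumerate(idx):
        if keep[k] and sum(v > 0 for v in a) == 1:  # term involves one input only
            sobol[np.flatnonzero(a)[0]] += c[k] ** 2
    return sobol / variance
```

For f(x) = x0 + 0.5 x1^2 with uniform inputs, the exact first-order indices are 15/16 and 1/16, which the regression recovers because f lies in the truncated basis.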
A Compressed Sensing Approach to Uncertainty Propagation for Approximately Additive Functions
Computational models for numerically simulating physical systems are increasingly being used to support decision-making processes in engineering. Processes such as design decisions, policy-level analyses, and experimental design settings are often guided by information gained from computational modeling capabilities. To ensure effective application of results obtained through numerical simulation of computational models, uncertainty in model inputs must be propagated to uncertainty in model outputs. For expensive computational models, the many thousands of model evaluations required by traditional Monte Carlo techniques for uncertainty propagation can be prohibitive. This paper presents a novel methodology for constructing surrogate representations of computational models via compressed sensing. Our approach exploits the approximate additivity inherent in many engineering computational modeling capabilities. We demonstrate our methodology on some analytical functions, with comparison to Gaussian process regression, and on a cooled gas turbine blade application. We also provide some possible methods to build uncertainty information for our approach. The results of these applications reveal substantial computational savings over traditional Monte Carlo simulation with negligible loss of accuracy.
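A toy version of the compressed-sensing recovery, under assumed simplifications: the dictionary holds one linear feature per input, the "expensive model" below is a hypothetical additive function, and orthogonal matching pursuit stands in for whatever sparse solver the paper employs. The point it illustrates is that fewer model runs than inputs can still identify an additive surrogate.

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse solve of A @ c ~= y."""
    residual, support = y.copy(), []
    coef = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                  # never re-pick a chosen atom
        support.append(int(np.argmax(corr)))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
n_inputs, n_runs = 30, 20                    # fewer model runs than inputs
X = rng.standard_normal((n_runs, n_inputs))  # sampled (standardized) inputs

def model(x):                                # hypothetical additive model
    return 3.0 * x[3] - 2.5 * x[11] + 2.0 * x[17]

y = np.array([model(x) for x in X])
coef = omp(X, y, n_nonzero=4)                # sparse additive surrogate
```

The recovered coefficient vector concentrates on the three active inputs even though the 20-by-30 system is underdetermined, which is the computational-savings mechanism the abstract describes.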
Level Set Methods for Stochastic Discontinuity Detection in Nonlinear Problems
Stochastic physical problems governed by nonlinear conservation laws are
challenging due to solution discontinuities in stochastic and physical space.
In this paper, we present a level set method to track discontinuities in
stochastic space by solving a Hamilton-Jacobi equation. By introducing a speed
function that vanishes at discontinuities, the iso-zero of the level set
problem coincides with the discontinuities of the conservation law. The level
set problem is solved on a sequence of successively finer grids in stochastic
space. The method is adaptive in the sense that costly evaluations of the
conservation law of interest are only performed in the vicinity of the
discontinuities during the refinement stage. In regions of stochastic space
where the solution is smooth, a surrogate method replaces expensive evaluations
of the conservation law. The proposed method is tested in conjunction with
different sets of localized orthogonal basis functions on simplex elements, as
well as frames based on piecewise polynomials conforming to the level set
function. The performance of the proposed method is compared to existing
adaptive multi-element generalized polynomial chaos methods.
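A one-dimensional cartoon of the discontinuity-tracking idea (a sketch under simplifying assumptions, not the authors' scheme): the speed function is chosen to vanish where the sampled solution has a steep gradient, so a front advanced by a first-order upwind Hamilton-Jacobi step stalls at the discontinuity. The jump location, grid, and speed formula below are all hypothetical.

```python
import numpy as np

def track_discontinuity(u, x, steps=250):
    """Evolve phi_t + F |phi_x| = 0 so the iso-zero of phi stops at the jump of u.

    u : solution samples on the 1-D stochastic grid x
    F : assumed speed, ~1 where u is smooth and ~0 at the discontinuity
    """
    F = 1.0 / (1.0 + np.abs(np.gradient(u, x)))
    h = x[1] - x[0]
    dt = 0.5 * h / F.max()                    # CFL-type time step
    phi = x - x[0] - 0.1 * (x[-1] - x[0])     # front starts near the left boundary
    for _ in range(steps):
        dphi = np.maximum((phi - np.roll(phi, 1)) / h, 0.0)  # upwind |phi_x|
        dphi[0] = 0.0
        phi = phi - dt * F * dphi
    return phi

# hypothetical stochastic solution with a jump at x = 0.6
x = np.linspace(0.0, 1.0, 201)
u = np.where(x < 0.6, 0.0, 1.0)
phi = track_discontinuity(u, x)
jump = x[np.flatnonzero(np.diff(np.sign(phi)))[0]]  # iso-zero location
```

In an adaptive version, expensive conservation-law evaluations would be concentrated near the recovered iso-zero while a surrogate serves the smooth regions, mirroring the refinement strategy in the abstract.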
Development of reduced polynomial chaos-Kriging metamodel for uncertainty quantification of computational aerodynamics
Summer 2018. Includes bibliographical references.
Computational fluid dynamics (CFD) simulations are a critical component of the design and development of aerodynamic bodies. However, as engineers attempt to capture more detailed physics, the computational cost of simulations increases. This limits the ability of engineers to use robust or multidisciplinary design methodologies for practical engineering applications, because the computational model is too expensive to evaluate for uncertainty quantification studies and off-design performance analysis. Metamodels (surrogate models) are closed-form mathematical approximations fit to only a few simulation responses, which can be used to remedy this situation by estimating off-design performance and stochastic responses of the CFD simulation at far less computational cost. The development of a reduced polynomial chaos-Kriging (RPC-K) metamodel is another step towards eliminating simulation gridlock by capturing the relevant physics of the problem in a cheap-to-evaluate metamodel using fewer CFD simulations. The RPC-K metamodel is superior to existing technologies because its model-reduction methodology eliminates the design parameters which contribute little variance to the problem before fitting a high-fidelity metamodel to the remaining data. This metamodel can capture non-linear physics due to its inclusion of both the long-range trend information of a polynomial chaos expansion and local variations in the simulation data through Kriging. In this thesis, the RPC-K metamodel is developed, validated on a convection-diffusion-reaction problem, and applied to the NACA 4412 airfoil and aircraft engine nacelle problems. This research demonstrates the metamodel's effectiveness over existing polynomial chaos and Kriging metamodels for aerodynamics applications because of its ability to fit non-linear fluid flows with far fewer CFD simulations.
This research will allow aerospace engineers to more effectively take advantage of detailed CFD simulations in the development of next-generation aerodynamic bodies through the use of the RPC-K metamodel to save computational cost.
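A heavily simplified sketch of the reduce-then-fit workflow described above. Assumptions throughout: correlation-based screening stands in for the thesis's variance-based model reduction, a degree-one trend stands in for the polynomial chaos part, and the Gaussian correlation length is fixed rather than tuned; none of this is the thesis's actual RPC-K implementation.

```python
import numpy as np

def fit_reduced_kriging(X, y, keep_tol=0.05, length=1.0, nugget=1e-10):
    """Screen inputs, fit a linear trend, then Krige the residuals.

    keep_tol : relative screening threshold (assumed)
    length   : Gaussian correlation length (assumed fixed, normally tuned)
    """
    # 1) model reduction: keep inputs visibly correlated with the output
    corr = np.abs(np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]))
    keep = np.flatnonzero(corr > keep_tol * corr.max())
    Z = X[:, keep]
    # 2) long-range trend (degree-one stand-in for the PC part) by least squares
    P = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(P, y, rcond=None)
    resid = y - P @ beta
    # 3) Kriging-style Gaussian-kernel interpolation of the local residuals
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-d2 / (2 * length**2)) + nugget * np.eye(len(y)), resid)

    def predict(x_new):
        z = np.atleast_2d(x_new)[:, keep]
        k = np.exp(-((z[:, None, :] - Z[None, :, :]) ** 2).sum(-1) / (2 * length**2))
        return np.column_stack([np.ones(len(z)), z]) @ beta + k @ w

    return predict, keep
```

Dropping low-variance inputs before fitting shrinks the space the trend and the correlation model must cover, which is the mechanism behind the metamodel's reduced CFD-simulation budget.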