2 research outputs found

    Building Fully Adaptive Stochastic Models for Multiphase Blowdown Simulations.

    A new method for uncertainty quantification (UQ) that combines adaptivity in the physical (deterministic) space and the stochastic space is presented. The sampling-based method adaptively refines the physical discretizations of the simulations while adaptively building a stochastic model and adding samples. By adaptively refining the physical and stochastic models, errors from both spaces are balanced. The UQ method takes advantage of an active linear subspace to reduce the dimensionality of the stochastic space while retaining relevant interaction terms and anisotropy. Driven by low-cost error estimates, a particle-swarm optimization method explores the stochastic space and drives adaptation, resulting in an efficient stochastic approximation. The UQ method is compared to two modern methods on three test functions in a 100-dimensional space. The current method is shown to yield up to three orders of magnitude lower error with up to two orders of magnitude fewer samples. Next, a simulation is developed using the discontinuous Galerkin method, which is well-suited to adaptivity. A transient multiphase flashing flow model is used to simulate the Edwards-O'Brien blowdown problem. The adjoint equations are successfully solved and used to drive a space-time anisotropic adaptation based on a complex output of interest, resulting in an efficient physical discretization. Finally, the UQ method is used to assess modeling and discretization errors in a modified multiphase flow simulation. Based on an overall stochastic output of interest, the UQ method simultaneously drives adaptation of the stochastic and deterministic discretizations in order to balance the two sources of error. That is, terms are added to the stochastic model, samples are added, and the physical grid of each individual simulation is refined simultaneously.
    Error estimates based on semi-refined discretizations retain anisotropic accuracy, and a common grid is used to compare solutions from samples. The method for combined adaptivity performs well on the test problem, reducing the stochastic dimensionality from 20 to two and reducing deterministic errors on select samples. For about the same computational time, the method results in an order of magnitude less error and an order of magnitude fewer degrees of freedom compared to three other methods.

    PhD, Aerospace Engineering
    University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/111612/1/isaaca_1.pd
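    The active-subspace dimension reduction mentioned in the abstract can be illustrated with a short sketch. This is not the thesis's implementation: it assumes sampled gradients of the output with respect to the stochastic inputs are available, estimates the gradient covariance matrix, and keeps its dominant eigenvectors. The 100-dimensional toy function, which varies only along two hidden directions, and all names are hypothetical.

    ```python
    import numpy as np

    def active_subspace(grad_samples, k):
        """Estimate a k-dimensional active subspace from gradient samples.

        grad_samples: (n_samples, dim) array of output gradients w.r.t.
        the stochastic inputs. Returns the (dim, k) basis of the dominant
        eigenvectors of C = E[grad grad^T] and the sorted spectrum.
        """
        C = grad_samples.T @ grad_samples / grad_samples.shape[0]
        eigvals, eigvecs = np.linalg.eigh(C)   # ascending order
        order = np.argsort(eigvals)[::-1]      # re-sort descending
        return eigvecs[:, order[:k]], eigvals[order]

    # Toy example: a 100-dimensional function that only varies along two
    # hidden directions, mimicking the 100-d test setting in the abstract.
    rng = np.random.default_rng(0)
    dim = 100
    W = rng.standard_normal((dim, 2))          # hidden active directions
    xs = rng.standard_normal((500, dim))
    # f(x) = g(W^T x) with g(y) = y^T y, so grad f = 2 W (W^T x)
    grads = (2.0 * (xs @ W)) @ W.T
    basis, spectrum = active_subspace(grads, k=2)
    # The top-2 eigenvalues dominate; the rest are numerically zero here.
    print(spectrum[:3])
    ```

    In this idealized case the spectrum drops sharply after the second eigenvalue, which is the signal an adaptive method would use to truncate the stochastic dimensionality.
    
    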

    A Ridgelet Kernel Regression Model using Genetic Algorithm

    No full text
    Abstract. In this paper, a ridgelet kernel regression model is proposed for the approximation of high-dimensional functions. It is based on ridgelet theory, kernel methods, and regularization techniques, from which a regularized kernel regression form is deduced. Using the objective function of the quadratic program as the fitness function, a genetic algorithm searches for the optimal direction vector of the ridgelet. The results indicate that this method can effectively deal with high-dimensional data, especially data with certain kinds of spatial inhomogeneities. Illustrative examples are included to demonstrate its superiority.
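    A minimal sketch of the idea, assuming a mutation-only genetic algorithm and a Gaussian-derivative ridge profile (both illustrative choices, not the paper's formulation). The coefficients are obtained here by a least-squares solve, standing in for the quadratic program; every name and parameter is hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def ridgelet_features(X, dirs, shifts):
        """Ridge features psi(a.x - b), one column per (direction, shift);
        psi(t) = t * exp(-t^2 / 2) is an illustrative ridgelet profile."""
        proj = X @ dirs.T - shifts             # (n, m) ridge projections
        return proj * np.exp(-0.5 * proj ** 2)

    def fitness(dirs, shifts, X, y):
        """Residual for a candidate direction set; coefficients come from a
        least-squares solve (standing in for the quadratic program)."""
        Phi = ridgelet_features(X, dirs, shifts)
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return np.mean((Phi @ coef - y) ** 2)

    # Toy target with strong ridge structure along one direction.
    dim, m, n = 5, 4, 200
    X = rng.standard_normal((n, dim))
    true_dir = np.ones(dim) / np.sqrt(dim)
    y = np.sin(2.0 * X @ true_dir)

    # Mutation-only GA with truncation selection over unit direction vectors.
    shifts = np.linspace(-1, 1, m)
    pop = [rng.standard_normal((m, dim)) for _ in range(20)]
    pop = [d / np.linalg.norm(d, axis=1, keepdims=True) for d in pop]
    for _ in range(30):
        pop.sort(key=lambda d: fitness(d, shifts, X, y))
        parents = pop[:5]
        children = []
        for p in parents:
            for _ in range(3):
                c = p + 0.1 * rng.standard_normal(p.shape)   # mutate
                children.append(c / np.linalg.norm(c, axis=1, keepdims=True))
        pop = parents + children
    best = min(pop, key=lambda d: fitness(d, shifts, X, y))
    print(fitness(best, shifts, X, y))
    ```

    The key design point carried over from the abstract is the split of the search: the direction vectors are found by a global, derivative-free search (the GA), while the expansion coefficients follow from a convex subproblem.
    
    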