
    Renormalized Reduced Order Models with Memory for Long Time Prediction

    We examine the challenging problem of constructing reduced models for the long time prediction of systems where there is no timescale separation between the resolved and unresolved variables. In previous work we focused on the case where there was only transfer of activity (e.g. energy, mass) from the resolved to the unresolved variables. Here we investigate the much more difficult case where there is two-way transfer of activity between the resolved and unresolved variables. As in the case of activity draining out of the resolved variables, even if one starts with an exact formalism, such as the Mori-Zwanzig (MZ) formalism, the constructed reduced models can become unstable. We show how to remedy this situation by using dynamic information from the full system to renormalize the MZ reduced models. In addition to being stabilized, the renormalized models can be accurate for very long times. We use the Korteweg-de Vries equation to illustrate the approach. The coefficients of the renormalized models exhibit rich structure, including algebraic time dependence and incomplete similarity.
    Comment: 19 pages plus appendices, four figures, software used to reach results available upon request, approved for release by PNNL (IR number PNNL-SA-127388)
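
    As a rough illustration of the renormalization idea (a sketch under generic assumptions, not the authors' code; all names below are hypothetical), one can fit the prefactors of a set of correction terms so that the reduced right-hand side reproduces time derivatives of the resolved variables measured from the full system:

```python
# Sketch of data-driven renormalization (hypothetical names): the reduced
# right-hand side is a Markov term plus a linear combination of correction
# terms; the coefficients are fit to snapshots of the resolved variables
# taken from a fully resolved simulation.
import numpy as np
from scipy.optimize import least_squares

def reduced_rhs(u, coeffs, markov_term, correction_terms):
    rhs = markov_term(u)
    for c, term in zip(coeffs, correction_terms):
        rhs = rhs + c * term(u)
    return rhs

def fit_renormalized_coeffs(u_snapshots, dudt_snapshots,
                            markov_term, correction_terms):
    """Fit the renormalization coefficients so that the reduced RHS
    matches the measured time derivative of the resolved variables."""
    def residual(coeffs):
        return np.concatenate([
            reduced_rhs(u, coeffs, markov_term, correction_terms) - dudt
            for u, dudt in zip(u_snapshots, dudt_snapshots)
        ])
    return least_squares(residual, np.zeros(len(correction_terms))).x
```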

    Efficient failure probability calculation through mesh refinement

    We present a novel way of accelerating hybrid surrogate methods for the calculation of failure probabilities. The main idea is to use mesh refinement in order to obtain improved local surrogates of low computational cost to simulate on. These improved surrogates can significantly reduce the required number of evaluations of the exact model, which is the usual bottleneck of failure probability calculations. Meanwhile, the effort spent on surrogate evaluations is dramatically reduced by utilizing low-order local surrogates. Numerical results from the application of the proposed approach to several examples of increasing complexity show the robustness, versatility, and gain in efficiency of the method.
    Comment: 22 pages
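
    A minimal sketch of the hybrid surrogate idea (hypothetical names, not the paper's implementation): a cheap local surrogate classifies most samples, the exact model is called only near the limit state, and the surrogate mesh is refined where it proved unreliable:

```python
# Sketch of a hybrid surrogate Monte Carlo loop (hypothetical names):
# the cheap surrogate handles samples far from the limit state; samples
# near it trigger local mesh refinement and a call to the exact model.
import numpy as np

def failure_probability(samples, surrogate, exact_model, refine,
                        threshold, tol):
    failures = 0
    for x in samples:
        g = surrogate(x)                     # cheap local surrogate
        if abs(g - threshold) < tol:         # ambiguous classification:
            surrogate = refine(surrogate, x) # refine the local mesh, and
            g = exact_model(x)               # fall back to the exact model
        failures += g > threshold
    return failures / len(samples)
```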

    A unified framework for mesh refinement in random and physical space

    In recent work we have shown how an accurate reduced model can be utilized to perform mesh refinement in random space. That work relied on explicit knowledge of an accurate reduced model, which is used to monitor the transfer of activity from the large to the small scales of the solution. Since such a model is not always available, we present in the current work a framework which shares the merits and basic idea of the previous approach but does not require explicit knowledge of a reduced model. Moreover, the current framework can be applied for refinement in both random and physical space. In this manuscript we focus on the application to random space mesh refinement. We study examples of increasing difficulty (from ordinary to partial differential equations) which demonstrate the efficiency and versatility of our approach. We also provide some results from the application of the new framework to physical space mesh refinement.
    Comment: 29 pages
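
    The refinement indicator might look like the following sketch (hypothetical names; the paper's actual criterion may differ): monitor the fraction of activity that has moved into the small-scale part of a local solution and bisect a random-space element when that fraction exceeds a tolerance:

```python
# Sketch of an activity-transfer refinement indicator (hypothetical names):
# an element of the random-space mesh is bisected when too much of the
# local solution's energy has moved into the small-scale (tail) modes.
import numpy as np

def needs_refinement(modal_coeffs, n_resolved, tol):
    """modal_coeffs: spectral coefficients of the local solution."""
    energy = np.abs(modal_coeffs) ** 2
    return energy[n_resolved:].sum() / energy.sum() > tol

def refine(elements, solve_local, n_resolved, tol):
    refined = []
    for elem in elements:
        coeffs = solve_local(elem)
        if needs_refinement(coeffs, n_resolved, tol):
            refined.extend(elem.bisect())  # hypothetical bisect() method
        else:
            refined.append(elem)
    return refined
```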

    Mori-Zwanzig reduced models for uncertainty quantification

    In many time-dependent problems of practical interest the parameters and/or initial conditions entering the equations describing the evolution of the various quantities exhibit uncertainty. One way to address the problem of how this uncertainty impacts the solution is to expand the solution using polynomial chaos expansions and obtain a system of differential equations for the evolution of the expansion coefficients. We present an application of the Mori-Zwanzig (MZ) formalism to the problem of constructing reduced models of such systems of differential equations. In particular, we construct reduced models for a subset of the polynomial chaos expansion coefficients that are needed for a full description of the uncertainty caused by uncertain parameters or initial conditions. Even though the MZ formalism is exact, its straightforward application to the construction of reduced models for uncertainty estimation involves the computation of memory terms whose cost can become prohibitive. For those cases, we present a Markovian reformulation of the MZ formalism which can lead to approximations that alleviate some of the computational expense while retaining an accuracy advantage over reduced models that discard the memory altogether. Our results support the conclusion that successful reduced models need to include memory effects.
    Comment: 29 pages, 13 figures. arXiv admin note: substantial text overlap with arXiv:1212.6360, arXiv:1211.428
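
    One standard way to obtain such a Markovian reformulation, shown in the sketch below for a toy exponential memory kernel (an illustrative assumption, not the paper's construction), is to evolve an auxiliary variable that accumulates the history of the resolved coefficients, so the memory integral is replaced by extra ODEs:

```python
# Sketch of a Markovian reformulation for a toy exponential memory kernel
# (hypothetical model): the auxiliary variable w integrates the history of
# the resolved PC coefficients u, so no convolution integral is evaluated.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, state, markov, kernel_rate, coupling):
    n = state.size // 2
    u, w = state[:n], state[n:]
    du = markov(u) + coupling * w   # memory enters only through w
    dw = -kernel_rate * w + u       # w accumulates the history of u
    return np.concatenate([du, dw])

# usage sketch:
# sol = solve_ivp(rhs, (0.0, T), np.concatenate([u0, np.zeros_like(u0)]),
#                 args=(markov, kernel_rate, coupling))
```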

    Doing the impossible: Why neural networks can be trained at all

    As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them. A common naive question arises: if we have a system with billions of degrees of freedom, don't we also need billions of samples to train it? Of course, the success of deep learning indicates that reliable models can be learned with reasonable amounts of data. Similar questions arise in protein folding, spin glasses, and biological neural networks. With effectively infinite potential folding/spin/wiring configurations, how does the system find the precise arrangement that leads to useful and robust results? Simple sampling of the possible configurations until an optimal one is reached is not a viable option even if one waited for the age of the universe. Instead, there appears to be a mechanism in the above phenomena that forces them to achieve configurations that live on a low-dimensional manifold, avoiding the curse of dimensionality. In the current work we use the concept of mutual information between successive layers of a deep neural network to elucidate this mechanism and suggest possible ways of exploiting it to accelerate training. We show that adding structure to the neural network that enforces higher mutual information between layers speeds training and leads to more accurate results. High mutual information between layers implies that the effective number of free parameters is exponentially smaller than the raw number of tunable weights.
    Comment: The material is based on a poster from the 15th Neural Computation and Psychology Workshop "Contemporary Neural Network Models: Machine Learning, Artificial Intelligence, and Cognition", August 8-9, 2016, Drexel University, Philadelphia, PA, USA
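
    For intuition, a simple histogram-based estimate of the mutual information between (1D summaries of) the activations of two successive layers could look like this sketch (a toy estimator, not the paper's procedure; real estimators for high-dimensional activations require more care):

```python
# Sketch of a binned mutual information estimate between two 1D summaries
# of successive layer activations (e.g., random projections of each
# layer's activation vector).
import numpy as np

def mutual_information(x, y, bins=30):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) entries
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```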

    Basis adaptation and domain decomposition for steady partial differential equations with random coefficients

    We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain makes it possible to find a locally accurate solution. The solutions from all of the subdomains are then stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
    Comment: 26 pages, 13 figures
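
    A minimal sketch of Gaussian basis adaptation for a single subdomain (one common construction, not necessarily the paper's; inputs hypothetical): rotate the Gaussian germ so the first new coordinate aligns with the first-order PC coefficients of the local quantity of interest, then keep only the leading rotated directions:

```python
# Sketch of a Gaussian basis adaptation rotation (hypothetical inputs):
# build an orthogonal rotation whose first direction is parallel to the
# vector of first-order PC coefficients of the local QoI.
import numpy as np

def adaptation_rotation(linear_pc_coeffs):
    a = np.asarray(linear_pc_coeffs, dtype=float)
    # QR of [a | I] yields an orthonormal basis whose first vector is a/|a|
    q, _ = np.linalg.qr(np.column_stack([a, np.eye(len(a))]))
    return q.T   # rows are the new (adapted) directions

# usage sketch: eta = adaptation_rotation(a) @ xi maps the original Gaussian
# germ xi to the adapted germ eta; the subdomain solve then keeps only the
# leading few components of eta.
```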

    Stochastic basis adaptation and spatial domain decomposition for PDEs with random coefficients

    We present a novel uncertainty quantification approach for high-dimensional stochastic partial differential equations that reduces the computational cost of polynomial chaos methods by decomposing the computational domain into non-overlapping subdomains and adapting the stochastic basis in each subdomain so that the local solution has a lower-dimensional random space representation. The local solutions are coupled using the Neumann-Neumann algorithm: we first estimate the interface solution and then evaluate the interior solution in each subdomain, using the interface solution as a boundary condition. The interior solutions in each subdomain are computed independently of each other, which reduces the operation count from O(N^α) to O(M^α), where N is the total number of degrees of freedom, M is the number of degrees of freedom in each subdomain, and the exponent α > 1 depends on the uncertainty quantification method used. In addition, the localized nature of the solutions makes the proposed approach highly parallelizable. We illustrate the accuracy and efficiency of the approach for linear and nonlinear differential equations with random coefficients.
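
    As a rough illustration of the scaling (numbers hypothetical, interface cost ignored): with α = 3 and N = 10^4 degrees of freedom split into ten subdomains of M = 10^3 each, a monolithic solve costs on the order of N^3 = 10^12 operations, while the ten independent subdomain solves cost on the order of 10 × M^3 = 10^10, a hundredfold reduction that, moreover, parallelizes across subdomains.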

    Model reduction for a power grid model

    We apply model reduction techniques to the DeMarco power grid model. The DeMarco model, when augmented by an appropriate line failure mechanism, can be used to study cascading failures. Here we examine the DeMarco model without the line failure mechanism and investigate how to construct reduced order models for subsets of the state variables. We show that, due to the oscillating nature of the solutions and the absence of timescale separation between resolved and unresolved variables, the construction of accurate reduced models becomes highly non-trivial, since one has to account for long memory effects. In addition, we show that a reduced model which includes even a short memory is drastically better than a memoryless model.
    Comment: 27 pages
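
    The qualitative difference between a memoryless closure and one with a short memory can be sketched as follows (hypothetical right-hand sides; the memory term is a discrete convolution over only the last few resolved states):

```python
# Sketch contrasting a memoryless closure with a short-memory closure
# (hypothetical right-hand sides): the memory term is a discrete
# convolution of a short kernel with the most recent resolved states.
import numpy as np

def step_memoryless(u, dt, markov):
    return u + dt * markov(u)

def step_short_memory(u, history, dt, markov, kernel):
    """kernel[0] weights the most recent state; history holds past states."""
    memory = sum(k * h for k, h in zip(kernel, reversed(history)))
    u_new = u + dt * (markov(u) + memory)
    history.append(u_new)
    if len(history) > len(kernel):
        history.pop(0)   # keep only as much history as the kernel needs
    return u_new
```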

    Renormalization and blow-up for the 3D Euler equations

    In recent work we have developed a renormalization framework for stabilizing reduced order models for time-dependent partial differential equations. We have applied this framework to the open problem of finite-time singularity formation (blow-up) for the 3D Euler equations of incompressible fluid flow. The renormalized coefficients in the reduced order models decay algebraically with time and resolution. Our results for the behavior of the solutions are consistent with the formation of a finite-time singularity.

    Improving solution accuracy and convergence for stochastic physics parameterizations with colored noise

    Stochastic parameterizations are used in numerical weather prediction and climate modeling to help capture the uncertainty in the simulations and improve their statistical properties. Convergence issues can arise when time integration methods originally developed for deterministic differential equations are applied naively to stochastic problems. Hodyss et al. (2013, 2014) demonstrated that a correction term to various deterministic numerical schemes, known in stochastic analysis as the Itô correction, can help improve solution accuracy and ensure convergence to the physically relevant solution without substantial computational overhead. The usual formulation of the Itô correction is valid only when the stochasticity is represented by white noise. In this study, a generalized formulation of the Itô correction is derived for noises of any color. The formulation is applied to a test problem described by an advection-diffusion equation forced with a spectrum of fast processes. We present numerical results for cases with both constant and spatially varying advection velocities to show that, for the same time step sizes, the introduction of the generalized Itô correction helps to substantially reduce the time integration error and significantly improve the convergence rate of the numerical solutions when the forcing term in the governing equation is rough (fast varying); alternatively, for the same target accuracy, the generalized Itô correction allows the use of significantly longer time steps and hence helps to reduce the computational cost of the numerical simulation.
    Comment: 18 pages, 2 figures; v2 includes section rearrangement and added details for the numerical implementation; v3 includes addition of sections, references and one figure
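
    For the classical white-noise case that the paper generalizes, the correction amounts to the familiar drift shift between the Stratonovich and Itô interpretations; a scalar toy sketch (hypothetical coefficients, not the paper's colored-noise formulation):

```python
# Scalar toy sketch of the classical white-noise Ito correction the paper
# generalizes (hypothetical coefficients): the Stratonovich and Ito drifts
# differ by 0.5 * b * db/du, and Euler-Maruyama then uses the Ito drift.
import numpy as np

def ito_drift_from_stratonovich(a, b, db_du):
    """Ito drift equivalent to the Stratonovich SDE du = a dt + b o dW."""
    return lambda u: a(u) + 0.5 * b(u) * db_du(u)

def euler_maruyama_step(u, dt, drift, b, rng):
    dW = rng.normal(0.0, np.sqrt(dt))   # white-noise increment ~ N(0, dt)
    return u + drift(u) * dt + b(u) * dW

# usage sketch:
# rng = np.random.default_rng(0)
# drift = ito_drift_from_stratonovich(lambda u: -u,
#                                     lambda u: 0.1 * u,
#                                     lambda u: 0.1)
# u = 1.0
# for _ in range(1000):
#     u = euler_maruyama_step(u, 1e-3, drift, lambda u: 0.1 * u, rng)
```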