
    Adaptive sparse-grid tutorial


    Reduced model-error source terms for fluid flow

    It is well known that the wide range of spatial and temporal scales present in geophysical flow problems represents a (currently) insurmountable computational bottleneck, which must be circumvented by a coarse-graining procedure. The effect of the unresolved fluid motions enters the coarse-grained equations as an unclosed forcing term, denoted as the 'eddy forcing'. Traditionally, the system is closed by approximate deterministic closure models, i.e. so-called parameterizations. Instead of creating a deterministic parameterization, some recent efforts have focused on creating a stochastic, data-driven surrogate model for the eddy forcing from a (limited) set of reference data, with the goal of accurately capturing the long-term flow statistics. Since the eddy forcing is a dynamically evolving field, a surrogate should be able to mimic the complex spatial patterns displayed by the eddy forcing. Rather than creating such a (fully data-driven) surrogate, we propose to precede the surrogate construction step by a procedure that replaces the eddy forcing with a new model-error source term which: i) is tailor-made to capture spatially-integrated statistics of interest, ii) strikes a balance between physical insight and data-driven modelling, and iii) significantly reduces the amount of training data that is needed. Instead of creating a surrogate for an evolving field, we now only require a surrogate model for one scalar time series per statistical quantity-of-interest. Our current surrogate modelling approach builds on a resampling strategy, where we create a probability density function of the reduced training data that is conditional on (time-lagged) resolved-scale variables. We derive the model-error source terms, and construct the reduced surrogate using an ocean model of two-dimensional turbulence in a doubly periodic square domain.
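    The central reduction step above, replacing an evolving eddy-forcing field by one scalar time series per quantity of interest, can be sketched as follows. This is an illustrative numpy sketch, not the paper's formulation: the choice of total energy as the integrated quantity of interest and all function names are assumptions.

```python
import numpy as np

def integrated_qoi(omega, psi, dx):
    """A spatially integrated quantity of interest: here total energy
    E = -0.5 * <psi * omega> on a uniform grid (an illustrative choice;
    the paper's actual quantities of interest may differ)."""
    return -0.5 * np.sum(psi * omega) * dx * dx

def reduced_model_error(qoi_ref_series, qoi_coarse_series, dt):
    """One scalar model-error time series per quantity of interest:
    the mismatch between the reference tendency of the QoI and the
    tendency produced by the coarse-grained model."""
    d_ref = np.gradient(qoi_ref_series, dt)
    d_coarse = np.gradient(qoi_coarse_series, dt)
    return d_ref - d_coarse
```

A surrogate then only needs to reproduce this scalar series, rather than the full two-dimensional eddy-forcing field.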

    Towards data-driven dynamic surrogate models for ocean flow

    Coarse graining of (geophysical) flow problems is a necessity brought upon us by the wide range of spatial and temporal scales present in these problems, which cannot all be represented on a numerical grid without an inordinate amount of computational resources. Traditionally, the effect of the unresolved eddies is approximated by deterministic closure models, i.e. so-called parameterizations. The effect of the unresolved eddy field enters the resolved-scale equations as a forcing term, denoted as the 'eddy forcing'. Instead of creating a deterministic parameterization, our goal is to infer a stochastic, data-driven surrogate model for the eddy forcing from a (limited) set of reference data, with the goal of accurately capturing the long-term flow statistics. Our surrogate modelling approach essentially builds on a resampling strategy, where we create a probability density function of the reference data that is conditional on (time-lagged) resolved-scale variables. The choice of resolved-scale variables, as well as the employed time lag, is essential to the performance of the surrogate. We will demonstrate the effect of different modelling choices on a simplified ocean model of two-dimensional turbulence in a doubly periodic square domain.
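    A minimal sketch of the conditional resampling strategy described above, assuming a scalar eddy forcing `r_ref` and a scalar resolved-scale conditioning variable `c_ref` (both names hypothetical): bin the reference data on the conditioning variable and draw from the bin that matches the current resolved state.

```python
import numpy as np

def conditional_resample(r_ref, c_ref, c_now, n_bins, rng):
    """Draw one eddy-forcing sample from the subset of reference data
    whose conditioning variable falls in the same bin as the current
    resolved-scale value c_now; this approximates sampling from the
    conditional pdf p(r | c)."""
    edges = np.linspace(c_ref.min(), c_ref.max(), n_bins + 1)
    bin_now = np.clip(np.digitize(c_now, edges) - 1, 0, n_bins - 1)
    bins_ref = np.clip(np.digitize(c_ref, edges) - 1, 0, n_bins - 1)
    pool = r_ref[bins_ref == bin_now]
    if pool.size == 0:            # empty bin: fall back to the full data set
        pool = r_ref
    return rng.choice(pool)
```

A time lag would be introduced by conditioning on a shifted series, e.g. pairing `r_ref[lag:]` with `c_ref[:-lag]`.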

    On the deep active-subspace method

    The deep active-subspace method is a neural-network-based tool for the propagation of uncertainty through computational models with high-dimensional input spaces. Unlike the original active-subspace method, it does not require access to the gradient of the model. It relies on an orthogonal projection matrix constructed with Gram–Schmidt orthogonalization to reduce the input dimensionality. This matrix is incorporated into a neural network as the weight matrix of the first hidden layer (acting as an orthogonal encoder), and optimized using backpropagation to identify the active subspace of the input. We propose several theoretical extensions, starting with a new analytic relation for the derivatives of Gram–Schmidt vectors, which are required for backpropagation. We also study the use of vector-valued model outputs, which is difficult in the case of the original active-subspace method. Additionally, we investigate an alternative neural network with an encoder without embedded orthonormality, which shows equally good performance compared to the deep active-subspace method. Two epidemiological models are considered as applications, where one requires supercomputer access to generate the training data.
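    The orthogonal encoder can be illustrated with a plain Gram–Schmidt orthonormalization of a trainable weight matrix. This is a numpy sketch of the underlying construction only; the method itself embeds it, together with the analytic derivatives of the Gram–Schmidt vectors, inside a neural network trained by backpropagation.

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V (the trainable parameters);
    the result plays the role of the orthogonal projection matrix
    used as the first-layer weights of the encoder."""
    Q = np.zeros_like(V, dtype=float)
    for j in range(V.shape[1]):
        v = V[:, j].astype(float)
        for i in range(j):                      # subtract projections onto
            v = v - (Q[:, i] @ V[:, j]) * Q[:, i]  # the earlier basis vectors
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Active-subspace projection of a high-dimensional input x:
# z = Q.T @ x maps D inputs to d "active" coordinates (d = V.shape[1]).
```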

    Reducing data-driven dynamical subgrid scale models by physical constraints

    Recent years have seen a growing interest in using data-driven (machine-learning) techniques for the construction of cheap surrogate models of turbulent subgrid scale stresses. These stresses display complex spatio-temporal structures, and constitute a difficult surrogate target. In this paper we propose a data-preprocessing step, in which we derive alternative subgrid scale models which are virtually exact for a user-specified set of spatially integrated quantities of interest. The unclosed component of these new subgrid scale models is of the same size as this set of integrated quantities of interest. As a result, the corresponding training data is massively reduced in size, decreasing the complexity of the subsequent surrogate construction.

    Resampling with neural networks for stochastic parameterization in multiscale systems

    In simulations of multiscale dynamical systems, not all relevant processes can be resolved explicitly. Taking the effect of the unresolved processes into account is important, and introduces the need for parameterizations. We present a machine-learning method, used for the conditional resampling of observations or reference data from a fully resolved simulation. It is based on the probabilistic classification of subsets of reference data, conditioned on macroscopic variables. This method is used to formulate a parameterization that is stochastic, taking the uncertainty of the unresolved scales into account. We validate our approach on the Lorenz 96 system, using two different parameter settings which are challenging for parameterization methods.
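    The classification-then-resampling step might look as follows. This is a toy sketch: a distance-based softmax over subset centroids stands in for the trained probabilistic neural classifier, and all names are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_param(x_macro, centroids, subsets, rng, beta=5.0):
    """Probabilistic classification of the macroscopic state: score each
    reference-data subset by (negative) distance of x_macro to the
    subset's centroid, turn the scores into class probabilities with a
    softmax, sample a subset index, then resample an observation from
    that subset. A trained classifier would replace the distance scores."""
    scores = -beta * np.array([np.linalg.norm(x_macro - c) for c in centroids])
    p = softmax(scores)
    k = rng.choice(len(subsets), p=p)   # stochastic: sample, don't argmax
    return rng.choice(subsets[k])
```

Sampling the subset (rather than taking the most probable one) is what makes the resulting parameterization stochastic.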

    Energy-conserving neural network for turbulence closure modeling

    In turbulence modeling, and more particularly in the Large-Eddy Simulation (LES) framework, we are concerned with finding closure models that represent the effect of the unresolved subgrid scales on the resolved scales. Recent approaches gravitate towards machine learning techniques to construct such models. However, the stability of machine-learned closure models and their abidance by physical structure (e.g. symmetries, conservation laws) are still open problems. To tackle both issues, we take the 'discretize first, filter next' approach, in which we apply a spatial averaging filter to existing energy-conserving (fine-grid) discretizations. The main novelty is that we extend the system of equations describing the filtered solution with a set of equations that describe the evolution of (a compressed version of) the energy of the subgrid scales. Having an estimate of this energy, we can use the concept of energy conservation and derive stability. The compressed variables are determined via a data-driven technique in such a way that the energy of the subgrid scales is matched. For the extended system, the closure model should be energy-conserving, and a new skew-symmetric convolutional neural network architecture is proposed that has this property. Stability is thus guaranteed, independent of the actual weights and biases of the network. Importantly, our framework allows energy exchange between resolved scales and compressed subgrid scales and thus enables backscatter. To model dissipative systems (e.g. viscous flows), the framework is extended with a diffusive component. The introduced neural network architecture is constructed such that it also satisfies momentum conservation. We apply the new methodology to both the viscous Burgers' equation and the Korteweg–de Vries equation in 1D and show superior stability properties when compared to a vanilla convolutional neural network.
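    The energy-conservation argument behind the skew-symmetric architecture can be checked directly: for any matrix K the operator S = K - K^T is skew-symmetric, so a closure of the form S a contributes a^T S a = 0 to the energy budget, whatever the entries of K. The following is a numpy sketch of this algebra, not of the paper's convolutional network.

```python
import numpy as np

def skew_symmetric_closure(a, K):
    """Closure term of the form (K - K^T) a acting on the extended state a
    (resolved plus compressed subgrid variables). Because K - K^T is
    skew-symmetric, the term adds no energy, independent of K, which is
    the mechanism that guarantees stability for arbitrary trained weights."""
    S = K - K.T
    return S @ a
```

Energy exchange between the resolved and compressed parts of `a` (backscatter) remains possible; only the total is conserved.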

    Eigenvector perturbation methodology for uncertainty quantification of turbulence models

    Reynolds-averaged Navier-Stokes (RANS) models are the primary numerical recourse to investigate complex engineering turbulent flows in industrial applications. However, to establish RANS models as reliable design tools, it is essential to provide estimates for the uncertainty in their predictions. In the recent past, an uncertainty estimation framework relying on eigenvalue and eigenvector perturbations to the modeled Reynolds stress tensor has been widely applied with satisfactory results. However, the methodology for the eigenvector perturbations is not well established. Evaluations using only eigenvalue perturbations do not provide comprehensive estimates of model form uncertainty, especially in flows with streamline curvature, recirculation, or flow separation. In this article, we outline a methodology for the eigenvector perturbations using a predictor-corrector approach, which uses the incipient eigenvalue perturbations along with the Reynolds stress transport equations to determine the eigenvector perturbations. This approach was applied to benchmark cases of complex turbulent flows. The uncertainty intervals estimated using the proposed framework exhibited substantial improvement over eigenvalue-only perturbations and are able to account for a significant proportion of the discrepancy between RANS predictions and high-fidelity data.
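    The eigenvalue-perturbation part of this framework can be sketched as follows; the eigenvector perturbations that are the paper's focus require the Reynolds stress transport equations and are not shown. The choice of the one-component limiting state and all names are illustrative assumptions.

```python
import numpy as np

def perturb_anisotropy(b, delta, target=np.array([2/3, -1/3, -1/3])):
    """Eigenvalue perturbation of the Reynolds stress anisotropy tensor b:
    move its eigenvalues a fraction delta toward a limiting state of the
    barycentric triangle (here the one-component limit), while keeping the
    eigenvectors fixed. The predictor-corrector eigenvector perturbations
    would additionally modify V."""
    lam, V = np.linalg.eigh(b)
    lam, V = lam[::-1], V[:, ::-1]              # sort eigenvalues descending
    lam_new = (1 - delta) * lam + delta * target
    return V @ np.diag(lam_new) @ V.T
```

Since the target eigenvalues sum to zero, the perturbed tensor remains traceless, as an anisotropy tensor must.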

    Ensembles are required to handle aleatoric and parametric uncertainty in molecular dynamics simulation

    Classical molecular dynamics is a computer simulation technique that is in widespread use across many areas of science, from physics and chemistry to materials, biology, and medicine. The method continues to attract criticism due to its oft-reported lac