    Fast Neural Network Predictions from Constrained Aerodynamics Datasets

    Incorporating computational fluid dynamics in the design process of jets, spacecraft, or gas turbine engines is often challenged by the required computational resources and simulation time, which depend on the chosen physics-based computational models and grid resolutions. An ongoing problem in the field is how to simulate these systems faster but with sufficient accuracy. While many approaches involve simplified models of the underlying physics, others are model-free and make predictions based only on existing simulation data. We present a novel model-free approach in which we reformulate the simulation problem to effectively increase the size of constrained pre-computed datasets and introduce a novel neural network architecture (called a cluster network) with an inductive bias well suited to highly nonlinear computational fluid dynamics solutions. Compared to the state of the art in model-based approximations, we show that our approach is nearly as accurate, an order of magnitude faster, and easier to apply. Furthermore, we show that our method outperforms other model-free approaches.
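
    As an illustration only (the abstract does not describe the cluster-network architecture), the sketch below trains a plain two-layer MLP surrogate on a small synthetic "pre-computed" dataset and then evaluates it at a new flow condition. The dataset, network size, and learning rate are hypothetical placeholders for the general workflow of predicting directly from existing simulation data instead of running a new CFD solve.

    # Minimal sketch of a model-free surrogate trained on pre-computed data.
    # The paper's cluster network is not reproduced here; this stand-in is a
    # plain two-layer MLP fitted to synthetic (flow parameters -> scalar) pairs.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical pre-computed dataset: inputs are two flow parameters
    # (e.g. Mach number, angle of attack), target is a scalar output
    # (e.g. a drag coefficient) with a nonlinear dependence.
    X = rng.uniform(-1.0, 1.0, size=(200, 2))
    y = np.sin(3 * X[:, :1]) * np.cos(2 * X[:, 1:])

    # Two-layer MLP with tanh activation, trained by plain gradient descent.
    W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
    W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
    lr = 0.05

    for step in range(2000):
        # Forward pass
        h = np.tanh(X @ W1 + b1)
        pred = h @ W2 + b2
        err = pred - y
        loss = np.mean(err ** 2)
        # Backward pass (mean-squared-error gradients)
        g_pred = 2 * err / len(X)
        gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
        g_h = g_pred @ W2.T * (1 - h ** 2)
        gW1 = X.T @ g_h; gb1 = g_h.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    # Fast prediction at a new flow condition (no CFD solve required).
    x_new = np.array([[0.3, -0.4]])
    print("train loss:", loss)
    print("prediction:", np.tanh(x_new @ W1 + b1) @ W2 + b2)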

    Uncertainty quantification for kinetic models in socio-economic and life sciences

    Kinetic equations play a major role in modeling large systems of interacting particles. Recently the legacy of classical kinetic theory has found novel applications in socio-economic and life sciences, where processes characterized by large groups of agents exhibit spontaneous emergence of social structures. Well-known examples are the formation of clusters in opinion dynamics, the appearance of inequalities in wealth distributions, flocking and milling behaviors in swarming models, synchronization phenomena in biological systems, and lane formation in pedestrian traffic. The construction of kinetic models describing the above processes, however, has to face the difficulty of the lack of fundamental principles, since physical forces are replaced by empirical social forces. These empirical forces are typically constructed with the aim of qualitatively reproducing the observed system behaviors, such as the emergence of social structures, and are at best known in terms of statistical information on the modeling parameters. For this reason, the presence of random inputs characterizing the parameters' uncertainty should be considered an essential feature of the modeling process. In this survey we introduce several examples of such kinetic models, which are mathematically described by nonlinear Vlasov and Fokker--Planck equations, and present different numerical approaches for uncertainty quantification that preserve the main features of the kinetic solution.
    Comment: To appear in "Uncertainty Quantification for Hyperbolic and Kinetic Equations".
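
    As a hedged illustration of non-intrusive uncertainty quantification for a kinetic-type model (not one of the survey's specific schemes), the sketch below propagates an uncertain diffusion coefficient through a particle approximation of a toy Fokker--Planck equation using Gauss--Legendre collocation in the random parameter. The drift, the diffusion law D(z), and the quantity of interest are all hypothetical choices made only to keep the example self-contained.

    # Minimal sketch of stochastic collocation for a toy kinetic model:
    # particles follow dX = -X dt + sqrt(2*D(z)) dW with uncertain diffusion
    # D(z), z ~ Uniform(-1, 1); quadrature in z gives expected statistics.
    import numpy as np

    rng = np.random.default_rng(1)
    nodes, weights = np.polynomial.legendre.leggauss(8)   # collocation nodes in z
    weights = weights / 2.0                               # normalize for U(-1, 1)

    def diffusion(z):
        # Hypothetical uncertain diffusion coefficient.
        return 0.1 * (1.0 + 0.5 * z)

    def stationary_variance(D, n_particles=5000, dt=0.01, n_steps=1000):
        # Monte Carlo particle approximation of the kinetic (Fokker-Planck) solution.
        x = rng.uniform(-1.0, 1.0, n_particles)
        for _ in range(n_steps):
            x += -x * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
        return x.var()

    # Quantity of interest at each collocation node, averaged with quadrature
    # weights to estimate its expectation over the random input z.
    variances = np.array([stationary_variance(diffusion(z)) for z in nodes])
    print("E_z[stationary variance] ~", float(weights @ variances))
    print("reference (OU stationary variance equals D, E_z[D] = 0.1):", 0.1)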

    Boosting Bayesian Parameter Inference of Nonlinear Stochastic Differential Equation Models by Hamiltonian Scale Separation

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included in the model to make reliable predictions, which naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim is then to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by re-interpreting the posterior distribution as a statistical-mechanics partition function of an object akin to a polymer, where the measurements are mapped onto heavier beads than those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with multiple-time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for 1D problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
    Comment: 15 pages, 8 figures
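
    A minimal sketch of the Hamiltonian Monte Carlo building block, assuming a simple 2D Gaussian target; the paper's polymer reformulation and scale-separated multiple-time-step integrator are not reproduced here, and the step size and trajectory length are arbitrary illustrative choices.

    # Minimal sketch of Hamiltonian Monte Carlo with a leapfrog integrator,
    # sampling a correlated 2D Gaussian "posterior" (hypothetical target).
    import numpy as np

    rng = np.random.default_rng(2)
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])
    prec = np.linalg.inv(cov)

    def neg_log_post(q):          # potential energy U(q) = -log posterior (up to a constant)
        return 0.5 * q @ prec @ q

    def grad_neg_log_post(q):
        return prec @ q

    def hmc_step(q, step_size=0.15, n_leapfrog=20):
        p = rng.standard_normal(q.shape)                       # resample momenta
        q_new, p_new = q.copy(), p.copy()
        p_new -= 0.5 * step_size * grad_neg_log_post(q_new)    # half kick
        for _ in range(n_leapfrog - 1):
            q_new += step_size * p_new                          # drift
            p_new -= step_size * grad_neg_log_post(q_new)       # full kick
        q_new += step_size * p_new
        p_new -= 0.5 * step_size * grad_neg_log_post(q_new)    # final half kick
        # Metropolis accept/reject on the total energy H = U + K
        h_old = neg_log_post(q) + 0.5 * p @ p
        h_new = neg_log_post(q_new) + 0.5 * p_new @ p_new
        return q_new if np.log(rng.uniform()) < h_old - h_new else q

    q = np.zeros(2)
    samples = []
    for _ in range(5000):
        q = hmc_step(q)
        samples.append(q)
    print("sample covariance:\n", np.cov(np.array(samples).T))  # should approach `cov`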