Wave radiation in simple geophysical models
Wave radiation is an important process in many geophysical flows. In particular, it is by wave
radiation that flows may adjust to a state for which the dynamics is slow. Such a state is
described as “balanced”, meaning there is an approximate balance between the Coriolis force
and horizontal pressure gradients, and between buoyancy and vertical pressure gradients. In
this thesis, wave radiation processes relevant to these enormously complex flows are studied
using highly simplified models; a parallel aim is to develop accurate numerical techniques
for doing so.
This thesis is divided into three main parts.
1. We consider accurate numerical boundary conditions for various equations which support
wave radiation to infinity. Particular attention is given to discretely non-reflecting
boundary conditions, which are derived directly from a discretised scheme. Such a boundary
condition is studied in the case of the 1-d Klein-Gordon equation. The limitations
concerning the practical implementation of this scheme are explored and some possible
improvements are suggested. A stability analysis is developed which yields a simple stability
criterion that is useful when tuning the boundary condition. The practical use of
higher-order boundary conditions for the 2-d shallow water equations is also explored; the
accuracy of such a method is assessed when combined with a particular interior scheme,
and an analysis based on matrix pseudospectra sheds light on the stability of the method.
2. Large-scale atmospheric and oceanic flows are examples of systems with a wide timescale
separation, determined by a small parameter. In addition they both undergo constant
random forcing. The five-component Lorenz-Krishnamurthy system exhibits such a timescale
separation controlled by a small parameter, and we employ it as a model of the forced ocean
by adding random forcing of the slow variables and introducing wave radiation to infinity
through coupling to a dispersive PDE. The dynamics are reduced
by deriving balance relations, and numerical experiments are used to assess the effects of
energy radiation by fast waves.
3. We study quasimodes, which demonstrate the existence of associated Landau poles of a
system. In this thesis, we consider a simple model of wave radiation that exhibits quasimodes
and allows us to derive explicit analytical results, in contrast to physically realistic
geophysical fluid systems, for which such results are often unavailable and recourse to
numerical techniques is necessary. The growth rates obtained for this system, which is an
extension of one considered by Lamb, are confirmed using numerical experiments.
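For reference, the five-component Lorenz-Krishnamurthy model mentioned in part 2 can be integrated with random forcing of the slow variables along the following lines. The equations follow the standard Lorenz (1986) form; the timescale scaling, noise amplitude, and initial condition here are illustrative assumptions rather than the thesis's settings.

```python
import numpy as np

def lk_rhs(s, b=0.5, eps=0.1):
    # Five-component Lorenz (1986) / Lorenz-Krishnamurthy model:
    # (u, v, w) are slow "Rossby" variables, (x, z) fast "gravity wave"
    # variables; the 1/eps factor is an illustrative timescale scaling.
    u, v, w, x, z = s
    du = -v * w + b * v * z
    dv = u * w - b * u * z
    dw = -u * v
    dx = -z / eps
    dz = x / eps + b * u * v
    return np.array([du, dv, dw, dx, dz])

def integrate(s0, dt=1e-3, steps=10000, sigma=0.05, seed=0):
    # Euler-Maruyama stepping: deterministic LK dynamics plus
    # white-noise forcing of the three slow variables only.
    rng = np.random.default_rng(seed)
    s = np.array(s0, dtype=float)
    for _ in range(steps):
        noise = np.zeros(5)
        noise[:3] = sigma * np.sqrt(dt) * rng.standard_normal(3)
        s = s + dt * lk_rhs(s) + noise
    return s

state = integrate([1.0, 0.5, 0.0, 0.0, 0.0])
print(state)
```

A higher-order integrator and careful treatment of the fast pair would be needed for quantitative work; this only shows the structure of the forced system.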
Efficient hyperbolic-parabolic models on multi-dimensional unbounded domains using an extended DG approach
We introduce an extended discontinuous Galerkin discretization of
hyperbolic-parabolic problems on multidimensional semi-infinite domains.
Building on previous work on the one-dimensional case, we split the
strip-shaped computational domain into a bounded region, discretized by means
of discontinuous finite elements using Legendre basis functions, and an
unbounded subdomain, where scaled Laguerre functions are used as a basis.
Numerical fluxes at the interface allow for a seamless coupling of the two
regions. The resulting coupling strategy is shown to produce accurate numerical
solutions in tests on both linear and non-linear scalar and vectorial model
problems. In addition, an efficient absorbing layer can be simulated in the
semi-infinite part of the domain in order to damp outgoing signals with
negligible spurious reflections at the interface. By tuning the scaling
parameter of the Laguerre basis functions, the extended DG scheme simulates
transient dynamics over large spatial scales with a substantial reduction in
computational cost at a given accuracy level compared to standard single-domain
discontinuous finite element techniques.
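As a concrete illustration of the basis used in the unbounded subdomain, scaled Laguerre functions sqrt(beta) L_n(beta x) exp(-beta x / 2) form an orthonormal set on (0, infinity), with beta the tunable scaling parameter. The sketch below (plain NumPy, not the paper's DG implementation) evaluates them by the three-term recurrence and checks orthonormality with Gauss-Laguerre quadrature.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def laguerre_eval(n, t):
    # Standard Laguerre polynomials L_0..L_n at points t via the
    # three-term recurrence (k+1) L_{k+1} = (2k+1-t) L_k - k L_{k-1}.
    t = np.asarray(t, dtype=float)
    L = np.empty((n + 1,) + t.shape)
    L[0] = 1.0
    if n >= 1:
        L[1] = 1.0 - t
    for k in range(1, n):
        L[k + 1] = ((2 * k + 1 - t) * L[k] - k * L[k - 1]) / (k + 1)
    return L

def scaled_laguerre(n, x, beta=1.0):
    # Scaled Laguerre functions sqrt(beta)*L_k(beta x)*exp(-beta x/2),
    # orthonormal on (0, inf); beta tunes the decay rate of the basis.
    t = beta * np.asarray(x, dtype=float)
    return np.sqrt(beta) * laguerre_eval(n, t) * np.exp(-t / 2.0)

# Orthonormality check: Gauss-Laguerre quadrature integrates
# L_m(t) L_n(t) e^{-t} exactly for these low degrees.
nodes, weights = laggauss(20)
L = laguerre_eval(4, nodes)
gram = (L * weights) @ L.T
print(np.round(gram, 10))  # ~ 5x5 identity matrix
```

Increasing beta compresses the basis toward the interface, which is the mechanism behind the tuning of the absorbing layer mentioned above.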
Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations
We construct a new framework for accelerating Markov chain Monte Carlo in
posterior sampling problems where standard methods are limited by the
computational cost of the likelihood, or of numerical models embedded therein.
Our approach introduces local approximations of these models into the
Metropolis-Hastings kernel, borrowing ideas from deterministic approximation
theory, optimization, and experimental design. Previous efforts at integrating
approximate models into inference typically sacrifice either the sampler's
exactness or efficiency; our work seeks to address these limitations by
exploiting useful convergence characteristics of local approximations. We prove
the ergodicity of our approximate Markov chain, showing that it samples
asymptotically from the \emph{exact} posterior distribution of interest. We
describe variations of the algorithm that employ either local polynomial
approximations or local Gaussian process regressors. Our theoretical results
reinforce the key observation underlying this paper: when the likelihood has
some \emph{local} regularity, the number of model evaluations per MCMC step can
be greatly reduced without biasing the Monte Carlo average. Numerical
experiments demonstrate multiple order-of-magnitude reductions in the number of
forward model evaluations used in representative ODE and PDE inference
problems, with both synthetic and real data.
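The key idea, that local regularity lets nearby model evaluations be reused instead of re-running the expensive model at every MCMC step, can be caricatured as follows. This sketch uses a toy Gaussian likelihood and a naive local linear fit with a fixed refinement rule; the paper's actual algorithm controls the approximation error adaptively so as to preserve asymptotic exactness.

```python
import numpy as np

def expensive_loglike(theta):
    # Stand-in for a costly forward model; here just a Gaussian.
    return -0.5 * np.sum(theta ** 2)

def run_chain(n_steps=2000, k=5, radius=0.5, seed=1):
    rng = np.random.default_rng(seed)
    pts, vals = [], []   # growing design set of true model evaluations
    n_true_evals = 0

    def surrogate(theta):
        nonlocal n_true_evals
        if pts:
            d = np.linalg.norm(np.array(pts) - theta, axis=1)
            near = d < radius
            if near.sum() >= k:
                # Enough neighbours: local linear fit, evaluated at theta
                # (intercept of the centered regression).
                X = np.c_[np.ones(near.sum()), np.array(pts)[near] - theta]
                coef, *_ = np.linalg.lstsq(X, np.array(vals)[near], rcond=None)
                return coef[0]
        # Too little local information: pay for a true evaluation.
        v = expensive_loglike(theta)
        pts.append(theta.copy()); vals.append(v)
        n_true_evals += 1
        return v

    theta = np.zeros(2)
    lp = surrogate(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + 0.5 * rng.standard_normal(2)
        lp_prop = surrogate(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain), n_true_evals

chain, n_evals = run_chain()
print(n_evals, "true model evaluations for", len(chain), "MCMC steps")
```

Once the design set covers the region the chain visits, most steps cost only a small least-squares solve rather than a forward-model run.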
A seamless, extended DG approach for advection-diffusion problems on unbounded domains
We propose and analyze a seamless extended Discontinuous Galerkin (DG)
discretization of advection-diffusion equations on semi-infinite domains. The
semi-infinite half line is split into a finite subdomain where the model uses a
standard polynomial basis, and a semi-unbounded subdomain where scaled Laguerre
functions are employed as basis and test functions. Numerical fluxes enable the
coupling at the interface between the two subdomains in the same way as
standard single domain DG interelement fluxes. A novel linear analysis on the
extended DG model yields unconditional stability with respect to the P\'eclet
number. Errors due to the use of different sets of basis functions on different
portions of the domain are negligible, as highlighted in numerical experiments
with the linear advection-diffusion and viscous Burgers' equations. With an
added damping term on the semi-infinite subdomain, the extended framework is
able to efficiently simulate absorbing boundary conditions without additional
conditions at the interface. A few modes in the semi-infinite subdomain are
found to suffice to deal with outgoing single wave and wave train signals more
accurately than standard approaches at a given computational cost, thus
providing an appealing model for fluid flow simulations in unbounded regions.
Contributions to discrete-time methods for room acoustic simulation
The sound field distribution in a room is the consequence of the acoustic properties of radiating sources and of the position, geometry and absorbing characteristics of the surrounding boundaries of the enclosure (the boundary conditions). Although acoustic wave theory is well consolidated, it is very difficult, nearly impossible, to find an analytical expression for the distribution of the sound variables in a real room as a function of time and position. The scenario constitutes an inhomogeneous boundary value problem, in which the complexity of the source properties and boundary conditions makes the problem extremely hard to solve.
Room acoustic simulation, as treated in this thesis, comprises the algebraic approach to solving the wave equation together with the way the boundary conditions and the source model of the scenario under analysis are defined.
Numerical methods provide accurate algorithms for this purpose and, among the different possibilities, discrete-time methods arise as a suitable solution for solving these partial differential equations under certain specific constraints. Together with the constant growth of computing power, these methods are becoming increasingly suitable for room acoustic simulation. However, important accuracy gaps remain in the definition of some of these conditions: current frequency-dependent boundary conditions do not comply with any physical model, and directive sources in discrete-time methods have received little attention.
This thesis reviews the current state of the art of boundary conditions and source modelling in discrete-time methods for room acoustic simulation, and contributes algorithms that enhance the boundary condition formulation, in a locally reacting impedance sense, and the modelling of directive sources with a defined radiation pattern. These algorithms have been particularised for discrete-time methods such as the Finite Difference Time Domain method and the Digital Waveguide Mesh.
Escolano Carrasco, J. (2008). Contributions to discrete-time methods for room acoustic simulation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8309
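The flavour of such discrete-time methods can be seen in a minimal 1-D FDTD update for the wave equation. The leapfrog stencil and room setup below are a generic textbook sketch, with simple pressure-release (zero-pressure) walls rather than the impedance boundary formulations developed in the thesis; all numerical values are illustrative.

```python
import numpy as np

# 1-D FDTD scheme for the wave equation p_tt = c^2 p_xx, the kind of
# discrete-time update used in room acoustic simulation.
c, L, nx = 343.0, 3.43, 100   # speed of sound (m/s), room length (m), grid points
dx = L / nx
lam = 1.0                     # Courant number c*dt/dx (stability requires <= 1)
dt = lam * dx / c             # resulting time step

p_prev = np.zeros(nx)
p = np.zeros(nx)
p[nx // 2] = 1.0              # initial pressure impulse at the room centre

for n in range(200):
    lap = np.zeros(nx)
    lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]   # discrete Laplacian
    p_next = 2 * p - p_prev + lam ** 2 * lap   # leapfrog update
    p_next[0] = p_next[-1] = 0.0               # pressure-release walls (illustrative)
    p_prev, p = p, p_next

print(np.max(np.abs(p)))
```

Real room models replace the zero-pressure walls with frequency-dependent impedance conditions and the point impulse with a directive source, which is precisely where the thesis contributes.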
Experiment-Based Validation and Uncertainty Quantification of Partitioned Models: Improving Predictive Capability of Multi-Scale Plasticity Models
Partitioned analysis involves coupling of constituent models that resolve their own scales or physics by exchanging inputs and outputs in an iterative manner. Through partitioning, simulations of complex physical systems are becoming evermore present in scientific modeling, making Verification and Validation of partitioned models for the purpose of quantifying the predictive capability of their simulations increasingly important. Parameterization of the constituent models as well as the coupling interface requires a significant amount of information about the system, which is often imprecisely known. Consequently, uncertainties as well as biases in constituent models and their interface lead to concerns about the accumulation and compensation of these uncertainties and errors during the iterative procedures of partitioned analysis. Furthermore, partitioned analysis relies on the availability of reliable constituent models for each component of a system. When a constituent is unavailable, assumptions must be made to represent the coupling relationship, often through uncertain parameters that are then calibrated.
This dissertation contributes to the field of computational modeling by presenting novel methods that take advantage of the transparency of partitioned analysis to compare constituent models with separate-effect experiments (measurements contained to the constituent domain) and coupled models with integral-effect experiments (measurements capturing behavior of the full system). The methods developed herein focus on these two types of experiments seeking to maximize the information that can be gained from each, thus progressing our capability to assess and improve the predictive capability of partitioned models through inverse analysis. The importance of this study stems from the need to make coupled models available for widespread use for predicting the behavior of complex systems with confidence to support decision-making in high-risk scenarios.
Methods proposed herein address the challenges currently limiting the predictive capability of coupled models through a focused analysis with available experiments. Bias-corrected partitioned analysis takes advantage of separate-effect experiments to reduce parametric uncertainty and quantify systematic bias at the constituent level, followed by integration of the bias correction into the coupling framework, thus ‘correcting’ the constituent model during coupling iterations and preventing the accumulation of errors in the final predictions. Model bias is the result of assumptions made in the modeling process, often due to lack of understanding of the underlying physics. Such is the case when a constituent model of a system component is entirely unavailable and cannot be developed due to lack of knowledge. However, if this constituent model were available and coupled to existing models of the other system components, bias in the coupled system would be reduced. This dissertation proposes a novel statistical inference method for developing empirical constituent models, in which integral-effect experiments are used to infer relationships missing from system models. Thus, the proposed inverse analysis may be implemented to infer underlying coupled relationships, not only improving the predictive capability of models by producing empirical constituents to allow for coupling, but also advancing our fundamental understanding of dependencies in the coupled system. Throughout this dissertation, the applicability and feasibility of the proposed methods are demonstrated with advanced multi-scale and multi-physics material models simulating complex material behaviors under extreme loading conditions, thus specifically contributing advancements to the material modeling community.
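The structure of a partitioned analysis with bias correction can be sketched abstractly: two constituent models exchange inputs and outputs until a coupled fixed point is reached, with a correction term (calibrated against separate-effect experiments) applied inside the loop. The scalar models and constant offset below are invented illustrations, not the dissertation's multi-scale plasticity constituents.

```python
def constituent_a(y):
    # Placeholder constituent model: input y, output x.
    return 1.0 + 0.3 * y

def constituent_b_biased(x):
    # Placeholder constituent with systematic model-form error;
    # the -0.1 offset stands in for an unknown physics deficiency.
    return 0.5 * x - 0.1

def bias_correction(x):
    # Correction inferred by comparing constituent_b to separate-effect
    # data; here we pretend calibration recovered the constant offset.
    return 0.1

def coupled_solve(tol=1e-10, max_iter=100):
    # Fixed-point (Gauss-Seidel style) coupling iteration with the
    # bias correction applied at every exchange, so errors cannot
    # accumulate through the iterations.
    x, y = 0.0, 0.0
    for _ in range(max_iter):
        x_new = constituent_a(y)
        y_new = constituent_b_biased(x_new) + bias_correction(x_new)
        if abs(x_new - x) + abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

x, y = coupled_solve()
print(x, y)  # fixed point of the bias-corrected coupled system
```

In practice each constituent is an expensive multi-physics solver and the correction is a statistical discrepancy model rather than a constant, but the exchange-and-correct loop has this shape.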
The EMCC / DARPA Massively Parallel Electromagnetic Scattering Project
The Electromagnetic Code Consortium (EMCC) was sponsored by the Advanced Research Projects Agency (ARPA) to demonstrate the effectiveness of massively parallel computing in large-scale radar signature predictions. The EMCC/ARPA project consisted of three parts.
Multilevel Delayed Acceptance MCMC with Applications to Hydrogeological Inverse Problems
Quantifying the uncertainty of model predictions is a critical task for engineering decision support systems. This is a particularly challenging effort in the context of statistical inverse problems, where the model parameters are unknown or poorly constrained and the data are often scarce. Many such problems emerge in the fields of hydrology and hydro-environmental engineering in general, and in hydrogeology in particular. While methods for rigorously quantifying the uncertainty of such problems exist, they are often prohibitively computationally expensive, particularly when the forward model is high-dimensional and expensive to evaluate. In this thesis, I present a Metropolis-Hastings algorithm, namely the Multilevel Delayed Acceptance (MLDA) algorithm, which exploits a hierarchy of forward models of increasing computational cost to significantly reduce the total cost of quantifying the uncertainty of high-dimensional, expensive forward models. The algorithm is shown to be in detailed balance with the posterior distribution of parameters, and its computational gains are demonstrated on multiple examples. Additionally, I present an approach for exploiting a deep neural network as an ultra-fast model approximation in an MLDA model hierarchy. This method is demonstrated in the context of both 2D and 3D groundwater flow modelling. Finally, I present a novel approach to adaptive optimal design of groundwater surveying, in which MLDA is employed to construct the posterior Monte Carlo estimates. This method utilises the posterior uncertainty of the primary problem in conjunction with the expected solution to an adjoint problem to sequentially determine the optimal location of the next datapoint.
Funding: Engineering and Physical Sciences Research Council (EPSRC); Alan Turing Institute.
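A two-level delayed-acceptance Metropolis-Hastings step, the building block that MLDA generalises to a full model hierarchy, can be sketched as follows. The coarse and fine "models" here are toy Gaussian densities rather than groundwater simulators; the second-stage correction makes the chain target the fine posterior exactly.

```python
import numpy as np

def log_coarse(theta):
    # Cheap approximate posterior (coarse forward model).
    return -0.5 * (theta / 1.2) ** 2

def log_fine(theta):
    # "Expensive" target posterior (fine forward model): standard normal.
    return -0.5 * theta ** 2

def delayed_acceptance(n_steps=5000, step=1.0, seed=3):
    rng = np.random.default_rng(seed)
    theta = 0.0
    lc, lf = log_coarse(theta), log_fine(theta)
    chain, fine_evals = [], 0
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lc_prop = log_coarse(prop)
        # Stage 1: screen the proposal with the coarse model only;
        # coarse rejections never touch the fine model.
        if np.log(rng.uniform()) < lc_prop - lc:
            # Stage 2: correct with the fine model. The coarse ratio is
            # divided out, restoring detailed balance w.r.t. the fine posterior.
            lf_prop = log_fine(prop)
            fine_evals += 1
            if np.log(rng.uniform()) < (lf_prop - lf) - (lc_prop - lc):
                theta, lc, lf = prop, lc_prop, lf_prop
        chain.append(theta)
    return np.array(chain), fine_evals

chain, fine_evals = delayed_acceptance()
print(fine_evals, "fine-model evaluations for", len(chain), "steps")
```

MLDA stacks several such stages, so the most expensive model is consulted only for proposals that survive all coarser screens.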