
    Solving Dynamic Discrete Choice Models Using Smoothing and Sieve Methods

    We propose to combine smoothing, simulations and sieve approximations to solve for either the integrated or expected value function in a general class of dynamic discrete choice (DDC) models. We use importance sampling to approximate the Bellman operators defining the two functions. The random Bellman operators, and therefore also the corresponding solutions, are generally non-smooth, which is undesirable. To circumvent this issue, we introduce a smoothed version of the random Bellman operator and solve for the corresponding smoothed value function using sieve methods. We show that one can avoid using sieves by generalizing and adapting the `self-approximating' method of Rust (1997) to our setting. We provide an asymptotic theory for the approximate solutions and show that they converge at a root-N rate, where N is the number of Monte Carlo draws, towards Gaussian processes. We examine their performance in practice through a set of numerical experiments and find that both methods perform well, with the sieve method being particularly attractive in terms of computational speed and accuracy.
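    The smoothed, simulated Bellman operator at the heart of this approach can be illustrated on a toy model. Below is a minimal sketch, assuming a two-choice model with logit-type smoothing (so the smoothed max is a log-sum-exp), a scalar state with an AR-style simulated transition, and a degree-4 polynomial sieve fitted by least squares; all functional forms and parameter values are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(0)
        beta = 0.95                       # discount factor
        N = 500                           # Monte Carlo draws per (state, choice)
        grid = np.linspace(0.0, 1.0, 25)  # collocation states

        def basis(x):                     # degree-4 polynomial sieve
            return np.vander(x, 5, increasing=True)

        def utilities(x):                 # flow utilities of the two choices (assumed)
            return np.stack([np.log(1.0 + x), 0.5 * np.sqrt(x)], axis=-1)

        B = basis(grid)
        # simulated next states for each (state, choice): assumed AR-type transition
        eps = rng.normal(0.0, 0.1, size=(grid.size, 2, N))
        x_next = np.clip(0.9 * grid[:, None, None] + 0.05 + eps, 0.0, 1.0)

        theta = np.zeros(B.shape[1])      # sieve coefficients for the value function
        for _ in range(600):
            # Monte Carlo estimate of E[V(x') | x, choice] under the current sieve
            EV = (basis(x_next.ravel()) @ theta).reshape(grid.size, 2, N).mean(axis=2)
            v = utilities(grid) + beta * EV
            TV = np.log(np.exp(v).sum(axis=1))        # smoothed max: log-sum-exp
            theta_new, *_ = np.linalg.lstsq(B, TV, rcond=None)
            if np.max(np.abs(theta_new - theta)) < 1e-10:
                theta = theta_new
                break
            theta = theta_new

        print("sieve coefficients:", np.round(theta, 3))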

    Data Assimilation: A Mathematical Introduction

    These notes provide a systematic mathematical treatment of the subject of data assimilation.

    Evaluating Data Assimilation Algorithms

    Data assimilation leads naturally to a Bayesian formulation in which the posterior probability distribution of the system state, given the observations, plays a central conceptual role. The aim of this paper is to use this Bayesian posterior probability distribution as a gold standard against which to evaluate various commonly used data assimilation algorithms. A key aspect of geophysical data assimilation is the high dimensionality and low predictability of the computational model. With this in mind, yet with the goal of allowing an explicit and accurate computation of the posterior distribution, we study the 2D Navier-Stokes equations in a periodic geometry. We compute the posterior probability distribution by state-of-the-art statistical sampling techniques. The commonly used algorithms that we evaluate against this gold standard, quantified by the relative error in reproducing its moments, are 4DVAR and a variety of sequential filtering approximations based on 3DVAR and on extended and ensemble Kalman filters. The primary conclusions are that: (i) with appropriate parameter choices, approximate filters can perform well in reproducing the mean of the desired probability distribution; (ii) however, they typically perform poorly when attempting to reproduce the covariance; (iii) this poor performance is compounded by the need to modify the covariance in order to induce stability. Thus, whilst filters can be a useful tool in predicting mean behavior, they should be viewed with caution as predictors of uncertainty. These conclusions are intrinsic to the algorithms and will not change if the model complexity is increased, for example by employing a smaller viscosity or by using a detailed NWP model.
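    The evaluation criterion used here, relative error in reproducing the moments of the gold-standard posterior, is simple to state in code. A minimal sketch, with random arrays standing in for MCMC output and a filter ensemble (all names and numbers are illustrative):

        import numpy as np

        def moment_errors(gold_samples, filter_ensemble):
            """Relative errors of the ensemble mean (2-norm) and covariance
            (Frobenius norm) against gold-standard posterior samples."""
            m_gold = gold_samples.mean(axis=0)
            C_gold = np.cov(gold_samples, rowvar=False)
            m_filt = filter_ensemble.mean(axis=0)
            C_filt = np.cov(filter_ensemble, rowvar=False)
            e_mean = np.linalg.norm(m_filt - m_gold) / np.linalg.norm(m_gold)
            e_cov = np.linalg.norm(C_filt - C_gold, 'fro') / np.linalg.norm(C_gold, 'fro')
            return e_mean, e_cov

        # toy illustration of conclusions (i)-(ii): a filter ensemble whose mean is
        # close to the truth but whose spread is deflated scores well on the mean
        # error and poorly on the covariance error
        rng = np.random.default_rng(1)
        gold = rng.multivariate_normal([1.0, -2.0], [[1.0, 0.3], [0.3, 0.5]], size=20000)
        ens = rng.multivariate_normal([1.02, -1.97], [[0.6, 0.1], [0.1, 0.3]], size=100)
        print(moment_errors(gold, ens))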

    Using the generalized interpolation material point method for fluid-solid interactions induced by surface tension

    This thesis is devoted to the development of new algorithms based on the Generalized Interpolation Material Point Method (GIMP) for handling surface tension and contact (wetting) in fluid-solid interaction (FSI) problems at small scales. In these problems, surface tension becomes so dominant that its influence on both fluids and solids must be considered. Since analytical solutions for most engineering problems are usually unavailable, numerical methods are needed to describe and predict the complicated time-dependent states of the solid and fluid that arise from surface tension effects. Traditional computational methods for fluid-solid interactions may not be effective because of their weakness in solving large-deformation problems and the complicated coupling of two different computational frameworks, one for the solid and one for the fluid. In contrast, GIMP, a mesh-free algorithm for solid mechanics problems, is numerically effective in handling problems involving large deformations and fracture. Here we extend the capability of GIMP to fluid dynamics problems with surface tension, and we develop a new contact algorithm for wetting boundary conditions, including the modeling of contact angle and slip near the triple points where the three phases -- fluid, solid, and vapor -- meet. The error of the new GIMP algorithm for FSI problems at small scales, as verified by various benchmark problems, generally falls within 5%. In this thesis, we have successfully extended the capability of GIMP to FSI problems under surface tension in a one-solver numerical framework, a unique and innovative approach.

    Contents: Chapter 1. Introduction -- Chapter 2. Using the generalized interpolation material point method for fluid dynamics at low Reynolds numbers -- Chapter 3. On the modeling of surface tension and its applications by the generalized interpolation material point method -- Chapter 4. Using the generalized interpolation material point method for fluid-solid interactions induced by surface tension -- Chapter 5. Conclusions.
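    For context, the kernel that distinguishes GIMP from the original material point method is the grid basis function averaged over a finite particle domain. A sketch of the standard 1-D GIMP shape function (Bardenhagen and Kober, 2004) with a partition-of-unity check; the thesis's surface-tension and contact algorithms are not reproduced here:

        import numpy as np

        def gimp_weight(r, h, lp):
            """1-D GIMP shape function: the hat-shaped grid basis of spacing h
            averaged over a particle domain of half-width lp (assumes lp <= h/2).
            r is the node-to-particle distance."""
            r = abs(r)
            if r < lp:
                return 1.0 - (r * r + lp * lp) / (2.0 * h * lp)
            if r < h - lp:
                return 1.0 - r / h
            if r < h + lp:
                return (h + lp - r) ** 2 / (4.0 * h * lp)
            return 0.0

        # weights over the surrounding nodes sum to one (partition of unity)
        h, lp, xp = 1.0, 0.25, 0.37
        nodes = np.arange(-2, 4)   # grid nodes at integer positions
        print(sum(gimp_weight(xp - xi, h, lp) for xi in nodes))   # -> 1.0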

    Multi-patch discontinuous Galerkin isogeometric analysis for wave propagation: explicit time-stepping and efficient mass matrix inversion

    We present a class of spline finite element methods for time-domain wave propagation which are particularly amenable to explicit time-stepping. The proposed methods utilize a discontinuous Galerkin discretization to enforce continuity of the solution field across geometric patches in a multi-patch setting, which yields a mass matrix with convenient block-diagonal structure. Over each patch, we show how to accurately and efficiently invert mass matrices in the presence of curved geometries by using a weight-adjusted approximation of the mass matrix inverse. This approximation restores a tensor-product structure while retaining provable high-order accuracy and semi-discrete energy stability. We also estimate the maximum stable timestep for spline-based finite elements and show that the use of spline spaces results in less stringent CFL restrictions than equivalent piecewise continuous or discontinuous finite element spaces. Finally, we explore the use of optimal knot vectors based on L2 n-widths. We show how the use of optimal knot vectors can improve both approximation properties and the maximum stable timestep, and present a simple heuristic method for approximating optimal knot positions. Numerical experiments confirm the accuracy and stability of the proposed methods.
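    The weight-adjusted approximation admits a compact illustration: for a mass matrix M_J weighted by the geometric Jacobian J, one approximates M_J^{-1} by M^{-1} M_{1/J} M^{-1}, where M is the reference mass matrix whose tensor-product structure is cheap to invert and M_{1/J} is weighted by 1/J. A 1-D sketch with a monomial basis and Gauss quadrature standing in for the paper's spline spaces:

        import numpy as np
        from numpy.polynomial.legendre import leggauss

        p = 4                                        # polynomial degree
        xq, wq = leggauss(p + 5)                     # Gauss quadrature on [-1, 1]
        V = np.vander(xq, p + 1, increasing=True)    # basis evaluated at quad points

        J = 1.2 + 0.5 * xq                           # smooth, positive Jacobian (assumed)
        M = V.T @ (wq[:, None] * V)                  # reference mass matrix
        M_J = V.T @ ((wq * J)[:, None] * V)          # curved-geometry mass matrix
        M_iJ = V.T @ ((wq / J)[:, None] * V)         # mass matrix weighted by 1/J

        Minv = np.linalg.inv(M)
        wa_inv = Minv @ M_iJ @ Minv                  # weight-adjusted inverse
        exact = np.linalg.inv(M_J)
        print(np.linalg.norm(wa_inv - exact) / np.linalg.norm(exact))
        # small for smooth J; the approximation is high-order accurate under refinement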

    Fast Ensemble Smoothing

    Smoothing is essential to many oceanographic, meteorological and hydrological applications. The fixed-interval smoothing problem updates all desired states within a time interval using all available observations, while the fixed-lag smoothing problem updates only a fixed number of states prior to the observation at the current time. Fixed-lag smoothing is generally thought to be computationally faster than fixed-interval smoothing, and can be an appropriate approximation for long-interval smoothing problems. In this paper, we use an ensemble-based approach to fixed-interval and fixed-lag smoothing, and synthesize two algorithms. The first algorithm produces a solution to the fixed-interval smoothing problem in time linear in the interval length, with a fixed constant factor; the second produces a fixed-lag solution whose cost is independent of the lag length. Identical-twin experiments conducted with the Lorenz-95 model show that for lag lengths approximately equal to the error doubling time, or for long intervals, the proposed methods can provide significant computational savings. These results suggest that ensemble methods yield both fixed-interval and fixed-lag smoothing solutions at little additional cost over filtering and model propagation: in practical ensemble applications, the additional increment is a small fraction of either the filtering or the model propagation cost. We also show that fixed-interval smoothing can perform as fast as fixed-lag smoothing and may be advantageous when memory is not an issue.
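    The fixed-lag update can be sketched in a few lines: each stored state in the lag window is corrected with a Kalman-type gain built from ensemble cross-covariances against the current predicted observation. A perturbed-observation variant with illustrative shapes (this is a generic ensemble smoother step, not necessarily the paper's exact algorithm):

        import numpy as np

        def fixed_lag_update(lagged, obs_ens, y, R, rng):
            """lagged: list of (n, m) state ensembles in the lag window;
            obs_ens: (d, m) ensemble of predicted observations at the current time;
            y: (d,) observation; R: (d, d) observation error covariance."""
            m = obs_ens.shape[1]
            dY = obs_ens - obs_ens.mean(axis=1, keepdims=True)
            S = dY @ dY.T / (m - 1) + R                        # innovation covariance
            updated = []
            for X in lagged:
                dX = X - X.mean(axis=1, keepdims=True)
                K = (dX @ dY.T / (m - 1)) @ np.linalg.inv(S)   # cross-covariance gain
                pert = rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
                updated.append(X + K @ (y[:, None] + pert - obs_ens))
            return updated

        # toy call: one 3-dimensional lagged state, 50 members, observe component 0
        rng = np.random.default_rng(2)
        X0 = rng.normal(size=(3, 50))
        out = fixed_lag_update([X0], X0[:1], np.array([0.5]), 0.1 * np.eye(1), rng)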

    Data Assimilation by Conditioning on Future Observations

    Conventional recursive filtering approaches, designed for quantifying the state of an evolving uncertain dynamical system with intermittent observations, use a sequence of (i) an uncertainty propagation step followed by (ii) a step in which the associated data are assimilated using Bayes' rule. In this paper we switch the order of the steps to (i) one-step-ahead data assimilation followed by (ii) uncertainty propagation. This route leads to a class of filtering algorithms named 'smoothing filters'. For a system driven by random noise, our proposed methods require the probability distribution of the driving noise, after the assimilation, to be biased by a nonzero mean. The system noise, conditioned on future observations, in turn pushes the filtering solution forward in time, closer to the true state, and helps to find a more accurate approximate solution for the state estimation problem.
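    A scalar linear-Gaussian toy makes the reordering concrete: the driving noise w in x' = a*x + w is conditioned on the next observation y = x' + v, which gives it the nonzero mean mentioned above, and only then is the state propagated. All model values are illustrative, and the sketch ignores uncertainty in the current estimate:

        import numpy as np

        rng = np.random.default_rng(3)
        a, Q, R = 0.9, 0.04, 0.01            # dynamics coefficient, noise/obs variances
        x_true, m = 1.0, 0.8                 # true state and current estimate

        # simulate one step of the truth and its observation
        x_next = a * x_true + rng.normal(0.0, np.sqrt(Q))
        y = x_next + rng.normal(0.0, np.sqrt(R))

        # step (i): assimilate first -- condition the noise w on the future observation
        innovation = y - a * m               # predicted misfit
        w_mean = Q / (Q + R) * innovation    # conditioned noise has a nonzero mean
        w_var = Q * R / (Q + R)              # ...and a reduced variance

        # step (ii): then propagate with the conditioned (biased) noise
        m_next = a * m + w_mean
        print(f"obs={y:.3f}  plain forecast={a*m:.3f}  conditioned={m_next:.3f}")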

    Reconstruction of cosmological initial conditions from galaxy redshift catalogues

    We present and test a new method for the reconstruction of cosmological initial conditions from a full-sky galaxy catalogue. This method, called ZTRACE, is based on a self-consistent solution for the growing mode of gravitational instability according to the Zel'dovich approximation and higher-order Lagrangian perturbation theory. Given the evolved redshift-space density field, smoothed on some scale, ZTRACE finds, via an iterative procedure, an approximation to the initial density field for any given set of cosmological parameters; real-space densities and peculiar velocities are also reconstructed. The method is tested by applying it to N-body simulations of an Einstein-de Sitter and an open cold dark matter universe. It is shown that errors in the estimate of the density contrast dominate the noise of the reconstruction. As a consequence, the reconstruction of real-space density and peculiar velocity fields using non-linear algorithms improves little on reconstructions based on linear theory. The use of a mass-preserving adaptive smoothing, equivalent to a smoothing in Lagrangian space, allows an unbiased (although noisy) reconstruction of the initial conditions, as long as the (linearly extrapolated) density contrast does not exceed unity. The probability distribution function of the initial conditions is recovered to high precision, even for Gaussian smoothing scales of ~ 5 Mpc/h, except for the tail at delta >~ 1. This result is insensitive to the assumptions of the background cosmology.
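    The Zel'dovich step underlying such reconstructions is easy to sketch: given a linear density contrast delta on a periodic grid, solve grad^2 phi = delta in Fourier space and displace particles from their Lagrangian positions by psi = -grad phi, scaled by the growth factor D. The 2-D grid, the random toy field, and D below are illustrative; ZTRACE's iterative redshift-space machinery is not shown:

        import numpy as np

        n, L, D = 64, 100.0, 1.0                      # grid size, box [Mpc/h], growth factor
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                                # avoid division by zero at k = 0

        rng = np.random.default_rng(4)
        delta = rng.normal(size=(n, n))
        delta -= delta.mean()                         # toy (linear) density contrast

        phi_k = -np.fft.fft2(delta) / k2              # grad^2 phi = delta
        phi_k[0, 0] = 0.0
        psi_x = np.real(np.fft.ifft2(-1j * kx * phi_k))   # psi = -grad phi
        psi_y = np.real(np.fft.ifft2(-1j * ky * phi_k))

        q = (np.indices((n, n)) + 0.5) * (L / n)      # Lagrangian particle positions
        x = (q[0] + D * psi_x) % L                    # Eulerian positions, periodic box
        y = (q[1] + D * psi_y) % L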