
    Importance Sampling: Intrinsic Dimension and Computational Cost

    The basic idea of importance sampling is to use independent samples from a proposal measure in order to approximate expectations with respect to a target measure. It is key to understand how many samples are required in order to guarantee accurate approximations. Intuitively, some notion of distance between the target and the proposal should determine the computational cost of the method. A major challenge is to quantify this distance in terms of parameters or statistics that are pertinent for the practitioner. The subject has attracted substantial interest from within a variety of communities. The objective of this paper is to overview and unify the resulting literature by creating an overarching framework. A general theory is presented, with a focus on the use of importance sampling in Bayesian inverse problems and filtering. Comment: Statistical Science
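    The basic scheme the abstract describes can be sketched in a few lines. This is a minimal self-normalized importance sampling example; the particular target N(1, 1), proposal N(0, 2), and sample size are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Self-normalized importance sampling: estimate E_target[f(X)] using
# independent samples from a proposal. Target N(1, 1) and proposal
# N(0, 2) are hypothetical stand-ins for illustration.
n = 100_000
x = rng.normal(0.0, 2.0, size=n)          # draws from the proposal

# Unnormalized log-weights: log target density minus log proposal density
# (normalizing constants cancel after self-normalization)
log_w = -0.5 * (x - 1.0) ** 2 - (-0.5 * (x / 2.0) ** 2 - np.log(2.0))
w = np.exp(log_w - log_w.max())           # stabilize before exponentiating
w /= w.sum()                              # normalize the weights

estimate = np.sum(w * x)                  # approximates E[X] = 1
ess = 1.0 / np.sum(w ** 2)                # effective sample size diagnostic
```

    The effective sample size computed at the end is one common way of quantifying the "distance" between proposal and target that the abstract says governs computational cost: the more the weights degenerate, the smaller the ESS relative to n.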

    Well-Posedness And Accuracy Of The Ensemble Kalman Filter In Discrete And Continuous Time

    The ensemble Kalman filter (EnKF) is a method for combining a dynamical model with data in a sequential fashion. Despite its widespread use, there has been little analysis of its theoretical properties. Many of the algorithmic innovations associated with the filter, which are required to make a usable algorithm in practice, are derived in an ad hoc fashion. The aim of this paper is to initiate the development of a systematic analysis of the EnKF, in particular to do so in the small ensemble size limit. The perspective is to view the method as a state estimator, and not as an algorithm which approximates the true filtering distribution. The perturbed observation version of the algorithm is studied, without and with variance inflation. Without variance inflation, well-posedness of the filter is established; with variance inflation, accuracy of the filter, with respect to the true signal underlying the data, is established. The algorithm is considered in discrete time, and also for a continuous time limit arising when observations are frequent and subject to large noise. The underlying dynamical model, and assumptions about it, are sufficiently general to include the Lorenz '63 and '96 models, together with the incompressible Navier-Stokes equation on a two-dimensional torus. The analysis is limited to the case of complete observation of the signal with additive white noise. Numerical results are presented for the Navier-Stokes equation on a two-dimensional torus for both complete and partial observations of the signal with additive white noise.
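    The perturbed-observation analysis step studied in the abstract can be sketched as follows. The dimensions, linear observation operator H, noise covariances, and observation value are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# One perturbed-observation EnKF analysis step (hypothetical setup).
d, m, N = 4, 2, 50                  # state dim, obs dim, ensemble size
H = np.eye(m, d)                    # observe the first two components
R = 0.1 * np.eye(m)                 # observation noise covariance

ensemble = rng.normal(size=(N, d))  # forecast ensemble, one member per row
y = np.array([1.0, -0.5])           # the incoming observation

# Sample covariance of the forecast ensemble
X = ensemble - ensemble.mean(axis=0)
C = X.T @ X / (N - 1)

# Kalman gain built from the ensemble covariance
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)

# Perturbed observations: each member assimilates y plus fresh noise,
# which keeps the analysis ensemble spread statistically consistent
perturbed = y + rng.multivariate_normal(np.zeros(m), R, size=N)
analysis = ensemble + (perturbed - ensemble @ H.T) @ K.T
```

    Variance inflation, as discussed in the abstract, would amount to scaling the sample covariance C up by a factor slightly greater than one before forming the gain.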

    MCMC methods for functions: modifying old algorithms to make them faster

    Many problems arising in applications result in the need to probe a probability distribution for functions. Examples include Bayesian nonparametric statistics and conditioned diffusion processes. Standard MCMC algorithms typically become arbitrarily slow under the mesh refinement dictated by nonparametric description of the unknown function. We describe an approach to modifying a whole range of MCMC methods which ensures that their speed of convergence is robust under mesh refinement. In the applications of interest the data is often sparse and the prior specification is an essential part of the overall modeling strategy. The algorithmic approach that we describe is applicable whenever the desired probability measure has density with respect to a Gaussian process or Gaussian random field prior, and to some useful non-Gaussian priors constructed through random truncation. Applications are shown in density estimation, data assimilation in fluid mechanics, subsurface geophysics and image registration. The key design principle is to formulate the MCMC method for functions. This leads to algorithms which can be implemented via minor modification of existing algorithms, yet which show enormous speed-up on a wide range of applied problems.
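    A standard example of the "formulate the MCMC method for functions" principle is the preconditioned Crank-Nicolson (pCN) proposal, whose acceptance probability involves only the likelihood potential and hence does not degenerate as the discretization dimension grows. The prior covariance, potential, step size, and chain length below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)

# pCN MCMC for a target with density exp(-phi(u)) with respect to a
# Gaussian prior N(0, C). Everything concrete here is a toy stand-in.
d = 100
C_sqrt = np.diag(1.0 / np.arange(1, d + 1))    # square root of a decaying prior covariance

def phi(u):
    # Toy log-likelihood potential: one noisy observation of u[0]
    return 0.5 * (u[0] - 1.0) ** 2 / 0.1

beta = 0.2                                     # pCN step-size parameter
u = C_sqrt @ rng.normal(size=d)                # start from a prior draw
accepted = 0
for _ in range(2000):
    xi = C_sqrt @ rng.normal(size=d)           # fresh prior sample
    v = np.sqrt(1.0 - beta ** 2) * u + beta * xi   # pCN proposal
    # Accept/reject depends only on phi, not on the prior density,
    # so the rate does not collapse under mesh refinement (growing d)
    if np.log(rng.uniform()) < phi(u) - phi(v):
        u, accepted = v, accepted + 1

acceptance_rate = accepted / 2000
```

    By contrast, a standard random-walk proposal in the same setting would need a step size shrinking with d to keep the acceptance rate away from zero, which is exactly the slowdown the abstract describes.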

    Overcoming the false-minima problem in direct methods: Structure determination of the packaging enzyme P4 from bacteriophage φ13

    The problems encountered during the phasing and structure determination of the packaging enzyme P4 from bacteriophage φ13 using the anomalous signal from selenium in a single-wavelength anomalous dispersion (SAD) experiment are described. The oligomeric state of P4 in the virus is a hexamer (with sixfold rotational symmetry) and it crystallizes in space group C2, with four hexamers in the crystallographic asymmetric unit. Current state-of-the-art ab initio phasing software yielded solutions consisting of 96 atoms arranged as sixfold symmetric clusters of Se atoms. However, although these solutions showed high correlation coefficients, indicating that the substructure had been solved, the resulting phases produced uninterpretable electron-density maps. Only after further analysis were correct solutions found (also of 96 atoms), leading to the eventual identification of the positions of 120 Se atoms. Here, it is demonstrated how the difficulties in finding a correct phase solution arise from an intricate false-minima problem. © 2005 International Union of Crystallography - all rights reserved

    Some aspects of the thermal degradation of epoxide resins. Part 1

    This Note contains a review of previous work in the field of pyrolytic degradation of epoxide resins, and a description of the development of an instrument for this purpose, using the principle of gas chromatography. The method depends on the pyrolysis of the material using an electrically heated filament; the difficulties of this method are critically examined, and attempts to overcome them are described. The pyrolytic degradation, in a nitrogen atmosphere, of unhardened epoxide resin was investigated; likewise, the degradation of resin hardened with 1:2-diaminoethane and triethylenetetramine is described. An attempt has been made to explain, in terms of possible degradation reactions, the actual compounds detected in the pyrolytic break-down.

    Stability of Filters for the Navier-Stokes Equation

    Data assimilation methodologies are designed to incorporate noisy observations of a physical system into an underlying model in order to infer the properties of the state of the system. Filters refer to a class of data assimilation algorithms designed to update the estimation of the state in an on-line fashion, as data is acquired sequentially. For linear problems subject to Gaussian noise, filtering can be performed exactly using the Kalman filter. For nonlinear systems it can be approximated in a systematic way by particle filters. However, in high dimensions these particle filtering methods can break down. Hence, for the large nonlinear systems arising in applications such as weather forecasting, various ad hoc filters are used, mostly based on making Gaussian approximations. The purpose of this work is to study the properties of these ad hoc filters, working in the context of the 2D incompressible Navier-Stokes equation. By working in this infinite dimensional setting we provide an analysis which is useful for understanding high dimensional filtering, and is robust to mesh-refinement. We describe theoretical results showing that, in the small observational noise limit, the filters can be tuned to accurately track the signal itself (filter stability), provided the system is observed in a sufficiently large low dimensional space; roughly speaking, this space should be large enough to contain the unstable modes of the linearized dynamics. Numerical results are given which illustrate the theory. In a simplified scenario we also derive, and study numerically, a stochastic PDE which determines filter stability in the limit of frequent observations, subject to large observational noise. The positive results herein concerning filter stability complement recent numerical studies which demonstrate that the ad hoc filters perform poorly in reproducing statistical variation about the true signal.
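    The exact linear-Gaussian baseline mentioned in the abstract is the Kalman filter; one predict/update cycle can be sketched as below. The dynamics matrix, noise covariances, and observation are illustrative and have no connection to the Navier-Stokes setting of the paper.

```python
import numpy as np

# One predict/update cycle of the Kalman filter, which is exact for
# linear dynamics with Gaussian noise. All matrices are toy examples.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # linear dynamics
Q = 0.01 * np.eye(2)                     # model noise covariance
H = np.array([[1.0, 0.0]])               # observe the first component
R = np.array([[0.1]])                    # observation noise covariance

m = np.zeros(2)                          # prior mean
P = np.eye(2)                           # prior covariance
y = np.array([0.8])                      # incoming observation

# Predict: push mean and covariance through the dynamics
m_pred = A @ m
P_pred = A @ P @ A.T + Q

# Update: correct the forecast with the observation
S = H @ P_pred @ H.T + R                 # innovation covariance
K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
m_post = m_pred + K @ (y - H @ m_pred)   # posterior mean
P_post = (np.eye(2) - K @ H) @ P_pred    # posterior covariance
```

    The ad hoc Gaussian filters studied in the paper keep this predict/update structure but replace the exact covariance propagation with approximations, which is why their stability requires separate analysis.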

    On the terminal velocity of sedimenting particles in a flowing fluid

    The influence of an underlying carrier flow on the terminal velocity of sedimenting particles is investigated both analytically and numerically. Our theoretical framework works for a general class of (laminar or turbulent) velocity fields and, by means of an ordinary perturbation expansion at small Stokes number, leads to closed partial differential equations (PDEs) whose solutions contain all relevant information on the sedimentation process. The set of PDEs is solved by means of direct numerical simulations for a class of 2D cellular flows (static and time-dependent) and the resulting phenomenology is analysed and discussed. Comment: 13 pages, 2 figures, submitted to JP