
    Approximation of Bayesian inverse problems for PDEs

    Inverse problems are often ill-posed, with solutions that depend sensitively on data. In any numerical approach to the solution of such problems, regularization of some form is needed to counteract the resulting instability. This paper is based on an approach to regularization, employing a Bayesian formulation of the problem, which leads to a notion of well-posedness for inverse problems at the level of probability measures. The stability which results from this well-posedness may be used as the basis for quantifying the approximation, in finite-dimensional spaces, of inverse problems for functions. This paper contains a theory which uses this stability property to estimate the distance between the true and approximate posterior distributions, in the Hellinger metric, in terms of error estimates for the approximation of the underlying forward problem. This is potentially useful as it allows for the transfer of estimates from the numerical analysis of forward problems into estimates for the solution of the related inverse problem. It is noteworthy that, when the prior is a Gaussian random field model, controlling differences in the Hellinger metric leads to control on the differences between expected values of polynomially bounded functions and operators, including the mean and covariance operator. The ideas are applied to some non-Gaussian inverse problems where the goal is determination of the initial condition for the Stokes or Navier–Stokes equations from Lagrangian and Eulerian observations, respectively.
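    The shape of the estimate this theory delivers can be sketched as follows; the symbols (forward map G, approximation G^N, rate ψ) are illustrative notation assumed here, not quoted from the paper:

```latex
% If the forward map G is replaced by a numerical approximation G^N with
\|G(u) - G^N(u)\| \le C\,\psi(N), \qquad \psi(N) \to 0 \text{ as } N \to \infty,
% then the true and approximate posteriors are close in the Hellinger metric:
d_{\mathrm{Hell}}\bigl(\mu^{y}, \mu^{y,N}\bigr) \le C'\,\psi(N),
% and, for a Gaussian prior, expectations of polynomially bounded f inherit the rate:
\bigl| \mathbb{E}^{\mu^{y}} f - \mathbb{E}^{\mu^{y,N}} f \bigr| \le C''\,\psi(N).
```

    The point is that ψ(N) comes directly from standard forward-problem numerical analysis.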

    Variational data assimilation using targeted random walks

    The variational approach to data assimilation is a widely used methodology both for online prediction and for reanalysis (offline hindcasting). In either of these scenarios it can be important to assess uncertainties in the assimilated state. Ideally it would be desirable to have complete information concerning the Bayesian posterior distribution for the unknown state, given data. The purpose of this paper is to show that complete computational probing of this posterior distribution is now within reach in the offline situation. In this paper we introduce an MCMC method which enables us to sample directly from the Bayesian posterior distribution on the unknown functions of interest, given observations. Since these methods are currently too computationally expensive to use in an online filtering scenario, we frame this in the context of offline reanalysis. Using a simple random walk-type MCMC method, we are able to characterize the posterior distribution using only evaluations of the forward model of the problem and of the model-data mismatch. No adjoint model is required for the method we use; however, more sophisticated MCMC methods are available which do exploit derivative information. For simplicity of exposition we consider the problem of assimilating data, either Eulerian or Lagrangian, into a low Reynolds number (Stokes flow) scenario in a two-dimensional periodic geometry. We show that in many cases it is possible to recover the initial condition and model error (which we describe as unknown forcing to the model) from data, and that with increasing amounts of informative data, the uncertainty in our estimates reduces.
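    A minimal sketch of a random walk-type MCMC of this kind, needing only forward-model and misfit evaluations and no adjoint; the forward map, data, and all parameters below are toy stand-ins for the Stokes-flow setup, invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: "forward" maps an unknown vector u (standing in
# for a discretised initial condition) to predicted observations.
def forward(u):
    return u**2  # placeholder for a PDE solve

y = np.array([1.0, 4.0])   # observed data (invented)
sigma2 = 0.1               # observational noise variance
prior_var = 1.0            # Gaussian prior variance

def log_post(u):
    # Unnormalised log posterior = -(model-data misfit) - (prior penalty).
    misfit = np.sum((y - forward(u))**2) / (2 * sigma2)
    prior = np.sum(u**2) / (2 * prior_var)
    return -(misfit + prior)

def rwm(u0, n_steps, step=0.1):
    # Random-walk Metropolis: propose a Gaussian perturbation, accept or
    # reject using only evaluations of log_post (no derivatives).
    u, lp = u0.copy(), log_post(u0)
    samples = []
    for _ in range(n_steps):
        v = u + step * rng.standard_normal(u.shape)
        lpv = log_post(v)
        if np.log(rng.random()) < lpv - lp:
            u, lp = v, lpv
        samples.append(u.copy())
    return np.array(samples)

samples = rwm(np.zeros(2), 5000)
```

    The chain of samples then characterizes the posterior: its spread is the uncertainty estimate, and it tightens as more informative data are added.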

    MCMC methods for functions: modifying old algorithms to make them faster

    Many problems arising in applications result in the need to probe a probability distribution for functions. Examples include Bayesian nonparametric statistics and conditioned diffusion processes. Standard MCMC algorithms typically become arbitrarily slow under the mesh refinement dictated by the nonparametric description of the unknown function. We describe an approach to modifying a whole range of MCMC methods which ensures that their speed of convergence is robust under mesh refinement. In the applications of interest the data are often sparse and the prior specification is an essential part of the overall modeling strategy. The algorithmic approach that we describe is applicable whenever the desired probability measure has density with respect to a Gaussian process or Gaussian random field prior, and to some useful non-Gaussian priors constructed through random truncation. Applications are shown in density estimation, data assimilation in fluid mechanics, subsurface geophysics and image registration. The key design principle is to formulate the MCMC method for functions. This leads to algorithms which can be implemented via minor modification of existing algorithms, yet which show enormous speed-up on a wide range of applied problems.
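    A well-known example of such a dimension-robust modification, for a Gaussian prior N(0, C), is the preconditioned Crank–Nicolson (pCN) proposal: it preserves the prior exactly, so the accept/reject step involves only the likelihood and remains well defined in the mesh-refinement limit. The sketch below uses an invented toy likelihood and covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discretisation: u lives on an n-point mesh; the prior is N(0, C) with
# a trace-class covariance (eigenvalues 1/k^2 here). Numbers are illustrative.
n = 200
C_sqrt = np.diag(1.0 / np.arange(1, n + 1))  # C^{1/2}

def phi(u):
    # Hypothetical negative log-likelihood: one noisy observation of mean(u).
    return (np.mean(u) - 0.5) ** 2 / (2 * 0.01)

def pcn(u0, n_steps, beta=0.2):
    # pCN proposal: v = sqrt(1 - beta^2) * u + beta * xi, with xi ~ N(0, C).
    # Because the proposal preserves the prior, the acceptance probability
    # depends only on the likelihood ratio, not on the mesh size n.
    u, p = u0.copy(), phi(u0)
    accepted = 0
    for _ in range(n_steps):
        xi = C_sqrt @ rng.standard_normal(n)       # draw from the prior
        v = np.sqrt(1.0 - beta**2) * u + beta * xi
        pv = phi(v)
        if np.log(rng.random()) < p - pv:          # prior terms cancel exactly
            u, p, accepted = v, pv, accepted + 1
    return u, accepted / n_steps

u_final, acc_rate = pcn(np.zeros(n), 2000)
```

    A standard random-walk proposal would require beta to shrink as n grows to keep the acceptance rate away from zero; the pCN step does not, which is the "speed robust under mesh refinement" property.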

    A stochastic model for early placental development

    In the human, placental structure is closely related to placental function and consequent pregnancy outcome. Studies have noted abnormal placental shape in small-for-gestational-age infants, which extends to increased lifetime risk of cardiovascular disease. The origins and determinants of placental shape are incompletely understood and are difficult to study in vivo. In this paper we model the early development of the placenta in the human, based on the hypothesis that this is driven by dynamics dominated by a chemo-attractant effect emanating from proximal spiral arteries in the decidua. We derive and explore a two-dimensional stochastic model for these events, and investigate the effects of loss of spiral arteries in regions near to the cord insertion on the shape of the placenta. This model demonstrates that placental shape is highly variable and that disruption of spiral arteries can exert profound effects on placental shape, particularly if this disruption is close to the cord insertion. Thus, placental shape reflects the underlying maternal vascular bed. Abnormal placental shape may reflect an abnormal uterine environment, which predisposes to pregnancy complications.
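    A deliberately crude caricature of this class of dynamics, not the paper's model: a growth site performs a random walk with drift up an attractant field that decays with distance from each spiral artery. The kernel, rates, and artery positions below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spiral-artery positions in a 2D domain (the decidua).
arteries = rng.uniform(-1, 1, size=(30, 2))

def attractant_pull(x):
    # Net chemo-attractant pull at x: sum of unit vectors toward each
    # artery, weighted by an exponentially decaying kernel (an assumption).
    d = arteries - x
    r = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
    return np.sum(np.exp(-r) * d / r, axis=0)

def grow(x0, n_steps, drift=0.05, noise=0.02):
    # Biased random walk: deterministic drift toward the arteries plus
    # Gaussian noise; removing arteries near x0 would weaken the local pull.
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        x = x + drift * attractant_pull(x) + noise * rng.standard_normal(2)
        path.append(x.copy())
    return np.array(path)

path = grow([0.0, 0.0], 500)  # trajectory starting at the cord insertion
```

    Deleting a cluster of rows from `arteries` near the start point and rerunning shows how local loss of spiral arteries biases where growth ends up, which is the qualitative experiment the paper performs in its own model.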

    Adaptive finite element method assisted by stochastic simulation of chemical systems

    Stochastic models of chemical systems are often analysed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinement. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with non-negligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the probability density.
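    The refinement idea can be sketched on a toy one-dimensional birth-death system: run a short Gillespie (SSA) trajectory, record which states it visits, and assign fine elements there and coarse elements elsewhere. The rate constants and element sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy birth-death chemistry: X -> X+1 at rate k1, X -> X-1 at rate k2*X.
k1, k2 = 50.0, 1.0

def gillespie(x0, t_end):
    # Standard SSA: exponential waiting times, one reaction per event.
    # We only keep the set of visited states, not the full trajectory.
    x, t, visited = x0, 0.0, set()
    while t < t_end:
        rates = np.array([k1, k2 * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < rates[0] / total else -1
        visited.add(x)
    return visited

visited = gillespie(10, 50.0)

# Mesh policy: fine elements where the trajectory went (non-negligible
# probability density), coarse elements elsewhere. Sizes are hypothetical.
x_max = 200
fine = np.array([x in visited for x in range(x_max)])
h = np.where(fine, 0.5, 4.0)
```

    The short trajectory concentrates near the stationary mean (k1/k2 = 50 here), so the fine region automatically tracks where the Fokker-Planck density lives without solving the PDE first.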

    A Complete Sample of Megaparsec Size Double Radio Sources from SUMSS

    We present a complete sample of megaparsec-size double radio sources compiled from the Sydney University Molonglo Sky Survey (SUMSS). Almost complete redshift information has been obtained for the sample. The sample has the following defining criteria: Galactic latitude |b| > 12.5 deg, declination < -50 deg and angular size > 5 arcmin. All the sources have projected linear size larger than 0.7 Mpc (assuming H_0 = 71 km/s/Mpc). The sample is chosen from a region of the sky covering 2100 square degrees. In this paper, we present 843-MHz radio images of the extended radio morphologies made using the Molonglo Observatory Synthesis Telescope (MOST), higher-resolution radio observations of any compact radio structures using the Australia Telescope Compact Array (ATCA), and low-resolution optical spectra of the host galaxies from the 2.3-m Australian National University (ANU) telescope at Siding Spring Observatory. The sample presented here is the first in the southern hemisphere and significantly enhances the database of known giant radio sources. The giant radio sources with linear size exceeding 0.7 Mpc have an abundance of (215 Mpc)^(-3) at the sensitivity of the survey. In the low-redshift universe, the survey may be suggesting the possibility that giant radio sources with relict lobes are more numerous than giant sources in which beams from the centre currently energize the lobes.
    Comment: 67 pages, 29 figures; for full-resolution figures see http://www.astrop.physics.usyd.edu.au/SUMSS/PAPERS/Submit-May11-ms.pd

    Continuous and discrete Clebsch variational principles

    The Clebsch method provides a unifying approach for deriving variational principles for continuous and discrete dynamical systems where elements of a vector space are used to control dynamics on the cotangent bundle of a Lie group via a velocity map. This paper proves a reduction theorem which states that the canonical variables on the Lie group can be eliminated if and only if the velocity map is a Lie algebra action, thereby producing the Euler–Poincaré (EP) equation for the vector space variables. In this case, the map from the canonical variables on the Lie group to the vector space is the standard momentum map defined using the diamond operator. We apply the Clebsch method in examples of the rotating rigid body and the incompressible Euler equations. Along the way, we explain how singular solutions of the EP equation for the diffeomorphism group (EPDiff) arise as momentum maps in the Clebsch approach. In the case of finite-dimensional Lie groups, the Clebsch variational principle is discretised to produce a variational integrator for the dynamical system. We obtain a discrete map from which the variables on the cotangent bundle of a Lie group may be eliminated to produce a discrete EP equation for elements of the vector space. We give an integrator for the rotating rigid body as an example. We also briefly discuss how to discretise infinite-dimensional Clebsch systems, so as to produce conservative numerical methods for fluid dynamics.
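    For the rotating rigid body, the EP equation obtained after eliminating the canonical variables is the classical Euler equation; a sketch in standard notation (assumed here, not quoted from the paper):

```latex
% EP equation on so(3)^*, with body angular momentum Pi and velocity Omega:
\dot{\Pi} = \Pi \times \Omega, \qquad \Pi = \mathbb{I}\,\Omega,
% which conserves the Casimir |Pi|^2, so the motion stays on a momentum sphere.
```

    Preserving the sphere |Π|² = const under discretisation is the kind of conservative structure a variational integrator derived from the discrete Clebsch principle is designed to retain.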