
    Strong convergence rates of probabilistic integrators for ordinary differential equations

    Probabilistic integration of a continuous dynamical system is a way of systematically introducing model error, at scales no larger than errors introduced by standard numerical discretisation, in order to enable thorough exploration of possible responses of the system to inputs. It is thus a potentially useful approach in a number of applications such as forward uncertainty quantification, inverse problems, and data assimilation. We extend the convergence analysis of probabilistic integrators for deterministic ordinary differential equations, as proposed by Conrad et al. (Stat. Comput., 2017), to establish mean-square convergence in the uniform norm on discrete- or continuous-time solutions under relaxed regularity assumptions on the driving vector fields and their induced flows. Specifically, we show that randomised high-order integrators for globally Lipschitz flows and randomised Euler integrators for dissipative vector fields with polynomially-bounded local Lipschitz constants all have the same mean-square convergence rate as their deterministic counterparts, provided that the variance of the integration noise is not of higher order than the corresponding deterministic integrator. These and similar results are proven for probabilistic integrators where the random perturbations may be state-dependent, non-Gaussian, or non-centred random variables. (Comment: 25 pages)
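
    As a rough illustration of the randomised Euler integrator discussed above (a minimal sketch, not code from the paper), each deterministic step can be perturbed by a centred random variable whose variance decays quickly enough with the step size; the vector field, step size, and noise scaling below are illustrative assumptions.

```python
import numpy as np

def randomised_euler(f, x0, h, n_steps, noise_order=1.5, rng=None):
    """Euler integration of dx/dt = f(x) with additive random perturbations.

    Each step adds a centred Gaussian perturbation with standard deviation
    h**noise_order, so the per-step noise variance is O(h**3) for the default
    choice -- an illustrative scaling meant to keep the noise no larger in
    order than the deterministic truncation error.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        xi = rng.normal(scale=h**noise_order, size=x.shape)  # integration noise
        x = x + h * np.asarray(f(x)) + xi                     # perturbed Euler step
        path.append(x.copy())
    return np.array(path)

# Example: a dissipative scalar vector field f(x) = -x**3 (polynomially bounded
# local Lipschitz constant), integrated over [0, 1].
trajectory = randomised_euler(lambda x: -x**3, x0=[1.0], h=0.01, n_steps=100)
```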

    Analysis of the 3DVAR Filter for the Partially Observed Lorenz '63 Model

    The problem of effectively combining data with a mathematical model constitutes a major challenge in applied mathematics. It is particularly challenging for high-dimensional dynamical systems where data is received sequentially in time and the objective is to estimate the system state in an on-line fashion; this situation arises, for example, in weather forecasting. The sequential particle filter is then impractical and ad hoc filters, which employ some form of Gaussian approximation, are widely used. Prototypical of these ad hoc filters is the 3DVAR method. The goal of this paper is to analyze the 3DVAR method, using the Lorenz '63 model to exemplify the key ideas. The situation where the data is partial and noisy is studied, and both discrete time and continuous time data streams are considered. The theory demonstrates how the widely used technique of variance inflation acts to stabilize the filter, and hence leads to asymptotic accuracy.
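
    A minimal sketch of one discrete-time 3DVAR cycle on the Lorenz '63 model is given below, with partial observation of the first coordinate and a fixed gain built from an inflated background covariance; all parameter values (observation noise, inflation factor, discretisation) are illustrative assumptions rather than the paper's choices.

```python
import numpy as np

def lorenz63_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 model (illustrative discretisation)."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

# Partial observations: only the first coordinate, with additive noise.
H = np.array([[1.0, 0.0, 0.0]])
gamma2 = 0.1          # observational noise variance (assumed)
inflation = 10.0      # variance inflation: a large fixed background variance
C = inflation * np.eye(3)

# Fixed 3DVAR gain K = C H^T (H C H^T + gamma^2)^{-1}.
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + gamma2 * np.eye(1))

def threedvar_cycle(m, y):
    """Forecast with the model, then nudge the forecast towards the observation y."""
    m_forecast = lorenz63_step(m)
    return m_forecast + (K @ (y - H @ m_forecast)).ravel()

# One assimilation step from an initial state estimate and a scalar observation.
m = threedvar_cycle(np.array([1.0, 1.0, 1.0]), y=np.array([1.2]))
```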

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques through modelling neurovascular coupling (cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and the imaging of coupled neuronal systems.
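
    Under the Gaussian (Laplace) assumptions used in DCM, the core of Bayesian model reduction is an analytic update that scores a reduced model (one with some couplings switched off via a tighter prior) directly from the full model's prior and posterior. The sketch below is a generic Gaussian version of that idea with made-up toy numbers, not the SPM implementation.

```python
import numpy as np

def bmr_log_evidence_change(mu, Sigma, eta, Sigma0, eta_r, Sigma0_r):
    """Change in log evidence (reduced minus full) under Gaussian assumptions.

    The full model has prior N(eta, Sigma0) and posterior N(mu, Sigma); the
    reduced model differs only in its prior N(eta_r, Sigma0_r). The result is
    the log of a Gaussian integral that reweights the full posterior by the
    ratio of reduced to full priors.
    """
    P, P0, P0r = map(np.linalg.inv, (Sigma, Sigma0, Sigma0_r))
    A = P + P0r - P0                              # reduced posterior precision
    b = P @ mu + P0r @ eta_r - P0 @ eta
    c = mu @ P @ mu + eta_r @ P0r @ eta_r - eta @ P0 @ eta
    _, logdet_A = np.linalg.slogdet(A)
    _, logdet_S = np.linalg.slogdet(Sigma)
    _, logdet_S0 = np.linalg.slogdet(Sigma0)
    _, logdet_S0r = np.linalg.slogdet(Sigma0_r)
    quad = b @ np.linalg.solve(A, b)
    return 0.5 * (logdet_S0 - logdet_S - logdet_S0r - logdet_A) - 0.5 * (c - quad)

# Toy example: switch off the second coupling by shrinking its prior variance.
mu = np.array([0.4, 0.05]); Sigma = 0.01 * np.eye(2)   # full posterior (assumed)
eta = np.zeros(2); Sigma0 = np.eye(2)                   # full prior (assumed)
eta_r = np.zeros(2); Sigma0_r = np.diag([1.0, 1e-6])    # reduced prior: coupling ~ 0
dF = bmr_log_evidence_change(mu, Sigma, eta, Sigma0, eta_r, Sigma0_r)
```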

    Reachability in Biochemical Dynamical Systems by Quantitative Discrete Approximation (extended abstract)

    In this paper, a novel computational technique for finite discrete approximation of continuous dynamical systems, suitable for a significant class of biochemical dynamical systems, is introduced. The method is parameterized in order to control the imposed level of approximation, such that with increasing parameter value the approximation converges to the original continuous system. By employing this approximation technique, we present algorithms solving the reachability problem for biochemical dynamical systems. The presented method and algorithms are evaluated on several exemplary biological models and on a real case study. (Comment: In Proceedings CompMod 2011, arXiv:1109.104)
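
    As a hedged sketch of the general idea (not the paper's specific construction), a continuous system can be abstracted onto a grid whose resolution plays the role of the approximation parameter, and reachability then becomes graph search over grid cells:

```python
import numpy as np
from collections import deque

def reachable_cells(f, bounds, n_cells, start_cell, dt=0.05, samples=5):
    """Crude discrete abstraction of dx/dt = f(x) on a 2-D box.

    Each grid cell is a node; an edge cell -> cell' is added whenever a short
    flow step from a sample point in the cell lands in cell'. Reachability is
    then breadth-first search. Finer grids (larger n_cells) approximate the
    continuous system more closely.
    """
    (lo0, hi0), (lo1, hi1) = bounds
    w0, w1 = (hi0 - lo0) / n_cells, (hi1 - lo1) / n_cells

    def cell_of(x):
        i = min(int((x[0] - lo0) / w0), n_cells - 1)
        j = min(int((x[1] - lo1) / w1), n_cells - 1)
        return (max(i, 0), max(j, 0))

    def successors(cell):
        i, j = cell
        succ = set()
        for a in np.linspace(0.1, 0.9, samples):       # sample points inside the cell
            for b in np.linspace(0.1, 0.9, samples):
                x = np.array([lo0 + (i + a) * w0, lo1 + (j + b) * w1])
                succ.add(cell_of(x + dt * np.asarray(f(x))))  # one short flow step
        return succ

    seen, queue = {start_cell}, deque([start_cell])
    while queue:
        for nxt in successors(queue.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Example: which cells are reachable from one starting cell of a toy 2-D system.
cells = reachable_cells(lambda x: np.array([x[1], -x[0]]),
                        bounds=[(-2, 2), (-2, 2)], n_cells=20, start_cell=(10, 10))
```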

    Adaptive finite element method assisted by stochastic simulation of chemical systems

    Stochastic models of chemical systems are often analysed by solving the corresponding Fokker-Planck equation, which is a drift-diffusion partial differential equation for the probability distribution function. Efficient numerical solution of the Fokker-Planck equation requires adaptive mesh refinements. In this paper, we present a mesh refinement approach which makes use of a stochastic simulation of the underlying chemical system. By observing the stochastic trajectory for a relatively short amount of time, the areas of the state space with non-negligible probability density are identified. By refining the finite element mesh in these areas, and coarsening elsewhere, a suitable mesh is constructed and used for the computation of the probability density.
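
    A caricature of the workflow (assumed birth-death reactions and a 1-D state space, not the paper's examples): run a short stochastic simulation, then flag the regions of state space it visits as candidates for mesh refinement.

```python
import numpy as np

def gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_max=50.0, rng=None):
    """Short Gillespie SSA trajectory of a birth-death process (illustrative system)."""
    rng = np.random.default_rng() if rng is None else rng
    t, x, states = 0.0, x0, [x0]
    while t < t_max:
        rates = np.array([k_prod, k_deg * x])        # production, degradation
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < rates[0] / total else -1
        states.append(x)
    return np.array(states)

# Flag coarse intervals visited by the trajectory for refinement; the rest can be
# coarsened. (Interval width and the zero-visit threshold are illustrative choices.)
states = gillespie_birth_death()
edges = np.arange(0, 301, 20)          # coarse partition of the state space
visits, _ = np.histogram(states, bins=edges)
refine = visits > 0                    # refine where probability mass was observed
```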

    Final Report of the DAUFIN project

    DAUFIN = Data Assimilation within Unifying Framework for Improved river basiN modeling (EC 5th Framework Project).

    On the use of simple dynamical systems for climate predictions: A Bayesian prediction of the next glacial inception

    Over the last few decades, climate scientists have devoted much effort to the development of large numerical models of the atmosphere and the ocean. While there is no question that such models provide important and useful information on complicated aspects of atmosphere and ocean dynamics, skillful prediction also requires a phenomenological approach, particularly for very slow processes, such as glacial-interglacial cycles. Phenomenological models are often represented as low-order dynamical systems. These are tractable, and a rich source of insights about climate dynamics, but they also ignore large bodies of information on the climate system, and their parameters are generally not operationally defined. Consequently, if they are to be used to predict actual climate system behaviour, then we must take very careful account of the uncertainty introduced by their limitations. In this paper we consider the problem of the timing of the next glacial inception, about which there is on-going debate. Our model is the three-dimensional stochastic system of Saltzman and Maasch (1991), and our inference takes place within a Bayesian framework that allows both for the limitations of the model as a description of the propagation of the climate state vector, and for parametric uncertainty. Our inference takes the form of a data assimilation with unknown static parameters, which we perform with a variant on a Sequential Monte Carlo technique ('particle filter'). Provisional results indicate peak glacial conditions in 60,000 years. (Comment: supersedes arXiv:0809.0632, which was published in European Reviews. The Bayesian section has been significantly expanded. The present version has gone through scientific peer review and has been published in European Physics Special Topics. Typo in DOI and in Table 1 (psi -> theta) corrected on 25th August 2009.)
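
    For orientation, a plain bootstrap particle filter step (propagate, weight, resample) is sketched below with toy ingredients; the paper's variant additionally handles unknown static parameters, which is not reproduced here.

```python
import numpy as np

def bootstrap_particle_filter(propagate, log_likelihood, particles, observations, rng=None):
    """Generic bootstrap particle filter: propagate particles through the stochastic
    model, weight them by the observation likelihood, and resample."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    for y in observations:
        particles = np.array([propagate(p, rng) for p in particles])   # prediction
        logw = np.array([log_likelihood(y, p) for p in particles])     # weighting
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)                                # resampling
        particles = particles[idx]
    return particles

# Toy usage: scalar random-walk state, noisy observations of the state itself.
post = bootstrap_particle_filter(
    propagate=lambda p, rng: p + rng.normal(scale=0.1),
    log_likelihood=lambda y, p: -0.5 * (y - p) ** 2,
    particles=np.random.default_rng(0).normal(size=500),
    observations=[0.3, 0.5, 0.4],
)
```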