
    Asymptotic forecast uncertainty and the unstable subspace in the presence of additive model error

    It is well understood that dynamic instability is among the primary drivers of forecast uncertainty in chaotic, physical systems. Data assimilation techniques have been designed to exploit this phenomenon, reducing the effective dimension of the data assimilation problem to the directions of rapidly growing errors. Recent mathematical work has, moreover, provided formal proofs of the central hypothesis of the assimilation in the unstable subspace methodology of Anna Trevisan and her collaborators: for filters and smoothers in perfect, linear, Gaussian models, the distribution of forecast errors asymptotically conforms to the unstable-neutral subspace. Specifically, the column span of the forecast and posterior error covariances asymptotically aligns with the span of backward Lyapunov vectors with nonnegative exponents. Earlier mathematical studies have focused on perfect models, and the current work explores the relationship between dynamical instability, the precision of observations, and the evolution of forecast error in linear models with additive model error. We prove bounds for the asymptotic uncertainty, explicitly relating the rate of dynamical expansion, model precision, and observational accuracy. Formalizing this relationship, we provide a novel, necessary criterion for the boundedness of forecast errors. Furthermore, we numerically explore the relationship between observational design, dynamical instability, and filter boundedness. Additionally, we include a detailed introduction to the multiplicative ergodic theorem and to the theory and construction of Lyapunov vectors. While forecast error in the stable subspace may not generically vanish, we show that even without filtering, uncertainty remains uniformly bounded due to its dynamical dissipation. However, the continuous reinjection of uncertainty from model errors may be excited by transient instabilities in the stable modes of high variance, rendering forecast uncertainty impractically large. In the context of ensemble data assimilation, this requires rectifying the rank of the ensemble-based gain to account for the growth of uncertainty beyond the unstable and neutral subspace, additionally correcting stable modes with frequent occurrences of positive local Lyapunov exponents that excite model errors.
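    As a concrete illustration of the Lyapunov machinery this abstract relies on, the following is a minimal sketch (not taken from the paper) of the standard QR-based estimate of Lyapunov exponents for a linear map. The matrix `M` is a hypothetical toy system with one unstable and one stable direction; repeated re-orthonormalization recovers the growth rates of the ordered Lyapunov spectrum.

```python
import numpy as np

def lyapunov_exponents(M, n_steps=500):
    """Estimate the Lyapunov exponents of the linear map x -> M x by
    propagating an orthonormal frame and re-orthonormalizing with QR;
    the log of each diagonal of R accumulates the per-step expansion."""
    d = M.shape[0]
    Q = np.eye(d)
    sums = np.zeros(d)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(M @ Q)
        # enforce a positive diagonal on R so the logarithm is well defined
        sign = np.sign(np.diag(R))
        Q, R = Q * sign, sign[:, None] * R
        sums += np.log(np.diag(R))
    return sums / n_steps

# hypothetical toy dynamics: eigenvalues 1.5 (unstable) and 0.5 (stable)
M = np.array([[1.5, 1.0],
              [0.0, 0.5]])
exps = lyapunov_exponents(M)
```

    For this time-invariant map the exponents are simply the logs of the eigenvalue magnitudes, `log 1.5` and `log 0.5`; the QR recursion generalizes to time-varying tangent-linear models, which is the setting of the multiplicative ergodic theorem discussed in the paper.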

    Stability of Filters for the Navier-Stokes Equation

    Data assimilation methodologies are designed to incorporate noisy observations of a physical system into an underlying model in order to infer the properties of the state of the system. Filters refer to a class of data assimilation algorithms designed to update the estimation of the state in an on-line fashion, as data is acquired sequentially. For linear problems subject to Gaussian noise, filtering can be performed exactly using the Kalman filter. For nonlinear systems it can be approximated in a systematic way by particle filters. However, in high dimensions these particle filtering methods can break down. Hence, for the large nonlinear systems arising in applications such as weather forecasting, various ad hoc filters are used, mostly based on making Gaussian approximations. The purpose of this work is to study the properties of these ad hoc filters, working in the context of the 2D incompressible Navier-Stokes equation. By working in this infinite dimensional setting we provide an analysis which is useful for understanding high dimensional filtering, and is robust to mesh-refinement. We describe theoretical results showing that, in the small observational noise limit, the filters can be tuned to accurately track the signal itself (filter stability), provided the system is observed in a sufficiently large low dimensional space; roughly speaking, this space should be large enough to contain the unstable modes of the linearized dynamics. Numerical results are given which illustrate the theory. In a simplified scenario we also derive, and study numerically, a stochastic PDE which determines filter stability in the limit of frequent observations, subject to large observational noise. The positive results herein concerning filter stability complement recent numerical studies which demonstrate that the ad hoc filters perform poorly in reproducing statistical variation about the true signal.
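    The exactness of the Kalman filter for linear-Gaussian problems mentioned above can be sketched in a few lines. This is the generic textbook predict/update cycle, not code from the paper; the scalar example at the end is hypothetical.

```python
import numpy as np

def kalman_step(m, P, y, M, H, Q, R):
    """One predict/update cycle of the Kalman filter.
    m, P : prior mean and covariance
    y    : new observation; M, H : model and observation operators
    Q, R : model-error and observation-error covariances"""
    # forecast step
    m_f = M @ m
    P_f = M @ P @ M.T + Q
    # analysis step
    S = H @ P_f @ H.T + R                  # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)       # Kalman gain
    m_a = m_f + K @ (y - H @ m_f)
    P_a = (np.eye(len(m)) - K @ H) @ P_f
    return m_a, P_a

# hypothetical scalar example: equal prior and observation uncertainty
m, P = kalman_step(np.array([0.0]), np.array([[1.0]]),
                   y=np.array([2.0]),
                   M=np.array([[1.0]]), H=np.array([[1.0]]),
                   Q=np.array([[0.0]]), R=np.array([[1.0]]))
```

    With equal prior and observation variance the analysis mean lands halfway between forecast and observation and the variance halves, which is the exact Bayesian posterior in the linear-Gaussian case.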

    Distributed Bayesian Filtering using Logarithmic Opinion Pool for Dynamic Sensor Networks

    The discrete-time Distributed Bayesian Filtering (DBF) algorithm is presented for the problem of tracking a dynamic target using a time-varying network of heterogeneous sensing agents. In the DBF algorithm, the sensing agents combine their normalized likelihood functions in a distributed manner using the logarithmic opinion pool and the dynamic average consensus algorithm. We show that each agent's estimated likelihood function globally exponentially converges to an error ball centered on the joint likelihood function of the centralized multi-sensor Bayesian filtering algorithm. We rigorously characterize the convergence, stability, and robustness properties of the DBF algorithm. Moreover, we provide an explicit bound on the time step size of the DBF algorithm that depends on the time-scale of the target dynamics, the desired convergence error bound, and the modeling and communication error bounds. Furthermore, the DBF algorithm for linear-Gaussian models is cast into a modified form of the Kalman information filter. The performance and robustness properties of the DBF algorithm are validated using numerical simulations.
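    The logarithmic opinion pool that DBF uses for fusion amounts to a weighted geometric mean of the agents' likelihoods. Here is a minimal sketch over a discretized state grid; the two-agent example, the grid, and the equal weights are hypothetical, not taken from the paper.

```python
import numpy as np

def log_opinion_pool(log_likelihoods, weights):
    """Fuse per-agent log-likelihoods over a discretized state grid via
    the logarithmic opinion pool: a weighted average in log space
    (i.e., a weighted geometric mean), renormalized to a probability
    vector. Working in log space avoids underflow for peaked likelihoods."""
    fused = np.average(log_likelihoods, axis=0, weights=weights)
    fused -= fused.max()              # shift for numerical stability
    p = np.exp(fused)
    return p / p.sum()

# hypothetical two-agent example on a 3-point state grid, equal weights
L = np.log(np.array([[0.2, 0.5, 0.3],
                     [0.1, 0.7, 0.2]]))
p = log_opinion_pool(L, weights=[0.5, 0.5])
```

    Unlike a linear (arithmetic) pool, the logarithmic pool preserves unimodality and discounts states that any single agent considers unlikely, which is what makes the distributed product of likelihoods consistent with the centralized multi-sensor update.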

    Degenerate Kalman filter error covariances and their convergence onto the unstable subspace

    The characteristics of the model dynamics are critical in the performance of (ensemble) Kalman filters. In particular, as emphasized in the seminal work of Anna Trevisan and coauthors, the error covariance matrix is asymptotically supported by the unstable-neutral subspace only, i.e., it is spanned by the backward Lyapunov vectors with nonnegative exponents. This behavior is at the core of algorithms known as assimilation in the unstable subspace, although a formal proof was still missing. This paper provides the analytical proof of the convergence of the Kalman filter covariance matrix onto the unstable-neutral subspace when the dynamics and the observation operator are linear and when the dynamical model is error free, for any, possibly rank-deficient, initial error covariance matrix. The rate of convergence is provided as well. The derivation is based on an expression that explicitly relates the error covariances at an arbitrary time to the initial ones. It is also shown that if the unstable and neutral directions of the model are sufficiently observed and if the column space of the initial covariance matrix has a nonzero projection onto all of the forward Lyapunov vectors associated with the unstable and neutral directions of the dynamics, the covariance matrix of the Kalman filter collapses onto an asymptotic sequence which is independent of the initial covariances. Numerical results are also shown to illustrate and support the theoretical findings.
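    The collapse of the covariance onto the unstable-neutral subspace can be seen numerically by iterating the error-free covariance recursion. The diagonal toy model below (one unstable and one stable mode, with only the unstable direction observed) is a hypothetical illustration, not the paper's experiment.

```python
import numpy as np

def riccati_iterate(M, H, R, P0, n_steps=60):
    """Iterate the Kalman filter covariance recursion for a perfect
    model (no additive model error, Q = 0) and return the asymptotic
    analysis covariance."""
    P = P0.copy()
    for _ in range(n_steps):
        Pf = M @ P @ M.T                        # forecast (Q = 0)
        S = H @ Pf @ H.T + R
        K = Pf @ H.T @ np.linalg.inv(S)
        P = (np.eye(len(P)) - K @ H) @ Pf       # analysis
    return P

M = np.diag([2.0, 0.5])          # one unstable, one stable mode
H = np.array([[1.0, 0.0]])       # observe the unstable direction only
R = np.array([[1.0]])
P = riccati_iterate(M, H, R, np.eye(2))
```

    In this toy case the unstable-mode variance settles at the fixed point of the scalar Riccati map, p = 4p/(4p + 1), i.e. p = 0.75, while the stable-mode variance is contracted by a factor 0.25 per step and decays to zero, so the covariance column space collapses onto the unstable direction, as the theorem predicts.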

    A wildland fire model with data assimilation

    A wildfire model is formulated based on balance equations for energy and fuel, where the fuel loss due to combustion corresponds to the fuel reaction rate. The resulting coupled partial differential equations have coefficients that can be approximated from prior measurements of wildfires. An ensemble Kalman filter technique with regularization is then used to assimilate temperatures measured at selected points into running wildfire simulations. The assimilation technique is able to modify the simulations to track the measurements correctly even if the simulations were started with an erroneous ignition location that is quite far away from the correct one.

    Comment: 35 pages, 12 figures; minor revision January 2008. Original version available from http://www-math.cudenver.edu/ccm/report
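    A stochastic ensemble Kalman filter analysis step of the kind this abstract describes (without the paper's regularization) can be sketched as follows. The state dimension, observation operator, noise levels, and ensemble size are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(E, y, H, R):
    """Stochastic EnKF analysis step: each ensemble member is updated
    against a perturbed copy of the observation, with the gain built
    from the ensemble's own sample covariance."""
    N = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)     # ensemble anomalies
    Pf = A @ A.T / (N - 1)                    # sample forecast covariance
    S = H @ Pf @ H.T + R
    K = Pf @ H.T @ np.linalg.inv(S)           # ensemble Kalman gain
    # perturbed observations, one per member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return E + K @ (Y - H @ E)

# hypothetical setup: 2-D state, only the first component observed
E0 = rng.normal(size=(2, 200))                # prior ensemble about 0
H = np.array([[1.0, 0.0]])
Ea = enkf_analysis(E0, y=np.array([5.0]), H=H, R=np.array([[0.01]]))
```

    With a tight observation-error variance the analysis ensemble mean of the observed component is pulled almost entirely onto the measured value; regularization, as in the paper, is needed in practice to keep the sample covariance well conditioned when the ensemble is small relative to the state dimension.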