
    Evaluating Data Assimilation Algorithms

    Data assimilation leads naturally to a Bayesian formulation in which the posterior probability distribution of the system state, given the observations, plays a central conceptual role. The aim of this paper is to use this Bayesian posterior probability distribution as a gold standard against which to evaluate various commonly used data assimilation algorithms. A key aspect of geophysical data assimilation is the high dimensionality and low predictability of the computational model. With this in mind, yet with the goal of allowing an explicit and accurate computation of the posterior distribution, we study the 2D Navier-Stokes equations in a periodic geometry. We compute the posterior probability distribution by state-of-the-art statistical sampling techniques. The commonly used algorithms that we evaluate against this accurate gold standard, quantified by the relative error in reproducing its moments, are 4DVAR and a variety of sequential filtering approximations based on 3DVAR and on extended and ensemble Kalman filters. The primary conclusions are that: (i) with appropriate parameter choices, approximate filters can perform well in reproducing the mean of the desired probability distribution; (ii) however, they typically perform poorly when attempting to reproduce the covariance; (iii) this poor performance is compounded by the need to modify the covariance in order to induce stability. Thus, whilst filters can be a useful tool in predicting mean behavior, they should be viewed with caution as predictors of uncertainty. These conclusions are intrinsic to the algorithms and will not change if the model complexity is increased, for example by employing a smaller viscosity or by using a detailed numerical weather prediction (NWP) model.
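    The comparison described above rests on two simple ingredients that can be sketched in a few lines: a sequential filter approximation (here a stochastic ensemble Kalman filter analysis step with multiplicative covariance inflation, one common way of "modifying the covariance to induce stability") and the relative error of the filter's mean and covariance against a gold-standard posterior. The sketch below is only an illustration under my own assumptions; the observation operator H, noise covariance R, and inflation factor are placeholders, not the paper's settings.

```python
# Minimal sketch, not the paper's code: a stochastic EnKF analysis step plus
# the kind of relative moment-error diagnostic used to grade a filter against
# a reference (gold-standard) posterior. H, R, inflation are assumed inputs.
import numpy as np

def enkf_analysis(ensemble, y, H, R, inflation=1.05, rng=None):
    """One stochastic EnKF update. ensemble: (n_state, n_members); y: (n_obs,)."""
    rng = np.random.default_rng() if rng is None else rng
    xbar = ensemble.mean(axis=1, keepdims=True)
    Xf = xbar + inflation * (ensemble - xbar)          # multiplicative covariance inflation
    Pf = np.cov(Xf)                                    # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    # perturbed observations, one draw per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, Xf.shape[1]).T
    return Xf + K @ (Y - H @ Xf)                       # analysis ensemble

def relative_moment_errors(ensemble, mean_ref, cov_ref):
    """Relative errors of the filter's mean/covariance w.r.t. reference posterior moments."""
    m, C = ensemble.mean(axis=1), np.cov(ensemble)
    return (np.linalg.norm(m - mean_ref) / np.linalg.norm(mean_ref),
            np.linalg.norm(C - cov_ref) / np.linalg.norm(cov_ref))
```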

    Approximation of Löwdin Orthogonalization to a Spectrally Efficient Orthogonal Overlapping PPM Design for UWB Impulse Radio

    In this paper, we consider the design of spectrally efficient time-limited pulses for ultrawideband (UWB) systems using an overlapping pulse position modulation scheme. For this we investigate an orthogonalization method developed in 1950 by Per-Olov Löwdin. Our objective is to obtain a set of N orthogonal (Löwdin) pulses, which remain time-limited and spectrally efficient for UWB systems, from a set of N equidistant translates of a time-limited, spectrally optimized UWB pulse. We derive an approximate Löwdin orthogonalization (ALO) by using circulant approximations of the Gram matrix to obtain a practical filter implementation. We show that the centered ALO and Löwdin pulses converge pointwise to the same Nyquist pulse as N tends to infinity. The set of translates of the Nyquist pulse forms an orthonormal basis for the shift-invariant space generated by the initial spectrally optimal pulse. The ALO transform provides a closed-form approximation of the Löwdin transform, which can be implemented in an analog fashion without the need for analog-to-digital conversion. Furthermore, we investigate the interplay between the optimization and the orthogonalization procedure by using methods from the theory of shift-invariant spaces. Finally, we develop a connection between our results and wavelet and frame theory.
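    For orientation, symmetric (Löwdin) orthogonalization mixes the translates through the inverse square root of their Gram matrix, G^{-1/2}, and a circulant surrogate for G can be diagonalized by the FFT. The sketch below is my own plain-NumPy illustration of these two steps, not the authors' ALO construction; the sampling step dt and the Strang-type symmetrization of the circulant are assumptions made for the example.

```python
# Minimal sketch, assuming the pulse translates are sampled on a uniform grid
# with step dt and stored as the columns of `translates`.
import numpy as np

def lowdin(translates, dt):
    """Symmetric Löwdin orthogonalization of the pulse translates (columns)."""
    G = dt * translates.T @ translates               # Gram matrix of the N translates
    w, V = np.linalg.eigh(G)                         # G symmetric positive definite
    G_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T        # Löwdin transform G^{-1/2}
    return translates @ G_inv_sqrt

def lowdin_circulant(translates, dt):
    """Replace G by a symmetrized circulant diagonalized by the FFT (an ALO-style shortcut)."""
    G = dt * translates.T @ translates
    n = len(G)
    g = G[:, 0]
    idx = np.arange(n)
    c = np.where(idx <= n // 2, g, g[(n - idx) % n])  # Strang-type circulant surrogate
    lam = np.fft.fft(c)                               # eigenvalues of the circulant
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)            # unitary DFT matrix
    C_inv_sqrt = (F.conj().T @ np.diag(lam ** -0.5) @ F).real
    return translates @ C_inv_sqrt
```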

    On dimension reduction in Gaussian filters

    A priori dimension reduction is a widely adopted technique for reducing the computational complexity of stationary inverse problems. In this setting, the solution of an inverse problem is parameterized by a low-dimensional basis that is often obtained from the truncated Karhunen-Loève expansion of the prior distribution. For high-dimensional inverse problems equipped with smoothing priors, this technique can lead to drastic reductions in parameter dimension and significant computational savings. In this paper, we extend the concept of a priori dimension reduction to non-stationary inverse problems, in which the goal is to sequentially infer the state of a dynamical system. Our approach proceeds in an offline-online fashion. We first identify a low-dimensional subspace in the state space before solving the inverse problem (the offline phase), using either the method of "snapshots" or regularized covariance estimation. Then this subspace is used to reduce the computational complexity of various filtering algorithms, including the Kalman filter, extended Kalman filter, and ensemble Kalman filter, within a novel subspace-constrained Bayesian prediction-and-update procedure (the online phase). We demonstrate the performance of our new dimension reduction approach on various numerical examples. In some test cases, our approach reduces the dimensionality of the original problem by orders of magnitude and yields up to two orders of magnitude in computational savings.
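    A minimal sketch of the offline-online split under my own simplifying assumptions (linear dynamics A, linear observation operator H, known noise covariances Q and R): the offline phase extracts an r-dimensional basis from state snapshots via the SVD, and the online phase runs a standard Kalman predict/update in the reduced coordinates x ≈ U a. This is an illustration of the subspace-constrained idea, not the paper's algorithm.

```python
# Minimal sketch: offline snapshot basis + Kalman filtering constrained to that subspace.
# A, H, Q, R and the snapshot matrix are assumed to be provided by the user.
import numpy as np

def offline_basis(snapshots, r):
    """Method of snapshots: columns are stored states; keep the r leading left singular vectors."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]                                   # (n_state, r) orthonormal basis

def reduced_kalman_step(U, a, P, y, A, H, Q, R):
    """One predict/update step for the reduced coordinates a, where x is approximated by U a."""
    Ar, Hr = U.T @ A @ U, H @ U                       # project dynamics and observations
    Qr = U.T @ Q @ U
    a_f = Ar @ a                                      # forecast in the subspace
    P_f = Ar @ P @ Ar.T + Qr
    S = Hr @ P_f @ Hr.T + R                           # innovation covariance
    K = P_f @ Hr.T @ np.linalg.inv(S)                 # reduced Kalman gain
    a_new = a_f + K @ (y - Hr @ a_f)
    P_new = (np.eye(len(a)) - K @ Hr) @ P_f
    return a_new, P_new
```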

    Pinsker estimators for local helioseismology

    A major goal of helioseismology is the three-dimensional reconstruction of the three velocity components of convective flows in the solar interior from sets of wave travel-time measurements. For small-amplitude flows, the forward problem is described in good approximation by a large system of convolution equations. The input observations are highly noisy random vectors with a known dense covariance matrix. This leads to a large statistical linear inverse problem. Whereas for deterministic linear inverse problems several computationally efficient minimax-optimal regularization methods exist, only one minimax-optimal linear estimator exists for statistical linear inverse problems: the Pinsker estimator. However, it is rarely used for real-world problems, either because it is computationally inefficient, requiring a singular value decomposition of the forward operator, or because it is not applicable when the noise covariance matrix is unknown. These limitations do not apply in helioseismology. We present a simplified proof of the optimality properties of the Pinsker estimator and show that it yields significantly better reconstructions than traditional inversion methods used in helioseismology, i.e., Regularized Least Squares (Tikhonov regularization) and SOLA (approximate inverse) methods. Moreover, we discuss the incorporation of the mass-conservation constraint in the Pinsker scheme using staggered grids. With this improvement we can reconstruct not only horizontal but also vertical velocity components, which are much smaller in amplitude.
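    As a concrete reference point, the textbook form of the Pinsker estimator in the Gaussian sequence model y_k = theta_k + sigma_k xi_k over the ellipsoid sum a_k^2 theta_k^2 <= Q uses filter factors w_k = (1 - kappa a_k)_+ with kappa solving sum sigma_k^2 a_k (1 - kappa a_k)_+ = kappa Q. The sketch below is my own plain-NumPy illustration of that construction, not the authors' helioseismology pipeline; the ellipsoid weights a_k, noise level, singular values b_k, and radius Q are assumed inputs.

```python
# Minimal sketch of the standard Pinsker filter factors, applied after
# diagonalizing a linear inverse problem with singular values b_k.
import numpy as np

def pinsker_weights(a, sigma2, Q, tol=1e-12):
    """Solve for kappa by bisection and return the filter factors (1 - kappa*a)_+."""
    def excess(kappa):                                  # non-increasing in kappa
        w = np.maximum(0.0, 1.0 - kappa * a)
        return np.sum(sigma2 * a * w) - kappa * Q
    lo, hi = 0.0, 1.0
    while excess(hi) > 0:                               # bracket the root
        hi *= 2.0
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    kappa = 0.5 * (lo + hi)
    return np.maximum(0.0, 1.0 - kappa * a)

def pinsker_estimate(y, b, a, noise_level, Q):
    """Pinsker estimate for y_k = b_k * theta_k + noise_level * xi_k in the SVD basis."""
    sigma2 = (noise_level / b) ** 2                     # noise variance after dividing by b_k
    w = pinsker_weights(a, sigma2, Q)
    return w * y / b
```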