    A 4D-Var Method with Flow-Dependent Background Covariances for the Shallow-Water Equations

    The 4D-Var method for filtering partially observed nonlinear chaotic dynamical systems consists of finding the maximum a-posteriori (MAP) estimator of the initial condition of the system given observations over a time window, and propagating it forward to the current time via the model dynamics. This method forms the basis of most currently operational weather forecasting systems. In practice the optimization becomes infeasible if the time window is too long, due to the non-convexity of the cost function, the effect of model errors, and the limited precision of the ODE solvers. Hence the window has to be kept sufficiently short, and the observations in the previous windows can be taken into account via a Gaussian background (prior) distribution. The choice of the background covariance matrix is an important question that has received much attention in the literature. In this paper, we define the background covariances in a principled manner, based on observations in the previous $b$ assimilation windows, for a parameter $b \ge 1$. The method is at most $b$ times more computationally expensive than using fixed background covariances, requires little tuning, and greatly improves the accuracy of 4D-Var. As a concrete example, we focus on the shallow-water equations. The proposed method is compared against state-of-the-art approaches in data assimilation and is shown to perform favourably on simulated data. We also illustrate our approach on data from the 2011 tsunami in Fukushima, Japan.
    Comment: 32 pages, 5 figures
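    The central computation described in this abstract, minimizing a background-plus-observation cost over the initial condition, can be sketched in a few lines. The toy linear model, operators, and covariances below are illustrative assumptions, not the paper's shallow-water system or its flow-dependent background construction.

```python
# Minimal 4D-Var sketch on a toy linear model: find the MAP initial
# condition given a Gaussian background and noisy observations over a
# window. All model and covariance choices here are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, T = 4, 10                                         # state dim, window length
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # toy dynamics x_{k+1} = M x_k
H = np.eye(n)                                        # observe the full state
B = 0.5 * np.eye(n)                                  # background (prior) covariance
R = 0.1 * np.eye(n)                                  # observation-noise covariance
x_b = np.zeros(n)                                    # background mean

# Synthetic truth propagated through the window, observed with noise
x_true = rng.standard_normal(n)
states = [x_true]
for _ in range(T):
    states.append(M @ states[-1])
obs = [H @ x + rng.multivariate_normal(np.zeros(n), R) for x in states]

B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x0):
    """4D-Var cost: background misfit plus observation misfits over the window."""
    J = 0.5 * (x0 - x_b) @ B_inv @ (x0 - x_b)
    x = x0
    for y in obs:
        d = H @ x - y
        J += 0.5 * d @ R_inv @ d
        x = M @ x                                    # propagate with the model
    return J

x_map = minimize(cost, x_b, method="L-BFGS-B").x     # MAP initial condition
print("error vs truth:", np.linalg.norm(x_map - x_true))
```

    In the paper's scheme, B would additionally be rebuilt from the preceding $b$ assimilation windows rather than held fixed as above.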

    Locating and quantifying gas emission sources using remotely obtained concentration data

    We describe a method for detecting, locating and quantifying sources of gas emissions to the atmosphere using remotely obtained gas concentration data; the method is applicable to gases of environmental concern. We demonstrate its performance using methane data collected from aircraft. Atmospheric point concentration measurements are modelled as the sum of a spatially and temporally smooth atmospheric background concentration, augmented by concentrations due to local sources. We model source emission rates with a Gaussian mixture model and use a Markov random field to represent the atmospheric background concentration component of the measurements. A Gaussian plume atmospheric eddy dispersion model represents gas dispersion between sources and measurement locations. Initial point estimates of background concentrations and source emission rates are obtained using mixed L2-L1 optimisation over a discretised grid of potential source locations. Subsequent reversible jump Markov chain Monte Carlo inference provides estimated values and uncertainties for the number, emission rates and locations of sources, unconstrained by a grid. Source area, atmospheric background concentrations and other model parameters are also estimated. We investigate the performance of the approach first on a synthetic problem, then apply the method to real data collected from an aircraft flying over a 1600 km^2 area containing two landfills, and then over a 225 km^2 area containing a gas flare stack.
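    As a rough illustration of the forward model this inversion rests on, here is a standard Gaussian plume relating a point source's emission rate to a downwind concentration, with each measurement modelled as background plus summed plume contributions. The wind alignment, dispersion power laws, and all numerical values are assumptions for the sketch, not the paper's calibration.

```python
import numpy as np

def plume_concentration(q, x, y, z, u=5.0, h=2.0):
    """Gaussian plume concentration at (x, y, z) for source rate q.

    Assumes wind along +x at speed u and source height h; the power-law
    dispersion coefficients below are illustrative placeholders.
    """
    if x <= 0.0:
        return 0.0                                   # no upwind contribution
    sigma_y = 0.08 * x**0.9                          # crosswind spread
    sigma_z = 0.06 * x**0.85                         # vertical spread
    crosswind = np.exp(-y**2 / (2 * sigma_y**2))
    # Ground reflection modelled via an image source at height -h
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * crosswind * vertical

def measurement(background, sources, receptor):
    """Point concentration = smooth background + sum of local plume terms."""
    xr, yr, zr = receptor
    return background + sum(
        plume_concentration(q, xr - xs, yr - ys, zr) for q, xs, ys in sources
    )

# One source of rate 50 at the origin; receptor 200 m downwind at 5 m height
print(measurement(1.9, [(50.0, 0.0, 0.0)], (200.0, 10.0, 5.0)))
```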

    Efficient and Stable Acoustic Tomography Using Sparse Reconstruction Methods

    We study an acoustic tomography problem and propose a new inversion technique based on sparsity. Acoustic tomography recovers the parameters of the medium that influence the speed of sound propagation. In the human body, the parameters that most influence the sound speed are temperature and density; in the ocean, temperature and current; in the atmosphere, temperature and wind. In this study, we focus on estimating temperature in the atmosphere using information on the average sound speed along the propagation path, obtained in practice from travel-time measurements. We propose a reconstruction algorithm that exploits the concept of sparsity: the temperature is assumed to be a linear combination of some functions (e.g. bases or a set of different bases) in which many of the coefficients are known to be zero, and the goal is to find the non-zero coefficients. To this end, we apply an algorithm based on linear programming that, under certain constraints, finds the solution with minimum $\ell_1$ norm, a convex surrogate for the $\ell_0$ norm that promotes solutions in which many of the unknown coefficients are exactly zero. Finally, we perform numerical simulations to assess the effectiveness of our approach. The simulation results confirm the applicability of the method and demonstrate high reconstruction quality and robustness to noise.
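    A minimal sketch of the linear-programming step: recover a sparse coefficient vector from linear travel-time measurements by minimizing the $\ell_1$ norm subject to the data constraints (basis pursuit). The random measurement matrix and problem sizes stand in for the actual path-integral operator and are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 40, 100, 5                      # measurements, basis size, non-zeros
A = rng.standard_normal((m, n))           # stand-in for path integrals of bases
c_true = np.zeros(n)
c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ c_true                            # noiseless travel-time data

# Basis pursuit as an LP: min sum(t) s.t. -t <= c <= t, A c = y, with z = [c, t]
obj = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])      # encodes |c_i| <= t_i
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
c_hat = res.x[:n]
print("max coefficient error:", np.max(np.abs(c_hat - c_true)))
```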

    ADAM: a general method for using various data types in asteroid reconstruction

    We introduce ADAM, the All-Data Asteroid Modelling algorithm. ADAM is simple and universal since it handles all disk-resolved data types (adaptive optics or other images, interferometry, and range-Doppler radar data) in a uniform manner via the 2D Fourier transform, enabling fast convergence in model optimization. The resolved data can be combined with disk-integrated data (photometry). In the reconstruction process, the difference between the data types amounts to only a few code lines defining the particular generalized projection from 3D onto a 2D image plane. Occultation timings can be included as sparse silhouettes, and thermal infrared data are efficiently handled with an approximate algorithm that is sufficient in practice due to the dominance of the high-contrast (boundary) pixels over the low-contrast (interior) ones. This is of particular importance for raw ALMA data, which can be handled directly by ADAM without having to construct the standard image. We study the reliability of the inversion by using the independent shape supports of function series and control-point surfaces. When other data are lacking, one can carry out fast nonconvex lightcurve-only inversion, but any shape models resulting from it should only be taken as illustrative global-scale ones.
    Comment: 11 pages, submitted to A&A
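    The uniform treatment of disk-resolved data hinges on comparing a model's generalized 2D projection with the data in the Fourier domain. The toy silhouette and uniform weights below are assumptions meant only to show the shape of that comparison, not ADAM's actual projection or fitting code.

```python
import numpy as np

def fourier_misfit(model_image, data_ft, weights):
    """Weighted chi-square between a projected model image's 2D FFT
    and Fourier-domain data (e.g., interferometric visibilities)."""
    model_ft = np.fft.fft2(model_image)
    return np.sum(weights * np.abs(model_ft - data_ft) ** 2)

# Toy check: the "data" is the FFT of a slightly shifted disk silhouette
N = 64
yy, xx = np.mgrid[:N, :N]
disk = ((xx - N / 2) ** 2 + (yy - N / 2) ** 2 <= 100.0).astype(float)
data_ft = np.fft.fft2(np.roll(disk, 2, axis=1))
print("misfit:", fourier_misfit(disk, data_ft, np.ones((N, N))))
```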

    Variational Data Assimilation via Sparse Regularization

    This paper studies the role of sparse regularization in a properly chosen basis for variational data assimilation (VDA) problems. Specifically, it focuses on data assimilation of noisy and down-sampled observations while the state variable of interest exhibits sparsity in the real or transformed domain. We show that in the presence of sparsity, the $\ell_1$-norm regularization produces more accurate and stable solutions than the classic data assimilation methods. To motivate further developments of the proposed methodology, assimilation experiments are conducted in the wavelet and spectral domain using the linear advection-diffusion equation.
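    A minimal sketch of $\ell_1$-regularized assimilation in a transform domain, using ISTA (iterative soft thresholding) on a generic linear observation operator. The operator, sparsity level, and regularization weight are assumptions for illustration, not the paper's wavelet/spectral setup.

```python
# ISTA for min 0.5 * ||A c - y||^2 + lam * ||c||_1, where c holds the
# transform-domain coefficients of the state and A maps them to the
# down-sampled noisy observations. All sizes and weights are assumptions.
import numpy as np

rng = np.random.default_rng(2)
m, n = 60, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)   # observation-times-basis operator
c_true = np.zeros(n)
c_true[rng.choice(n, 8, replace=False)] = 3.0  # sparse ground truth
y = A @ c_true + 0.01 * rng.standard_normal(m)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
c = np.zeros(n)
for _ in range(500):                           # ISTA iterations
    g = A.T @ (A @ c - y)                      # gradient of the quadratic term
    z = c - g / L                              # gradient step
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
print("recovered support:", np.flatnonzero(np.abs(c) > 0.5))
```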

    Parallelizable sparse inverse formulation Gaussian processes (SpInGP)

    We propose a parallelizable sparse inverse formulation Gaussian process (SpInGP) for temporal models. It uses a sparse precision GP formulation and sparse matrix routines to speed up the computations. Due to the state-space formulation used in the algorithm, the time complexity of the basic SpInGP is linear in the number of data points, and because all the computations are parallelizable, the parallel form of the algorithm is sublinear in the number of data points. We provide example algorithms to implement the sparse matrix routines and experimentally test the method using both simulated and real data.
    Comment: Presented at the IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2017)
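    The sparse-precision idea can be illustrated with the simplest state-space GP, a scalar Ornstein-Uhlenbeck (Matérn-1/2) process, whose joint precision over the time points is tridiagonal; posterior inference then reduces to sparse linear solves. The kernel and noise parameters below are assumptions, and this sketch omits the paper's parallel routines.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 2000
t = np.linspace(0.0, 10.0, n)
ell, sigma2, noise = 1.0, 1.0, 0.1                 # assumed kernel/noise params
a = np.exp(-np.diff(t) / ell)                      # OU transition coefficients
q = sigma2 * (1.0 - a**2)                          # process-noise variances

# Tridiagonal prior precision of the Markovian (state-space) GP
main = np.empty(n)
main[0] = 1.0 / sigma2 + a[0]**2 / q[0]
main[1:-1] = 1.0 / q[:-1] + a[1:]**2 / q[1:]
main[-1] = 1.0 / q[-1]
off = -a / q
Q_prior = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

# Gaussian observations at every time point; posterior mean = one sparse solve
y = np.sin(t) + np.sqrt(noise) * np.random.default_rng(3).standard_normal(n)
Q_post = Q_prior + sp.eye(n, format="csc") / noise
posterior_mean = spsolve(Q_post, y / noise)
print("max error vs latent truth:", np.max(np.abs(posterior_mean - np.sin(t))))
```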