    Data Assimilation Fundamentals

    This open-access textbook's significant contribution is the unified derivation of data-assimilation techniques from a common, fundamental, and optimal starting point, namely Bayes' theorem. Unique to this book is the "top-down" derivation of the assimilation methods: it starts from Bayes' theorem and gradually introduces the assumptions and approximations needed to arrive at today's popular data-assimilation methods. This strategy is the opposite of most textbooks and reviews on data assimilation, which typically take a bottom-up approach to derive a particular assimilation method, e.g., deriving the Kalman filter from control theory or the ensemble Kalman filter as a low-rank approximation of the standard Kalman filter. The bottom-up approach derives the assimilation methods from different mathematical principles, making them difficult to compare; it is often unclear which assumptions are made to derive a method, and sometimes even which problem it aspires to solve. The book's top-down approach allows categorizing data-assimilation methods based on the approximations used, which enables the reader to choose the most suitable method for a particular problem or application. Have you ever wondered about the difference between the ensemble 4DVar and "ensemble randomized maximum likelihood" (EnRML) methods? Do you know the differences between the ensemble smoother and the ensemble Kalman smoother? Would you like to understand how a particle flow is related to a particle filter? The book provides clear answers to several such questions. It provides the basis for an advanced course in data assimilation, focuses on the unified derivation of the methods, and illustrates their properties through multiple examples. It is suitable for graduate students, postdocs, scientists, and practitioners working in data assimilation.
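    To make the top-down idea concrete, here is a minimal, self-contained sketch (not taken from the book; all values are illustrative assumptions) showing that, for a scalar Gaussian prior and Gaussian likelihood, the posterior mean obtained directly from Bayes' theorem coincides with the Kalman-filter analysis, i.e., the Kalman update is one instance of the Bayesian starting point.

```python
import numpy as np

# Illustrative 1-D setup: Gaussian prior N(x_b, P) and Gaussian
# likelihood N(y | H x, R). Under these assumptions the Bayesian
# posterior is Gaussian with the Kalman-filter analysis as its mean.
x_b, P = 1.0, 2.0        # prior (background) mean and variance
y, H, R = 3.0, 1.0, 0.5  # observation, observation operator, obs-error variance

# Analysis via the Kalman gain (the "bottom-up" formula) ...
K = P * H / (H * P * H + R)
x_a_kalman = x_b + K * (y - H * x_b)

# ... and via brute-force Bayes: multiply prior and likelihood on a
# grid, normalize, and take the posterior mean.
x = np.linspace(-10.0, 10.0, 100001)
prior = np.exp(-0.5 * (x - x_b) ** 2 / P)
likelihood = np.exp(-0.5 * (y - H * x) ** 2 / R)
posterior = prior * likelihood
posterior /= np.trapz(posterior, x)
x_a_bayes = np.trapz(x * posterior, x)

print(x_a_kalman, x_a_bayes)  # both are approximately 2.6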

    Optimizing the use of InSAR observations in data assimilation problems to estimate reservoir compaction

    Hydrocarbon production may cause subsidence as a result of pressure reduction in the gas-producing layer and the resulting reservoir compaction. To analyze the subsidence process and estimate reservoir parameters, we use a particle method to assimilate interferometric synthetic-aperture radar (InSAR) observations of surface deformation into a conceptual reservoir model. As an example, we use an analytical model of the Groningen gas reservoir based on a geometry representing the compartmentalized structure of the subsurface at reservoir depth. The efficacy of the particle method decreases when the number of degrees of freedom is large compared to the ensemble size. This number of degrees of freedom, in turn, varies because of spatial correlation in the observed field, so the resolution of the InSAR data and the number of observations affect the performance of the particle method. In this study, we quantify the information in a Sentinel-1 SAR dataset using the concept of Shannon entropy from information theory. We investigate how to best capture the level of detail in the model resolved by the InSAR data while maximizing their information content for data assimilation. We show that an incorrect representation of the existing correlations leads to weight collapse as the number of observations increases, unless the ensemble size grows. However, simulations of mutual information show that data reduction can be optimized by choosing an adequate mesh given the spatial correlation in the observed subsidence. Our approach provides a means to make better use of the information in available InSAR data, reducing weight collapse without additional computational cost.
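    As an illustration of the weight-collapse phenomenon described above, the following hypothetical sketch (not the authors' code; ensemble size and misfit statistics are assumed) computes particle weights from Gaussian likelihoods and tracks the effective sample size N_eff = 1 / sum(w_i^2) as the number of independent observations grows.

```python
import numpy as np

# Hypothetical illustration of weight collapse in a particle method:
# weights are proportional to the Gaussian likelihood of each
# particle's misfit to the observations, and N_eff measures how many
# particles still carry significant weight.
rng = np.random.default_rng(0)
n_particles = 100

for n_obs in (1, 10, 100, 1000):
    # Standardized misfits between particle predictions and observations.
    misfit = rng.standard_normal((n_particles, n_obs))
    log_w = -0.5 * np.sum(misfit ** 2, axis=1)
    w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    w /= w.sum()
    n_eff = 1.0 / np.sum(w ** 2)      # effective sample size
    print(f"n_obs={n_obs:5d}  N_eff={n_eff:6.1f}")
```

    Running this shows N_eff falling toward 1 as the number of observations grows, which is the collapse the study mitigates by matching the observation mesh to the spatial correlation in the data.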

    Uncertainties in the mean ocean dynamic topography before the launch of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE)

    In anticipation of future observations from the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) gravity mission, the present-day accuracy of the mean dynamic topography (MDT) is estimated from both observations and models. A comparison of five observational estimates shows that RMS differences in MDT vary from 4.2 to 10.5 cm after low-pass filtering the fields with a Hamming window with wavenumber N = 120 (corresponding to an effective resolution of 167 km). RMS differences in observational MDT reduce to 2.4–8.3 cm for N = 15 (1334 km). Differences in data sources (geoid model, in situ data) are mostly visible in small-scale oceanic features, while differences in processing (filtering, inverse modeling techniques) are reflected at larger scales. A comparison of seven numerical ocean models demonstrates that model estimates differ mostly in the western boundary currents and the Antarctic Circumpolar Current. RMS differences between modeled and observed MDT are at best 8.8 cm for N = 120 and reduce to 6.4 cm for N = 15. For models with data assimilation, a minimal RMS difference of 6.6 cm (N = 120) to 3.4 cm (N = 15) is obtained with respect to the observational MDTs. The reduction of differences between MDTs with increasing filtering scale is smaller than expected. While GOCE is expected to improve MDT estimates at small spatial scales, improvement of mean sea surface estimates from satellite altimetry may be needed to improve MDT estimates at larger scales.
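    As a rough illustration of the filtering step described above, the sketch below (an assumption for illustration, not the study's processing chain) applies a Hamming taper up to degree N to the spherical-harmonic degree variances of an MDT difference field and computes the RMS of the filtered difference; the window form and the synthetic degree variances are illustrative.

```python
import numpy as np

# Hedged sketch: RMS of a low-pass-filtered difference field. We assume
# the difference between two MDT fields is given by its degree variances
# sigma2[l] (4-pi-normalized harmonics, so rms^2 = sum of degree
# variances), and that the Hamming window acts as a spectral taper.
def hamming_window(l_max, n_cut):
    l = np.arange(l_max + 1)
    w = 0.54 + 0.46 * np.cos(np.pi * l / n_cut)  # taper up to degree n_cut
    w[l > n_cut] = 0.0
    return w

def rms_of_difference(diff_degree_variances, n_cut):
    # Filtering scales each coefficient by w[l], hence each degree
    # variance by w[l]**2.
    w = hamming_window(len(diff_degree_variances) - 1, n_cut)
    return np.sqrt(np.sum(w ** 2 * diff_degree_variances))

# Example with synthetic degree variances of an MDT difference field.
sigma2 = 1e-4 / (1.0 + np.arange(361)) ** 2
print(rms_of_difference(sigma2, n_cut=120))  # retains more small scales
print(rms_of_difference(sigma2, n_cut=15))   # smoother, smaller RMS
```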