    Towards an integrated observation and modeling system in the New York Bight using variational methods. Part I : 4DVAR data assimilation

    Author Posting. © The Author(s), 2010. This is the author's version of the work. It is posted here by permission of Elsevier B.V. for personal use, not for redistribution. The definitive version was published in Ocean Modelling 35 (2010): 119-133, doi:10.1016/j.ocemod.2010.08.003.

    Four-dimensional variational data assimilation (4DVAR) in the Regional Ocean Modeling System (ROMS) is used to produce a best-estimate analysis of ocean circulation in the New York Bight during spring 2006 by assimilating observations collected by a variety of instruments during an intensive field program. An incremental approach is applied in an overlapping cycling system with a 3-day data assimilation window to adjust the model initial conditions. The model-observation mismatch is reduced substantially for all observed variables. Comparisons between model forecasts and independent observations show improved forecast skill for about 15 days for temperature and salinity, and 2 to 3 days for velocity. Tests assimilating only certain subsets of the data indicate that assimilating satellite sea surface temperature improves the forecast of surface and subsurface temperature but worsens the salinity forecast. Assimilating in situ temperature and salinity from gliders improves the salinity forecast but has little effect on temperature. Assimilating HF-radar surface current data improves the velocity forecast by 1-2 days yet worsens the forecast of subsurface temperature. During some time periods the convergence for velocity is poor because the data assimilation system cannot reduce errors in the applied winds: surface forcing is not among the control variables. This study demonstrates the capability of a 4DVAR data assimilation system to reduce model-observation mismatch and improve forecasts in the coastal ocean, and highlights the value of accurate meteorological forcing. This work was funded by National Science Foundation grant OCE-0238957.
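    The incremental approach described above reduces each assimilation cycle to minimizing a quadratic cost in the initial-condition increment. A minimal sketch of that inner minimization by conjugate gradients, with synthetic matrices standing in for the ROMS operators (all names and sizes here are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 8                      # state size n, observation count m (toy sizes)
B = 0.5 * np.eye(n)               # background-error covariance (illustrative)
R = 0.1 * np.eye(m)               # observation-error covariance (illustrative)
H = rng.standard_normal((m, n))   # linearized observation operator
d = rng.standard_normal(m)        # innovation vector: y - H(x_b)

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def J(dx):
    """Incremental cost: J(dx) = 1/2 dx'B^-1 dx + 1/2 (H dx - d)'R^-1 (H dx - d)."""
    r = H @ dx - d
    return 0.5 * dx @ Binv @ dx + 0.5 * r @ Rinv @ r

# Minimizing J is equivalent to solving the SPD system A dx = b by CG
A = Binv + H.T @ Rinv @ H
b = H.T @ Rinv @ d
dx = np.zeros(n)
r = b - A @ dx
p = r.copy()
for _ in range(n):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    dx = dx + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-8:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
```

    The analysis increment dx lowers the cost relative to the background (dx = 0); in the paper's cycling system, such a minimization is repeated over successive overlapping 3-day windows.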

    A scalable space-time domain decomposition approach for solving large-scale nonlinear regularized inverse ill-posed problems in 4D variational data assimilation

    We develop innovative algorithms for solving the strong-constraint formulation of four-dimensional variational data assimilation in large-scale applications. We present a space-time decomposition approach that employs overlapping domain decomposition along both the spatial and temporal directions and involves partitioning of both the solution and the operators. Starting from the global functional defined on the entire domain, we obtain regularized local functionals on the set of subdomains, providing order reduction of both the predictive and the data assimilation models. We analyze the algorithm's convergence and its performance in terms of reduction of time complexity and algorithmic scalability. The numerical experiments are carried out on the shallow water equation on the sphere, following the setup available at the Ocean Synthesis/Reanalysis Directory provided by Hamburg University.

    Comment: Received: 10 March 2020 / Revised: 29 November 2021 / Accepted: 7 March 202
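    For reference, the global functional that such a space-time decomposition partitions is the standard strong-constraint 4DVAR cost (notation here is the conventional one, not necessarily the paper's: $x_b$ the background state, $B$ and $R_k$ the background- and observation-error covariances, $\mathcal{M}_{0\to k}$ the model propagator, $\mathcal{H}_k$ the observation operator, $y_k$ the observations at time $k$):

```latex
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathrm{T}} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{k=0}^{K}
         \bigl(\mathcal{H}_k(\mathcal{M}_{0\to k}(x_0)) - y_k\bigr)^{\mathrm{T}}
         R_k^{-1}
         \bigl(\mathcal{H}_k(\mathcal{M}_{0\to k}(x_0)) - y_k\bigr)
```

    The decomposition replaces this single minimization over the full space-time domain with coupled minimizations of regularized local functionals on overlapping subdomains, which is what yields the reported reduction in time complexity.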

    Teaching and Learning of Fluid Mechanics, Volume II

    This book is devoted to the teaching and learning of fluid mechanics. Fluid mechanics occupies a privileged position in the sciences; it is taught in various science departments, including physics, mathematics, mechanical, chemical, and civil engineering, and environmental sciences, each highlighting a different aspect or interpretation of the foundations and applications of fluids. While scholarship in fluid mechanics is vast, spanning experimental, theoretical, and computational fluid mechanics, there is little discussion among scientists about the different possible ways of teaching this subject. We think there is much to be learned, for teachers and students alike, from an interdisciplinary dialogue about fluids. This volume therefore highlights articles that bear on the pedagogical aspects of fluid mechanics at the undergraduate and graduate levels.

    Status and future of global and regional ocean prediction systems

    Operational development of global and regional ocean forecasting systems has advanced significantly in recent years. GODAE (Global Ocean Data Assimilation Experiment) OceanView supports the national research groups, providing coordination and sharing expertise among the partners. Several systems have been set up and developed pre-operationally, and the majority are now fully operational; at present, they provide medium- and long-term forecasts of the most relevant physical ocean variables. These systems are based on ocean general circulation models (OGCMs) and data assimilation techniques that correct the model with information inferred from different types of observations. A few systems also incorporate a biogeochemical component coupled with the physical system, while others are based on coupled ocean-wave-ice-atmosphere models. The products are routinely validated against observations to assess their quality. Data and product implementation and organization, as well as services for users, have been well tried and tested, and most products are now available to users. Interaction with different users is an important factor in the development process. This paper provides a synthetic overview of the GODAE OceanView prediction systems.

    Data assimilation sensitivity experiments in the East Auckland Current system using 4D-Var

    This study analyses data-assimilative numerical simulations in an eddy-dominated western boundary current: the East Auckland Current (EAuC). The goal is to assess the impact of assimilating surface and subsurface data into a model of the EAuC by running observing system experiments (OSEs). We used the Regional Ocean Modeling System (ROMS) in conjunction with the 4-dimensional variational (4D-Var) data assimilation scheme to incorporate sea surface height (SSH) and temperature (SST), as well as subsurface temperature, salinity, and velocity from three moorings located at the upper, mid, and lower continental slope, using a 7 d assimilation window. Assimilation of surface fields (SSH and SST) reduced the SSH root mean square deviation (RMSD) by 25 % relative to the non-assimilative (NoDA) run. The inclusion of subsurface velocity data further reduced the SSH RMSD up- and downstream of the moorings by 18 %–25 %. By improving the representation of the mesoscale eddy field, data assimilation increased the complex correlation between modelled and observed velocity in all experiments by at least a factor of three. However, the inclusion of temperature and salinity slightly decreased the velocity complex correlation. The assimilative experiments reduced the SST RMSD by 36 % in comparison to the NoDA run. The lack of subsurface temperature for assimilation led to larger RMSD (>1 °C) around 100 m relative to the NoDA run. Comparisons to independent Argo data also showed larger errors at 100 m in experiments that did not assimilate subsurface temperature data. Withholding subsurface temperature forces near-surface average negative temperature increments to the initial conditions that are compensated by increased net heat flux at the surface, but this had limited or no effect on water temperature at 100 m depth. Assimilation of mooring temperature generates mean positive increments to the initial conditions that reduce the 100 m water temperature RMSD.
    In addition, negative heat flux and positive wind stress curl were generated near the moorings in experiments that assimilated subsurface temperature data. Positive wind stress curl generates convergence and downwelling that can correct interior temperature but might also be responsible for the decreased velocity correlations. The few moored CTDs (eight) had little impact on correcting salinity in comparison to independent Argo data. However, doubling the decorrelation length scales of the tracers and using a 2 d assimilation window improved model salinity and temperature in comparison to Argo profiles throughout the domain. This assimilation configuration, however, led to large errors when subsurface temperature data were not assimilated, owing to incorrect increments to the subsurface. As all reanalyses show improved model-observation skill relative to HYCOM-NCODA (the source of the model boundary conditions), these results highlight the benefit of numerical downscaling to a regional model of the EAuC.
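    The velocity skill assessments above rest on the complex correlation between modelled and observed currents. A minimal sketch, assuming the common definition in which velocity is treated as the complex number w = u + iv (the data here are synthetic, not the EAuC records):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)

# Synthetic "observed" rotary current and a noisy "modelled" counterpart
u_obs, v_obs = np.cos(t), np.sin(t)
u_mod = u_obs + 0.1 * rng.standard_normal(t.size)
v_mod = v_obs + 0.1 * rng.standard_normal(t.size)

w_obs = u_obs + 1j * v_obs        # complex velocity, observed
w_mod = u_mod + 1j * v_mod        # complex velocity, modelled

# Complex correlation: |rho| measures vector agreement,
# arg(rho) the mean veering angle between the two series
rho = np.mean(np.conj(w_obs) * w_mod) / np.sqrt(
    np.mean(np.abs(w_obs) ** 2) * np.mean(np.abs(w_mod) ** 2))
magnitude = np.abs(rho)
angle_deg = np.degrees(np.angle(rho))

# RMSD, the scalar skill metric used for fields such as SSH and SST
rmsd = np.sqrt(np.mean((u_mod - u_obs) ** 2 + (v_mod - v_obs) ** 2))
```

    Reporting both |rho| and the veering angle is what lets the study say the eddy field, not just the speed, was improved by assimilation.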

    Model-data fusion in digital twins of large scale dynamical systems

    Digital twins (DTs) are virtual entities that serve as real-time digital counterparts of actual physical systems across their life cycle. In a typical application, the physical system provides sensor measurements, and the DT incorporates the incoming data and runs different simulations to assess various scenarios and situations. As a result, an informed decision can be made to alter the physical system, or at least to take necessary precautions, and the process repeats along the system's life cycle. Effective deployment of DTs therefore requires answering multiple queries while communicating with the physical system in real time. Nonetheless, DTs of large-scale dynamical systems, as in fluid flows, come with three grand challenges that we address in this dissertation.

    First, the high dimensionality makes full order modeling (FOM) methodologies infeasible due to the associated computational time and memory costs. In this regard, reduced order models (ROMs) can potentially accelerate the forward simulations by orders of magnitude, especially for systems with recurrent spatial structures. However, traditional ROMs yield inaccurate and unstable results for turbulent and convective flows. Therefore, we propose a hybrid variational multi-scale framework that benefits from the locality of modal interactions to deliver accurate ROMs. Furthermore, we adopt a novel physics-guided machine learning technique to provide on-the-fly corrections and elevate the trustworthiness of the resulting ROM in the sparse-data and incomplete-governing-equations regimes.

    Second, complex natural or engineered systems are characterized by their multi-scale, multi-physics, and multi-component nature. The efficient simulation of such systems requires quick communication and information sharing between several heterogeneous computing units. In order to address this challenge, we pioneer an interface learning (IL) paradigm to ensure the seamless integration of hierarchical solvers with different scales, physics, abstractions, and geometries without compromising the integrity of the computational setup. We demonstrate the IL paradigm for non-iterative domain decomposition and for FOM-ROM coupling in multi-fidelity computations.

    Third, fluid flow systems are continuously evolving, and thus the validity of the DT should be warranted across varying operating conditions and flow regimes. To do so, we embed data assimilation (DA) techniques to enable the DT to self-adapt based on in situ observational data and efficiently replicate the physical system. In addition, we combine DA algorithms with machine learning models to build a robust framework that collectively addresses the model closure problem, errors in prior information, and measurement noise.
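    The claim that ROMs exploit recurrent spatial structures can be illustrated with proper orthogonal decomposition (POD), the standard basis-construction step behind many projection-based ROMs. A minimal sketch on a synthetic traveling wave (illustrative only, not the dissertation's variational multi-scale framework):

```python
import numpy as np

# Snapshot matrix: each column is the full-order field at one time instant
x = np.linspace(0.0, 2.0 * np.pi, 128)
times = np.linspace(0.0, 1.0, 40)
snapshots = np.stack([np.sin(x - 2.0 * np.pi * t) for t in times], axis=1)

# POD basis from the SVD of the snapshots; the leading r modes
# capture (almost all of) the snapshot energy
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                              # a traveling sine wave is exactly rank 2
basis = U[:, :r]                   # 128 x r POD basis

# Project to reduced coordinates (r numbers per snapshot instead of 128)
reduced = basis.T @ snapshots
recon = basis @ reduced
err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
```

    Because sin(x - ct) = sin(x)cos(ct) - cos(x)sin(ct), two modes suffice here; turbulent flows need many more, which is why the dissertation adds closure corrections on top of the projection.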

    Solving regularized nonlinear least-squares problem in dual space with application to variational data assimilation

    This thesis investigates the conjugate-gradient (CG) method and the Lanczos method for the solution of under-determined nonlinear least-squares problems regularized by a quadratic penalty term. Such problems often result from a maximum likelihood approach and involve a set of m physical observations and n unknowns that are estimated by nonlinear regression. We suppose here that n is large compared to m. These problems are encountered, for instance, when three-dimensional fields are estimated from physical observations, as in data assimilation for Earth system models. A widely used algorithm in this context is the Gauss-Newton (GN) method, known in the data assimilation community as incremental four-dimensional variational data assimilation. The GN method relies on the approximate solution of a sequence of linear least-squares problems in which the nonlinear least-squares cost function is approximated by a quadratic function in the neighbourhood of the current nonlinear iterate. However, it is well known that this simple variant of the Gauss-Newton algorithm does not ensure a monotonic decrease of the cost function, so its convergence is not guaranteed. This difficulty is typically removed by using a line-search (Dennis and Schnabel, 1983) or trust-region (Conn, Gould and Toint, 2000) strategy, which ensures global convergence to first-order critical points under mild assumptions. We consider the second of these approaches in this thesis. Moreover, given the large-scale nature of the problem, we propose to use a particular trust-region algorithm relying on the Steihaug-Toint truncated conjugate-gradient method for the approximate solution of the subproblem (Conn, Gould and Toint, 2000, pp. 133-139).

    Solving this subproblem in the n-dimensional space (by CG or Lanczos) is referred to as the primal approach. Alternatively, a significant reduction in the computational cost is possible by rewriting the quadratic approximation in the m-dimensional space associated with the observations. This is important for large-scale applications such as those solved daily in weather prediction systems. This approach, which performs the minimization in the m-dimensional space using CG or variants thereof, is referred to as the dual approach. The first proposed dual approach (Da Silva et al., 1995; Cohn et al., 1998; Courtier, 1997), known as the Physical-space Statistical Analysis System (PSAS) in the data assimilation community, starts by minimizing the dual cost function in the m-dimensional space by a standard preconditioned CG (PCG) and then recovers the step in the n-dimensional space through multiplication by an n-by-m matrix. Technically, the algorithm consists of recurrence formulas involving m-vectors instead of n-vectors. However, the use of PSAS can be unduly costly, as the linear least-squares cost function does not decrease monotonically along the nonlinear iterations when standard termination criteria are applied. Another dual approach, known as the Restricted Preconditioned Conjugate Gradient (RPCG) method, was proposed by Gratton and Tshimanga (2009). It generates the same iterates in exact arithmetic as the primal approach, again using recurrence formulas involving m-vectors. The main interest of RPCG is that it yields a significant reduction in both memory and computational costs while maintaining the desired convergence property, in contrast with the PSAS algorithm.

    The relation between these two dual approaches and the derivation of efficient preconditioners (Gratton, Sartenaer and Tshimanga, 2011), essential for large-scale problems, were not addressed by Gratton and Tshimanga (2009). The main motivation for this thesis is to address these open issues. In particular, we are interested in designing preconditioning techniques and a trust-region globalization that maintain the one-to-one correspondence between primal and dual iterates, thereby offering cost-effective computation within a globally convergent algorithm.
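    The primal/dual equivalence underlying the dual approach can be checked directly on a small linear problem: with background covariance B, observation operator H, and observation-error covariance R, the n x n primal normal-equations solution and the m x m observation-space (dual) solution coincide, by the Sherman-Morrison-Woodbury identity. A minimal sketch with synthetic matrices (n large compared to m, as assumed in the thesis; the toy sizes and direct solves are illustrative, not RPCG itself):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 5                      # n unknowns, m observations, n >> m
H = rng.standard_normal((m, n))   # linearized observation operator
B = np.eye(n)                     # background-error covariance (illustrative)
R = 0.1 * np.eye(m)               # observation-error covariance (illustrative)
d = rng.standard_normal(m)        # innovation vector

# Primal approach: solve the n x n regularized normal equations
A_primal = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
dx_primal = np.linalg.solve(A_primal, H.T @ np.linalg.inv(R) @ d)

# Dual approach: solve an m x m system in observation space, map back with B H'
A_dual = H @ B @ H.T + R
dx_dual = B @ H.T @ np.linalg.solve(A_dual, d)
```

    The two increments agree to rounding error, while the dual system is only m x m; RPCG exploits exactly this, running all recurrences on m-vectors so that memory and cost scale with the number of observations rather than the state dimension.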