Examining the Potential Impact of SWOT Observations in an Ocean Analysis–Forecasting System
NASA's Surface Water and Ocean Topography (SWOT) satellite, scheduled for launch in 2020, will provide observations of sea surface height anomaly (SSHA) at a significantly higher spatial resolution than current satellite altimeters. This new observation type is expected to improve the ocean model mesoscale circulation. The potential improvement that SWOT will provide is investigated in this work by way of twin-data assimilation experiments using the Navy Coastal Ocean Model four-dimensional variational data assimilation (NCOM-4DVAR) system in its weak constraint formulation. Simulated SWOT observations are sampled from an ocean model run (referred to as the nature run) using an observation-simulator program provided by the SWOT science team. The SWOT simulator provides realistic spatial coverage, resolution, and noise characteristics based on the expected performance of the actual satellite. Twin-data assimilation experiments are run for a two-month period during which simulated observations are assimilated into a separate model (known as the background model) in a series of 96-h windows. The final condition of each analysis window is used to initialize a new 96-h forecast, and each forecast is compared to the nature run to determine the impact of the assimilated data. It is demonstrated here that the simulated SWOT observations help to constrain the model mesoscale to be more consistent with the nature run than the assimilation of traditional altimeter observations alone. The findings of this study suggest that data from SWOT may have a substantial impact on improving the ocean model forecast of mesoscale features and surface ocean velocity.
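The twin-experiment setup described in this abstract can be sketched in a few lines: synthetic observations are drawn from a stand-in "nature" field and contaminated with Gaussian instrument noise. All names and values below (grid size, noise level) are invented for illustration; the actual SWOT simulator models swath geometry and spatially correlated error terms, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "nature run" field: SSHA on a small grid (a stand-in for the
# free-running model state from which a twin experiment draws its data).
nx, ny = 50, 40
xg, yg = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
nature_ssha = 0.3 * np.sin(4 * np.pi * xg) * np.cos(3 * np.pi * yg)  # metres

def simulate_obs(field, n_obs, noise_std):
    """Sample the field at random grid points and add Gaussian instrument
    noise; the real SWOT simulator instead uses swath geometry and
    correlated error terms."""
    i = rng.integers(0, field.shape[0], n_obs)
    j = rng.integers(0, field.shape[1], n_obs)
    truth = field[i, j]
    return truth + rng.normal(0.0, noise_std, n_obs), truth

obs, truth = simulate_obs(nature_ssha, n_obs=500, noise_std=0.02)
rmse = np.sqrt(np.mean((obs - truth) ** 2))  # should sit near noise_std
```

In a real twin experiment these synthetic observations would then be assimilated into a second (background) model, and the resulting forecasts verified against the nature run.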
Generalized Inverse of a Reduced Gravity Primitive Equation Ocean Model and Tropical Atmosphere–Ocean Data
A nonlinear 2½-layer reduced gravity primitive equations (PE) ocean model is used to assimilate sea surface temperature (SST) data from the Tropical Atmosphere–Ocean (TAO) moored buoys in the tropical Pacific. The aim of this project is to hindcast cool and warm events in this part of the ocean, on seasonal to interannual timescales.
The work extends that of Bennett et al., who used a modified Zebiak–Cane coupled model. They were able to fit a year of 30-day averaged TAO data to within measurement errors, albeit with significant initial and dynamical residuals. They assumed a 100-day decorrelation timescale for the dynamical residuals. This long timescale reflects the neglect of resolvable processes in the intermediate coupled model, such as horizontal advection of momentum. In the nonlinear PE model, however, the residuals should be relatively short-timescale errors in the parameterizations. The scales for these residuals are crudely estimated from the upper-ocean turbulence studies of Peters et al. and Moum.
The assimilation is performed by minimizing a weighted least squares functional expressing the misfits to the data and to the model throughout the tropical Pacific and over 18 months. It is known that the minimum lies in the “data subspace” of the state or solution space. The minimum is therefore sought in the data subspace, using the representer method to solve the Euler–Lagrange (EL) system. Although the vector space decomposition and solution method assume a linear EL system, the concept and technique are applied to the nonlinear EL system (resulting from the nonlinear PE model) by iterating with linear approximations to it. As a first step, the authors verify that sequences of solutions of linear iterates of the forward PE model do converge. The assimilation is also used as a significance test of the hypothesized means and covariances of the errors in the initial conditions, dynamics, and data. A “strong constraint” inverse solution is computed; however, it is outperformed by the “weak constraint” inverse.
A cross validation with withheld data is presented, as well as an inversion with the model forced by the Florida State University winds in place of the climatological wind forcing used in the earlier inversions.
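The data-subspace search described above can be illustrated for a toy linear-Gaussian problem, where the representer method reduces to solving one linear system whose size is the number of observations. The covariances, observation operator, and dimensions below are invented for illustration; the paper's nonlinear EL system additionally requires iterating with linear approximations, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-Gaussian inverse problem: state of size n, m < n observations.
n, m = 80, 10
s = np.linspace(0, 1, n)
P = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.1)   # background covariance
H = np.zeros((m, n))
H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0  # direct observations
R = 0.05 * np.eye(m)                                  # obs error covariance

x_true = np.sin(2 * np.pi * s)
x_b = np.zeros(n)                                     # background (prior) state
d = H @ x_true + rng.normal(0, np.sqrt(0.05), m)      # noisy data

# Representer method: each observation contributes one representer function
# r_k = P H^T e_k; the minimizer of the weighted least squares functional
# lies in the span of these representers (the "data subspace").
representers = P @ H.T                 # n x m, one column per observation
Rmat = H @ representers + R            # representer matrix plus noise covariance
b = np.linalg.solve(Rmat, d - H @ x_b) # representer coefficients
x_a = x_b + representers @ b           # inverse estimate
```

The analysis is thus obtained from an m-dimensional solve rather than an n-dimensional one, which is the practical appeal of working in the data subspace.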
Efficient Implementation of Covariance Multiplication for Data Assimilation With the Representer Method
The application of Markov time correlation within the representer method is revised. Usually, the Markov time correlation is implemented by solving a two-point boundary-value problem that is split into two Langevin equations, one forward and the other backward in time. A new splitting of the two-point boundary-value problem is proposed. An examination of time and memory consumption is carried out when portions of the trajectory are written to and read from available disk space, and when checkpoints are used. It is shown that the new splitting reduces both the computation time and the core computer memory required to solve the variational data assimilation problem through the representer method.
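A minimal sketch of the underlying idea, assuming a discrete exponential (Markov) time correlation: the covariance can be applied with one forward and one backward first-order recursion (the discrete analogue of the two Langevin sweeps), costing O(n) instead of the O(n²) dense matrix-vector product. This illustrates the standard splitting the abstract refers to, not the paper's proposed new one.

```python
import numpy as np

def markov_covariance_apply(x, a):
    """Apply C x where C[i, j] = a**|i - j| (discrete Markov / exponential
    time correlation), using one forward and one backward first-order
    recursion instead of forming the dense matrix."""
    n = len(x)
    f = np.empty(n)  # f[i] = sum over j <= i of a^(i-j) x[j]  (forward sweep)
    g = np.empty(n)  # g[i] = sum over j >= i of a^(j-i) x[j]  (backward sweep)
    f[0] = x[0]
    for i in range(1, n):
        f[i] = x[i] + a * f[i - 1]
    g[-1] = x[-1]
    for i in range(n - 2, -1, -1):
        g[i] = x[i] + a * g[i + 1]
    return f + g - x  # x[i] is counted in both sweeps, subtract it once

rng = np.random.default_rng(2)
n, a = 200, 0.9
x = rng.normal(size=n)
fast = markov_covariance_apply(x, a)

# Dense reference for comparison: C[i, j] = a^|i - j|
idx = np.arange(n)
dense = (a ** np.abs(idx[:, None] - idx[None, :])) @ x
```

The trade-off the paper examines (writing trajectory segments to disk versus checkpointing) arises because, in the full representer computation, each sweep is a model or adjoint integration whose trajectory is too large to hold in memory.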
The Representer Method, the Ensemble Kalman Filter and the Ensemble Kalman Smoother: A Comparison Study Using a Nonlinear Reduced Gravity Ocean Model
This paper compares contending advanced data assimilation algorithms using the same dynamical model and measurements. Assimilation experiments use the ensemble Kalman filter (EnKF), the ensemble Kalman smoother (EnKS), and the representer method, with a nonlinear model and synthetic measurements of a mesoscale eddy. Twin model experiments provide the truth and assimilated state. The difference between the truth and the assimilated state is a mispositioning of an eddy in the initial state, introduced by a temporal shift. The systems are constructed to represent the dynamics, error covariances, and data density as similarly as possible, though because of the differing assumptions in the system derivations subtle differences do occur. The results reflect some of these differences in the tangent linear assumption made in the representer adjoint and in the temporal covariance of the EnKF, which does not correct initial condition errors. These differences are assessed through the accuracy of each method as a function of measurement density. Results indicate that these methods are comparably accurate for sufficiently dense measurement networks, and each is able to correct the position of a purposefully misplaced mesoscale eddy. As measurement density is decreased, the EnKS and the representer method retain accuracy longer than the EnKF. While the representer method is more accurate than the sequential methods within the time period covered by the observations (particularly during the first part of the assimilation time), the representer method is less accurate during later times and during the forecast time period for sparse networks, as the tangent linear assumption becomes less accurate. Furthermore, the representer method proves to be significantly more costly (2-4 times) than the EnKS and EnKF, even with only a few outer iterations of the iterated indirect representer method.
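For reference, a minimal stochastic EnKF analysis step with perturbed observations, of the kind compared in this study. The toy problem (direct observation of the first few components of a constant state) and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_analysis(X, y, H, obs_std):
    """One stochastic EnKF analysis step with perturbed observations.
    X: n x Ne ensemble of states; y: m observations; H: m x n operator."""
    n, Ne = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    HA = H @ A
    Pf_Ht = A @ HA.T / (Ne - 1)                  # P_f H^T from the ensemble
    S = HA @ HA.T / (Ne - 1) + obs_std**2 * np.eye(len(y))
    K = Pf_Ht @ np.linalg.inv(S)                 # Kalman gain
    Y = y[:, None] + rng.normal(0, obs_std, (len(y), Ne))  # perturbed obs
    return X + K @ (Y - H @ X)

# Toy setup: estimate a constant 5-vector from 3 noisy direct measurements.
n, Ne, m = 5, 100, 3
x_true = np.arange(n, dtype=float)
H = np.eye(m, n)                                 # observe first m components
y = H @ x_true + rng.normal(0, 0.1, m)
X0 = rng.normal(0, 2.0, (n, Ne))                 # prior ensemble far from truth
Xa = enkf_analysis(X0, y, H, obs_std=0.1)
```

The EnKS and the representer method differ from this step mainly in how they spread information in time; that temporal aspect is exactly what the comparison in the paper isolates.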
Incremental Projection Approach of Regularization for Inverse Problems
This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem whose objective function is composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization term. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses about one-seventh of the computation time of the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.
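A sketch of the idea under simplifying assumptions: instead of adding a regularization term to the objective, each gradient iterate is projected onto a convex set of admissible solutions. The set here is a simple box, and the problem is an invented deblurring toy, not the paper's motion-estimation test case; in the paper the convex set encodes the regularity information itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ill-posed toy problem: recover a signal from its Gaussian-blurred,
# noisy image, constraining iterates to a convex set via projection.
n = 60
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05**2))  # blur operator
A /= A.sum(axis=1, keepdims=True)
u_true = np.clip(np.sin(2 * np.pi * t), -1, 1)
d = A @ u_true + rng.normal(0, 1e-3, n)

def project(u):
    """Projection onto the convex set of admissible solutions (a box here;
    the paper's set encodes regularity information instead)."""
    return np.clip(u, -1.0, 1.0)

u = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2            # step <= 1/L for descent
for _ in range(500):
    u = project(u - step * A.T @ (A @ u - d))     # projected gradient step
```

Each iteration costs one gradient evaluation plus one (cheap) projection, which is where the reported speed-up over solving a penalized problem comes from.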
Sensitivity Analysis in Ocean Acoustic Propagation
The sensitivity of acoustic pressure to sound speed is investigated through adjoint-based sensitivity analysis using an acoustic propagation model. The sensitivity analysis is extended to temperature and salinity by deriving the adjoint of the polynomial expressing sound speed as a function of temperature and salinity. Numerical experiments using a range-dependent model are carried out in a deep, complex environment at a frequency of 300 Hz. It is shown that through the adjoint sensitivity analysis one can infer reasonable variations of sound speed, and thus of temperature and salinity. The successful extension of the sensitivity of acoustic pressure to temperature and salinity implies that acoustic pressure observations in a given range–depth plane can be assimilated into an ocean model using the acoustic propagation model as the observation operator.
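The chain-rule step described above can be illustrated with a simplified sound speed polynomial (Medwin's approximation is used here; the paper's exact formula may differ). Given the adjoint sensitivity dp/dc from the acoustic model, sensitivities to temperature and salinity follow by multiplying with the analytic derivatives dc/dT and dc/dS, checked below against finite differences.

```python
def sound_speed(T, S, z):
    """Medwin's simplified sound speed polynomial.
    T in deg C, S in psu, z depth in m; returns c in m/s."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

def dc_dT(T, S):
    """Analytic derivative of c with respect to temperature."""
    return 4.6 - 0.11 * T + 0.00087 * T**2 - 0.010 * (S - 35.0)

def dc_dS(T):
    """Analytic derivative of c with respect to salinity."""
    return 1.34 - 0.010 * T

# Chain rule: given dp_dc from the adjoint acoustic model,
#   dp_dT = dp_dc * dc_dT   and   dp_dS = dp_dc * dc_dS.
T, S, z = 10.0, 35.0, 100.0
h = 1e-5
fd_T = (sound_speed(T + h, S, z) - sound_speed(T - h, S, z)) / (2 * h)
fd_S = (sound_speed(T, S + h, z) - sound_speed(T, S - h, z)) / (2 * h)
```

Adjointing the polynomial amounts to applying exactly these derivative factors in reverse, which is what lets pressure observations act on the model's T and S fields.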
A Multiscale Approach to High-Resolution Ocean Profile Observations Within a 4DVAR Analysis System
Most ocean data assimilation systems are tuned to process and assimilate observations so as to constrain features on the order of the mesoscale and larger. Typically this involves removing observations or computing averaged observations. This procedure, while necessary, eliminates many observations from the analysis step and can reduce the overall effectiveness of a particular observing platform. Simply including all of these observations is not an option, as doing so can produce an overdetermined, ill-conditioned problem that is more difficult to solve. The approach presented here aims to avoid such issues while increasing the number of observations within the assimilation. A two-step assimilation procedure with the four-dimensional variational data assimilation (4DVAR) system is adopted. The first step constrains the large-scale features by assimilating a set of super observations with appropriate background error correlation scales and error variances. The second step then corrects smaller-scale features by assimilating the full observation set with shorter background error correlation scales and appropriate error variances; here the background state is taken as the analysis from the first step. Results using a real high-density observation set from underwater gliders in the region southeast of Iceland, collected during the 2017 Nordic Recognized Environmental Picture (NREP) experiment, are shown using the Navy Coastal Ocean Model 4DVAR (NCOM-4DVAR).
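A schematic of the two-step procedure using simple optimal interpolation in place of NCOM-4DVAR: a first analysis with a long correlation scale and a thinned observation set (a crude stand-in for super observations), then a second analysis started from that background with a short scale and the full set. All scales, variances, and the toy truth field are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def oi_analysis(xb, s, obs_idx, y, L, sig_b, sig_o):
    """Single optimal-interpolation (3DVar-equivalent) analysis with a
    Gaussian background correlation of length scale L."""
    B = sig_b**2 * np.exp(-0.5 * (s[:, None] - s[None, :])**2 / L**2)
    m = len(obs_idx)
    H = np.zeros((m, len(s)))
    H[np.arange(m), obs_idx] = 1.0
    S = H @ B @ H.T + sig_o**2 * np.eye(m)
    return xb + B @ H.T @ np.linalg.solve(S, y - xb[obs_idx])

n = 200
s = np.linspace(0, 1, n)
truth = np.sin(2 * np.pi * s) + 0.3 * np.sin(20 * np.pi * s)  # large + small scale
obs_idx = np.arange(0, n, 2)                                   # dense profile data
y = truth[obs_idx] + rng.normal(0, 0.05, len(obs_idx))

xb = np.zeros(n)
# Step 1: long correlation scale, thinned data -> constrain the large scale.
xa1 = oi_analysis(xb, s, obs_idx[::10], y[::10], L=0.2, sig_b=1.0, sig_o=0.05)
# Step 2: short scale, full data, step-1 analysis as the new background.
xa2 = oi_analysis(xa1, s, obs_idx, y, L=0.02, sig_b=0.3, sig_o=0.05)
```

The second step can safely use the full high-density data because the background it corrects is already consistent with the large-scale field, leaving only short-scale increments to estimate.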
A Primer on Global Internal Tide and Internal Gravity Wave Continuum Modeling in HYCOM and MITgcm
In recent years, high-resolution global three-dimensional ocean general circulation models have begun to include astronomical tidal forcing alongside atmospheric forcing. Such models can carry an internal tide field with a realistic amount of nonstationarity, and an internal gravity wave continuum spectrum that compares more closely with observations as model resolution increases. Global internal tide and gravity wave models are important for understanding the three-dimensional geography of ocean mixing, for operational oceanography, and for simulating and interpreting satellite altimeter observations. Here we describe the most important technical details behind such models, including atmospheric forcing, bathymetry, astronomical tidal forcing, self-attraction and loading, quadratic bottom boundary layer drag, parameterized topographic internal wave drag, shallow-water tidal equations, and a brief summary of the theory of linear internal gravity waves. We focus on simulations run with two models, the HYbrid Coordinate Ocean Model (HYCOM) and the Massachusetts Institute of Technology general circulation model (MITgcm). We compare the modeled internal tides and internal gravity wave continuum to satellite altimeter observations, moored observational records, and the predictions of the Garrett–Munk (1975) internal gravity wave continuum spectrum. We briefly examine specific topics of interest, such as tidal energetics, internal tide nonstationarity, and the role of nonlinearities in generating the modeled internal gravity wave continuum. We also describe our first attempts at using a Kalman filter to improve the accuracy of tides embedded within a general circulation model. We discuss the challenges and opportunities of modeling stationary internal tides, nonstationary internal tides, and the internal gravity wave continuum spectrum for satellite altimetry and other applications.
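Of the model ingredients listed above, the quadratic bottom boundary layer drag is the simplest to state compactly. A sketch follows, with a typical non-dimensional drag coefficient; the values actually used in HYCOM or MITgcm runs are configuration choices and are assumed here.

```python
import numpy as np

def quadratic_bottom_drag(u, v, cd=2.5e-3):
    """Quadratic bottom boundary layer drag stress:
    tau = -rho * cd * |u| * (u, v), in N/m^2.
    cd ~ 2.5e-3 is a typical non-dimensional drag coefficient (assumed)."""
    rho = 1025.0              # reference seawater density, kg/m^3
    speed = np.hypot(u, v)    # bottom current speed |u|, m/s
    return -rho * cd * speed * u, -rho * cd * speed * v

# A 0.5 m/s bottom current with components (0.3, 0.4) m/s:
tau_x, tau_y = quadratic_bottom_drag(0.3, 0.4)
```

Because the stress scales with the square of the current speed, this term damps barotropic tidal currents much more strongly in energetic shallow seas than in the deep ocean, which is why the parameterized topographic wave drag is needed separately for deep internal tide generation sites.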
Data assimilation considerations for improved ocean predictability during the Gulf of Mexico Grand Lagrangian Deployment (GLAD)
• Extensive drifter observations enable new insight into data assimilation.
• Background error covariance is where assumptions have historically been placed.
• Components of the background error covariance are tested to determine their impact.
• The amplitude of the background error covariance is a critical factor.
• Time correlation in background errors must be considered in 3DVar and 4DVar.

Ocean prediction systems rely on an array of assumptions to optimize their data assimilation schemes. Many of these remain untested, especially at smaller scales, because sufficiently dense observations are very rare. A set of 295 drifters deployed in July 2012 in the northeastern Gulf of Mexico provides a unique opportunity to test these systems down to scales previously unobtainable. In this study, background error covariance assumptions in the 3DVar assimilation process are perturbed to understand their effect on the solution relative to the withheld dense drifter data. Results show that the amplitude of the background error covariance is an important factor, as expected, and a proposed new formulation provides added skill. In addition, the background error covariance time correlation is important for allowing satellite observations to affect the results over a period longer than one daily assimilation cycle. The results show that the new background error covariance formulations provide more accurate placement of frontal positions, directions of currents, and velocity magnitudes. These conclusions have implications for the implementation of 3DVar systems as well as the analysis interval of 4DVar systems.
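The role of the background error covariance amplitude can already be seen in the scalar 3DVar/OI update, where it sets the weight an observation receives. The formulation below is the textbook scalar gain, not the paper's proposed new covariance formulation.

```python
def scalar_analysis(xb, y, sig_b, sig_o):
    """Scalar 3DVar/OI update: the analysis weight on the observation is
    sigma_b^2 / (sigma_b^2 + sigma_o^2), so over- or under-stating the
    background error amplitude over- or under-draws toward the data."""
    w = sig_b**2 / (sig_b**2 + sig_o**2)   # scalar Kalman gain
    return xb + w * (y - xb)

xb, y, sig_o = 0.0, 1.0, 0.1
under = scalar_analysis(xb, y, sig_b=0.05, sig_o=sig_o)  # amplitude too small
tuned = scalar_analysis(xb, y, sig_b=0.30, sig_o=sig_o)  # larger amplitude
```

With a misspecified (too small) background amplitude the analysis barely moves toward the observation; this is the scalar analogue of the amplitude sensitivity the drifter experiments expose.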