
    Comparison of stochastic parameterizations in the framework of a coupled ocean-atmosphere model

    A new framework is proposed for the evaluation of stochastic subgrid-scale parameterizations in the context of MAOOAM, a coupled ocean-atmosphere model of intermediate complexity. Two physically based parameterizations are investigated: the first is based on the singular perturbation of the Markov operator, also known as homogenization; the second is a recently proposed parameterization based on Ruelle's response theory. The two parameterizations are implemented in a rigorous way, assuming, however, that the relevant statistics of the unresolved scales are Gaussian. They are extensively tested on a low-order version of the model known to exhibit low-frequency variability, and some preliminary results are obtained for an intermediate-order version. Several different configurations of the resolved-unresolved scale separation are then considered. Both parameterizations show remarkable performance in correcting the impact of model errors, and are even able to change the modality of the probability distributions. Their respective limitations are also discussed.
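
    The abstract does not give the parameterizations' equations, but the flavor of a Gaussian stochastic subgrid closure can be sketched in a few lines: the unresolved fast forcing is replaced by Gaussian red noise acting on the resolved variable. The toy one-variable system, coupling constants, and noise scales below are purely illustrative assumptions, not the MAOOAM setup or either of the two schemes named above.

        # Minimal sketch of a Gaussian stochastic subgrid closure:
        # the unresolved forcing is modelled as an AR(1) (red-noise)
        # process driving the resolved variable.  All constants are
        # assumptions for illustration.
        import numpy as np

        rng = np.random.default_rng(0)
        dt, n_steps = 0.01, 5000
        a, b = 1.0, 0.5          # assumed resolved-scale coefficients
        sigma, tau = 0.3, 0.1    # assumed noise amplitude and memory time

        x, eta = 1.0, 0.0
        traj = np.empty(n_steps)
        for k in range(n_steps):
            # Ornstein-Uhlenbeck surrogate for the unresolved-scale forcing
            eta += (-eta / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            # resolved dynamics plus the stochastic closure term
            x += (-a * x**3 + b * x + eta) * dt
            traj[k] = x

        print("resolved-scale mean/std:", traj.mean(), traj.std())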

    Model error and sequential data assimilation: A deterministic formulation

    Data assimilation schemes are confronted with the presence of model errors arising from the imperfect description of atmospheric dynamics. These errors are usually modeled on the basis of simple assumptions such as a bias, white noise, or a first-order Markov process. In the present work, a formulation of the sequential extended Kalman filter is proposed, based on recent findings on the universal deterministic behavior of model errors (Nicolis, 2004), in sharp contrast with previous approaches. This new scheme is applied in the context of the spatially distributed system proposed by Lorenz (1996). It is found that (i) for short times, the estimation error is accurately approximated by an evolution law in which the variance of the model error (assumed to be a deterministic process) evolves according to a quadratic law, in agreement with the theory; moreover, the correlation with the initial-condition error appears to play a secondary role in the short-time dynamics of the estimation-error covariance; and (ii) the deterministic description of the model-error evolution, incorporated into the classical extended Kalman filter equations, reveals that substantial improvements in the filter accuracy can be gained compared with the classical white-noise assumption. The universal, short-time, quadratic law for the evolution of the model-error covariance matrix seems very promising for modeling estimation-error dynamics in sequential data assimilation.
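
    The key algorithmic change described above is easy to state in code: in the forecast step of the extended Kalman filter, the model-error contribution to the forecast covariance grows quadratically with lead time instead of being injected as white noise. The linear toy model, observation operator, and error amplitudes below are assumptions for illustration; only the quadratic-in-time scaling comes from the abstract.

        # Minimal sketch of an EKF cycle with a deterministic,
        # quadratically growing model-error covariance term.
        import numpy as np

        n = 3
        M = np.eye(n) + 0.05 * np.array([[0., 1., 0.],
                                         [-1., 0., 1.],
                                         [0., -1., 0.]])   # toy tangent model (assumed)
        H = np.eye(n)                 # observe the full state, for simplicity
        R = 0.01 * np.eye(n)          # observation-error covariance (assumed)
        S = 0.001 * np.eye(n)         # model-error "direction" matrix (assumed)

        def forecast(xa, Pa, t):
            xf = M @ xa
            # deterministic model error: variance ~ t**2 at short lead times,
            # replacing the usual additive white-noise covariance Q
            Pf = M @ Pa @ M.T + t**2 * S
            return xf, Pf

        def analysis(xf, Pf, y):
            K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
            xa = xf + K @ (y - H @ xf)
            Pa = (np.eye(n) - K @ H) @ Pf
            return xa, Pa

        xa, Pa = np.ones(n), 0.1 * np.eye(n)
        xf, Pf = forecast(xa, Pa, t=0.5)
        xa, Pa = analysis(xf, Pf, y=np.ones(n))
        print("analysis covariance trace:", np.trace(Pa))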

    Simulating model uncertainty of subgrid-scale processes by sampling model errors at convective scales

    Ideally, perturbation schemes in ensemble forecasts should be based on the statistical properties of the model errors. Often, however, the statistical properties of these model errors are unknown. In practice, the perturbations are pragmatically modelled and tuned to maximize the skill of the ensemble forecast. In this paper a general methodology is developed to diagnose the model error, linked to a specific physical process, based on a comparison between a target and a reference model. Here, the reference model is a configuration of the ALADIN (Aire Limitée Adaptation Dynamique Développement International) model with a parameterization of deep convection. This configuration is also run with the deep-convection parameterization scheme switched off, which degrades the forecast skill. The model error is then defined as the difference in the energy and mass fluxes between the reference model, with its scale-aware deep-convection parameterization, and the target model, without deep-convection parameterization. In the second part of the paper, the diagnosed model-error characteristics are used to stochastically perturb the fluxes of the target model by sampling the model errors from a training period in such a way that the distribution and the vertical and multivariate correlations within a grid column are preserved. By perturbing the fluxes it is guaranteed that the total mass, heat and momentum are conserved. The tests, performed over the period 11–20 April 2009, show that the ensemble system with the stochastic flux perturbations combined with the initial-condition perturbations not only outperforms the target ensemble, where deep convection is not parameterized, but for many variables it even performs better than the reference ensemble (with the scale-aware deep-convection scheme). The introduction of the stochastic flux perturbations reduces the small-scale erroneous spread while increasing the overall spread, leading to a more skillful ensemble. The impact is largest in the upper troposphere, with substantial improvements compared to other state-of-the-art stochastic perturbation schemes. At lower levels the improvements are smaller or neutral, except for temperature, where the forecast skill is degraded.
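
    The resampling idea in the second part of the abstract can be sketched directly: instead of drawing independent noise per level and variable, one draws an entire archived grid-column error profile at once, which preserves the vertical and multivariate correlations by construction. The array shapes, names, and the stand-in random archive below are assumptions for illustration.

        # Minimal sketch: perturb model fluxes by resampling whole
        # grid-column error profiles from a training archive.
        import numpy as np

        rng = np.random.default_rng(42)
        n_train, n_lev, n_var = 200, 60, 3   # archived cases, levels, flux variables
        # stand-in for the diagnosed model-error archive (assumed data)
        train_errors = rng.standard_normal((n_train, n_lev, n_var))

        def perturb_column(fluxes):
            """fluxes: (n_lev, n_var) column of target-model fluxes."""
            # draw one archived case and add its full profile at once:
            # vertical and cross-variable correlations stay intact
            sample = train_errors[rng.integers(n_train)]
            return fluxes + sample

        column = np.zeros((n_lev, n_var))
        print("perturbed column shape:", perturb_column(column).shape)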

    Low-frequency variability and heat transport in a low-order nonlinear coupled ocean-atmosphere model

    We formulate and study a low-order nonlinear coupled ocean-atmosphere model with an emphasis on the impact of radiative and heat fluxes and of the frictional coupling between the two components. This model version extends a previous 24-variable version by adding a dynamical equation for the passive advection of temperature in the ocean, together with an energy balance model. The bifurcation analysis and the numerical integration of the model reveal the presence of low-frequency variability (LFV) concentrated on and near a long-periodic, attracting orbit. This orbit combines atmospheric and oceanic modes, and it arises for large values of the meridional gradient of radiative input and of the frictional coupling. Chaotic behavior develops around this orbit as it loses its stability; this behavior is still dominated by the LFV on decadal and multi-decadal time scales that is typical of oceanic processes. Atmospheric diagnostics also reveal the presence of predominant low- and high-pressure zones, as well as of a subtropical jet; these features recall realistic climatological properties of the atmosphere over the ocean. Finally, a predictability analysis is performed. Once the decadal-scale periodic orbits develop, the coupled system's short-term instabilities, as measured by its Lyapunov exponents, are drastically reduced, indicating the ocean's stabilizing role in the atmospheric dynamics. On decadal time scales, the recurrence of the solution in a certain region of the invariant subspace associated with the slow modes displays some extended predictability, as reflected by the oscillatory behavior of the error for the atmospheric variables at long lead times.
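
    The Lyapunov exponents used above as a measure of short-term instability are typically estimated with the standard QR (Benettin-type) method, sketched below for the Lorenz 63 system as a stand-in for the coupled model; the Euler integrator, step size, and run length are illustrative assumptions.

        # Minimal sketch: leading Lyapunov exponents via repeated
        # QR re-orthonormalization of tangent vectors.
        import numpy as np

        def f(x, s=10.0, r=28.0, b=8.0 / 3.0):
            return np.array([s * (x[1] - x[0]),
                             x[0] * (r - x[2]) - x[1],
                             x[0] * x[1] - b * x[2]])

        def jac(x, s=10.0, r=28.0, b=8.0 / 3.0):
            return np.array([[-s, s, 0.0],
                             [r - x[2], -1.0, -x[0]],
                             [x[1], x[0], -b]])

        dt, n_steps = 0.005, 100000
        x = np.array([1.0, 1.0, 1.0])
        Q = np.eye(3)
        lyap = np.zeros(3)
        for _ in range(n_steps):
            x = x + dt * f(x)             # Euler step (illustrative only)
            Q = Q + dt * (jac(x) @ Q)     # evolve the tangent vectors
            Q, R = np.linalg.qr(Q)        # re-orthonormalize
            lyap += np.log(np.abs(np.diag(R)))

        print("Lyapunov exponents:", lyap / (n_steps * dt))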

    Post-processing through linear regression

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of the forecast error, the optimal variability of the corrected forecast, and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz 63 system, whose model version is affected by both initial-condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades as the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best-member OLS with noise). At long lead times, the regression schemes that yield the correct variability and the largest correlation between ensemble error and spread (EVMOS, TDTR) should be preferred.
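
    As an illustration of the common core of these schemes, the sketch below fits an ordinary least-squares correction and a Tikhonov-regularized variant to a synthetic training set and applies them to a new forecast. The synthetic data, the fixed regularization parameter, and the two-predictor setup are assumptions; TDTR's time dependence and the other schemes are not reproduced here.

        # Minimal sketch: linear-regression post-processing of a forecast,
        # with OLS and a Tikhonov (ridge) regularized variant.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        forecasts = rng.standard_normal((n, 2))   # two predictors per case (assumed)
        obs = 1.5 * forecasts[:, 0] - 0.4 + 0.1 * rng.standard_normal(n)

        X = np.column_stack([np.ones(n), forecasts])   # add intercept column

        # ordinary least squares: beta = (X^T X)^{-1} X^T y
        beta_ols = np.linalg.solve(X.T @ X, X.T @ obs)

        # Tikhonov regularization: beta = (X^T X + lam I)^{-1} X^T y
        # (a fixed lam; the paper's TDTR makes this time-dependent)
        lam = 1.0
        beta_tik = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ obs)

        new_fc = np.array([1.0, 0.2, -0.3])            # [1, predictor1, predictor2]
        print("OLS-corrected:", new_fc @ beta_ols,
              "Tikhonov-corrected:", new_fc @ beta_tik)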