80 research outputs found
Model error and sequential data assimilation: a deterministic formulation
Data assimilation schemes are confronted with the presence of model errors
arising from the imperfect description of atmospheric dynamics. These errors
are usually modeled on the basis of simple assumptions such as bias, white
noise or a first-order Markov process. In the present work, a formulation of the
sequential extended Kalman filter is proposed, based on recent findings on the
universal deterministic behavior of model errors (Nicolis, 2004), in sharp
contrast with previous approaches. This new scheme is applied in the context of a
spatially distributed system proposed by Lorenz (1996). It is found that (i)
for short times, the estimation error is accurately approximated by an
evolution law in which the variance of the model error (assumed to be a
deterministic process) evolves according to a quadratic law, in agreement with
the theory. Moreover, the correlation with the initial condition error appears
to play a secondary role in the short time dynamics of the estimation error
covariance. (ii) The deterministic description of the model error evolution,
incorporated into the classical extended Kalman filter equations, reveals that
substantial improvements of the filter accuracy can be gained as compared with
the classical white noise assumption. The universal, short time, quadratic law
for the evolution of the model error covariance matrix seems very promising for
modeling estimation error dynamics in sequential data assimilation.
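The idea can be sketched in a minimal toy filter. The example below is a scalar linear Kalman filter (so the "extended" part is trivial) in which the model-error contribution to the predicted error covariance grows quadratically with lead time, as the abstract's short-time law suggests, instead of linearly as under a white-noise assumption. All dynamics, coefficients and the value of `alpha` are invented for illustration and are not taken from the paper.

```python
import numpy as np

def ekf_step(x, P, z, M, H, R, alpha, dt):
    """One assimilation cycle with a quadratic-in-time model-error covariance."""
    # forecast step
    x_f = M * x                       # (linearised) model propagation
    Q = alpha * dt**2                 # deterministic short-time quadratic law
    P_f = M * P * M + Q               # predicted error covariance
    # analysis step
    K = P_f * H / (H * P_f * H + R)   # Kalman gain
    x_a = x_f + K * (z - H * x_f)
    P_a = (1.0 - K * H) * P_f
    return x_a, P_a

# synthetic experiment: the truth evolves with slightly different dynamics,
# so the forecast model is imperfect (numbers are illustrative)
rng = np.random.default_rng(0)
M_model, M_truth, H, R, alpha, dt = 0.95, 0.97, 1.0, 0.1, 0.5, 0.1
x_true, x, P = 1.0, 0.8, 1.0
errors = []
for _ in range(200):
    x_true = M_truth * x_true
    z = H * x_true + rng.normal(0.0, np.sqrt(R))
    x, P = ekf_step(x, P, z, M_model, H, R, alpha, dt)
    errors.append((x - x_true) ** 2)
print(np.mean(errors[50:]))  # time-mean squared estimation error
```

The only change relative to a textbook Kalman filter is the line computing `Q`; in the white-noise setting that term would instead be proportional to `dt`.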
Post-processing through linear regression
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz (1963) system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Unlike the other techniques, GM degrades as the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best-member OLS with noise). At long lead times, the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
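The simplest of the schemes compared, ordinary least squares, can be sketched in a few lines: regress observations on forecasts over a training period and apply the fitted line to new forecasts. The data below are synthetic (a forecast with an invented bias and amplitude error); TDTR, GM, EVMOS and the best-member variants are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(15.0, 3.0, size=500)              # synthetic "observed" temperature
fcst = 2.0 + 1.3 * obs + rng.normal(0, 1.0, 500)   # biased, over-amplified forecast

# fit obs ~ a + b * fcst on a training split (OLS via least squares)
X = np.column_stack([np.ones(400), fcst[:400]])
a, b = np.linalg.lstsq(X, obs[:400], rcond=None)[0]

# apply the correction on the verification split
corrected = a + b * fcst[400:]
rmse_raw = np.sqrt(np.mean((fcst[400:] - obs[400:]) ** 2))
rmse_cor = np.sqrt(np.mean((corrected - obs[400:]) ** 2))
print(rmse_raw, rmse_cor)  # the correction should reduce the RMSE
```

OLS minimises the RMSE of the corrected forecast but, as the abstract notes, it also damps the forecast variability; that is precisely the deficiency the variability-preserving schemes (EVMOS, TDTR) are designed to address.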
Dynamical properties of MOS forecasts: analysis of the ECMWF operational forecasting system
The dynamical properties of ECMWF operational forecasts corrected by a (linear) model output statistics (MOS) technique are investigated, in light of the analysis performed in the context of low-order chaotic systems. Based on the latter work, the respective roles of the initial condition and model errors on the forecasts can be disentangled. For the temperature forecasted by the ECMWF model over Belgium, it is found that (i) the error amplification arising from the presence of uncertainties in the initial conditions dominates the error dynamics of the "free" atmosphere and (ii) the temperature at 2 m can be partly corrected by the use of the (linear) MOS technique (as expected from earlier works), suggesting that model errors and systematic initial condition biases dominate at the surface. In the latter case, the respective amplitudes of the model errors and systematic initial condition biases corrected by MOS depend on the location of the synoptic station. In addition, for a two-observables MOS scheme, the best second predictor is the temperature predicted at 850 hPa in the central part of the country, while for the coastal zone it is the sensible heat flux entering in the evolution of the surface temperature. These differences are associated with a dominant problem of vertical temperature interpolation in the central and east parts of the country and a difficulty in assessing correctly the surface heat fluxes on the coastal zone. Potential corrections of these problems using higher-resolution models are also discussed.
Preface: Advances in post-processing and blending of deterministic and ensemble forecasts
The special issue on advances in post-processing and blending of deterministic and ensemble forecasts is the outcome of several successful successive sessions organized at the General Assembly of the European Geosciences Union. Statistical post-processing and blending of forecasts are currently receiving considerable attention and development effort in many countries in order to produce optimal forecasts. Ten contributions have been received, covering key aspects of current concerns on statistical post-processing, namely the restoration of inter-variable dependences, the impact of model changes on the statistical relationships and how to cope with it, the operational implementation at forecasting centers, the development of appropriate metrics for forecast verification, and finally two specific applications to snow forecasts and seasonal forecasts of the North Atlantic Oscillation.
Causal dependences between the coupled ocean–atmosphere dynamics over the tropical Pacific, the North Pacific and the North Atlantic
The causal dependences (in a dynamical sense) between the dynamics of three
different coupled ocean–atmosphere basins, the North Atlantic, the North
Pacific and the tropical Pacific region (Niño3.4), have been explored using
data from three reanalysis datasets, namely ORA-20C, ORAS4 and ERA-20C. The
approach is based on convergent cross mapping (CCM) developed by
Sugihara et al. (2012) that allows for evaluating the dependences between
variables beyond the classical teleconnection patterns based on correlations.
The use of CCM on these data mostly reveals that (i) the tropical Pacific
(Niño3.4 region) only influences the dynamics of the North Atlantic region
through its annual climatological cycle; (ii) the atmosphere over the North
Pacific is dynamically forcing the North Atlantic on a monthly basis; (iii)
on longer timescales (interannual), the dynamics of the North Pacific and
the North Atlantic are influencing each other through the ocean dynamics,
suggesting a connection through the thermohaline circulation. These findings shed new light on the coupling between these three different regions of the globe. In
particular, they call for a deep reassessment of the way teleconnections are
interpreted and for a more rigorous way to evaluate dynamical dependences
between the different components of the climate system.
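The core of convergent cross mapping is compact enough to sketch: delay-embed the candidate *effect* series into a shadow manifold, then use its nearest neighbours to estimate the candidate *cause*; if the cause truly forces the effect, the cross-map skill is high. The embedding parameters and the unidirectionally coupled logistic maps below are standard illustrative choices in the CCM literature, not the reanalysis data or settings of the paper, and the convergence test over growing library sizes is omitted for brevity.

```python
import numpy as np

def delay_embed(x, E, tau):
    """Delay-embedding matrix: row i is (x[i], x[i+tau], ..., x[i+(E-1)tau])."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(E)])

def ccm_skill(source, target, E=2, tau=1):
    """Correlation between `source` and its cross-map estimate from
    the shadow manifold of `target` (simplex-style weighting)."""
    M = delay_embed(target, E, tau)          # shadow manifold of the target
    s = source[(E - 1) * tau:]               # time-aligned source values
    pred = np.empty(len(M))
    for i in range(len(M)):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        nn = np.argsort(d)[: E + 1]          # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn[0]], 1e-12))
        w /= w.sum()
        pred[i] = w @ s[nn]                  # cross-mapped estimate
    return np.corrcoef(pred, s)[0, 1]

# unidirectionally coupled logistic maps: x forces y, not the reverse
N = 400
x = np.empty(N); y = np.empty(N)
x[0], y[0] = 0.4, 0.2
for t in range(N - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.4 * x[t])

skill = ccm_skill(x, y)   # recover the forcing x from y's manifold
print(skill)
```

Because `y`'s history encodes the forcing it received, cross-mapping `x` from `y`'s manifold succeeds; in the full method one checks that this skill converges upward as the library of manifold points grows.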
Exploring the Lyapunov instability properties of high-dimensional atmospheric and climate models
The stability properties of intermediate-order climate models are investigated by computing their Lyapunov exponents (LEs). The two models considered are PUMA (Portable University Model of the Atmosphere), a primitive-equation simple general circulation model, and MAOOAM (Modular Arbitrary-Order Ocean-Atmosphere Model), a quasi-geostrophic coupled ocean–atmosphere model on a β-plane. We wish to investigate the effect of the different levels of filtering on the instabilities and dynamics of the atmospheric flows. Moreover, we assess the impact of the oceanic coupling, the dissipation scheme, and the resolution on the spectra of LEs.
The PUMA Lyapunov spectrum is computed for two different values of the meridional temperature gradient defining the Newtonian forcing to the temperature field. The increase in the gradient gives rise to a higher baroclinicity and stronger instabilities, corresponding to a larger dimension of the unstable manifold and a larger first LE. The Kaplan–Yorke dimension of the attractor increases as well. The convergence rate of the rate function for the large deviation law of the finite-time Lyapunov exponents (FTLEs) is fast for all exponents, which can be interpreted as resulting from the absence of a clear-cut atmospheric timescale separation in such a model.
The MAOOAM spectra show that the dominant atmospheric instability is correctly represented even at low resolutions. However, the dynamics of the central manifold, which is mostly associated with the ocean dynamics, is not fully resolved because of its associated long timescales, even at intermediate orders. As expected, increasing the mechanical atmosphere–ocean coupling coefficient or introducing a turbulent diffusion parametrisation reduces the Kaplan–Yorke dimension and Kolmogorov–Sinai entropy. In all considered configurations, we are not yet in the regime in which one can robustly define large deviation laws describing the statistics of the FTLEs.
This paper highlights the need to investigate the natural variability of the atmosphere–ocean coupled dynamics by associating rates of growth and decay of perturbations with the physical modes described using the formalism of the covariant Lyapunov vectors and considering long integrations in order to disentangle the dynamical processes occurring at all timescales.
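The machinery behind such spectra is the classic Benettin/QR algorithm: propagate a set of tangent vectors alongside the trajectory and re-orthonormalise them periodically, accumulating the logarithms of the stretching factors. A minimal sketch on the three-variable Lorenz-63 system is given below (the paper applies the same idea to the far larger PUMA and MAOOAM); the step sizes, integration length, and the first-order tangent step are illustrative choices.

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def jacobian(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

def lyapunov_spectrum(v, dt=0.005, n_steps=40000, n_transient=2000):
    Q = np.eye(3)                     # tangent vectors as columns
    sums = np.zeros(3)
    for step in range(n_steps):
        # one RK4 step for the trajectory
        k1 = lorenz(v); k2 = lorenz(v + 0.5 * dt * k1)
        k3 = lorenz(v + 0.5 * dt * k2); k4 = lorenz(v + dt * k3)
        v = v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        # first-order (Euler) step for the tangent vectors, then QR
        Q = Q + dt * jacobian(v) @ Q
        Q, R = np.linalg.qr(Q)
        if step >= n_transient:       # discard the transient
            sums += np.log(np.abs(np.diag(R)))
    return sums / ((n_steps - n_transient) * dt)

les = lyapunov_spectrum(np.array([1.0, 1.0, 1.0]))
print(les)  # roughly (0.9, 0.0, -14.6) for the classical parameters
```

The diagonal of `R` carries the per-step stretching of each orthogonalised direction; partial sums of the resulting exponents also give the Kaplan–Yorke dimension mentioned in the abstract.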
Recalibrating wind-speed forecasts using regime-dependent ensemble model output statistics
This is the final version, available on open access from Wiley via the DOI in this record.
Raw output from deterministic numerical weather prediction models is typically subject
to systematic biases. Although ensemble forecasts provide invaluable information
regarding the uncertainty in a prediction, they themselves often misrepresent the
weather that occurs. Given their widespread use, the need for high-quality wind
speed forecasts is well-documented. Several statistical approaches have therefore been
proposed to recalibrate ensembles of wind speed forecasts, including a heteroscedastic
truncated regression approach. An extension to this method that utilises the prevailing
atmospheric flow is implemented here in a quasigeostrophic simulation study and
on GEFS reforecast data, in the hope of alleviating errors owing to changes in
the synoptic-scale atmospheric state. When the wind speed strongly depends on the
underlying weather regime, the resulting forecasts have the potential to provide
substantial improvements in skill upon conventional post-processing techniques. This
is particularly pertinent at longer lead times, where there is more improvement to be
gained upon current methods, and in weather regimes associated with wind speeds that
differ greatly from climatology. In order to realise this potential, an accurate prediction
of the future atmospheric regime is required.
Funding: Natural Environment Research Council (NERC).
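The benefit of conditioning post-processing on the atmospheric state can be illustrated with a deliberately simple stand-in: fit a separate linear recalibration of the wind-speed forecast in each weather regime and compare it with a single global fit. The regimes, data, and biases below are synthetic, and the paper's actual model, a heteroscedastic truncated-normal EMOS regression, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
regime = rng.integers(0, 2, size=n)            # two synthetic weather regimes
speed = rng.gamma(2.0, 2.0, size=n)            # synthetic "observed" wind speed
bias = np.where(regime == 0, 1.5, -2.0)        # opposite bias in each regime
fcst = speed + bias + rng.normal(0, 0.5, n)    # raw forecast

def fit_linear(f, o):
    """OLS fit of obs ~ a + b * forecast."""
    X = np.column_stack([np.ones_like(f), f])
    return np.linalg.lstsq(X, o, rcond=None)[0]

# one global correction vs one correction per regime
a_g, b_g = fit_linear(fcst, speed)
global_cor = a_g + b_g * fcst
regime_cor = np.empty(n)
for r in (0, 1):
    m = regime == r
    a, b = fit_linear(fcst[m], speed[m])
    regime_cor[m] = a + b * fcst[m]

rmse = lambda e: np.sqrt(np.mean(e ** 2))
print(rmse(global_cor - speed), rmse(regime_cor - speed))
```

When the biases in the two regimes point in opposite directions, the global fit cannot remove either of them, while the regime-dependent fit can; this is the situation in which the abstract reports the largest gains, provided the future regime can be predicted.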
Quantitative rainfall analysis of the 2021 mid-July flood event in Belgium
The exceptional flood of July 2021 in central Europe impacted Belgium severely. As rainfall was the triggering factor of this event, this study aims to characterize rainfall amounts in Belgium from 13 to 16 July 2021 based on two types of observational data. First, observations recorded by high-quality rain gauges operated by weather and hydrological services in Belgium have been compiled and quality checked. Second, a radar-based rainfall product has been improved to provide a reliable estimation of quantitative precipitation at high spatial and temporal resolutions over Belgium. Several analyses of these data are performed here to describe the spatial and temporal distribution of rainfall during the event. These analyses indicate that the rainfall accumulations during the event reached unprecedented levels over large areas. Accumulations over durations from 1 to 3 d significantly exceeded the 200-year return level in several places, with up to 90 % of exceedance over the 200-year return level for 2 and 3 d values locally in the Vesdre Basin. Such a record-breaking event needs to be documented as much as possible, and available observational data must be shared with the scientific community for further studies in hydrology, in urban planning and, more generally, in all multi-disciplinary studies aiming to identify and understand the factors leading to such a disaster. The corresponding rainfall data are therefore provided freely in a supplement (Journée et al., 2023; Goudenhoofdt et al., 2023).
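For readers unfamiliar with the "T-year return level" quoted above: under a Generalized Extreme Value (GEV) fit to annual maxima with parameters (mu, sigma, xi), the T-year level is the quantile exceeded with probability 1/T in any year. The parameter values below are invented for illustration; they are not the fitted values behind the 200-year levels cited in the abstract.

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) distribution:
    z_T = mu + (sigma / xi) * ((-log(1 - 1/T))**(-xi) - 1)."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                       # Gumbel limit (xi -> 0)
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# hypothetical daily-rainfall GEV parameters (mm), for illustration only
z200 = gev_return_level(mu=60.0, sigma=12.0, xi=0.1, T=200)
print(z200)
```

A positive shape parameter `xi`, common for heavy rainfall, makes the return level grow without bound as T increases, which is why multi-day accumulations far beyond the 200-year level are so significant.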
Numerical convergence of the block-maxima approach to the Generalized Extreme Value distribution
In this paper we perform an analytical and numerical study of Extreme Value
distributions in discrete dynamical systems. In this setting, recent works have
shown how to obtain statistics of extremes in agreement with the classical
Extreme Value Theory. We pursue these investigations by giving analytical
expressions of Extreme Value distribution parameters for maps that have an
absolutely continuous invariant measure. We compare these analytical results
with numerical experiments in which we study the convergence to limiting
distributions using the so-called block-maxima approach, pointing out in which
cases we obtain robust estimation of parameters. In regular maps for which
mixing properties do not hold, we show that the fitting procedure to the
classical Extreme Value Distribution fails, as expected. However, we obtain an
empirical distribution that can be explained starting from a different
observable function for which Nicolis et al. [2006] have found analytical
results.
Comment: 34 pages, 7 figures; Journal of Statistical Physics 201
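The block-maxima procedure itself is short: iterate a map, record an observable, keep the maximum of each block, and fit a GEV to the resulting sample. The sketch below uses the fully chaotic logistic map and a distance observable as illustrative choices (not necessarily the maps studied in the paper); the question of how the fit behaves as the block length grows is exactly what the paper investigates.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
n_blocks, block_len = 200, 1000
x = rng.random()                      # random initial condition in (0, 1)
maxima = []
for _ in range(n_blocks):
    block_max = -np.inf
    for _ in range(block_len):
        x = 4.0 * x * (1.0 - x)       # logistic map, chaotic regime (r = 4)
        obs = -np.log(abs(x - 0.3))   # distance observable around x0 = 0.3
        block_max = max(block_max, obs)
    maxima.append(block_max)

# note scipy's convention: its shape c equals minus the GEV shape xi
c, loc, scale = genextreme.fit(maxima)
print(c, loc, scale)
```

For maps with an absolutely continuous invariant measure the fit is expected to be robust for long blocks, while for regular (non-mixing) maps the abstract reports that the same procedure fails.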
Universal behavior of extreme value statistics for selected observables of dynamical systems
The main results of the extreme value theory developed for the investigation
of the observables of dynamical systems rely, up to now, on the Gnedenko
approach. In this framework, extremes are basically identified with the block
maxima of the time series of the chosen observable, in the limit of infinitely
long blocks. It has been proved that, assuming suitable mixing conditions for
the underlying dynamical systems, the extremes of a specific class of
observables are distributed according to the so-called Generalized Extreme
Value (GEV) distribution. Direct calculations show that in the case of
quasi-periodic dynamics the block maxima are not distributed according to the
GEV distribution. In this paper we show that, in order to obtain a universal
behaviour of the extremes, the requirement of a mixing dynamics can be relaxed
if the Pareto approach is used, based upon considering the exceedances over a
given threshold. Requiring that the invariant measure locally scales with a
well-defined exponent, the local dimension, we show that the limiting
distribution for the exceedances of the observables previously studied with the
Gnedenko approach is a Generalized Pareto distribution whose parameters
depend only on the local dimensions and the value of the threshold. This
result allows us to extend the extreme value theory for dynamical systems to the
case of regular motions. We also provide connections with the results obtained
with the Gnedenko approach. In order to provide further support to our
findings, we present the results of numerical experiments carried out
considering the well-known Chirikov standard map.
Comment: 7 pages, 1 figure
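The Pareto approach described in this abstract can be sketched numerically in a few lines: collect the observable along an orbit, keep the exceedances over a high threshold, and fit a Generalized Pareto distribution. The map, observable, and threshold quantile below are illustrative choices, not the quasi-periodic systems or the standard map studied in the paper.

```python
import numpy as np
from scipy.stats import genpareto

x = 0.618                              # arbitrary initial condition
series = np.empty(100_000)
for t in range(series.size):
    x = 4.0 * x * (1.0 - x)            # logistic map, chaotic regime (r = 4)
    series[t] = -np.log(abs(x - 0.3))  # distance observable around x0 = 0.3

u = np.quantile(series, 0.99)          # high threshold (99th percentile)
exceedances = series[series > u] - u   # peaks over threshold
# fit a GPD to the exceedances, with the location pinned at zero
xi, loc, scale = genpareto.fit(exceedances, floc=0.0)
print(xi, scale, exceedances.size)
```

Because it conditions on threshold crossings rather than on long mixing blocks, this peaks-over-threshold route is the one the abstract argues remains applicable even for regular (quasi-periodic) motions.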
- …