
    Asymptotic forecast uncertainty and the unstable subspace in the presence of additive model error

    It is well understood that dynamic instability is among the primary drivers of forecast uncertainty in chaotic physical systems. Data assimilation techniques have been designed to exploit this phenomenon, reducing the effective dimension of the data assimilation problem to the directions of rapidly growing errors. Recent mathematical work has, moreover, provided formal proofs of the central hypothesis of the assimilation in the unstable subspace methodology of Anna Trevisan and her collaborators: for filters and smoothers in perfect, linear, Gaussian models, the distribution of forecast errors asymptotically conforms to the unstable-neutral subspace. Specifically, the column spans of the forecast and posterior error covariances asymptotically align with the span of the backward Lyapunov vectors with nonnegative exponents. Earlier mathematical studies have focused on perfect models; the present work explores the relationship between dynamical instability, the precision of observations, and the evolution of forecast error in linear models with additive model error. We prove bounds for the asymptotic uncertainty, explicitly relating the rate of dynamical expansion, model precision, and observational accuracy. Formalizing this relationship, we provide a novel, necessary criterion for the boundedness of forecast errors. Furthermore, we numerically explore the relationship between observational design, dynamical instability, and filter boundedness. We also include a detailed introduction to the multiplicative ergodic theorem and to the theory and construction of Lyapunov vectors. While forecast error in the stable subspace may not generically vanish, we show that, even without filtering, this uncertainty remains uniformly bounded owing to its dynamical dissipation. However, the uncertainty continuously reinjected by model error can be amplified by transient instabilities in stable modes of high variance, rendering forecast uncertainty impractically large. In the context of ensemble data assimilation, this requires rectifying the rank of the ensemble-based gain to account for the growth of uncertainty beyond the unstable and neutral subspace, additionally correcting those stable modes in which frequent occurrences of positive local Lyapunov exponents excite the model errors.
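    As a concrete entry point to the Lyapunov machinery this abstract reviews, the following is a minimal sketch, not taken from the paper, of the standard QR recursion that estimates Lyapunov exponents and backward Lyapunov vectors from a sequence of tangent-linear propagators. The toy cocycle and all names are illustrative.

```python
import numpy as np

def backward_lyapunov_qr(propagators):
    """Push an orthonormal frame through a list of n x n propagators;
    the running Q converges to the backward Lyapunov vectors and the
    averaged log of the R diagonals gives the Lyapunov exponents."""
    n = propagators[0].shape[0]
    Q = np.eye(n)
    log_growth = np.zeros(n)
    for M in propagators:
        Q, R = np.linalg.qr(M @ Q)
        # Fix signs so diag(R) > 0 (QR is unique only up to column signs).
        signs = np.sign(np.diag(R))
        Q, R = Q * signs, (R.T * signs).T
        log_growth += np.log(np.diag(R))
    return log_growth / len(propagators), Q  # exponents, backward Lyapunov vectors

rng = np.random.default_rng(0)
# Toy cocycle: small random perturbations of a matrix with one unstable,
# one neutral and one stable direction.
A = np.diag([1.2, 1.0, 0.7])
props = [A + 0.01 * rng.standard_normal((3, 3)) for _ in range(5000)]
exps, blvs = backward_lyapunov_qr(props)
print("Lyapunov exponents:", exps)  # roughly log(1.2), log(1.0), log(0.7)
```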

    Ensemble Kalman filtering without the intrinsic need for inflation

    The main intrinsic source of error in the ensemble Kalman filter (EnKF) is sampling error. External sources of error, such as model error or deviations from Gaussianity, depend on the dynamical properties of the model. Sampling errors can lead to instability of the filter, which, as a consequence, often requires inflation and localization. The goal of this article is to derive an ensemble Kalman filter that is less sensitive to sampling errors. A prior probability density function conditional on the forecast ensemble is derived using Bayesian principles. Even though this prior is built upon the assumption that the ensemble is Gaussian-distributed, it differs from the Gaussian probability density function defined by the empirical mean and the empirical error covariance matrix of the ensemble, which is implicitly used in traditional EnKFs. This new prior generates a new class of ensemble Kalman filters, called the finite-size ensemble Kalman filter (EnKF-N). One deterministic variant, the finite-size ensemble transform Kalman filter (ETKF-N), is derived. It is tested on the Lorenz '63 and Lorenz '95 models. In this context, ETKF-N is shown to be stable without inflation for ensemble sizes greater than the dimension of the model's unstable subspace, at the same numerical cost as the ensemble transform Kalman filter (ETKF). One variant of ETKF-N appears to systematically outperform the ETKF with optimally tuned inflation. However, it is shown that ETKF-N does not account for all sampling errors and, like any EnKF, requires localization whenever the ensemble size is too small. In order to explore the need for inflation in this small-ensemble regime, a local version of the new class of filters is defined (LETKF-N) and tested on the Lorenz '95 toy model. Whatever the size of the ensemble, the filter is stable. Without inflation, its performance is slightly inferior to that of the LETKF with optimally tuned inflation for small intervals between updates, and superior to it for large intervals between updates.
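    For orientation, here is a minimal sketch of the baseline ETKF analysis step in the ensemble-space formulation; the Gaussian prior implicit in the empirical ensemble mean and covariance used below is precisely what the finite-size prior of ETKF-N replaces. The function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def etkf_analysis(E, y, H, R):
    """One ETKF analysis. E: forecast ensemble, columns are members (n x N);
    y: observations (p,); H: linear observation operator (p x n);
    R: observation error covariance (p x p)."""
    n, N = E.shape
    x_mean = E.mean(axis=1)
    Xf = E - x_mean[:, None]                    # forecast anomalies
    Yf = H @ Xf                                 # anomalies mapped to obs space
    C = Yf.T @ np.linalg.inv(R)
    # Ensemble-space analysis covariance (Hunt et al. 2007 formulation).
    Pa_tilde = np.linalg.inv((N - 1) * np.eye(N) + C @ Yf)
    w_mean = Pa_tilde @ C @ (y - H @ x_mean)    # mean update weights
    # Symmetric square root gives the deterministic anomaly transform.
    evals, evecs = np.linalg.eigh((N - 1) * Pa_tilde)
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return x_mean[:, None] + Xf @ (w_mean[:, None] + W)

# Tiny demo on a random linear problem.
rng = np.random.default_rng(0)
E = rng.standard_normal((5, 10))
Ea = etkf_analysis(E, np.zeros(3), np.eye(3, 5), np.eye(3))
print(Ea.mean(axis=1))
```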

    Inverse modelling of atmospheric tracers: non-Gaussian methods and second-order sensitivity analysis

    First, recent techniques devoted to the reconstruction of sources of an atmospheric tracer at continental scale are introduced. A first method, based on the principle of maximum entropy on the mean, is briefly reviewed here. A second approach, which has not yet been applied in this field, is based on an exact Bayesian approach through a maximum a posteriori estimator. The methods share common grounds, and both perform equally well in practice. When specific prior hypotheses on the sources, such as positivity or boundedness, are taken into account, both methods lead to purposefully devised cost functions. These cost functions are not necessarily quadratic because the underlying assumptions are not Gaussian. As a consequence, several mathematical tools developed in data assimilation on the basis of quadratic cost functions in order to carry out a posteriori analysis need to be extended to this non-Gaussian framework. Concomitantly, the second-order sensitivity analysis needs to be adapted, as do the computations of the averaging kernels of the source and of the errors obtained in the reconstruction. All of these developments are applied to a real case of tracer dispersion: the European Tracer Experiment (ETEX). Comparisons are made between a least-squares cost function (similar to that of the so-called 4D-Var) and a cost function that is not based on Gaussian hypotheses. In addition, the information content of the observations used in the reconstruction is computed and studied on the application case, and a connection with the degrees of freedom for signal is established. As a by-product of these methodological developments, conclusions are drawn on the information content of the ETEX dataset as seen from the inverse modelling point of view.
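    To illustrate how non-Gaussian prior hypotheses reshape the cost function, the following sketch contrasts a quadratic, 4D-Var-like cost with a relative-entropy regulariser that enforces source positivity by construction. The linear model, weights and dimensions are placeholders, not those of the ETEX application.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m, n = 40, 10
G = rng.random((m, n))                  # stand-in source-receptor Jacobian
x_true = np.abs(rng.standard_normal(n))
y = G @ x_true + 0.05 * rng.standard_normal(m)
x_b = np.full(n, x_true.mean())         # prior (background) source estimate

def j_quadratic(x):
    # Gaussian hypotheses everywhere: a plain least-squares cost.
    return 0.5 * np.sum((y - G @ x) ** 2) + 0.5 * np.sum((x - x_b) ** 2)

def j_entropic(x):
    # Relative-entropy regulariser: finite only for x > 0, so positivity
    # is built into the cost rather than imposed afterwards.
    return 0.5 * np.sum((y - G @ x) ** 2) + np.sum(x * np.log(x / x_b) - x + x_b)

x_ls = minimize(j_quadratic, x_b).x
x_me = minimize(j_entropic, x_b, bounds=[(1e-9, None)] * n).x
print("least-squares estimate can go negative:", x_ls.min())
print("entropic estimate stays positive:", x_me.min())
```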

    Optimal redistribution of the background ozone monitoring stations over France

    Ozone is a harmful air pollutant at ground level, and its concentrations are routinely measured by monitoring networks. The network design problem consists in determining the optimal positioning of the monitoring stations. In this study, the background stations of the French routine pollution monitoring network (BDQA) are partially redistributed over France under a set of design objectives. These background stations report ozone variations at large spatial scales, comparable with those of a chemistry-transport model (CTM). The design criterion needs to be defined on a regular grid covering France, where in general no ozone observations are available for validation. Geostatistical estimation methods are therefore used to extrapolate ozone concentrations to these grid nodes. Geostatistical criteria are introduced to minimize the theoretical error of these extrapolations. A physical criterion is also introduced to measure the ability of a network to represent, again using geostatistical extrapolation, a physical ozone field retrieved from CTM simulations. A third type of criterion, geometrical in nature (e.g. maximal coverage of the design domain), is based solely on the distances between the network stations. To complete the network design methodology, a stochastic optimization method, simulated annealing, is employed to optimally select the stations. Significant improvement under all the proposed criteria is found for the optimally redistributed network relative to the original background BDQA network. For instance, the relative improvements in the physical criterion value range from 21% to 32% compared to randomly relocated networks. Different design criteria lead to different optimally relocated networks. The optimal networks under the physical criterion are the most heterogeneously distributed: more background stations are displaced toward the coast, the borders, and large urban agglomerations, e.g. Paris and Marseilles. The heterogeneous ozone fields are not reconstructed as well from the networks optimized under geostatistical or geometrical criteria as from the network optimized under the physical criterion; the values of the physical criterion for the geostatistically and geometrically optimal networks are degraded by about 8% and 17% respectively compared to that of the physically optimal network.
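    As an illustration of the optimization machinery, here is a minimal simulated-annealing sketch that relocates stations under a purely geometrical criterion (the worst distance from any grid node to its nearest station). The grid, cooling schedule and criterion are placeholders for the geostatistical and physical criteria used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.array([(i, j) for i in range(20) for j in range(20)], float)

def coverage_cost(stations):
    # Distance from every grid node to its nearest station; take the worst.
    d = np.linalg.norm(grid[:, None, :] - grid[stations][None, :, :], axis=2)
    return d.min(axis=1).max()

p, n_iter = 12, 5000
current = list(rng.choice(len(grid), p, replace=False))
cost = coverage_cost(current)
best, best_cost = list(current), cost
for k in range(n_iter):
    T = 1.0 * (1 - k / n_iter) + 1e-3               # linear cooling schedule
    cand = list(current)
    cand[rng.integers(p)] = rng.integers(len(grid))  # relocate one station
    c = coverage_cost(cand)
    # Accept improvements always, degradations with Boltzmann probability.
    if c < cost or rng.random() < np.exp((cost - c) / T):
        current, cost = cand, c
        if c < best_cost:
            best, best_cost = list(cand), c
print("max distance to nearest station:", best_cost)
```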

    Inverse modelling for mercury over Europe

    The fate and transport of mercury over Europe are studied using a regional Eulerian transport model. Because gaseous elemental mercury is a long-lived species in the atmosphere, boundary conditions must be properly taken into account. Ground measurements of gaseous mercury are very sensitive to the uncertainties attached to these forcing conditions. Inverse modelling can help constrain the forcing fields and improve the predicted mercury concentrations. More generally, it helps reduce the weaknesses of a regional model, relative to a global or hemispheric model, for such a diffuse trace constituent. Adjoint techniques are employed to relate the measurements rigorously and explicitly to the forcing fields, so that the inverse problem is clearly defined. Performing inversions with EMEP measurements of gaseous mercury, it is shown that the boundary conditions, and with them the forecast concentrations, can be improved significantly. Using inverse modelling to improve the emission inventory is, however, much more difficult, since there are currently too few mercury monitoring stations and they are located far from the centre of Europe.
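    To make the role of the model-observation relationship concrete, the sketch below inverts the inflow boundary condition of a toy 1-D upwind advection model from a single station's time series. For brevity it builds the sensitivity matrix by forward impulse runs, one per forcing component; an adjoint code, as used in the study, would build the same matrix row by row with one run per observation. All dimensions and parameters are illustrative.

```python
import numpy as np

nx, nt, u = 50, 200, 0.8          # grid cells, time steps, Courant number

def forward(c_in):
    """Advect a signal entering at the left boundary; record the time
    series at a fixed interior 'monitoring station'."""
    c = np.zeros(nx)
    obs = np.zeros(nt)
    for t in range(nt):
        c[1:] = (1 - u) * c[1:] + u * c[:-1]   # first-order upwind scheme
        c[0] = c_in[t]                         # boundary condition = forcing
        obs[t] = c[30]
    return obs

# The model is linear in the boundary forcing, so the Jacobian can be built
# column by column from unit impulses.
J = np.array([forward(np.eye(nt)[k]) for k in range(nt)]).T
c_true = np.exp(-0.5 * ((np.arange(nt) - 60) / 15.0) ** 2)
y = J @ c_true + 0.01 * np.random.default_rng(3).standard_normal(nt)
# Regularised least-squares retrieval of the boundary condition.
c_hat = np.linalg.solve(J.T @ J + 1e-2 * np.eye(nt), J.T @ y)
print("relative retrieval error:",
      np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))
```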

    Combining inflation-free and iterative ensemble Kalman filters for strongly nonlinear systems

    The finite-size ensemble Kalman filter (EnKF-N) is an ensemble Kalman filter (EnKF) which, under perfect-model conditions, does not require inflation because it partially accounts for ensemble sampling errors. On the Lorenz '63 and '95 toy models, it has so far been shown to perform as well as or better than the EnKF with optimally tuned inflation. The iterative ensemble Kalman filter (IEnKF) is an EnKF which has been shown to perform much better than the EnKF in strongly nonlinear conditions, such as with the Lorenz '63 and '95 models, at the cost of iteratively updating the trajectories of the ensemble members. This article aims at further exploring the two filters and at combining them into an EnKF that does not require inflation under perfect-model conditions and that is as efficient as the IEnKF in very nonlinear conditions. In this study, the EnKF-N is first introduced and a new implementation is developed. It decomposes the EnKF-N into a cheap two-step algorithm that amounts to computing an optimal inflation factor; this justifies the use of the inflation technique in the traditional EnKF and explains why it is often effective. Secondly, the IEnKF is introduced following a new implementation based on the Levenberg-Marquardt optimisation algorithm. The two approaches are then combined to obtain the finite-size iterative ensemble Kalman filter (IEnKF-N). Several numerical experiments are performed on the IEnKF-N with the Lorenz '95 model. They demonstrate its numerical efficiency as well as a performance that offers, at least, the best of both filters. We have also selected a demanding case based on the Lorenz '63 model that points to ways of improving the finite-size ensemble Kalman filters. Ultimately, the IEnKF-N could be seen as a first building block of an efficient ensemble Kalman smoother for strongly nonlinear systems.
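    The iterative idea can be sketched compactly: the analysis is posed as a variational minimisation in ensemble space, re-evaluating the nonlinear observation map at each iterate rather than linearising once. The sketch below uses a plain Gauss-Newton iteration for brevity, whereas the paper's implementation is based on Levenberg-Marquardt; the names and the toy problem are illustrative.

```python
import numpy as np

def ienkf_style_analysis(E, y, h, Rinv, n_iter=10):
    """E: prior ensemble (n x N); h: nonlinear observation map; Rinv: R^{-1}.
    Minimises J(w) = (N-1)/2 |w|^2 + 1/2 |y - h(x_mean + X w)|^2_Rinv
    by Gauss-Newton in ensemble space."""
    n, N = E.shape
    x_mean = E.mean(axis=1)
    X = E - x_mean[:, None]
    w = np.zeros(N)
    for _ in range(n_iter):
        x = x_mean + X @ w
        # Finite-difference ensemble approximation of the sensitivity H X.
        eps = 1e-4
        HX = np.array([(h(x + eps * X[:, j]) - h(x)) / eps for j in range(N)]).T
        grad = (N - 1) * w - HX.T @ Rinv @ (y - h(x))
        hess = (N - 1) * np.eye(N) + HX.T @ Rinv @ HX
        w -= np.linalg.solve(hess, grad)
    return x_mean + X @ w   # analysis mean (anomaly update omitted for brevity)

# Toy check with a quadratic observation operator.
rng = np.random.default_rng(4)
E = 1.0 + 0.1 * rng.standard_normal((3, 8))
h = lambda x: x ** 2
y = h(np.array([1.2, 0.9, 1.1]))
print(ienkf_style_analysis(E, y, h, np.eye(3)))
```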

    Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    The source term of the Chernobyl accident has previously been reconstructed using core inventories, as well as iterative comparisons of model simulations with measurements of activity concentrations or deposited activity. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one looks for a source term, available for long-range transport, that depends on both time and altitude. The method relies on the maximum entropy on the mean principle and exploits the positivity of the source. The inversion results are mainly sensitive to two tuning parameters: a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results support the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a four-day period of weak emissions (28 April to 1 May) and then another release, longer but less intense than the initial one (2 to 6 May). The retrieved released quantities of iodine-131, caesium-134 and caesium-137 are in good agreement with the latest reported estimates, although a larger share of the total released activity is ascribed to the first period and a smaller one to the third. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the release surges of the first two days are found to have reached altitudes up to the top of the domain (5000 m).
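    As a generic illustration of the L-curve idea, the sketch below scans a single Tikhonov parameter, standing in for the study's mass scale and prior error scale, and picks the corner of the log-log misfit-versus-solution-size curve as the balanced value. The linear model is synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
G = rng.random((60, 30))
x_true = np.abs(rng.standard_normal(30))
y = G @ x_true + 0.05 * rng.standard_normal(60)

lams = np.logspace(-4, 2, 40)
misfit, size = [], []
for lam in lams:
    x = np.linalg.solve(G.T @ G + lam * np.eye(30), G.T @ y)
    misfit.append(np.log(np.linalg.norm(G @ x - y)))   # data misfit
    size.append(np.log(np.linalg.norm(x)))             # solution norm
misfit, size = np.array(misfit), np.array(size)
# Corner = point of maximum curvature of the (misfit, size) curve.
d1m, d1s = np.gradient(misfit), np.gradient(size)
d2m, d2s = np.gradient(d1m), np.gradient(d1s)
curvature = np.abs(d1m * d2s - d1s * d2m) / (d1m**2 + d1s**2) ** 1.5
print("balanced lambda:", lams[np.argmax(curvature)])
```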

    Joint state and parameter estimation with an iterative ensemble Kalman smoother

    Both ensemble filtering and variational data assimilation methods have proven useful for the joint estimation of state variables and parameters of geophysical models, yet their respective benefits and drawbacks in this task are distinct. An ensemble variational method known as the iterative ensemble Kalman smoother (IEnKS) has recently been introduced. It is based on a variational scheme that is flow-dependent but does not require an adjoint model. As such, the IEnKS is a candidate tool for joint state and parameter estimation that may inherit the benefits of both the ensemble filtering and the variational approaches. In this study, an augmented-state IEnKS is first tested on the estimation of the forcing parameter of the Lorenz-95 model. Since joint state and parameter estimation is especially useful in applications where the forcings are uncertain yet decisive, typically in atmospheric chemistry, the augmented-state IEnKS is then tested on a new low-order model that takes its meteorological part from the Lorenz-95 model and its chemical part from the advection and diffusion of a tracer. In these experiments, the IEnKS is compared to the ensemble Kalman filter, the ensemble Kalman smoother, and 4D-Var, which are considered the methods of choice for such joint estimation problems. In this low-order context, the IEnKS is shown to significantly outperform the other methods regardless of the length of the data assimilation window, for present-time analysis as well as for retrospective analysis. Its advantage is even more striking for parameter estimation: approaching the same performance with 4D-Var would likely require both a long data assimilation window and complex modelling of the background statistics.
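    To illustrate the augmented-state device on which the study relies, the sketch below appends the uncertain Lorenz-95 forcing F to the state vector, evolves it by persistence, and lets a plain stochastic EnKF, standing in for the IEnKS, update state and parameter jointly through their cross covariance. All settings are illustrative.

```python
import numpy as np

def l95_step(x, F, dt=0.05):
    """One fourth-order Runge-Kutta step of the Lorenz-95 model."""
    def f(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    k1 = f(x); k2 = f(x + dt/2 * k1); k3 = f(x + dt/2 * k2); k4 = f(x + dt * k3)
    return x + dt / 6 * (k1 + 2*k2 + 2*k3 + k4)

rng = np.random.default_rng(6)
n, N, F_true = 40, 30, 8.0
truth = rng.standard_normal(n)
for _ in range(500):                          # spin-up onto the attractor
    truth = l95_step(truth, F_true)

# Augmented ensemble: rows 0..n-1 hold the state, row n holds the parameter F.
E = np.vstack([truth[:, None] + rng.standard_normal((n, N)),
               6.0 + rng.standard_normal((1, N))])
for cycle in range(200):
    truth = l95_step(truth, F_true)
    y = truth + rng.standard_normal(n)        # observe the full state, R = I
    for i in range(N):
        E[:n, i] = l95_step(E[:n, i], E[n, i])  # F evolves by persistence
    x_mean = E.mean(axis=1, keepdims=True)
    Xf = 1.05 * (E - x_mean)                  # light inflation vs sampling error
    E = x_mean + Xf
    # Perturbed-observation EnKF update; the state-parameter cross covariance
    # in Pf is what carries the observed information into F.
    Pf = Xf @ Xf.T / (N - 1)
    K = Pf[:, :n] @ np.linalg.inv(Pf[:n, :n] + np.eye(n))
    E = E + K @ (y[:, None] + rng.standard_normal((n, N)) - E[:n])
print("estimated F:", float(E[n].mean()), " truth:", F_true)
```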