
    Scientific challenges of convective-scale numerical weather prediction

    Numerical weather prediction (NWP) models are increasing in resolution and becoming capable of explicitly representing individual convective storms. Is this increase in resolution leading to better forecasts? Unfortunately, we do not have sufficient theoretical understanding of this weather regime to make full use of these models. After extensive efforts over the course of a decade, convective-scale weather forecasts with horizontal grid spacings of 1–5 km are now operational at national weather services around the world, accompanied by ensemble prediction systems (EPSs). However, though already operational, forecasts at this scale have yet to be fully exploited, because the fundamental difficulty in prediction remains: the fully three-dimensional and turbulent nature of the atmosphere. Prediction at this scale is totally different from prediction at the synoptic scale (10³ km), with its slowly-evolving semi-geostrophic dynamics and relatively long predictability on the order of a few days. Even theoretically, very little is understood about the convective scale compared with our extensive knowledge of the synoptic-scale weather regime, whether as a partial-differential-equation system or in terms of fluid mechanics, predictability, uncertainties, and stochasticity. Furthermore, data assimilation methodologies, physics (e.g., microphysics), parameterizations, and numerics all require drastic modification for use at the convective scale. We need to focus on more fundamental theoretical issues: the Liouville principle and Bayesian probability for probabilistic forecasts, and more fundamental turbulence research to provide robust numerics for the full variety of turbulent flows. The present essay reviews those basic theoretical challenges as comprehensively as possible. The breadth of the problems we face is a challenge in itself: an attempt to reduce them to a single critical agenda should be avoided.
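    As a pointer for the probabilistic-forecast discussion, the Liouville principle mentioned above can be written, in a generic form not tied to any particular NWP system, as a conservation law for the forecast probability density rho(x, t) over the model state x with tendency dx/dt = F(x):

    \[
    \frac{\partial \rho}{\partial t} + \nabla_x \cdot \big( \rho \, F(x) \big) = 0,
    \qquad \frac{dx}{dt} = F(x).
    \]

    Bayesian updating of this density against observations is then what ensemble prediction and data assimilation approximate in practice.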

    Regularization and tempering for a moment-matching localized particle filter

    Iterative ensemble filters and smoothers are now commonly used for geophysical models. Some of these methods rely on a factorization of the observation likelihood function to sample from a posterior density through a set of “tempered” transitions to ensemble members. For Gaussian-based data assimilation methods, tangent linear versions of nonlinear operators can be relinearized between iterations, thus leading to a solution that is less biased than a single-step approach. This study adopts similar iterative strategies for a localized particle filter (PF) that relies on the estimation of moments to adjust unobserved variables based on importance weights. This approach builds on a “regularization” of the local PF, which forces weights to be more uniform through heuristic means. The regularization then leads to an adaptive tempering, which can also be combined with filter updates from parametric methods, such as ensemble Kalman filters. The role of iterations is analyzed by deriving the localized posterior probability density assumed by current local PF formulations and then examining how single-step and tempered PFs sample from this density. From experiments performed with a low-dimensional nonlinear system, the iterative and hybrid strategies show the largest benefits in observation-sparse regimes, where only a few particles contain high likelihoods and prior errors are non-Gaussian. This regime mimics specific applications in numerical weather prediction, where small ensemble sizes, unresolved model error, and highly nonlinear dynamics lead to prior uncertainty that is larger than measurement uncertainty.
    https://doi.org/10.1002/qj.432
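    To illustrate the tempering idea described above, the sketch below splits a Gaussian observation likelihood into several partial updates and applies a simple weighted moment-matching adjustment at each stage. It is a minimal, hypothetical example with a scalar state, fixed equal tempering exponents (rather than the adaptive schedule the abstract describes), and no localization; it is not the authors' local PF implementation.

import numpy as np

def tempered_update(particles, y_obs, obs_var, n_stages=4):
    """Apply a likelihood update in n_stages tempered steps with moment matching."""
    beta = 1.0 / n_stages  # equal tempering exponents that sum to 1
    for _ in range(n_stages):
        # Partial (tempered) Gaussian log-likelihood for this stage
        loglik = -0.5 * beta * (y_obs - particles) ** 2 / obs_var
        w = np.exp(loglik - loglik.max())
        w /= w.sum()
        # Weighted posterior moments implied by the importance weights
        post_mean = np.sum(w * particles)
        post_var = np.sum(w * (particles - post_mean) ** 2)
        prior_var = particles.var()
        # Moment matching: shift and rescale particles toward the weighted moments
        scale = np.sqrt(post_var / prior_var) if prior_var > 0 else 1.0
        particles = post_mean + scale * (particles - particles.mean())
    return particles

# Usage: 50 prior particles, one observation y = 1.2 with error variance 0.5
prior = np.random.default_rng(0).normal(0.0, 1.0, size=50)
posterior = tempered_update(prior, y_obs=1.2, obs_var=0.5)

    Because each stage only spends a fraction of the likelihood, the weights stay more uniform per step, which is the practical motivation for tempering when only a few particles would otherwise carry high likelihood.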