DADA: data assimilation for the detection and attribution of weather and climate-related events
A new nudging method for data assimilation, delay-coordinate nudging, is presented. Delay-coordinate nudging makes explicit use of present and past observations in the formulation of the forcing that drives the model evolution at each time step. Numerical experiments with a low-order chaotic system show that the new method systematically outperforms standard nudging in different model and observational scenarios, even when using an unoptimized formulation of the delay-nudging coefficients. A connection between the optimal delay and the dominant Lyapunov exponent of the dynamics is found based on heuristic arguments and is confirmed by the numerical results, providing a guideline for the practical implementation of the algorithm. Delay-coordinate nudging preserves the ease of implementation, intuitive functioning, and reduced computational cost of standard nudging, making it a potential alternative, especially for seasonal-to-decadal predictions with large Earth system models that limit the use of more sophisticated data assimilation procedures.
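For intuition, here is a minimal, hypothetical sketch of a nudging step that adds a delayed innovation term to a Lorenz-63 model; the gains, the delay, and the observation setup are illustrative assumptions, not the coefficients studied in the paper.

```python
# Minimal sketch of delay-coordinate nudging on Lorenz-63 (illustrative only;
# gains, delay and observation setup are assumptions, not the paper's tuning).
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def run_nudged(y_obs, dt=0.01, k_now=5.0, k_delay=2.0, delay=10):
    """Nudge the first component toward observations, using both the current
    innovation and a delayed one (delay given in time steps)."""
    n = len(y_obs)
    x = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n, 3))
    innov_hist = np.zeros(n)                    # store past innovations
    for t in range(n):
        innov_hist[t] = y_obs[t] - x[0]         # present innovation (x observed only)
        forcing = k_now * innov_hist[t]
        if t >= delay:                          # add the delayed innovation term
            forcing += k_delay * innov_hist[t - delay]
        dx = lorenz63(x)
        dx[0] += forcing                        # nudging acts on the observed variable
        x = x + dt * dx                         # simple Euler step for brevity
        traj[t] = x
    return traj

# Usage: synthetic truth and noisy observations of the x-component
rng = np.random.default_rng(0)
truth = np.array([8.0, 0.0, 30.0]); xs = []
for _ in range(2000):
    truth = truth + 0.01 * lorenz63(truth); xs.append(truth[0])
obs = np.array(xs) + rng.normal(0, 0.5, size=2000)
analysis = run_nudged(obs)
```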
Could old tide gauges help estimate past atmospheric variability?
The surge residual is the non-tidal component of coastal sea level. It responds to the atmospheric circulation, including the direct effect of atmospheric pressure on the sea surface. Tide gauges have been used to measure sea level in coastal cities for centuries, with many records dating back to the 19th century or earlier, to times when direct pressure observations were scarce. These old tide gauge records may therefore be used as indirect observations of sub-seasonal atmospheric variability, complementary to other sensors such as barometers. To investigate this claim, the present work relies on the tide gauge record of Brest, western France, and on the members of NOAA's 20th Century Reanalysis (20CRv3), which assimilates only surface pressure observations and uses a numerical weather prediction model. Using simple statistical relationships between surge residuals and local atmospheric pressure, we show that the tide gauge record can help reveal part of the 19th-century atmospheric variability that was not captured by the pressure-observation-based reanalysis, advocating for the use of early tide gauge records to study past storms. In particular, weighting the 80 reanalysis members based on tide gauge observations indicates that a large number of members are unlikely, inducing corrections of several tens of hectopascals in the Bay of Biscay. Comparisons with independent pressure observations shed light on the strengths and limitations of the methodology, particularly for wind-driven surge residuals, calling for the future use of a methodology mixing data-driven tools and physics-based modeling. Our methodology could be applied to other types of independent observations (not just tide gauges) as a means of weighting reanalysis ensemble members.
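As a rough illustration of the weighting idea, the hypothetical sketch below relates surge residuals to local pressure with a simple inverse-barometer-like regression and weights toy ensemble members by the likelihood of a tide-gauge observation; the ensemble size, noise scale, and Gaussian likelihood are assumptions, not the paper's calibration.

```python
# Hedged sketch of member weighting: relate surge residuals to local sea-level
# pressure with a simple linear (inverse-barometer-like) model, then weight
# ensemble members by the likelihood of the tide-gauge observation.
# All numbers below are illustrative, not the paper's values.
import numpy as np

rng = np.random.default_rng(1)

# Toy "reanalysis ensemble": 80 members, each proposing a local pressure anomaly (hPa)
member_pressure = rng.normal(0.0, 15.0, size=80)

# Assumed statistical model: surge (cm) ~ a * pressure_anomaly + b, nominally fitted
# on a calibration period (here simply the inverse-barometer slope of about -1 cm/hPa)
a, b, sigma_surge = -1.0, 0.0, 5.0

# Tide-gauge surge residual observed on the same date (cm)
observed_surge = 22.0

# Gaussian likelihood of the observation under each member's pressure anomaly
predicted_surge = a * member_pressure + b
log_w = -0.5 * ((observed_surge - predicted_surge) / sigma_surge) ** 2
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

# Weighted vs. unweighted ensemble-mean pressure: the shift is the
# tide-gauge-induced correction (the paper reports corrections of tens of hPa)
print(member_pressure.mean(), np.sum(weights * member_pressure))
```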
Nowcasting solar irradiance using an analog method and geostationary satellite images
Accurate forecasting of Global Horizontal Irradiance (GHI) is essential for the integration of the solar resource into an electrical grid. We present a novel data-driven method that delivers hourly probabilistic forecasts of GHI up to 6 h ahead at the location of a solar energy source. The method does not require calibration to adapt to regional differences in cloud dynamics, and uses only one type of data, covering Europe and Africa. It is thus suited for applications that require GHI forecasts for solar energy sources at different locations with few ground measurements. Cloud dynamics are emulated using an analog method based on 5 years of hourly images of geostationary satellite-derived irradiance: the physics of the system is emulated statistically, and no numerical weather prediction model is used. The database contains both the images to be compared with the current atmospheric observation and their successors one or more hours later. The method is tested on one year of data at five locations in Europe with different climatic conditions. It is compared to persistence (keeping the last observation frozen), ensemble persistence (generating a probabilistic forecast from the last observations), and an adaptive first-order vector autoregressive model. As an application, the model is downscaled using ground measurements. In both cases, the analog method outperforms the classical statistical approaches. Results demonstrate the skill of the method in emulating cloud dynamics and its potential to be coupled with a forecasting algorithm using ground measurements for operational applications.
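The analog step can be pictured with the hypothetical sketch below: search a catalog of past irradiance fields for the nearest neighbors of the current field and use their successors as an ensemble forecast; the catalog shape, distance metric, and number of analogs are illustrative assumptions.

```python
# Minimal sketch of the analog idea: find the nearest neighbors of the current
# satellite-derived irradiance field in a historical catalog and return their
# successors h hours later as an ensemble forecast.
import numpy as np

def analog_forecast(catalog, current_field, lead=1, k=20):
    """catalog: array (T, n_pixels) of hourly fields; returns an ensemble of k
    candidate fields at t+lead built from the successors of the k best analogs."""
    usable = catalog[:-lead]                              # analogs must have a successor
    d = np.linalg.norm(usable - current_field, axis=1)    # Euclidean distance on pixels
    idx = np.argsort(d)[:k]                               # k closest analogs
    return catalog[idx + lead]                            # their successors = ensemble

# Usage with synthetic data standing in for 5 years of hourly satellite images
rng = np.random.default_rng(2)
catalog = rng.random((5 * 365 * 24, 64))                  # toy 64-pixel patches
now = rng.random(64)
ensemble = analog_forecast(catalog, now, lead=3, k=50)
ghi_quantiles = np.quantile(ensemble.mean(axis=1), [0.1, 0.5, 0.9])  # probabilistic summary
```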
Probability Distributions for Analog-To-Target Distances
Some properties of chaotic dynamical systems can be probed through features of recurrences, also called analogs. In practice, analogs are nearest neighbors of the state of a system, taken from a large database called the catalog. Analogs have been used in many atmospheric applications, including forecasting, downscaling, predictability estimation, and attribution of extreme events. The distances of the analogs to the target state usually condition the performance of analog applications. These distances can be viewed as random variables, and their probability distributions can be related to the catalog size and to properties of the system at stake. A few studies have focused on the first moments of return-time statistics for the closest analog, fixing an objective of maximum distance from this analog to the target state. However, for practical use and to reduce estimation variance, applications usually require not just one but many analogs. In this paper, we evaluate, from a theoretical standpoint and with numerical experiments, the probability distributions of the K shortest analog-to-target distances. We show that dimensionality plays a role in the size of the catalog needed to find good analogs, and also in the relative means and variances of the K closest analogs. Our results are based on recently developed tools from dynamical systems theory. These findings are illustrated with numerical simulations of well-known chaotic dynamical systems and with 10-m wind reanalysis data in northwest France. Practical applications of our derivations are shown for forecasts of an idealized chaotic dynamical system and for objective-based dimension reduction using the 10-m wind reanalysis data.
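The quantity under study can be reproduced empirically with the short sketch below, which computes the K shortest analog-to-target distances in a catalog built from a Lorenz-63 trajectory; the catalog size, the value of K, and the Euclidean metric are assumptions chosen for brevity, not the paper's setup.

```python
# Numerical illustration of the K shortest analog-to-target distances in a
# catalog built from a chaotic trajectory (toy setup, not the paper's).
import numpy as np

def lorenz63_traj(n, dt=0.01, x0=(1.0, 1.0, 1.0)):
    x = np.array(x0, dtype=float)
    out = np.empty((n, 3))
    for i in range(n):
        dx = np.array([10.0 * (x[1] - x[0]),
                       x[0] * (28.0 - x[2]) - x[1],
                       x[0] * x[1] - 8.0 / 3.0 * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

catalog = lorenz63_traj(50_000)[10_000:]                 # drop the transient
targets = lorenz63_traj(2_000, x0=(8.0, 0.0, 30.0))[1_000:]

K = 10
dists = np.empty((len(targets), K))
for j, z in enumerate(targets):
    d = np.linalg.norm(catalog - z, axis=1)
    dists[j] = np.sort(d)[:K]                            # K shortest analog-to-target distances

# Empirical mean and spread of the k-th distance, k = 1..K: these are the
# statistics whose theoretical distributions are derived in the paper.
print(dists.mean(axis=0), dists.std(axis=0))
```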
Using local dynamics to explain analog forecasting of chaotic systems
Analogs are nearest neighbors of the state of a system. By using analogs and their successors in time, one can produce empirical forecasts. Several analog forecasting methods have been used in atmospheric applications and tested on well-known dynamical systems, yet such methods are often used without reference to theoretical connections with dynamical systems theory. Analog forecasting can, however, be related to the dynamical equations of the system of interest. This study investigates the properties of different analog forecasting strategies by taking local approximations of the system's dynamics. We find that analog forecasting performance is closely linked to the local Jacobian matrix of the flow map, and that analog forecasting combined with linear regression captures projections of this Jacobian matrix. Additionally, the proposed methodology allows analog forecasting errors to be estimated efficiently, an important component in many applications. This analysis also allows different analog forecasting operators to be compared, helping to choose the operator best suited to a given situation. These results are derived analytically and tested numerically on two simple chaotic dynamical systems. The impact of observational noise and of the number of analogs is evaluated theoretically and numerically.
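A locally linear analog forecasting operator of the kind discussed here can be sketched as follows: fit an affine regression from the K analogs to their successors and apply it to the target state, so that the slope of the fit plays the role of a local Jacobian estimate. The function below is an illustrative assumption, not the paper's exact operators.

```python
# Hedged sketch of a locally linear analog forecasting operator: regress the
# successors of the K nearest analogs on the analogs themselves and apply the
# fitted affine map to the target state.
import numpy as np

def locally_linear_forecast(catalog, successors, target, k=40):
    """catalog, successors: arrays (T, d) with successors[i] the image of
    catalog[i] one time step later; returns a one-step forecast of `target`."""
    d = np.linalg.norm(catalog - target, axis=1)
    idx = np.argsort(d)[:k]                              # K nearest analogs
    X = np.hstack([catalog[idx], np.ones((k, 1))])       # affine regression design
    coef, *_ = np.linalg.lstsq(X, successors[idx], rcond=None)
    # The slope rows of `coef` estimate (a projection of) the local Jacobian
    # of the flow map around the target, as discussed in the abstract.
    return np.hstack([target, 1.0]) @ coef               # apply the local affine map

# Toy usage with a synthetic catalog of states and their successors
rng = np.random.default_rng(3)
states = rng.random((5000, 3)); nxt = np.roll(states, -1, axis=0)
print(locally_linear_forecast(states[:-1], nxt[:-1], rng.random(3)))
```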
DADA: data assimilation for the detection and attribution of weather and climate-related events
We describe a new approach that allows for systematic causal attribution of weather and climate-related events in near-real time. The method is designed to facilitate its implementation at meteorological centers by relying on data and methods that are routinely available when numerically forecasting the weather. We thus show that causal attribution can be obtained as a by-product of the data assimilation procedures run on a daily basis to update numerical weather prediction (NWP) models with new atmospheric observations; hence, the proposed methodology can take advantage of the powerful computational and observational capacity of weather forecasting centers. We explain the theoretical rationale of this approach and sketch the most prominent features of a “data assimilation–based detection and attribution” (DADA) procedure. The proposal is illustrated in the context of the classical three-variable Lorenz model with additional forcing. The paper concludes by raising several theoretical and practical questions that need to be addressed to make the proposal operational within NWP centers.
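Under strong simplifying assumptions, the evidence comparison at the heart of the proposal can be caricatured as follows: score the same toy model with and without an additional forcing against the observations and compare the two likelihoods. In the actual DADA procedure this evidence comes as a by-product of data assimilation; the Gaussian single-trajectory likelihood below is only a crude, hypothetical stand-in.

```python
# Caricature of the factual-vs-counterfactual evidence comparison on a forced
# Lorenz-63 model. The forcing value, error scale, Euler integration and the
# single-trajectory likelihood are all illustrative assumptions.
import numpy as np

def lorenz63_forced(x, forcing=0.0):
    return np.array([10.0 * (x[1] - x[0]) + forcing,
                     x[0] * (28.0 - x[2]) - x[1],
                     x[0] * x[1] - 8.0 / 3.0 * x[2]])

def log_evidence(obs, forcing, dt=0.01, sigma_obs=1.0):
    """Crude evidence proxy: Gaussian log-likelihood of the observed x-component
    along a single model trajectory, standing in for a data-assimilation evidence."""
    x = np.array([8.0, 0.0, 30.0])
    logp = 0.0
    for y in obs:
        x = x + dt * lorenz63_forced(x, forcing)
        logp += -0.5 * ((y - x[0]) / sigma_obs) ** 2
    return logp

# Synthetic observations produced by the forced ("factual") world
rng = np.random.default_rng(4)
xt = np.array([8.0, 0.0, 30.0]); obs = []
for _ in range(500):
    xt = xt + 0.01 * lorenz63_forced(xt, forcing=2.5)
    obs.append(xt[0] + rng.normal(0, 1.0))

# Log evidence ratio between the factual and counterfactual worlds
log_bayes_factor = log_evidence(obs, forcing=2.5) - log_evidence(obs, forcing=0.0)
print(log_bayes_factor)
```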
Narrowing uncertainties of climate projections using data science tools?
International audience
Identify the dynamics of climate models using data assimilation and analog predictions
International audience
