Predictability of threshold exceedances in dynamical systems
In a low-order model of the general circulation of the atmosphere we examine the predictability of threshold exceedance events of certain observables. The likelihood of such binary events, which is also the cornerstone of the categoric (as opposed to probabilistic) prediction of threshold exceedances, is established from long time series of one or more observables of the same system. The prediction skill is measured by a summary index of the ROC curve that relates the hit and false alarm rates. Our results for the examined systems suggest that exceedances of higher thresholds are more predictable; in other words, rare, large-magnitude (i.e., extreme) events are more predictable than frequent typical events. We find this to hold provided that the bin size for binning time series data is optimized, but not necessarily otherwise. This can be viewed as a confirmation of a counterintuitive (and seemingly counterfactual) statement that was previously formulated for simpler autoregressive stochastic processes. However, we argue that for dynamical systems in general it may be typical, but not universally true. We argue that, given a sufficient amount of data, depending on the precision of observation, the skill of a class of data-driven categoric predictions of threshold exceedances approximates the skill of the analogous model-driven prediction, assuming strictly no model errors. Therefore, stronger extremes in terms of higher threshold levels are more predictable in the case of both data- and model-driven prediction. Furthermore, we show that a quantity commonly regarded as a measure of predictability, the finite-time maximal Lyapunov exponent, does not correspond directly to the ROC-based measure of prediction skill when they are viewed as functions of the prediction lead time and the threshold level. This points to the fact that even if the Lyapunov exponent, as an intrinsic property of the system measuring the instability of trajectories, determines predictability, it does so in a nontrivial manner.
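A minimal sketch of the ROC-based skill measure described above, using a generic AR(1) surrogate series and a simple persistence-type predictor rather than the circulation model of the abstract; all parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) surrogate series (illustrative stand-in, not the circulation model)
n, phi = 200_000, 0.8
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Binary event: the next value exceeds the threshold q
q = 2.0
event = x[1:] > q
predictor = x[:-1]                  # predict from the current observation

# ROC curve: sweep a decision threshold c over the predictor's range
cs = np.linspace(predictor.min(), predictor.max(), 501)
hit = np.array([np.mean(predictor[event] > c) for c in cs])
fa = np.array([np.mean(predictor[~event] > c) for c in cs])

# Summary skill index: area under the ROC curve, by the trapezoidal rule
auc = np.sum((fa[:-1] - fa[1:]) * (hit[:-1] + hit[1:]) / 2)
print(f"ROC AUC for threshold q={q}: {auc:.3f}")
```

An AUC of 0.5 corresponds to no skill and 1 to perfect categoric prediction; repeating the sweep for several values of q is the kind of experiment the abstract refers to when comparing the predictability of higher and lower thresholds.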
Quantifying nonergodicity in nonautonomous dissipative dynamical systems: an application to climate change
In nonautonomous dynamical systems, as in climate dynamics, an ensemble of trajectories initialized in the remote past defines a unique probability distribution, the natural measure of a snapshot attractor, for any instant of time; however, this distribution typically changes in time. In cases with an aperiodic driving, temporal averages taken along a single trajectory differ from the corresponding ensemble averages even in the infinite-time limit: ergodicity does not hold. It is worth considering this difference, which we call the nonergodic mismatch, by taking time windows of finite length for temporal averaging. We point out that the probability distribution of the nonergodic mismatch is qualitatively different in ergodic and nonergodic cases: its average is zero in the former and typically nonzero in the latter. A main conclusion is that the deviation of this average from zero, which we call the bias, is a useful measure of nonergodicity for any window length. In contrast, the standard deviation of the nonergodic mismatch, which characterizes the spread between different realizations, exhibits a power-law decrease with increasing window length in both ergodic and nonergodic cases; this implies that, for any finite window length, temporal and ensemble averages differ in either case. It is the average modulus of the nonergodic mismatch, which we call the ergodicity deficit, that represents the expected deviation from fulfilling the equality of temporal and ensemble averages. As an important finding, we demonstrate that the ergodicity deficit cannot be reduced arbitrarily in nonergodic systems. We illustrate via a conceptual climate model that the nonergodic framework may be useful in Earth system dynamics, within which we propose the measure of nonergodicity, i.e., the bias, as an order-parameter-like quantifier of climate change.
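The bias and the ergodicity deficit can be estimated in a toy setting. The sketch below uses a discrete Ornstein-Uhlenbeck-like process with a drifting mean as a stand-in for a changing climate, and assumes, as one plausible reading of the definitions above, that the mismatch compares a finite-window temporal average with the ensemble average at the window's end instant; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def run_ensemble(drift, n_ens=4000, n_spin=500, n_win=400, lam=0.2, sigma=0.5):
    """Ensemble of x_{t+1} = x_t + lam*(mu_t - x_t) + sigma*noise,
    with a drifting "climate" mean mu_t = drift * t."""
    x = rng.standard_normal(n_ens)
    traj = np.empty((n_win, n_ens))
    for t in range(n_spin + n_win):
        mu = drift * t
        x = x + lam * (mu - x) + sigma * rng.standard_normal(n_ens)
        if t >= n_spin:
            traj[t - n_spin] = x          # keep the analysis window only
    temporal = traj.mean(axis=0)          # finite-window temporal average, per member
    ens_end = traj[-1].mean()             # ensemble average at the window's end
    mismatch = temporal - ens_end         # nonergodic mismatch, per member
    return mismatch.mean(), np.abs(mismatch).mean()   # bias, ergodicity deficit

bias_erg, deficit_erg = run_ensemble(drift=0.0)    # autonomous case: bias ~ 0
bias_non, deficit_non = run_ensemble(drift=0.01)   # drifting forcing: nonzero bias
```

In the drifting case the temporal average lags behind the moving ensemble average, so the bias is systematically negative and the deficit stays bounded away from zero, qualitatively matching the behaviour described in the abstract.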
Predictability of fat-tailed extremes
We conjecture for a linear stochastic differential equation that the predictability of threshold exceedances (I) improves with the event magnitude when the noise is a so-called correlated additive-multiplicative noise, no matter the nature of the stochastic innovations; (II) also improves when the noise is purely additive and obeys a distribution that decays fast, i.e., not by a power law; and (III) deteriorates only when the additive noise distribution follows a power law. The predictability is measured by a summary index of the receiver operating characteristic curve. We provide support for our conjecture, complementing reports in the existing literature on (II), by a set of case studies. Calculations of the prediction skill are conducted in some cases by a direct numerical time-series-data-driven approach and in other cases by an analytical or semianalytical approach developed here.
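The heavy tails produced by correlated additive-multiplicative (CAM) noise can be seen in a minimal Euler-Maruyama simulation of a linear SDE, dX = -λX dt + (gX + b) dW, where the same Wiener increment drives both the additive and the multiplicative part; the parameter values below are illustrative assumptions, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(g, lam=1.0, b=1.0, dt=0.01, n_paths=200, n_steps=20_000, n_spin=2_000):
    """Euler-Maruyama for dX = -lam*X dt + (g*X + b) dW.
    g > 0: correlated additive-multiplicative (CAM) noise; g = 0: purely additive."""
    x = np.zeros(n_paths)
    samples = []
    for t in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x - lam * x * dt + (g * x + b) * dw
        if t >= n_spin:                 # discard the transient
            samples.append(x.copy())
    return np.concatenate(samples)

def kurtosis(s):
    s = s - s.mean()
    return np.mean(s**4) / np.mean(s**2) ** 2

x_cam = simulate(g=0.4)   # CAM noise: fat-tailed stationary distribution
x_add = simulate(g=0.0)   # additive Gaussian noise: kurtosis near 3
```

The markedly elevated kurtosis of the CAM case reflects its power-law tail (with exponent of order 2λ/g² for this SDE), the regime to which conjecture (I) applies, while the additive case stays Gaussian, the regime of (II).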
Performance analysis and optimization of a box-hull wave energy converter concept
In this paper we consider a wave energy converter concept in which a box barge is linked to the mechanical reference by linear dampers. The response to incident wave action, in terms of power take-off, is expressed explicitly as the solution of a linear frequency-domain model. The simplicity of the model, combined with the possibility of applying theory, allows for a nested, and thus manageable, procedure of optimization. We find that for any geometry, i.e., a combination of e.g. the breadth-to-length and breadth-to-draught aspect ratios of the box, the optimum is characterized by resonance in at least one of the two degrees of freedom, heave or pitch. Furthermore, optimal geometries turn out to be extremal: either long attenuator-type or wide terminator-type devices perform best. We also find that optimal wavelengths, which are comparable to the device length in the case of attenuators, emerge either due to the progressively increasing buoyancy restoring force characteristic or due to the finite bandwidth of irregular waves. In particular, diffraction forces are more significant under conditions optimal for performance in irregular seas than under the conditions necessary for the most intensive displacement response of the free-floating box barge exposed to regular waves.
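The role of resonance in the optimum can be illustrated with a generic single-degree-of-freedom frequency-domain model: a linearly damped oscillator standing in for one mode of the box, not the coupled heave-pitch model of the paper, and with purely illustrative parameter values:

```python
import numpy as np

# Single-DOF sketch: mass m, buoyancy (restoring) stiffness k,
# linear PTO damper c, wave-force amplitude F (illustrative values)
m, k, c, F = 1.0e5, 4.0e5, 5.0e4, 1.0e4

omega = np.linspace(0.1, 5.0, 2000)                          # wave frequency grid
X = F / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)  # displacement amplitude
P = 0.5 * c * (omega * X) ** 2                               # mean power absorbed by the damper

omega_opt = omega[np.argmax(P)]   # frequency of maximum power take-off
```

For this linear model the absorbed power peaks at omega = sqrt(k/m) regardless of the damping, i.e., exactly at resonance, which is the one-mode analogue of the resonance condition reported for the optimum above.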
Can we use linear response theory to assess geoengineering strategies?
Geoengineering can control only some climatic variables but not others, resulting in side effects. We investigate in an intermediate-complexity climate model the applicability of linear response theory (LRT) to the assessment of a geoengineering method. This application of LRT is twofold. First, our objective (O1) is to assess the best possible geoengineering scenario by looking for a suitable modulation of the solar forcing that can cancel out or otherwise modulate a climate change signal resulting from a rise in CO2 alone. Here we consider only the cancellation of the expected global mean surface air temperature change. This is a straightforward inverse problem for the solar forcing, and, considering an infinite time period, we use LRT to provide the solution in the frequency domain in closed form. We also provide procedures suitable for numerical implementation that apply to finite time periods. Second, to be able to use LRT to quantify side effects, the response with respect to uncontrolled observables, such as regional ones, must be approximately linear. Our objective (O2) here is to assess the linearity of the response. We find that under geoengineering in the sense of (O1) the asymptotic response of the globally averaged temperature is actually not zero. This is due to an inaccurate determination of the linear susceptibilities, the error arising from a significant quadratic nonlinearity of the response. This nonlinear contribution can easily be removed, which results in much better estimates of the linear susceptibility and, in turn, in a fivefold reduction of the global average surface temperature response under geoengineering. This correction also dramatically improves the agreement between the spatial patterns of the predicted and of the true response.
However, this agreement is not perfect, and it is worse in the case of the precipitation patterns, as a result of a greater degree of nonlinearity.
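The closed-form frequency-domain solution of the inverse problem can be sketched with synthetic susceptibilities; the single-pole responses, gains, and time scales below are assumptions for illustration, and nothing comes from the climate model of the study:

```python
import numpy as np

# Toy LRT inverse problem: total response R(w) = chi_co2(w)*F_co2(w) + chi_sol(w)*F_sol(w).
# Choosing F_sol(w) = -chi_co2(w)*F_co2(w)/chi_sol(w) cancels the response
# exactly within linear response theory.
n, dt = 4096, 1.0
freq = np.fft.rfftfreq(n, dt)

def chi(tau, gain):
    """Assumed single-pole (exponential-memory) susceptibility."""
    return gain / (1.0 + 2j * np.pi * freq * tau)

chi_co2 = chi(tau=8.0, gain=2.0)    # assumed response to CO2 forcing
chi_sol = chi(tau=3.0, gain=-1.5)   # assumed response to solar modulation

f_co2 = np.log1p(np.arange(n) / n)  # a smooth ramp-like CO2 forcing history
F_co2 = np.fft.rfft(f_co2)

F_sol = -chi_co2 * F_co2 / chi_sol  # closed-form solution in the frequency domain
f_sol = np.fft.irfft(F_sol, n)      # the required solar-forcing modulation

residual = np.fft.irfft(chi_co2 * F_co2 + chi_sol * np.fft.rfft(f_sol), n)
```

Within this idealized linear setting the residual temperature response is negligible; the point of objective (O2) above is precisely that in a real climate model quadratic nonlinearity makes the actual residual nonzero.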
Probabilistic concepts in a changing climate: a snapshot attractor picture
The authors argue that the concepts of snapshot attractors and of their natural probability distributions are the only available tools by means of which mathematically sound statements can be made about averages, variances, etc., for a given time instant in a changing climate. A basic advantage of the snapshot approach, which relies on the use of an ensemble, is that the natural distribution, and thus any statistics based on it, is independent of the particular ensemble used, provided the ensemble is initialized in the past, earlier than a convergence time. To illustrate these concepts, a tutorial presentation is given within the framework of a low-order model in which the temperature contrast parameter over a hemisphere decreases linearly in time. Furthermore, the averages and variances obtained from the snapshot attractor approach are demonstrated to differ strongly from the traditional 30-yr temporal averages and variances taken along single realizations. The authors also argue that internal variability can be quantified by the natural distribution, since it characterizes the chaotic motion represented by the snapshot attractor. This experience suggests that snapshot-attractor-based calculations are worth carrying out in any large-scale climate model, and that the application of 30-yr temporal averages taken along single realizations should be complemented with this more appealing tool for the characterization of climate change, which appears practically feasible with moderate ensemble sizes.
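The contrast between snapshot (ensemble) statistics and 30-sample temporal statistics can be sketched with a toy drifting AR(1) "climate"; all parameters below are illustrative assumptions, not those of the low-order model:

```python
import numpy as np

rng = np.random.default_rng(4)

# AR(1) observable whose forcing mean drifts linearly in time
n_ens, n_time, phi, drift = 10_000, 300, 0.9, 0.05
x = rng.standard_normal(n_ens)
traj = np.empty((n_time, n_ens))
for t in range(n_time):
    x = phi * x + (1 - phi) * drift * t + np.sqrt(1 - phi**2) * rng.standard_normal(n_ens)
    traj[t] = x

t0 = 200
snap_mean = traj[t0].mean()   # snapshot (ensemble) mean at instant t0
snap_var = traj[t0].var()     # snapshot variance at t0
# Trailing 30-step temporal mean along each individual realization:
temp_means = traj[t0 - 29 : t0 + 1].mean(axis=0)
```

Under the drift, the temporal means lag behind the snapshot mean and scatter strongly from member to member, illustrating why single-realization 30-yr statistics and instantaneous ensemble statistics can disagree in a changing climate.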
On the importance of the convergence to climate attractors
Ensemble approaches are becoming widely used in climate research. In contrast to weather forecasting, however, in the climatic context one is interested in long-time properties, those arising on the scale of several decades. The well-known strong internal variability of the climate system implies the existence of a related dynamical attractor with chaotic properties. Under the condition of climate change this should be a snapshot attractor, which arises naturally in an ensemble-based framework. Although ensemble averages can be evaluated at any instant of time, results obtained during the process of convergence of the ensemble towards the attractor are not relevant from the point of view of climate. In simulations, therefore, attention should be paid to whether the convergence to the attractor has taken place. We point out that this convergence is of exponential character, so that relevant results can be obtained within a finite time after initialization. The role of the time-scale separation due to the presence of the deep ocean is discussed from the point of view of ensemble simulations.
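The exponential character of the convergence can be illustrated with a deliberately simple contracting system (a forced linear map, not a climate model): two ensembles started from very different initial distributions converge to the same snapshot statistics at a rate set by the contraction factor.

```python
import numpy as np

rng = np.random.default_rng(5)

# Time-dependent forced linear map x_{t+1} = a*x_t + sin(0.2*t), with |a| < 1
a, n_ens, n_steps = 0.7, 20_000, 40

ens1 = rng.uniform(-5.0, 5.0, n_ens)   # one choice of initial distribution
ens2 = rng.normal(10.0, 1.0, n_ens)    # a very different one
d = []
for t in range(n_steps):
    d.append(abs(ens1.mean() - ens2.mean()))
    forcing = np.sin(0.2 * t)
    ens1 = a * ens1 + forcing
    ens2 = a * ens2 + forcing
d = np.array(d)
# The distance between the ensemble means decays exactly as d[0] * a**t, so
# after a finite convergence time the statistics no longer depend on initialization.
```

Estimating the decay rate from the slope of log d versus t recovers ln(1/a), the analogue of checking, in a climate ensemble, that convergence to the snapshot attractor has taken place before results are interpreted.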
Nonlinear forced change and nonergodicity: The case of ENSO-Indian monsoon and global precipitation teleconnections
We study the forced response of the teleconnection between the El Niño-Southern Oscillation (ENSO) and global precipitation in general and the Indian summer monsoon (IM) in particular in the Max Planck Institute Grand Ensemble. The forced response of the teleconnection is defined as the time-dependence of a correlation coefficient evaluated over the ensemble. The ensemble-wise variability is taken either with respect to spatial averages or with respect to dominant spatial modes in the sense of Maximal Covariance Analysis, Canonical Correlation Analysis, or EOF analysis. We find that the strengthening of the ENSO-IM teleconnection is robustly and consistently featured in all four teleconnection representations, whether sea surface temperature (SST) or sea level pressure (SLP) is used to characterise ENSO, and both in the historical period and under the RCP8.5 forcing scenario. In terms of a linear regression model, the main contributor to this strengthening is the regression coefficient, which can outcompete even a declining ENSO variability when SLP is used. We also find that the forced change of the teleconnection is typically nonlinear, by (1) formally rejecting the hypothesis that ergodicity holds, i.e., that expected values of temporal correlation coefficients with respect to the ensemble equal the ensemble-wise correlation coefficient itself, and by (2) showing that the trivial contributions of the forced changes of, e.g., the mean SST and/or precipitation to temporal correlations are insignificant here. We also provide, in terms of the test statistics, global maps of the degree of nonlinearity/nonergodicity of the forced change of the teleconnection between local precipitation and ENSO.
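The central quantity, a correlation coefficient evaluated across ensemble members at each instant of time, can be sketched on synthetic data; the two fields below are stand-ins for ENSO and monsoon indices with a prescribed strengthening coupling, and nothing here is MPI Grand Ensemble data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic ensemble: at each time, "rain" is coupled to "sst" across members,
# with a coupling coefficient that strengthens linearly in time
n_ens, n_time = 100, 150
t = np.arange(n_time)
coupling = 0.3 + 0.4 * t / n_time
sst = rng.standard_normal((n_ens, n_time))
rain = coupling * sst + rng.standard_normal((n_ens, n_time))

def ens_corr(a, b):
    """Correlation across ensemble members (axis 0), separately at each time."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).mean(axis=0) / (a.std(axis=0) * b.std(axis=0))

r = ens_corr(sst, rain)   # time-dependent ensemble-wise correlation coefficient
```

The upward drift of r with time is the ensemble-based signature of a strengthening teleconnection; comparing such ensemble-wise correlations with temporal correlations along single realizations is exactly the ergodicity test described above.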