6,508 research outputs found

    Techniques of linear prediction, with application to oceanic and atmospheric fields in the tropical Pacific

    No full text
    The problem of constructing optimal linear prediction models by multivariance regression methods is reviewed. It is well known that as the number of predictors in a model is increased, the skill of the prediction grows, but the statistical significance generally decreases. For predictions using a large number of candidate predictors, strategies are therefore needed to determine optimal prediction models which properly balance the competing requirements of skill and significance. The popular methods of coefficient screening or stepwise regression represent a posteriori predictor selection methods and therefore cannot be used to recover statistically significant models by truncation if the complete model, including all predictors, is statistically insignificant. Higher significance can be achieved only by a priori reduction of the predictor set. To determine the maximum number of predictors which may be meaningfully incorporated in a model, a model hierarchy can be used in which a series of best fit prediction models is constructed for a (prior defined) nested sequence of predictor sets, the sequence being terminated when the significance level either falls below a prescribed limit or reaches a maximum value. The method requires a reliable assessment of model significance. This is characterized by a quadratic statistic which is defined independently of the model skill or artificial skill. As an example, the method is applied to the prediction of sea surface temperature anomalies at Christmas Island (representative of sea surface temperatures in the central equatorial Pacific) and variations of the central and east Pacific Hadley circulation (characterized by the second empirical orthogonal function (EOF) of the meridional component of the trade wind anomaly field) using a general multiple-time-lag prediction matrix. The ordering of the predictors is based on an EOF sequence, defined formally as orthogonal variables in the composite space of all (normalized) predictors, irrespective of their different physical dimensions, time lag, and geographic position. The choice of a large set of 20 predictors at 12 time lags yields significant predictability only for forecast periods of 3 to 5 months. However, a prior reduction of the predictor set to 4 predictors at 10 time lags leads to 95% significant predictions with skill values of the order of 0.4 to 0.7 up to 6 or 8 months. For infinitely long time series the construction of optimal prediction models reduces essentially to the problem of linear system identification. However, the model hierarchies normally considered for the simulation of general linear systems differ in structure from the model hierarchies which appear to be most suitable for constructing pure prediction models. Thus the truncation imposed by statistical significance requirements can result in rather different models for the two cases. The relation between optimal prediction models and linear dynamical models is illustrated by the prediction of east-west sea level changes in the equatorial Pacific from wind field anomalies. It is shown that the optimal empirical prediction is statistically consistent in this case with both the first-order relaxation and damped oscillator models recently proposed by McWilliams and Gent (but with somewhat different model parameters than suggested by the authors). Thus the data do not allow a distinction between the two physical models; the simplest acceptable model is the first-order damped response.
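
    The a priori reduction described above can be sketched in a few lines: order the normalized predictors by the EOFs of their composite space, then fit a nested hierarchy of least-squares models of growing size and record the hindcast skill of each truncation. The sketch below uses correlation as the skill measure and omits the paper's quadratic significance statistic; the function and variable names are illustrative only.

        import numpy as np

        def eof_model_hierarchy(X, y, max_predictors=10):
            """X: (n_samples, n_predictors) normalized predictors; y: (n_samples,) predictand."""
            X = X - X.mean(axis=0)
            y = y - y.mean()
            # EOFs of the composite predictor space via SVD of the data matrix.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            pcs = U * s                                # EOF amplitude (principal component) time series
            skills = []
            for k in range(1, max_predictors + 1):
                Z = pcs[:, :k]                         # a priori truncation to the k leading EOFs
                beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
                skills.append(np.corrcoef(Z @ beta, y)[0, 1])   # hindcast skill of this model
            return skills                              # grows with k; significance must be tested separately
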
Finally, the problem of estimating forecast skill is discussed. It is usually stated that the forecast skill is smaller than the true skill, which in turn is smaller than the hindcast skill, by an amount which in both cases is approximately equal to the artificial skill. However, this result applies to the mean skills averaged over the ensemble of all possible hindcast data sets, given the true model. Under the more appropriate side condition of a given hindcast data set and an unknown true model, the estimation of the forecast skill represents a problem of statistical inference and is dependent on the assumed prior probability distribution of true models. The Bayesian hypothesis of a uniform prior distribution yields an average forecast skill equal to the hindcast skill, but other (equally acceptable) assumptions yield lower forecast skills more compatible with the usual hindcast-averaged expression.
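
    A rule-of-thumb illustration of the skill relations quoted above (hindcast skill exceeds true skill, which exceeds forecast skill, each step by roughly the artificial skill), with the artificial explained variance approximated by the common p/N heuristic. This is an assumed textbook-style correction, not the Bayesian inference the abstract discusses.

        def skill_estimates(hindcast_r2, n_predictors, n_samples):
            artificial = n_predictors / n_samples            # crude artificial-skill estimate (p/N)
            true_r2 = max(hindcast_r2 - artificial, 0.0)     # true skill: hindcast skill minus artificial skill
            forecast_r2 = max(true_r2 - artificial, 0.0)     # forecast skill sits a further step below
            return true_r2, forecast_r2

        # e.g. 4 predictors at 10 lags (40 regression terms), 200 samples, hindcast r^2 = 0.5
        print(skill_estimates(0.5, 40, 200))                 # -> roughly (0.3, 0.1)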

    The ECMWF Ensemble Prediction System: Looking Back (more than) 25 Years and Projecting Forward 25 Years

    Full text link
    This paper has been written to mark 25 years of operational medium-range ensemble forecasting. The origins of the ECMWF Ensemble Prediction System are outlined, including the development of the precursor real-time Met Office monthly ensemble forecast system. In particular, the reasons for the development of singular vectors and stochastic physics - particular features of the ECMWF Ensemble Prediction System - are discussed. The author speculates about the development and use of ensemble prediction in the next 25 years. Comment: Submitted to Special Issue of the Quarterly Journal of the Royal Meteorological Society: 25 years of ensemble prediction

    On dimension reduction in Gaussian filters

    Full text link
    A priori dimension reduction is a widely adopted technique for reducing the computational complexity of stationary inverse problems. In this setting, the solution of an inverse problem is parameterized by a low-dimensional basis that is often obtained from the truncated Karhunen-Loève expansion of the prior distribution. For high-dimensional inverse problems equipped with smoothing priors, this technique can lead to drastic reductions in parameter dimension and significant computational savings. In this paper, we extend the concept of a priori dimension reduction to non-stationary inverse problems, in which the goal is to sequentially infer the state of a dynamical system. Our approach proceeds in an offline-online fashion. We first identify a low-dimensional subspace in the state space before solving the inverse problem (the offline phase), using either the method of "snapshots" or regularized covariance estimation. Then this subspace is used to reduce the computational complexity of various filtering algorithms - including the Kalman filter, extended Kalman filter, and ensemble Kalman filter - within a novel subspace-constrained Bayesian prediction-and-update procedure (the online phase). We demonstrate the performance of our new dimension reduction approach on various numerical examples. In some test cases, our approach reduces the dimensionality of the original problem by orders of magnitude and yields up to two orders of magnitude in computational savings.
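
    A minimal sketch of the offline-online idea described above: an offline reduced basis obtained with the method of snapshots, then a standard Kalman filter run in the resulting low-dimensional subspace. The plain linear-Gaussian setting and the matrix names (A, H, Q, R) are assumptions made for illustration; the paper also covers regularized covariance estimation and the extended and ensemble variants.

        import numpy as np

        def snapshot_basis(snapshots, r):
            """Offline phase: leading r left singular vectors of a (n_state, n_snapshots) matrix."""
            U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
            return U[:, :r]                                    # orthonormal basis V, state x ≈ V a

        def reduced_kalman_step(V, a, P, y, A, H, Q, R):
            """Online phase: one predict-update step on the reduced coordinates a."""
            Ar, Hr, Qr = V.T @ A @ V, H @ V, V.T @ Q @ V       # project dynamics, observations, noise
            a_pred = Ar @ a                                    # predict in the subspace
            P_pred = Ar @ P @ Ar.T + Qr
            S = Hr @ P_pred @ Hr.T + R                         # innovation covariance
            K = P_pred @ Hr.T @ np.linalg.inv(S)               # reduced Kalman gain
            a_new = a_pred + K @ (y - Hr @ a_pred)             # update with observation y
            P_new = (np.eye(len(a_new)) - K @ Hr) @ P_pred
            return a_new, P_new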

    Multivariate Granger Causality and Generalized Variance

    Get PDF
    Granger causality analysis is a popular method for inference on directed interactions in complex systems of many variables. A shortcoming of the standard framework for Granger causality is that it only allows for examination of interactions between single (univariate) variables within a system, perhaps conditioned on other variables. However, interactions do not necessarily take place between single variables, but may occur among groups, or "ensembles", of variables. In this study we establish a principled framework for Granger causality in the context of causal interactions among two or more multivariate sets of variables. Building on Geweke's seminal 1982 work, we offer new justifications for one particular form of multivariate Granger causality based on the generalized variances of residual errors. Taken together, our results support a comprehensive and theoretically consistent extension of Granger causality to the multivariate case. Treated individually, they highlight several specific advantages of the generalized variance measure, which we illustrate using applications in neuroscience as an example. We further show how the measure can be used to define "partial" Granger causality in the multivariate context and we also motivate reformulations of "causal density" and "Granger autonomy". Our results are directly applicable to experimental data and promise to reveal new types of functional relations in complex systems, neural and otherwise. Comment: added 1 reference, minor change to discussion, typos corrected; 28 pages, 3 figures, 1 table, LaTeX
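
    A minimal sketch of the generalized-variance form of multivariate Granger causality described above, following Geweke's log-determinant ratio F = ln[det(Sigma_restricted) / det(Sigma_full)], where the restricted model predicts the target block from its own past only and the full model also includes the past of the source block. A single lag and ordinary least squares are simplifying assumptions, and the function and variable names are illustrative.

        import numpy as np

        def multivariate_gc(Y, X, lag=1):
            """Y: (T, d_y) target variable group; X: (T, d_x) source variable group."""
            target = Y[lag:]                                   # target block at time t
            own_past = Y[:-lag]                                # its own past
            full_past = np.hstack([Y[:-lag], X[:-lag]])        # own past plus source past

            def residual_cov(design, resp):
                B, *_ = np.linalg.lstsq(design, resp, rcond=None)
                resid = resp - design @ B
                return np.cov(resid, rowvar=False)             # generalized variance = det of this

            Sigma_r = residual_cov(own_past, target)           # restricted model
            Sigma_f = residual_cov(full_past, target)          # full (unrestricted) model
            return np.log(np.linalg.det(Sigma_r) / np.linalg.det(Sigma_f))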

    Numerical action reconstruction of the dynamical history of dark matter haloes in N-body simulations

    Full text link
    We test the ability of the numerical action method (NAM) to recover the individual orbit histories of mass tracers in an expanding universe in a region of radius 26 Mpc/h, given the masses and redshift-space coordinates at the present epoch. The mass tracers are represented by dark matter haloes identified in a high resolution N-body simulation of the standard LCDM cosmology. Since previous tests of NAM at this scale have traced the underlying distribution of dark matter particles rather than extended haloes, our study offers an assessment of the accuracy of NAM in a scenario which more closely approximates the complex dynamics of actual galaxy haloes. We show that NAM can recover present-day halo distances with typical errors of less than 3 per cent, compared to 5 per cent errors assuming Hubble flow distances. The total halo mass and the linear bias were both found to be constrained at the 50 per cent level. The accuracy of individual orbit reconstructions was limited by the inability of NAM, in some instances, to correctly model the positions of haloes at early times solely on the basis of the redshifts, angular positions, and masses of the haloes at the present epoch. Improvements in the quality of NAM reconstructions may be possible using the present-day three-dimensional halo velocities and distances to further constrain the dynamics. This velocity data is expected to become available for nearby galaxies in the coming generations of observations by SIM and GAIA. Comment: 12 pages, 9 figures. Submitted to MNRAS
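
    A toy one-dimensional analogue of the numerical action method discussed above: the trajectory is discretized in time, the present-day position is fixed while the initial (peculiar) velocity is required to vanish, and the orbit is recovered by solving the stationarity conditions of the discretized action, which reduce to the discrete equations of motion under these mixed boundary conditions. The quadratic external potential and all parameter values are placeholders; the real method works in comoving coordinates with mutual gravity between the mass tracers and redshift-space constraints.

        import numpy as np
        from scipy.optimize import root

        n, dt, x_today, omega = 40, 0.1, 1.0, 1.0              # illustrative values only

        def stationarity(x):
            """Residuals of dS/dx_i = 0 for the discrete action
            S = sum(0.5*((x[i+1]-x[i])/dt)**2 - 0.5*omega**2*x[i]**2)*dt,
            together with the two boundary conditions of the orbit-reconstruction problem."""
            r = np.empty(n)
            # interior points: discrete equation of motion x'' = -omega**2 * x
            r[1:-1] = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / dt**2 + omega**2 * x[1:-1]
            r[0] = (x[1] - x[0]) / dt                          # vanishing initial velocity
            r[-1] = x[-1] - x_today                            # observed present-day position
            return r

        orbit = root(stationarity, x0=np.linspace(0.0, x_today, n)).x   # recovered trajectory x(t)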

    Mixed finite elements for numerical weather prediction

    Full text link
    We show how two-dimensional mixed finite element methods that satisfy the conditions of finite element exterior calculus can be used for the horizontal discretisation of dynamical cores for numerical weather prediction on pseudo-uniform grids. This family of mixed finite element methods can be thought of in the numerical weather prediction context as a generalisation of the popular polygonal C-grid finite difference methods. There are a few major advantages: the mixed finite element methods do not require an orthogonal grid, and they allow a degree of flexibility that can be exploited to ensure an appropriate ratio between the velocity and pressure degrees of freedom so as to avoid spurious mode branches in the numerical dispersion relation. These methods preserve several properties of the C-grid method when applied to linear barotropic wave propagation, namely: a) energy conservation, b) mass conservation, c) no spurious pressure modes, and d) steady geostrophic modes on the f-plane. We explain how these properties are preserved, and describe two examples that can be used on pseudo-uniform grids: the recently-developed modified RT0-Q0 element pair on quadrilaterals and the BDFM1-P1DG element pair on triangles. All of these mixed finite element methods have an exact 2:1 ratio of velocity degrees of freedom to pressure degrees of freedom. Finally we illustrate the properties with some numerical examples. Comment: Revision after referee comments
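
    A back-of-the-envelope check of the exact 2:1 velocity-to-pressure ratio mentioned above, for the lowest-order quadrilateral pair (one RT0 normal-velocity degree of freedom per edge, one Q0 pressure per cell) counted on a doubly periodic structured quadrilateral mesh. The mesh size is arbitrary and no finite element assembly is involved; this only counts degrees of freedom.

        def dof_ratio_periodic_quads(nx, ny):
            cells = nx * ny                   # Q0: one pressure DOF per cell
            edges = 2 * nx * ny               # doubly periodic quad mesh: two edges per cell
            return edges / cells              # RT0: one normal-velocity DOF per edge

        print(dof_ratio_periodic_quads(16, 16))   # -> 2.0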