
    Uniformly root-n consistent density estimators for weakly dependent invertible linear processes

    Convergence rates of kernel density estimators for stationary time series are well studied. For invertible linear processes, we construct a new density estimator that converges, in the supremum norm, at the better, parametric, rate $n^{-1/2}$. Our estimator is a convolution of two different residual-based kernel estimators. We obtain in particular convergence rates for such residual-based kernel estimators; these results are of independent interest. Comment: Published at http://dx.doi.org/10.1214/009053606000001352 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
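
    The convolution idea can be illustrated concretely. Below is a minimal Python sketch for a hypothetical MA(1) process X_t = eps_t + theta*eps_{t-1}: the density of X is the convolution of the densities of eps_t and theta*eps_{t-1}, each estimable by a kernel estimator on residuals. The paper's actual estimator combines two different residual-based estimators chosen so that the convolution attains the $n^{-1/2}$ rate; this sketch shows only the basic mechanism, and all names and parameter choices are illustrative.

    ```python
    import numpy as np

    def kde(residuals, grid, h):
        """Gaussian kernel density estimate evaluated on a grid."""
        u = (grid[:, None] - residuals[None, :]) / h
        return np.exp(-0.5 * u**2).sum(axis=1) / (len(residuals) * h * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(0)
    theta, n = 0.5, 2000
    eps = rng.standard_normal(n + 1)
    x = eps[1:] + theta * eps[:-1]             # hypothetical MA(1) sample

    # Invert the MA(1) to recover residuals: eps_t = x_t - theta * eps_{t-1}.
    eps_hat = np.zeros(n)
    eps_hat[0] = x[0]                          # crude initialization; early residuals are approximate
    for t in range(1, n):
        eps_hat[t] = x[t] - theta * eps_hat[t - 1]

    # The density of X = eps_t + theta*eps_{t-1} is the convolution of the
    # densities of eps and theta*eps; estimate each by a residual-based KDE
    # and convolve the two estimates numerically on a symmetric grid.
    grid = np.linspace(-5, 5, 1001)
    dx = grid[1] - grid[0]
    h = 1.06 * eps_hat.std() * n ** (-0.2)     # rule-of-thumb bandwidth
    f_eps = kde(eps_hat, grid, h)
    f_scaled = kde(theta * eps_hat, grid, h)
    f_x = np.convolve(f_eps, f_scaled, mode="same") * dx   # estimated density of X
    ```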

    Robust covariance matrix estimation: "HAC" estimates with long memory/antipersistence correction

    Smoothed nonparametric estimates of the spectral density matrix at zero frequency have been widely used in econometric inference, because they can consistently estimate the covariance matrix of a partial sum of a possibly dependent vector process. When elements of the vector process exhibit long memory or antipersistence, such estimates are inconsistent. We propose estimates which are still consistent in such circumstances, adapting automatically to memory parameters that can vary across the vector and be unknown. Keywords: covariance matrix estimation, long memory, antipersistence correction, "HAC" estimates, vector process, spectral density.
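
    For context, the short-memory benchmark the abstract refers to is the familiar kernel-smoothed ("HAC") long-run covariance estimate; a minimal sketch of the Bartlett-kernel version follows. Estimates of exactly this form become inconsistent under long memory or antipersistence, and the paper's memory-adaptive correction is not reproduced here; the function name and the fixed bandwidth rule are illustrative.

    ```python
    import numpy as np

    def hac_covariance(u, bandwidth):
        """Bartlett-kernel ("Newey-West"-style) estimate of the long-run
        covariance matrix of a T x k array u of possibly dependent observations."""
        u = u - u.mean(axis=0)
        T = u.shape[0]
        omega = u.T @ u / T                       # lag-0 autocovariance
        for j in range(1, bandwidth + 1):
            gamma_j = u[j:].T @ u[:-j] / T        # lag-j sample autocovariance
            weight = 1.0 - j / (bandwidth + 1.0)  # Bartlett kernel weight
            omega += weight * (gamma_j + gamma_j.T)
        return omega

    # Example: long-run covariance of a bivariate AR(1) sample.
    rng = np.random.default_rng(0)
    u = np.zeros((1000, 2))
    for t in range(1, 1000):
        u[t] = 0.6 * u[t - 1] + rng.standard_normal(2)
    omega = hac_covariance(u, bandwidth=int(1000 ** (1 / 3)))
    ```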

    The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference

    Background: Wiener-Granger causality (“G-causality”) is a statistical notion of causality applicable to time series data, whereby cause precedes, and helps predict, effect. It is defined in both time and frequency domains, and allows for the conditioning out of common causal influences. Originally developed in the context of econometric theory, it has since achieved broad application in the neurosciences and beyond. Prediction in the G-causality formalism is based on VAR (Vector AutoRegressive) modelling.
    New Method: The MVGC Matlab© Toolbox approach to G-causal inference is based on multiple equivalent representations of a VAR model by (i) regression parameters, (ii) the autocovariance sequence and (iii) the cross-power spectral density of the underlying process. It features a variety of algorithms for moving between these representations, enabling selection of the most suitable algorithms with regard to computational efficiency and numerical accuracy.
    Results: In this paper we explain the theoretical basis, computational strategy and application to empirical G-causal inference of the MVGC Toolbox. We also show via numerical simulations the advantages of our Toolbox over previous methods in terms of computational accuracy and statistical inference.
    Comparison with Existing Method(s): The standard method of computing G-causality involves estimation of parameters for both a full and a nested (reduced) VAR model. The MVGC approach, by contrast, avoids explicit estimation of the reduced model, thus eliminating a source of estimation error and improving statistical power, and in addition facilitates fast and accurate estimation of the computationally awkward case of conditional G-causality in the frequency domain.
    Conclusions: The MVGC Toolbox implements a flexible, powerful and efficient approach to G-causal inference. Keywords: Granger causality, vector autoregressive modelling, time series analysis
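
    To make the comparison concrete, here is a minimal Python sketch of the standard two-regression method described under "Comparison with Existing Method(s)": fit a full model (lags of both series) and a reduced model (own lags only), then take the log ratio of residual variances. This is precisely the reduced-model step the MVGC Toolbox avoids; the helper names and the unconditional bivariate setting are illustrative.

    ```python
    import numpy as np

    def resid_var(target, predictors, p):
        """Residual variance from an OLS regression of target_t on a constant
        and p lags of each series in `predictors`."""
        T = len(target)
        X = np.column_stack(
            [np.ones(T - p)]
            + [z[p - k:T - k] for z in predictors for k in range(1, p + 1)]
        )
        e = target[p:] - X @ np.linalg.lstsq(X, target[p:], rcond=None)[0]
        return e @ e / len(e)

    def granger_causality(x, y, p):
        """Time-domain G-causality from y to x, F = ln(var_reduced / var_full),
        via the standard full-vs-reduced regression comparison."""
        var_full = resid_var(x, [x, y], p)     # x on lags of x and y
        var_reduced = resid_var(x, [x], p)     # x on its own lags only
        return np.log(var_reduced / var_full)

    # Example: y drives x, so F(y -> x) should be clearly positive.
    rng = np.random.default_rng(0)
    T = 2000
    y = rng.standard_normal(T)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.3 * x[t - 1] + 0.5 * y[t - 1] + rng.standard_normal()
    F = granger_causality(x, y, p=2)
    ```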

    Essays on spatial autoregressive models with increasingly many parameters

    Much cross-sectional data in econometrics is blighted by dependence across units. A solution to this problem is the use of spatial models that allow for an explicit form of dependence across space. This thesis studies problems related to spatial models with increasingly many parameters. A large proportion of the thesis concentrates on Spatial Autoregressive (SAR) models with increasing dimension. Such models are frequently used to model spatial correlation, especially in settings where the data are irregularly spaced. Chapter 1 provides an introduction and background material for the thesis. Chapter 2 develops consistency and asymptotic normality of least squares and instrumental variables (IV) estimates for the parameters of a higher-order spatial autoregressive (SAR) model with regressors. The order of the SAR model and the number of regressors are allowed to approach infinity with sample size, and the permissible rate of growth of the dimension of the parameter space relative to sample size is studied. An alternative to least squares or IV is to use the Gaussian pseudo maximum likelihood estimate (PMLE), studied in Chapter 3. However, this is plagued by finite-sample problems due to the implicit definition of the estimate, these being exacerbated by the increasing dimension of the parameter space. A computationally simple Newton-type step is used to obtain estimates with the same asymptotic properties as those of the PMLE. Chapters 4 and 5 of the thesis deal with spatial models on an equally spaced, d-dimensional lattice. We study the covariance structure of stationary random fields defined on d-dimensional lattices in detail and use the analysis to extend many results from time series. Our main theorem concerns autoregressive spectral density estimation. Stationary random fields on a regularly spaced lattice have an infinite autoregressive representation if they are also purely non-deterministic. We use truncated versions of the AR representation to estimate the spectral density and establish uniform consistency of the proposed spectral density estimate.
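
    The autoregressive spectral density estimate of the final chapters can be sketched in its one-dimensional, time-series special case (the thesis works on d-dimensional lattices and lets the truncation order grow with sample size; here p is fixed and the names are illustrative): fit a truncated AR(p) by Yule-Walker and plug the coefficients into the AR spectral density formula.

    ```python
    import numpy as np

    def ar_spectral_density(x, p, freqs):
        """Autoregressive spectral density estimate for a univariate series:
        fit AR(p) by Yule-Walker, then evaluate
        f(w) = sigma^2 / (2*pi * |1 - sum_j a_j exp(-i*j*w)|^2)."""
        x = np.asarray(x, dtype=float)
        x = x - x.mean()
        n = len(x)
        # Sample autocovariances gamma_0, ..., gamma_p.
        gamma = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
        # Solve the Yule-Walker equations for the AR coefficients.
        R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
        a = np.linalg.solve(R, gamma[1:])
        sigma2 = gamma[0] - a @ gamma[1:]      # innovation variance
        phase = np.exp(-1j * np.outer(freqs, np.arange(1, p + 1)))
        return sigma2 / (2 * np.pi * np.abs(1 - phase @ a) ** 2)

    # Example: estimate the spectrum of an AR(2) sample on [0, pi].
    rng = np.random.default_rng(0)
    e = rng.standard_normal(5000)
    z = np.zeros(5000)
    for t in range(2, 5000):
        z[t] = 0.5 * z[t - 1] - 0.3 * z[t - 2] + e[t]
    f_hat = ar_spectral_density(z, p=10, freqs=np.linspace(0, np.pi, 256))
    ```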

    Noncausal vector autoregression

    In this paper, we propose a new noncausal vector autoregressive (VAR) model for non-Gaussian time series. The assumption of non-Gaussianity is needed for reasons of identifiability. Assuming that the error distribution belongs to a fairly general class of elliptical distributions, we develop an asymptotic theory of maximum likelihood estimation and statistical inference. We argue that allowing for noncausality is of importance in empirical economic research, which currently uses only conventional causal VAR models. Indeed, if noncausality is incorrectly ignored, the use of a causal VAR model may yield suboptimal forecasts and misleading economic interpretations. This is emphasized in the paper by noting that noncausality is closely related to the notion of nonfundamentalness, under which structural economic shocks cannot be recovered from an estimated causal VAR model. As detecting nonfundamentalness is therefore of great importance, we propose a procedure for discriminating between causality and noncausality that can be seen as a test of nonfundamentalness. The methods are illustrated with applications to fiscal foresight and the term structure of interest rates. Keywords: elliptic distribution; fiscal foresight; maximum likelihood estimation; noncausal; nonfundamentalness; non-Gaussian; term structure of interest rates
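
    A minimal univariate sketch may help fix ideas (the paper treats the vector case with elliptical errors; the parameter values here are illustrative). A purely noncausal AR(1) satisfies y_t = phi * y_{t+1} + eps_t and is simulated by backward recursion; with Gaussian errors its autocovariances coincide with those of a causal AR(1) with the same coefficient, which is why non-Gaussianity is needed for identifiability.

    ```python
    import numpy as np

    # Purely noncausal AR(1): y_t = phi * y_{t+1} + eps_t, simulated backward.
    rng = np.random.default_rng(0)
    phi, n = 0.7, 1000
    eps = rng.standard_t(df=3, size=n)   # non-Gaussian errors (identifiability)
    y = np.zeros(n)
    for t in range(n - 2, -1, -1):       # recurse from the end of the sample
        y[t] = phi * y[t + 1] + eps[t]
    # The terminal condition y[n-1] = 0 is a crude truncation of the infinite
    # forward-looking representation; discard the last observations in practice.
    ```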

    Is Consumption Too Smooth?

    For thirty years, it has been accepted that consumption is smooth because permanent income is smoother than measured income. This paper considers the evidence for the contrary position, that permanent income is in fact less smooth than measured income, so that the smoothness of consumption cannot be straightforwardly explained by permanent income theory. Quarterly first differences of labor income in the United States are well described by an AR(1) with a positive autoregressive parameter. Innovations to such a process are "more than permanent;" there is no deterministic trend to which the series must eventually return, and good or bad fortune in one period can be expected to be at least partially repeated in the next. Changes to permanent income should therefore be greater than the innovations to measured income, and changes in consumption should be more variable than innovations to measured income. In fact, changes in consumption are much less variable than are income innovations. We consider two possible explanations for this paradox, first, that innovations to labor income are in reality much less persistent than appears from an AR(1), and second, that consumers have more information than do econometricians, so that only a fraction of the estimated innovations are actually unexpected by consumers. The univariate time series results are less than decisive, but the balance of the evidence, whether from fitting ARMA models or from examining the spectral density, is more favorable to the view that innovations are persistent than to the opposite view, that there is slow reversion to trend. The information question is taken up within a bivariate model of income and savings that can accommodate the feedback from saving to income that is predicted by the permanent income theory if consumers have superior information. Nevertheless, our results are the same; changes in consumption are typically smaller than those warranted by the change in permanent income. We show that our finding of "excess smoothness" is consistent with the earlier findings of "excess sensitivity" of consumption to income. Our analysis is conducted within a "logarithmic" version of the permanent income hypothesis, a formulation that recognizes that rates of growth of income and saving ratios have greater claim to stationarity than do changes in income and saving flows.
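
    The abstract's central step can be made explicit with the standard permanent-income algebra (the parameter values below are illustrative, not the paper's estimates). If income changes follow the AR(1) of the abstract, the implied consumption change is an amplified income innovation:

    ```latex
    \[
    \Delta y_t = \rho\,\Delta y_{t-1} + \varepsilon_t
    \quad\Longrightarrow\quad
    \Delta c_t = A\!\Bigl(\tfrac{1}{1+r}\Bigr)\varepsilon_t
               = \frac{1+r}{1+r-\rho}\,\varepsilon_t ,
    \]
    where $A(L) = (1-\rho L)^{-1}$ is the moving-average representation of
    $\Delta y_t$ and $r$ is the real interest rate.
    ```

    For rho > 0 the factor exceeds one (for instance rho = 0.4 and r = 0.01 give 1.01/0.61, roughly 1.66), so consumption changes should be more variable than income innovations; the observed smoothness of consumption is the paradox the paper addresses.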