
    Issues Concerning the Approximation Underlying the Spectral Representation Theorem

    In many important textbooks the formal statement of the Spectral Representation Theorem is followed by a process version, usually informal, stating that any stationary stochastic process g is the limit in quadratic mean of a sequence of processes, each consisting of a finite sum of harmonic oscillations with stochastic weights. The natural issues, whether the approximation error is stationary, or whether at least it converges to zero uniformly in t, have not been explicitly addressed in the literature. The paper shows that in all relevant cases, for T unbounded, the process convergence is not uniform in t. Equivalently, when T is unbounded the number of harmonic oscillations necessary to approximate a stationary stochastic process with a preassigned accuracy depends on t. The conclusion is that the process version of the Spectral Representation Theorem should explicitly mention that, in general, the approximation of a stationary stochastic process by a finite sum of harmonic oscillations, given the accuracy, is valid for t belonging to a bounded subset of the real axis (of the set of integers in the discrete-parameter case). Keywords: stochastic processes, stationarity, spectral analysis.
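    To fix ideas, the two statements being contrasted can be sketched as follows (our notation, for a zero-mean weakly stationary process; a schematic reminder, not the paper's formal setup):

```latex
% Formal statement: spectral representation as a stochastic integral,
% with Z an orthogonal-increment process (integration over [-\pi,\pi] in the
% discrete-parameter case, over the real line in the continuous-parameter case).
\[
  x_t \;=\; \int e^{i\lambda t}\, dZ(\lambda).
\]
% "Process version": approximation by a finite sum of harmonic oscillations
% with stochastic weights, converging in quadratic mean,
\[
  x^{(n)}_t \;=\; \sum_{j=1}^{n} e^{i\lambda_j t}\,
      \bigl(Z(\lambda_j) - Z(\lambda_{j-1})\bigr)
  \;\xrightarrow{\ \mathrm{q.m.}\ }\; x_t .
\]
% The paper's point: for a preassigned accuracy, the n required depends on t,
% so the approximation is uniform in t only on bounded subsets of T.
```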

    A Dynamic Factor Analysis of the Response of U.S. Interest Rates to News

    This paper uses a dynamic factor model recently studied by Forni, Hallin, Lippi and Reichlin (2000) to analyze the response of 21 U.S. interest rates to news. Using daily data, we find that the news that affects interest rates daily can be summarized by two common factors. This finding is robust to both the sample period and time aggregation. Each rate has an important idiosyncratic component; however, the relative importance of the idiosyncratic component declines as the frequency of the observations is reduced, and nearly vanishes when rates are observed at the monthly frequency. Using an identification scheme that allows for the fact that when policy actions are unknown to the market the funds rate should respond first to policy actions, we are unable to identify a unique effect of monetary policy in the funds rate at the daily frequency.
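    As a purely illustrative companion to the abstract above, the sketch below extracts common factors from a panel of daily interest-rate changes by ordinary (static) principal components and reports how much variance the first two explain. It is not the generalized dynamic factor estimator used in the paper, and the input file and its layout are hypothetical.

```python
# Static-PCA illustration of "two common factors plus an idiosyncratic component".
# NOT the Forni-Hallin-Lippi-Reichlin (2000) spectral estimator; rates.csv is a
# hypothetical T x 21 panel of daily interest rates.
import numpy as np
import pandas as pd

rates = pd.read_csv("rates.csv", index_col=0, parse_dates=True)
x = rates.diff().dropna()                    # daily changes ("news")
z = (x - x.mean()) / x.std()                 # standardize each series

# Eigen-decomposition of the sample correlation matrix.
eigval, eigvec = np.linalg.eigh(np.corrcoef(z.values, rowvar=False))
order = np.argsort(eigval)[::-1]             # sort eigenvalues descending
eigval, eigvec = eigval[order], eigvec[:, order]

print("share of variance explained by two factors:",
      round(eigval[:2].sum() / eigval.sum(), 3))

# Factor scores and the implied common / idiosyncratic split of each rate.
factors = z.values @ eigvec[:, :2]                               # T x 2
loadings = np.linalg.lstsq(factors, z.values, rcond=None)[0]     # 2 x N
idiosyncratic = z.values - factors @ loadings
```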

    A dynamic factor analysis of the response of U.S. interest rates to news

    This paper uses a dynamic factor model recently studied by Forni, Hallin, Lippi and Reichlin (2000) and Forni, Giannone, Lippi and Reichlin (2004) to analyze the response of 21 U.S. interest rates to news. Using daily data, we find that the news that affects interest rates daily can be summarized by two common factors. This finding is robust to both the sample period and time aggregation. Each rate has an important idiosyncratic component; however, the relative importance of the idiosyncratic component declines as the frequency of the observations is reduced, and nearly vanishes when rates are observed at the monthly frequency. Using an identification scheme that allows for the fact that when policy actions are unknown to the market the funds rate should respond first to policy actions, we are unable to identify a unique effect of monetary policy in the funds rate at the daily frequency. Keywords: interest rates.

    Opening the black box: structural factor models with large cross-sections

    This paper shows how large-dimensional dynamic factor models are suitable for structural analysis. We establish sufficient conditions for identification of the structural shocks and the associated impulse response functions. In particular, we argue that, if the data follow an approximate factor structure, the "problem of fundamentalness", which is intractable in structural VARs, can be solved provided that the impulse responses are sufficiently heterogeneous. Finally, we propose a consistent method (and n, T rates of convergence) to estimate the impulse-response functions, as well as a bootstrapping procedure for statistical inference. JEL Classification: E0, C1. Keywords: dynamic factor models, fundamentalness, identification, structural VARs.
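    Schematically, and in our own notation rather than the paper's, the structure at issue can be written as follows:

```latex
% Each observable is the sum of a common and an idiosyncratic component,
\[
  x_{it} \;=\; \chi_{it} + \xi_{it},
  \qquad
  \chi_{it} \;=\; b_{i1}(L)\,u_{1t} + \cdots + b_{iq}(L)\,u_{qt},
\]
% with a small number q of common shocks u_t loaded through one-sided filters
% b_{ij}(L). Structural shocks are an orthogonal rotation v_t = H u_t, HH' = I,
% so u_t = H' v_t and the common component responds to the structural shocks as
\[
  \chi_{it} \;=\; b_i(L)\,H'\,v_t ,
\]
% leaving H to be pinned down by economic restrictions; sufficient heterogeneity
% of the b_i(L) across i is what makes fundamentalness tractable in this setting.
```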

    Factor models in high-dimensional time series

    High-dimensional time series may well be the most common type of dataset in the so-called "big data" revolution, and have entered current practice in many areas, including meteorology, genomics, chemometrics, connectomics, complex physics simulations, biological and environmental research, finance and econometrics. The analysis of such datasets poses significant challenges, both from a statistical and from a numerical point of view. The most successful procedures so far have been based on dimension reduction techniques and, more particularly, on high-dimensional factor models. Those models have been developed, essentially, within time series econometrics, and deserve to be better known in other areas. In this paper, we provide an original time-domain presentation of the methodological foundations of those models (dynamic factor models usually are described via a spectral approach), contrasting such concepts as commonality and idiosyncrasy, factors and common shocks, dynamic and static principal components. That time-domain approach emphasizes the fact that, contrary to the static factor models favored by practitioners, the so-called general dynamic factor model essentially does not impose any constraints on the data-generating process, but follows from a general representation result.
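    The contrast between static and dynamic principal components mentioned above can be illustrated numerically. The sketch below simulates a panel driven by two common shocks and compares the eigenvalues of the sample covariance matrix with those of a crude smoothed-periodogram estimate of the spectral density matrix; it is a textbook-style illustration under simplifying assumptions, not the estimator of any particular paper.

```python
# Static vs dynamic principal components on simulated data: static eigenvalues
# come from the covariance matrix, dynamic ones from the spectral density matrix
# frequency by frequency (here a crude smoothed periodogram). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T, N, q = 500, 50, 2
u = rng.standard_normal((T, q))                        # q common shocks
x = u @ rng.standard_normal((q, N)) + 0.5 * rng.standard_normal((T, N))

# Static principal components: eigenvalues of the sample covariance matrix.
static_eig = np.sort(np.linalg.eigvalsh(np.cov(x, rowvar=False)))[::-1]

# Dynamic principal components: eigenvalues of the estimated spectral density.
def spectral_eigenvalues(x, n_freq=32, bandwidth=5):
    T, _ = x.shape
    dft = np.fft.rfft(x - x.mean(0), axis=0) / np.sqrt(2 * np.pi * T)
    pgram = np.einsum('fi,fj->fij', dft, dft.conj())   # periodogram matrices
    out = []
    for f in np.linspace(1, dft.shape[0] - 2, n_freq, dtype=int):
        S = pgram[max(0, f - bandwidth): f + bandwidth + 1].mean(0)
        out.append(np.sort(np.linalg.eigvalsh(S))[::-1])
    return np.array(out)

dyn_eig = spectral_eigenvalues(x)
print("top static eigenvalues: ", np.round(static_eig[:4], 2))
print("top dynamic eigenvalues:", np.round(dyn_eig.mean(0)[:4], 3))
```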

    The Generalized Dynamic Factor Model. One-Sided Estimation and Forecasting

    This paper proposes a new forecasting method that exploits information from a large panel of time series. The method is based on the generalized dynamic factor model proposed in Forni, Hallin, Lippi, and Reichlin (2000), and takes advantage of the information on the dynamic covariance structure of the whole panel. We first use our previous method to obtain an estimate of the covariance matrices of the common and idiosyncratic components. The generalized eigenvectors of this pair of matrices are then used to derive a consistent estimate of the optimal forecast. This two-step approach solves the end-of-sample problems caused by two-sided filtering (as in our previous work), while retaining the advantages of an estimator based on dynamic information. The relative merits of our method and the one proposed by Stock and Watson (2002) are discussed. Keywords: dynamic factor models, principal components, time series, large cross-sections, panel data, forecasting.
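    The second step of the two-step procedure described above can be caricatured as follows. The sketch assumes that first-step estimates of the common-component covariance (gamma_chi, with its lag-h counterpart gamma_chi_h) and of the idiosyncratic covariance (gamma_xi) are already available; how they are obtained from the spectral estimator is not shown, and the projection formula is a simplified stand-in for the paper's.

```python
# Caricature of the forecasting step: generalized eigenvectors of the pair
# (gamma_chi, gamma_xi) define generalized principal components, and the target's
# common component h steps ahead is linearly projected on them. All inputs are
# assumed to come from a first-step estimator that is NOT implemented here.
import numpy as np
from scipy.linalg import eigh

def generalized_pc_forecast(x, gamma_chi, gamma_xi, gamma_chi_h, r, target=0):
    """x: T x N standardized panel; gamma_chi / gamma_xi: N x N covariance
    estimates of common and idiosyncratic components; gamma_chi_h: lag-h
    covariance of the common components; r: number of generalized principal
    components; target: index of the series to forecast."""
    # Generalized eigenvectors solving gamma_chi z = mu gamma_xi z; keep the r
    # with the largest generalized eigenvalues.
    _, eigvec = eigh(gamma_chi, gamma_xi)
    Z = eigvec[:, -r:]                              # N x r weight matrix
    F_T = Z.T @ x[-1]                               # generalized PCs at time T
    # Project chi_{target, T+h} on F_T: cov(chi_h, F) var(F)^{-1} F.
    cov_chiF = gamma_chi_h[target] @ Z              # 1 x r
    var_F = Z.T @ (gamma_chi + gamma_xi) @ Z        # r x r
    return cov_chiF @ np.linalg.solve(var_F, F_T)
```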

    New Eurocoin: Tracking Economic Growth in Real Time

    Removal of short-run dynamics from a stationary time series, in order to isolate the medium to long-run component, can be obtained by a band-pass filter. However, band-pass filters are infinite moving averages and can therefore deteriorate at the end of the sample. This is a well-known result in the literature on isolating the business cycle in integrated series; we show that the same problem arises in our application to stationary time series. In this paper we develop a method to obtain smoothing of a stationary time series by using only contemporaneous values of a large dataset, so that no end-of-sample deterioration occurs. Our construction is based on a special version of generalized principal components, which is designed to use leading variables in the dataset as proxies for missing future values in the variable of interest. Our method is applied to the construction of New Eurocoin, an indicator of economic activity for the euro area. New Eurocoin is a real-time estimate of the medium to long-run component of euro-area GDP growth, which performs equally well within and at the end of the sample. As our dataset is monthly and most of the series are updated with a short delay, we are able to produce a monthly, real-time indicator. An assessment of its performance as an approximation of the medium to long-run GDP growth, both in terms of fit and of turning-point signaling, is provided. Keywords: coincident indicator, band-pass filter, large-dataset factor models, generalized principal components.
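    The end-of-sample problem described above is easy to visualize: a symmetric two-sided moving average that isolates the medium to long-run component needs future observations, so its last values are simply missing. The toy example below, on simulated data, shows this; it is not the Eurocoin construction itself.

```python
# A symmetric (two-sided) low-pass filter loses the last k observations because
# it needs future values -- the gap that New Eurocoin fills with contemporaneous
# cross-sectional information. Toy example on simulated data.
import numpy as np

rng = np.random.default_rng(1)
T, k = 240, 12                                       # monthly sample, half-window k
slow = np.sin(np.arange(T) * 2 * np.pi / 60)         # medium/long-run component
y = slow + rng.standard_normal(T)                    # observed growth-like series

weights = np.ones(2 * k + 1) / (2 * k + 1)           # centered moving average
smooth = np.full(T, np.nan)
for t in range(k, T - k):
    smooth[t] = weights @ y[t - k:t + k + 1]

print("end-of-sample points lost by the two-sided filter:",
      int(np.isnan(smooth[T - k:]).sum()))           # -> 12
```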