    Using Subspace Methods for Estimating ARMA Models for Multivariate Time Series with Conditionally Heteroskedastic Innovations

    This paper deals with the estimation of linear dynamic models of the ARMA type for the conditional mean of time series with a conditionally heteroskedastic innovation process, a setting widely used in modelling financial time series. Estimation is performed using subspace methods, which are known to have computational advantages compared to prediction error methods based on criterion minimization. These advantages are especially strong for high-dimensional time series. The subspace methods are shown to provide consistent estimators. Moreover, asymptotic equivalence to prediction error estimators in terms of the asymptotic variance is proved. Order estimation techniques are also proposed and analyzed. The estimators are not efficient, as they do not model the conditional variance. Nevertheless, they can be used to obtain consistent estimates of the innovations. In a second step these estimated residuals can be used to alleviate the problem of specifying the variance model, in particular in the multi-output case. This is demonstrated in an ARCH setting, where it is proved that the estimated innovations can be used in place of the true innovations for testing in a linear least squares context in order to specify the structure of the ARCH model without changing the asymptotic distribution.
    Keywords: multivariate models, conditional heteroskedasticity, ARMA systems, subspace methods
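
    The two-step strategy described above can be illustrated compactly: estimate the conditional-mean ARMA model, keep the residuals, and then specify the ARCH structure from those residuals via a linear least-squares regression. Below is a minimal univariate sketch in Python, using statsmodels' ARIMA (a prediction-error estimator) as a stand-in for the paper's subspace estimator; the simulated process and all numerical values are illustrative assumptions.

# Step 1: estimate the ARMA conditional mean; Step 2: regress squared
# residuals on their lags (ARCH-LM style) to specify the ARCH structure.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Simulate an ARMA(1,1) process driven by ARCH(1) innovations.
T, phi, theta, a0, a1 = 2000, 0.6, 0.3, 0.5, 0.4
e, y, z = np.zeros(T), np.zeros(T), rng.standard_normal(T)
for t in range(1, T):
    sigma2 = a0 + a1 * e[t - 1] ** 2          # conditional variance
    e[t] = np.sqrt(sigma2) * z[t]             # ARCH(1) innovation
    y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]

# Step 1: fit the conditional-mean ARMA(1,1) model and keep its residuals.
resid = ARIMA(y, order=(1, 0, 1)).fit().resid

# Step 2: linear least squares of squared residuals on one lag; a
# significant lag coefficient points to an ARCH(1) structure.
r2 = resid ** 2
X = sm.add_constant(r2[:-1])
print(sm.OLS(r2[1:], X).fit().summary())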

    Forecasting VARMA processes using VAR models and subspace-based state space models

    VAR modelling is a widely used technique in econometrics for linear processes. It offers some desirable features, such as relatively simple procedures for model specification (order selection) and the possibility of obtaining quick, non-iterative maximum likelihood estimates of the system parameters. However, if the process under study follows a finite-order VARMA structure, it cannot be equivalently represented by any finite-order VAR model. On the other hand, a finite-order state space model can represent a finite-order VARMA process exactly, and, for state-space modelling, subspace algorithms allow for quick and non-iterative estimates of the system parameters, as well as for simple specification procedures. Given these facts, we check in this paper whether subspace-based state space models provide better forecasts than VAR models when working with VARMA data generating processes. In a simulation study we generate samples from different VARMA data generating processes, obtain VAR-based and state-space-based models for each generating process, and compare the predictive power of the obtained models. Different specification and estimation algorithms are considered; in particular, within the subspace family, the CCA (Canonical Correlation Analysis) algorithm is the selected option to obtain state-space models. Our results indicate that when the MA parameter of an ARMA process is close to 1, the CCA state space models are likely to provide better forecasts than the AR models. We also conduct a practical comparison (for two cointegrated economic time series) of the predictive power of Johansen restricted-VAR (VEC) models with that of state space models obtained by the CCA subspace algorithm, including a density forecasting analysis.
    Keywords: subspace algorithms; VAR; forecasting; cointegration; Johansen; CCA
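
    As an illustration of the kind of comparison run in the simulation study, the sketch below simulates an ARMA(1,1) process with an MA coefficient close to 1 (0.95), fits a least-squares AR benchmark and a state-space model via a basic CCA-type subspace algorithm, and compares one-step forecast errors. This is a simplified re-implementation in Python/numpy under illustrative choices of horizons and orders, not the paper's exact setup.

# Compare one-step forecasts: least-squares AR(5) vs. a state-space model
# estimated by a basic CCA (canonical correlation) subspace algorithm.
import numpy as np
from numpy.linalg import cholesky, inv, lstsq, svd

rng = np.random.default_rng(1)

# Simulate ARMA(1,1): y_t = 0.5 y_{t-1} + e_t + 0.95 e_{t-1}.
T, phi, theta = 1200, 0.5, 0.95
e, y = rng.standard_normal(T), np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t] + theta * e[t - 1]
train, test = y[:1000], y[1000:]

# AR(p) benchmark fitted by ordinary least squares.
p_ar = 5
Z = np.column_stack([train[p_ar - k - 1:len(train) - k - 1] for k in range(p_ar)])
a = lstsq(Z, train[p_ar:], rcond=None)[0]

# Basic CCA subspace estimate of an innovation-form state-space model.
p, f, n = 10, 10, 2                                    # past/future horizons, order
N = len(train) - p - f + 1
Yp = np.column_stack([train[t - p:t][::-1] for t in range(p, p + N)])  # past stacks
Yf = np.column_stack([train[t:t + f] for t in range(p, p + N)])        # future stacks
Spp, Sff, Sfp = Yp @ Yp.T / N, Yf @ Yf.T / N, Yf @ Yp.T / N
Lp, Lf = cholesky(Spp), cholesky(Sff)
U, s, Vt = svd(inv(Lf) @ Sfp @ inv(Lp).T)              # canonical correlations
J = np.diag(np.sqrt(s[:n])) @ Vt[:n] @ inv(Lp)         # map: past -> state estimate
X = J @ Yp                                             # estimated state sequence
yc = train[p:p + N]
C = lstsq(X.T, yc, rcond=None)[0]                      # y_t ≈ C x_t
eps = yc - C @ X                                       # estimated innovations
AK = lstsq(np.vstack([X[:, :-1], eps[None, :-1]]).T, X[:, 1:].T, rcond=None)[0].T
A, K = AK[:, :n], AK[:, n]                             # x_{t+1} ≈ A x_t + K e_t

# One-step-ahead forecasts over the test segment.
full, T0 = np.concatenate([train, test]), len(train)
x = J @ full[T0 - 1:T0 - p - 1:-1]                     # state at the first test time
err_ar, err_ss = [], []
for t in range(T0, len(full)):
    err_ar.append(full[t] - a @ full[t - 1:t - p_ar - 1:-1])
    innov = full[t] - C @ x                            # state-space forecast error
    err_ss.append(innov)
    x = A @ x + K * innov                              # Kalman predictor update
print("AR  one-step RMSE:", np.sqrt(np.mean(np.square(err_ar))))
print("CCA one-step RMSE:", np.sqrt(np.mean(np.square(err_ss))))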

    A Dynamic Factor Analysis of Financial Contagion in Asia

    In this paper we compared the performance of country-specific and regional indicators of reserve adequacy in predicting, out of sample, the balance of payments crisis affecting the South East Asian region during the 1997-98 period. A dynamic factor method was used to retrieve the reserve adequacy indicators. The empirical findings suggest clear evidence of financial contagion.
    Keywords: financial contagion, dynamic factor model
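
    The dynamic factor idea, extracting a common regional component from several country-level reserve-adequacy indicators and using it as an early-warning signal, can be sketched as follows. This is a hedged illustration on synthetic data, assuming Python with numpy/pandas/statsmodels; the column names, threshold rule and data are hypothetical, not the paper's dataset or specification.

# Extract one common dynamic factor from several country-level indicators
# and flag periods where the regional factor is unusually low.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic stand-in for monthly reserve-adequacy indicators of 4 countries,
# standardized before estimation.
T, n_countries = 120, 4
common = np.cumsum(rng.standard_normal(T)) * 0.3        # latent regional factor
data = pd.DataFrame(
    common[:, None] * rng.uniform(0.5, 1.5, n_countries)
    + rng.standard_normal((T, n_countries)),
    columns=[f"country_{i}" for i in range(n_countries)],
)
data = (data - data.mean()) / data.std()

# One common factor following an AR(1), as in a basic dynamic factor model.
mod = sm.tsa.DynamicFactor(data, k_factors=1, factor_order=1)
res = mod.fit(disp=False)
factor = res.factors.filtered[0]                        # regional indicator

# Illustrative signal: flag periods where the regional factor falls below
# its 10th percentile (a hypothetical threshold rule).
signal = factor < np.quantile(factor, 0.10)
print("flagged periods:", np.flatnonzero(signal))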

    Linear parameter-varying subspace identification: A unified framework

    In this paper, we establish a unified framework for subspace identification (SID) of linear parameter-varying (LPV) systems to estimate LPV state-space (SS) models in innovation form. This framework enables us to derive novel LPV SID schemes that are extensions of existing linear time-invariant (LTI) methods. More specifically, we derive the open-loop, closed-loop, and predictor-based data-equations (an input-output surrogate form of the SS representation) by systematically establishing an LPV subspace identification theory. We show the additional challenges of the LPV setting compared to the LTI case. Based on the data-equations, several methods are proposed to estimate LPV-SS models using a maximum-likelihood or a realization-based argument. Furthermore, the established theoretical framework for the LPV subspace identification problem allows us to lower the number of to-be-estimated parameters and to overcome dimensionality problems of the involved matrices, leading to a decrease in the computational complexity of LPV SIDs in general. To the authors' knowledge, this paper is the first in-depth examination of the LPV subspace identification problem. The effectiveness of the proposed subspace identification methods is demonstrated and compared with existing methods in a Monte Carlo study of identifying a benchmark MIMO LPV system.
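
    To make the model class concrete, the sketch below simulates data from an LPV state-space model in innovation form with affine, static dependence on a scalar scheduling signal, i.e. the kind of representation the proposed SID schemes estimate. All matrices and the scheduling signal are hypothetical values chosen for illustration (Python/numpy).

# Simulate an LPV-SS model in innovation form with affine scheduling
# dependence:  x_{t+1} = A(p_t) x_t + B(p_t) u_t + K(p_t) e_t,
#              y_t     = C(p_t) x_t + e_t,   with A(p_t) = A0 + p_t * A1, etc.
import numpy as np

rng = np.random.default_rng(3)
T, nx = 300, 2

A0 = np.array([[0.6, 0.1], [0.0, 0.4]]); A1 = np.array([[0.2, 0.0], [0.1, 0.1]])
B0 = np.array([[1.0], [0.5]]);           B1 = np.array([[0.0], [0.3]])
C0 = np.array([[1.0, 0.0]]);             C1 = np.array([[0.0, 0.5]])
K0 = np.array([[0.3], [0.1]]);           K1 = np.array([[0.1], [0.0]])

u = rng.standard_normal(T)                        # input signal
p = 0.5 * np.sin(2 * np.pi * np.arange(T) / 50)   # scheduling signal in [-0.5, 0.5]
e = 0.1 * rng.standard_normal(T)                  # innovation sequence

x = np.zeros(nx)
y = np.zeros(T)
for t in range(T):
    A, B = A0 + p[t] * A1, B0 + p[t] * B1
    C, K = C0 + p[t] * C1, K0 + p[t] * K1
    y[t] = (C @ x)[0] + e[t]                      # output with innovation noise
    x = A @ x + (B * u[t] + K * e[t]).ravel()     # LPV innovation-form update

# (y, u, p) would be the data handed to an LPV subspace identification scheme.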

    Non-Parametric Bayesian Methods for Linear System Identification

    Recent contributions have tackled the linear system identification problem by means of non-parametric Bayesian methods, which build on widely adopted machine learning techniques such as Gaussian process regression and kernel-based regularized regression. Following the Bayesian paradigm, these procedures treat the impulse response of the system to be estimated as the realization of a Gaussian process. Typically, a Gaussian prior accounting for stability and smoothness of the impulse response is postulated, as a function of some parameters (called hyper-parameters in the Bayesian framework). These are generally estimated by maximizing the so-called marginal likelihood, i.e. the likelihood after the impulse response has been marginalized out. Once the hyper-parameters have been fixed in this way, the final estimator is computed as the conditional expected value of the impulse response w.r.t. the posterior distribution, which coincides with the minimum variance estimator. Assuming that the identification data are corrupted by Gaussian noise, the above-mentioned estimator coincides with the solution of a regularized estimation problem, in which the regularization term is the l2 norm of the impulse response, weighted by the inverse of the prior covariance function (a.k.a. kernel in the machine learning literature). Recent works have shown how such Bayesian approaches are able to jointly perform estimation and model selection, thus overcoming one of the main issues affecting parametric identification procedures, namely complexity selection. While keeping the classical system identification methods (e.g. Prediction Error Methods and subspace algorithms) as a benchmark for numerical comparison, this thesis extends and analyzes some key aspects of the above-mentioned Bayesian procedure. In particular, four main topics are considered.
    1. PRIOR DESIGN. Adopting Maximum Entropy arguments, a new type of l2 regularization is derived: the aim is to penalize the rank of the block Hankel matrix built with Markov coefficients, thus controlling the complexity of the identified model, measured by its McMillan degree. By accounting for the coupling between different input-output channels, this new prior is particularly suited to the identification of MIMO systems. To speed up the estimation algorithm, a tailored version of the Scaled Gradient Projection algorithm is designed to optimize the marginal likelihood.
    2. CHARACTERIZATION OF UNCERTAINTY. The confidence sets returned by the non-parametric Bayesian identification algorithm are analyzed and compared with those returned by parametric Prediction Error Methods. The comparison is carried out in the impulse response space, by deriving “particle” versions (i.e. Monte-Carlo approximations) of the standard confidence sets.
    3. ONLINE ESTIMATION. The application of the non-parametric Bayesian system identification techniques is extended to an online setting, in which new data become available over time. Specifically, two key modifications of the original “batch” procedure are proposed in order to meet the real-time requirements. In addition, the identification of time-varying systems is tackled by introducing a forgetting factor in the estimation criterion and by treating it as a hyper-parameter.
    4. POST-PROCESSING: MODEL REDUCTION. Non-parametric Bayesian identification procedures estimate the unknown system in terms of its impulse response coefficients, thus returning a model with a high (possibly infinite) McMillan degree. A tailored procedure is proposed to reduce such a model to one of lower degree, which appears more suitable for filtering and control applications. Different criteria for selecting the order of the reduced model are evaluated and compared.
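
    A minimal sketch of the core estimator discussed above: a Gaussian (TC-kernel) prior on the impulse response, hyper-parameters chosen by maximizing the marginal likelihood, and the final estimate given by the posterior mean, i.e. regularized least squares. The simulated system, hyper-parameter grid and noise level are illustrative assumptions in Python/numpy, not the thesis' experimental setup.

# Kernel-based regularized impulse response estimation with a TC kernel
# prior and hyper-parameters selected by (grid-searched) marginal likelihood.
import numpy as np

rng = np.random.default_rng(4)

# Simulate data from a stable FIR-like "true" system (hypothetical example).
T, m = 400, 50                                   # samples, impulse response length
g_true = 0.8 ** np.arange(m) * np.sin(0.9 * np.arange(m))
u = rng.standard_normal(T)
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(m)]
                for t in range(T)])              # regression matrix of past inputs
y = Phi @ g_true + 0.1 * rng.standard_normal(T)

def neg_marginal_loglik(c, alpha, sigma2):
    """-log p(y) with prior g ~ N(0, P), y = Phi g + v, v ~ N(0, sigma2 I)."""
    i = np.arange(m)
    P = c * alpha ** np.maximum.outer(i, i)      # TC kernel: c * alpha^max(i,j)
    S = Phi @ P @ Phi.T + sigma2 * np.eye(T)     # marginal covariance of y
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + y @ np.linalg.solve(S, y))

# Crude grid search over hyper-parameters (the thesis optimizes the marginal
# likelihood; a grid keeps the sketch short).
grid = [(c, a, s2) for c in (0.1, 1.0, 10.0)
                   for a in (0.7, 0.8, 0.9, 0.95)
                   for s2 in (0.005, 0.01, 0.05)]
c, alpha, sigma2 = min(grid, key=lambda h: neg_marginal_loglik(*h))

# Regularized LS estimate = posterior mean of the impulse response.
i = np.arange(m)
P = c * alpha ** np.maximum.outer(i, i)
g_hat = P @ Phi.T @ np.linalg.solve(Phi @ P @ Phi.T + sigma2 * np.eye(T), y)
print("relative fit error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))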