
    Some Results on the Identification and Estimation of Vector ARMAX Processes

    This paper addresses the problem of identifying echelon canonical forms for a vector autoregressive moving average model with exogenous variables using finite algorithms. For given values of the Kronecker indices, a method for estimating the structural parameters of the model using ordinary least squares calculations is presented. These procedures give rise, rather naturally, to a technique for determining the structural indices based on conventional model selection criteria. A detailed analysis of the statistical properties of the estimation and identification procedures is given, modifications designed to improve the performance of the methods are presented, and some discussion of the practical significance of the results obtained is also provided.

    Keywords: ARMAX model, consistency, echelon canonical form, efficiency, estimation, identification, Kronecker invariants, least squares, selection criterion, structure determination, subspace algorithm.
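
    As a rough illustration of how least squares estimation and a model selection criterion can be combined for structure determination, the following sketch scans candidate orders of a simplified VARMA model (all Kronecker indices set equal, so the echelon form collapses to a standard VARMA(p, p)) using a two-stage, Hannan-Rissanen-style regression: a long VAR supplies proxy innovations, then OLS plus BIC picks the order. The function names, the equal-index simplification, and the choice of BIC are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def long_var_residuals(y, h):
    """Stage 1 (assumed Hannan-Rissanen-style): fit a long VAR(h) by OLS
    and return its residuals as proxies for the unobserved innovations."""
    T, k = y.shape
    X = np.hstack([y[h - j - 1:T - j - 1] for j in range(h)])
    B, *_ = np.linalg.lstsq(X, y[h:], rcond=None)
    return y[h:] - X @ B

def varma_ols_bic(y, e, p):
    """Stage 2: regress y_t on p lags of y and p lags of the proxy
    innovations e (all Kronecker indices equal to p); return the BIC."""
    Te = e.shape[0]
    ya = y[-Te:]                                   # align y with e
    X = np.hstack([ya[p - j - 1:Te - j - 1] for j in range(p)]
                  + [e[p - j - 1:Te - j - 1] for j in range(p)])
    Y = ya[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    U = Y - X @ B
    n = Y.shape[0]
    Sigma = U.T @ U / n
    return n * np.log(np.linalg.det(Sigma)) + B.size * np.log(n)

# toy usage: simulate a bivariate VAR(1), then scan p = 1..4
rng = np.random.default_rng(0)
T, k = 400, 2
A = np.array([[0.5, 0.1], [0.0, 0.4]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + rng.standard_normal(k)
e = long_var_residuals(y, h=int(T ** (1 / 3)))
print("selected p:", min(range(1, 5), key=lambda p: varma_ols_bic(y, e, p)))
```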

    Vector Autoregressive Moving Average Identification for Macroeconomic Modeling: Algorithms and Theory

    This paper develops a new methodology for identifying the structure of VARMA time series models. The analysis proceeds by examining the echelon canonical form and presents a fully automatic, data-driven approach to model specification using a new technique to determine the Kronecker invariants. A novel feature of the inferential procedures developed here is that they work in terms of a canonical scalar ARMAX representation in which the exogenous regressors are given by predetermined contemporaneous and lagged values of other variables in the VARMA system. This feature facilitates the construction of algorithms which, from the perspective of macroeconomic modeling, are efficacious in that they do not use AR approximations at any stage. Algorithms that are applicable to both asymptotically stationary and unit-root, partially nonstationary (cointegrated) time series models are presented. A sequence of lemmas and theorems shows that the algorithms are based on calculations that yield strongly consistent estimates.

    Keywords: Algorithms, asymptotically stationary and cointegrated time series, echelon canonical form.
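
    To make the scalar ARMAX idea concrete, here is a minimal, hypothetical sketch of the kind of single-equation regression the abstract describes: equation i of a k-variate system is estimated by OLS on its own lags, on contemporaneous and lagged values of the other variables, and on lagged proxy innovations (which in practice would come from a preliminary step rather than being handed in directly). All names, the inclusion of every contemporaneous regressor, and the alignment conventions are illustrative assumptions.

```python
import numpy as np

def scalar_armax_ols(y, e, i, p):
    """Estimate equation i of a k-variate system as a scalar ARMAX(p) by
    OLS.  The other variables enter as contemporaneous and lagged
    'exogenous' regressors; e holds proxy innovations aligned with y.
    Returns the coefficient vector and the residuals."""
    T, k = y.shape
    others = [j for j in range(k) if j != i]
    cols = []
    # own lags y_{i,t-1}, ..., y_{i,t-p}
    cols += [y[p - l - 1:T - l - 1, i] for l in range(p)]
    # contemporaneous values of the other variables
    cols += [y[p:T, j] for j in others]
    # lagged values of the other variables
    cols += [y[p - l - 1:T - l - 1, j] for l in range(p) for j in others]
    # lagged proxy innovations of equation i
    cols += [e[p - l - 1:T - l - 1, i] for l in range(p)]
    X = np.column_stack(cols)
    Y = y[p:, i]
    b, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return b, Y - X @ b

# usage with placeholder data and stand-in proxy innovations
rng = np.random.default_rng(1)
y = rng.standard_normal((300, 3))
e = rng.standard_normal((300, 3))
b, u = scalar_armax_ols(y, e, i=0, p=2)
print(b.shape, u.shape)
```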

    Properties of the Sieve Bootstrap for Fractionally Integrated and Non-Invertible Processes

    In this paper we will investigate the consequences of applying the sieve bootstrap under regularity conditions that are sufficiently general to encompass both fractionally integrated and non-invertible processes. The sieve bootstrap is obtained by approximating the data generating process by an autoregression whose order h increases with the sample size T. The sieve bootstrap may be particularly useful in the analysis of fractionally integrated processes since the statistics of interest can often be non-pivotal with distributions that depend on the fractional index d. The validity of the sieve bootstrap is established and it is shown that when the sieve bootstrap is used to approximate the distribution of a general class of statistics admitting an Edgeworth expansion then the error rate achieved is of order $O(T^{\beta+d-1})$, for any $\beta > 0$. Practical implementation of the sieve bootstrap is considered and the results are illustrated using a canonical example.

    Keywords: Autoregressive approximation, fractional process, non-invertibility, rate of convergence, sieve bootstrap.
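
    The sieve bootstrap itself is easy to state as an algorithm: fit an AR(h) with h growing with T, resample the centred residuals, and rebuild series recursively. The sketch below is a generic implementation under those assumptions; the rate h = T^(1/3) and the statistic are placeholders, and nothing here handles the fractional or non-invertible refinements the paper studies.

```python
import numpy as np

def sieve_bootstrap(x, stat, B=499, h=None, rng=None):
    """Generic sieve bootstrap sketch: approximate the DGP by an AR(h),
    resample centred residuals i.i.d., rebuild series recursively, and
    return B bootstrap replicates of stat."""
    rng = rng or np.random.default_rng()
    T = len(x)
    h = h or max(1, int(round(T ** (1 / 3))))      # placeholder rate
    X = np.column_stack([x[h - j - 1:T - j - 1] for j in range(h)])
    y = x[h:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)    # AR(h) by OLS
    res = y - X @ phi
    res -= res.mean()                              # centre the residuals
    reps = np.empty(B)
    for b in range(B):
        eps = rng.choice(res, size=T + h, replace=True)
        xb = np.zeros(T + h)
        for t in range(h, T + h):                  # recursive rebuild
            xb[t] = xb[t - h:t][::-1] @ phi + eps[t]
        reps[b] = stat(xb[h:])                     # drop the burn-in
    return reps

# usage: percentile interval for the sample mean of an AR-type series
rng = np.random.default_rng(1)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
reps = sieve_bootstrap(x, stat=np.mean, B=199, rng=rng)
print(np.percentile(reps, [2.5, 97.5]))
```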

    On The Identification and Estimation of Partially Nonstationary ARMAX Systems

    This paper extends current theory on the identification and estimation of vector time series models to nonstationary processes. It examines the structure of dynamic simultaneous equations systems or ARMAX processes that start from a given set of initial conditions and evolve over a given, possibly infinite, future time horizon. The analysis proceeds by deriving the echelon canonical form for such processes. The results are obtained by amalgamating ideas from the theory of stochastic difference equations with adaptations of the Kronecker index theory of dynamic systems. An extension of these results to the analysis of unit-root, partially nonstationary (cointegrated) time series models is also presented, leading to straightforward identification conditions for the error correction, echelon canonical form. An innovations algorithm for the evaluation of the exact Gaussian likelihood is given and the asymptotic properties of the approximate Gaussian estimator and the exact maximum likelihood estimator based upon the algorithm are derived. Examples illustrating the theory are discussed and some experimental evidence is also presented.

    Keywords: ARMAX, partially nonstationary, Kronecker index theory, identification.
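
    For readers unfamiliar with innovations algorithms, the following sketch shows the standard innovations recursion (in the form given by Brockwell and Davis) evaluating the exact Gaussian likelihood of a zero-mean stationary series from its autocovariances. The ARMAX algorithm in the paper is more elaborate, so treat this purely as an illustration of the device.

```python
import numpy as np

def innovations_loglik(x, gamma):
    """Innovations recursion: exact Gaussian log-likelihood of x given
    the autocovariances gamma[0..T-1] of a zero-mean stationary process.
    The recursion delivers the one-step prediction errors and their
    variances directly, so no matrix inversion is needed."""
    T = len(x)
    theta = np.zeros((T, T))
    v = np.zeros(T)
    xhat = np.zeros(T)            # xhat[n] = one-step predictor of x[n]
    v[0] = gamma[0]
    ll = -0.5 * (np.log(2 * np.pi * v[0]) + x[0] ** 2 / v[0])
    for n in range(1, T):
        for k in range(n):
            s = sum(theta[k, k - j] * theta[n, n - j] * v[j] for j in range(k))
            theta[n, n - k] = (gamma[n - k] - s) / v[k]
        v[n] = gamma[0] - sum(theta[n, n - j] ** 2 * v[j] for j in range(n))
        xhat[n] = sum(theta[n, j] * (x[n - j] - xhat[n - j])
                      for j in range(1, n + 1))
        ll += -0.5 * (np.log(2 * np.pi * v[n]) + (x[n] - xhat[n]) ** 2 / v[n])
    return ll

# usage: exact likelihood of an AR(1) sample with unit innovation variance
rng = np.random.default_rng(0)
phi, T = 0.6, 100
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.standard_normal()
gamma = phi ** np.arange(T) / (1 - phi ** 2)     # AR(1) autocovariances
print(innovations_loglik(x, gamma))
```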

    Estimating Components in Finite Mixtures and Hidden Markov Models

    When the unobservable Markov chain in a hidden Markov model is stationary, the marginal distribution of the observations is a finite mixture with the number of terms equal to the number of states of the Markov chain. This suggests estimating the number of states of the unobservable Markov chain by determining the number of mixture components in the marginal distribution. We therefore present new methods for estimating the number of states in a hidden Markov model, and coincidentally the unknown number of components in a finite mixture, based on penalized quasi-likelihood and generalized quasi-likelihood ratio methods constructed from the marginal distribution. The procedures advocated are simple to calculate, and results obtained in empirical applications indicate that they are as effective as currently available methods based on the full likelihood. We show that, under fairly general regularity conditions, the methods proposed will generate strongly consistent estimates of the unknown number of states or components.

    Keywords: Finite mixture, hidden Markov process, model selection, number of states, penalized quasi-likelihood, generalized quasi-likelihood ratio, strong consistency.
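
    A minimal sketch of the general recipe follows: fit mixtures with an increasing number of components to the marginal distribution and pick the count minimising a penalized criterion. Note the hedge: scikit-learn's GaussianMixture maximises the ordinary full mixture likelihood by EM, so it stands in for the paper's quasi-likelihood, and the penalty constant c is a free illustrative choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_n_components(x, k_max=6, c=1.0):
    """Penalized-criterion sketch: fit mixtures with k = 1..k_max and
    return the k minimising -2*loglik + c * n_par * log(n).  An ordinary
    Gaussian-mixture likelihood stands in for the quasi-likelihood."""
    x = x.reshape(-1, 1)
    n = len(x)
    best_k, best_crit = None, np.inf
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
        n_par = 3 * k - 1                  # k means, k variances, k-1 weights
        crit = -2 * gm.score(x) * n + c * n_par * np.log(n)
        if crit < best_crit:
            best_k, best_crit = k, crit
    return best_k

# usage: a two-state chain induces a two-component marginal mixture
rng = np.random.default_rng(0)
states = (rng.random(500) < 0.4).astype(int)   # i.i.d. stand-in for the chain
x = np.where(states == 1, 2.5, 0.0) + rng.standard_normal(500)
print(select_n_components(x))                  # expect 2
```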

    Assessing Instrumental Variable Relevance: An Alternative Measure and Some Exact Finite Sample Theory

    Existing measures of instrumental variable relevance focus on the ability of the instrument set to predict a single endogenous regressor, even if there is more than one endogenous regressor in the equation of interest. We propose new measures of instrument relevance in the presence of multiple endogenous regressors, taking both univariate and multivariate perspectives, and develop the accompanying exact finite sample distribution theory in each case. In passing, the paper also explores relationships that exist between the measures proposed here and other statistics that have been proposed elsewhere in the literature. These explorations highlight the close connection between notions of instrument relevance, identification and specification testing in simultaneous equations models.

    Keywords: Instrumental variables, weak instruments, relevance, alienation, Wilks’ Lambda.
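
    As one concrete multivariate take on relevance, the sketch below computes Wilks' Lambda from the first-stage multivariate regression of the endogenous regressors on the instruments. The abstract does not pin down whether this is exactly the measure proposed in the paper, so read it as a generic illustration of the idea; all names are illustrative.

```python
import numpy as np

def wilks_lambda(Y, Z, X=None):
    """Wilks' Lambda for the first-stage regression of the endogenous
    regressors Y on the instruments Z, after partialling out any
    included exogenous variables X.  Lambda near 1 signals jointly
    irrelevant (weak) instruments; Lambda near 0, strong relevance."""
    def resid(A, B):
        # residuals of each column of A after OLS on B
        coef, *_ = np.linalg.lstsq(B, A, rcond=None)
        return A - B @ coef
    if X is not None:
        Y, Z = resid(Y, X), resid(Z, X)
    E = resid(Y, Z)                   # first-stage residual matrix
    return np.linalg.det(E.T @ E) / np.linalg.det(Y.T @ Y)

# usage: two endogenous regressors, four instruments, modest relevance
rng = np.random.default_rng(0)
n = 200
Z = rng.standard_normal((n, 4))
Pi = np.array([[0.5, 0.0], [0.2, 0.3], [0.0, 0.0], [0.0, 0.0]])
Y = Z @ Pi + rng.standard_normal((n, 2))
X = np.ones((n, 1))                   # just a constant here
print(wilks_lambda(Y, Z, X))          # noticeably below 1 => relevant
```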

    Dual P-Values, Evidential Tension and Balanced Tests

    In the classical approach to statistical hypothesis testing the roles of the null hypothesis H0 and the alternative H1 are very asymmetric. Power, calculated from the distribution of the test statistic under H1, is treated as a theoretical construct that can be used to guide the choice of an appropriate test statistic or sample size, but power calculations do not explicitly enter the testing process in practice. In a significance test a decision to accept or reject H0 is driven solely by an examination of the strength of evidence against H0, summarized in the P-value calculated from the distribution of the test statistic under H0. A small P-value is taken to represent strong evidence against H0, but it need not necessarily indicate strong evidence in favour of H1. More recently, Moerkerke et al. (2006) have suggested that the special status of H0 is often unwarranted or inappropriate, and argue that evidence against H1 can be equally meaningful. They propose a balanced treatment of both H0 and H1 in which the classical P-value is supplemented by the P-value derived under H1. The alternative P-value is the dual of the null P-value and summarizes the evidence against a target alternative. Here we review how the dual P-values are used to assess the evidential tension between H0 and H1, and use decision theoretic arguments to explore a balanced hypothesis testing technique that exploits this evidential tension. The operating characteristics of balanced hypothesis tests are outlined and their relationship to conventional notions of optimal tests is laid bare. The use of balanced hypothesis tests as a conceptual tool is illustrated via model selection in linear regression, and their practical implementation is demonstrated by application to the detection of cancer-specific protein markers in mass spectroscopy.

    Keywords: Balanced test, P-value, dual P-values, evidential tension, null hypothesis, alternative hypothesis, operating characteristics, false detection rate.
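
    The dual P-value construction is simple to demonstrate in the one-sided Gaussian case: compute the usual P-value under H0 and its mirror image under a target alternative, then let a balanced rule side with whichever hypothesis the data contradict least. The point alternative, the equal-weight decision rule, and all names below are illustrative assumptions rather than the procedure of Moerkerke et al.

```python
import numpy as np
from scipy.stats import norm

def dual_p_values(t, mu0=0.0, mu1=1.0, se=1.0):
    """Dual P-values for H0: mu = mu0 vs a target H1: mu = mu1 (> mu0),
    given a statistic t with standard error se.  p0 measures evidence
    against H0, p1 evidence against the alternative."""
    p0 = norm.sf((t - mu0) / se)      # P(T >= t | H0)
    p1 = norm.cdf((t - mu1) / se)     # P(T <= t | H1)
    return p0, p1

def balanced_decision(t, mu0=0.0, mu1=1.0, se=1.0):
    """Toy balanced rule: choose the hypothesis the data contradict
    least, i.e. reject H0 exactly when p0 < p1."""
    p0, p1 = dual_p_values(t, mu0, mu1, se)
    return ("reject H0" if p0 < p1 else "retain H0"), p0, p1

print(balanced_decision(t=0.3))       # closer to H0, so retain it
print(balanced_decision(t=2.5))       # clear evidence against H0 only
print(dual_p_values(t=2.5, mu1=5.0))  # both small: evidential tension
```

    When both dual P-values are small, as in the last call, the data sit uncomfortably with both hypotheses; that is the evidential tension the balanced procedure is designed to exploit.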

    Small Concentration Asymptotics and Instrumental Variables Inference

    Poskitt and Skeels (2005) provide a new approximation to the sampling distribution of the IV estimator in a simultaneous equations model; the approximation is appropriate when the concentration parameter associated with the reduced form model is small. We present approximations to the sampling distributions of various functions of the IV estimator based upon small-concentration asymptotics, and investigate hypothesis testing procedures and confidence region construction using these approximations. We explore the relationship between our work and the K statistic of Kleibergen (2002) and demonstrate that our results can be used to explain the sampling behaviour of the K statistic in simultaneous equations models where identification is weak.

    Keywords: simultaneous equations model, IV estimator, weak identification, weak instruments, small-concentration asymptotics.
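
    Since the approximations are indexed by the concentration parameter, it may help to see how that quantity is estimated in the single-endogenous-regressor case. The function below is an illustrative sketch (the names and the degrees-of-freedom convention are assumptions); small values place the model in the weak-identification regime that small-concentration asymptotics targets.

```python
import numpy as np

def concentration_parameter(y2, Z):
    """Estimated concentration parameter pi' Z'Z pi / s_vv for a single
    endogenous regressor y2 with instrument matrix Z (any constant or
    exogenous variables assumed already partialled out)."""
    pi, *_ = np.linalg.lstsq(Z, y2, rcond=None)   # first-stage OLS
    v = y2 - Z @ pi
    s_vv = v @ v / (len(y2) - Z.shape[1])         # residual variance
    return pi @ Z.T @ Z @ pi / s_vv

# usage: a deliberately weak first stage gives a small value
rng = np.random.default_rng(0)
n = 200
Z = rng.standard_normal((n, 3))
y2 = Z @ np.array([0.05, 0.0, 0.0]) + rng.standard_normal(n)
print(concentration_parameter(y2, Z))            # small => weak instruments
```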

    Description Length and Dimensionality Reduction in Functional Data Analysis

    In this paper we investigate the use of description length principles to select an appropriate number of basis functions for functional data. We provide a flexible definition of the dimension of a random function that is constructed directly from the Karhunen-Loève expansion of the observed process. Our results show that although the classical principal component variance decomposition technique will behave in a coherent manner, in general the dimension chosen by this technique will not be consistent. We describe two description length criteria and prove that they are consistent and that, in low noise settings, they will identify the true finite dimension of a signal that is embedded in noise. Two examples, one from mass spectroscopy and one from climatology, are used to illustrate our ideas. We also explore the application of different forms of the bootstrap for functional data and use these to demonstrate the workings of our theoretical results.

    Keywords: Bootstrap, consistency, dimension determination, Karhunen-Loève expansion, signal-to-noise ratio, variance decomposition.
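
    To illustrate the contrast between a variance-explained cutoff and a description-length criterion, the sketch below extracts empirical Karhunen-Loève eigenvalues and applies the classical Wax-Kailath MDL rule for the number of signal eigenvalues. The paper's own criteria are different, so this is only a stand-in showing the general mechanics.

```python
import numpy as np

def kl_eigenvalues(curves):
    """Empirical Karhunen-Loeve step: eigenvalues of the sample
    covariance of discretised curves (rows = curves, cols = grid)."""
    C = curves - curves.mean(axis=0)
    return np.sort(np.linalg.eigvalsh(C.T @ C / len(curves)))[::-1]

def mdl_dimension(eigvals, n):
    """Wax-Kailath MDL rule (a stand-in, not the paper's criteria):
    choose the q that best trades the fit of a q-dimensional
    signal-plus-noise model against a description-length penalty."""
    p = len(eigvals)
    crit = []
    for q in range(p):
        tail = eigvals[q:]
        g = np.exp(np.mean(np.log(tail)))        # geometric mean
        a = tail.mean()                          # arithmetic mean
        crit.append(-n * (p - q) * np.log(g / a)
                    + 0.5 * q * (2 * p - q) * np.log(n))
    return int(np.argmin(crit))

# usage: 3-dimensional signal embedded in noise on a 20-point grid
rng = np.random.default_rng(0)
n, p, q_true = 200, 20, 3
grid = np.linspace(0, 1, p)
basis = np.column_stack([np.sin((j + 1) * np.pi * grid) for j in range(q_true)])
scores = rng.standard_normal((n, q_true)) * np.array([3.0, 2.0, 1.0])
curves = scores @ basis.T + 0.2 * rng.standard_normal((n, p))
lam = kl_eigenvalues(curves)
print("MDL dimension:", mdl_dimension(lam, n))
pct = np.cumsum(lam) / lam.sum()                 # variance-explained rule
print("95% variance rule:", int(np.searchsorted(pct, 0.95)) + 1)
```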