
    A solution to the problem of too many instruments in dynamic panel data GMM

    The well-known problem of too many instruments in dynamic panel data GMM is dealt with in detail in Roodman (2009, Oxford Bull. Econ. Statist.). The present paper goes one step further by providing a solution to this problem: factorisation of the standard instrument set is shown to be a valid transformation for ensuring consistency of GMM. Monte Carlo simulations show that this new estimation technique outperforms other possible transformations by having a lower bias and RMSE as well as greater robustness of overidentifying restrictions. The researcher's choice of a particular transformation can be replaced by a data-driven statistical decision.
    Keywords: dynamic panel data, generalised method of moments, instrument proliferation, factor analysis
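    The abstract does not reproduce the factorisation itself; the sketch below shows one natural reading, in which the proliferated instrument matrix is compressed to its leading principal components before a linear one-step GMM fit. The data-generating process, dimensions and the 90% variance cut-off are hypothetical illustrations, not the paper's specification.

```python
# Minimal sketch: compress an instrument matrix Z to principal components,
# then run linear one-step GMM (2SLS) on the reduced instrument set.
# Everything here is simulated and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 500, 2, 40          # observations, regressors, raw instruments
Z = rng.normal(size=(n, m))   # stand-in for a proliferated instrument set
X = Z[:, :k] + rng.normal(scale=0.5, size=(n, k))   # illustrative regressors
y = X @ np.array([1.0, -0.5]) + rng.normal(size=n)  # outcome

# Factorise Z: keep the leading principal components only.
Zc = Z - Z.mean(axis=0)
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
cum = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(cum, 0.9)) + 1   # smallest r explaining ~90% of variance
F = Zc @ Vt[:r].T                        # reduced instrument set of r components

# One-step linear GMM with weighting (F'F)^{-1}, i.e. 2SLS on F.
W = np.linalg.inv(F.T @ F)
beta = np.linalg.solve(X.T @ F @ W @ F.T @ X, X.T @ F @ W @ F.T @ y)
print("GMM estimate with factorised instruments:", beta)
```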

    Findings of the Signal Approach for Financial Monitoring in Kazakhstan

    This study concentrates on the signal approach for Kazakhstan. It focuses on the properties of individual indicators prior to observed currency crises. The indicators are used to build composite indicators. An advanced approach uses principal components analysis for the construction of composite indicators. Furthermore, the common signal approach is improved by robust statistical methods. The estimation period spans 1997 to 2007. It is shown that most of the composite indicators are able to flag the reported crises at an early stage. In a second step it is checked whether the most recent crisis, in 2009, is signalled in advance.
    Keywords: currency crises, leading economic indicators, signal approach, Kazakhstan
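    As a rough illustration of the principal-components variant of the signal approach, the sketch below builds a composite indicator from standardised individual indicators and flags months in which the composite crosses a percentile threshold. The series, the number of indicators and the 90th-percentile cut-off are assumptions for the example, not the study's calibration.

```python
# Illustrative sketch: composite crisis indicator from the first principal
# component of standardised leading indicators, with a threshold-based signal.
import numpy as np

rng = np.random.default_rng(1)
T, p = 132, 6                        # monthly observations, individual indicators
X = rng.normal(size=(T, p))          # stand-in for leading-indicator series

Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # standardise each indicator
eigval, eigvec = np.linalg.eigh(np.cov(Xs, rowvar=False))
composite = Xs @ eigvec[:, -1]                # first principal component

threshold = np.quantile(composite, 0.90)      # e.g. 90th-percentile cut-off
signals = composite > threshold               # months flagged as warnings
print("Months signalling a potential crisis:", np.where(signals)[0])
```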

    Accounting for Calibration Uncertainties in X-ray Analysis: Effective Areas in Spectral Fitting

    While considerable advances have been made in accounting for statistical uncertainties in astronomical analyses, systematic instrumental uncertainties have generally been ignored. This can be crucial to a proper interpretation of analysis results because instrumental calibration uncertainty is a form of systematic uncertainty: ignoring it can underestimate error bars and introduce bias into the fitted values of model parameters. Accounting for such uncertainties currently requires extensive case-specific simulations if using existing analysis packages. Here we present general statistical methods that incorporate calibration uncertainties into spectral analysis of high-energy data. We first present a method based on multiple imputation that can be applied with any fitting method, but is necessarily approximate. We then describe a more exact Bayesian approach that works in conjunction with Markov chain Monte Carlo based fitting. We explore methods for improving computational efficiency, and in particular detail a method of summarizing calibration uncertainties with a principal component analysis of samples of plausible calibration files. This method is implemented using recently codified Chandra effective area uncertainties for low-resolution spectral analysis and is verified using both simulated and actual Chandra data. Our procedure for incorporating effective area uncertainty is easily generalized to other types of calibration uncertainties.
    Comment: 61 pages double spaced, 8 figures, accepted for publication in Ap
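    The core computational idea named in the abstract, summarising a library of plausible calibration curves with a principal component expansion so that new plausible curves can be drawn cheaply inside a fit, can be sketched as follows. The simulated effective-area library, the number of retained components and all names are assumptions for illustration, not the authors' implementation.

```python
# Sketch: PCA summary of a library of plausible effective-area curves A_j(E),
# then draw a new plausible curve as mean + random combination of components.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_energy = 1000, 300
base = np.linspace(1.0, 0.5, n_energy)             # nominal effective-area shape
library = base + 0.05 * rng.normal(size=(n_samples, 1)) * np.sin(
    np.linspace(0, 3 * np.pi, n_energy))           # simulated calibration samples

mean_curve = library.mean(axis=0)
U, s, Vt = np.linalg.svd(library - mean_curve, full_matrices=False)
k = 3                                    # keep a few leading components
components = Vt[:k]                      # principal "uncertainty modes"
scales = s[:k] / np.sqrt(n_samples - 1)  # standard deviations of the scores

# Draw a new plausible effective-area curve.
draw = mean_curve + (rng.normal(size=k) * scales) @ components
print("Mean absolute deviation of the draw from the nominal curve:",
      np.mean(np.abs(draw - mean_curve)))
```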

    Financial Applications of Random Matrix Theory: a short review

    We discuss the applications of Random Matrix Theory in the context of financial markets and econometric models, a topic to which a considerable number of papers have been devoted over the last decade. This mini-review is intended to guide the reader through various theoretical results (the Marcenko-Pastur spectrum and its various generalisations, random SVD, free matrices, largest eigenvalue statistics, etc.) as well as some concrete applications to portfolio optimisation and out-of-sample risk estimation.
    Comment: To appear in the "Handbook on Random Matrix Theory", Oxford University Press
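    One standard application covered by such reviews is eigenvalue "clipping": comparing the spectrum of an empirical correlation matrix with the Marcenko-Pastur bulk and flattening the noisy part before portfolio use. The sketch below illustrates that idea on simulated returns; it is not code from the review itself.

```python
# Sketch: Marcenko-Pastur edge for an empirical correlation matrix and a
# simple eigenvalue-clipping cleaner. Returns are simulated.
import numpy as np

rng = np.random.default_rng(3)
T, N = 1000, 100                       # observations, assets
R = rng.normal(size=(T, N))            # stand-in for standardised returns
C = np.corrcoef(R, rowvar=False)       # empirical correlation matrix

q = N / T
lambda_max = (1 + np.sqrt(q)) ** 2     # upper edge of the Marcenko-Pastur bulk

eigval, eigvec = np.linalg.eigh(C)
signal = eigval > lambda_max           # eigenvalues sticking out of the bulk
cleaned_vals = eigval.copy()
cleaned_vals[~signal] = eigval[~signal].mean()   # flatten the noisy bulk
C_clean = eigvec @ np.diag(cleaned_vals) @ eigvec.T
print(f"{signal.sum()} eigenvalues above the MP edge {lambda_max:.2f}")
```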

    Set-based Tests for Genetic Association and Gene-Environment Interaction.

    In the first project, we propose a new statistical model based on random field theory, referred to as the genetic random field model (GenRF), for gene/region-based association analysis in a cross-sectional study. Using a pseudo-likelihood approach, a GenRF test for the joint association of multiple genetic variants is developed, which has the following advantages: (1) it accommodates complex interactions for improved performance; (2) it achieves natural dimension reduction; (3) it boosts power in the presence of linkage disequilibrium (LD); and (4) it is computationally efficient. Simulation studies are conducted under various scenarios. Compared with the sequence kernel association test (SKAT), as well as other more standard methods, GenRF shows overall comparable performance and better performance in the presence of complex interactions. In the second project, we propose a longitudinal genetic random field model (LGRF) to test the association between a phenotype measured repeatedly during the course of an observational study and a set of genetic variants. Generalized score-type tests are developed, which we show are robust to misspecification of the within-subject correlation. In addition, a joint test incorporating gene-time interaction is further proposed. Computational advances are made for scalable implementation of the proposed methods in large-scale genome-wide association studies (GWAS). In the third project, we propose a generalized score-type test for set-based inference for gene-environment interaction with longitudinally measured quantitative traits. The test is robust to misspecification of the within-subject correlation structure and has enhanced power compared to existing alternatives. Unlike tests for marginal genetic association, set-based tests for gene-environment interaction face the challenges of a potentially misspecified and high-dimensional main-effect model under the null hypothesis. We show that our proposed test is robust to misspecification of the main effects of environmental exposure and genetic factors under the gene-environment independence condition. When genetic and environmental factors are dependent, the method of sieves is further proposed to eliminate potential bias due to a misspecified main effect of a continuous environmental exposure. A weighted principal component analysis approach is developed to perform dimension reduction when the number of genetic variants in the set is large relative to the sample size.
    PhD, Biostatistics, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/133412/1/zihuai_1.pd
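    To make the set-based testing idea concrete, the sketch below implements a toy score-type statistic in the spirit of SKAT (one of the comparators named above), with a permutation p-value instead of the mixture-of-chi-square distribution. It is not the GenRF/LGRF implementation, and all data, weights and sizes are simulated assumptions.

```python
# Toy set-based score-type test: Q = r' G W G' r with r the null-model
# residuals; p-value by permutation. Simulated data, illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n, m = 500, 20                          # subjects, variants in the set
G = rng.binomial(2, 0.2, size=(n, m))   # genotype matrix (0/1/2 allele counts)
y = rng.normal(size=n)                  # quantitative trait (null simulation)

resid = y - y.mean()                    # residuals from an intercept-only null model
W = np.eye(m)                           # variant weights (identity for simplicity)
Q_obs = resid @ G @ W @ G.T @ resid

# Permutation p-value: shuffle residuals, recompute Q.
B = 2000
Q_perm = np.empty(B)
for b in range(B):
    r = rng.permutation(resid)
    Q_perm[b] = r @ G @ W @ G.T @ r
p_value = (1 + np.sum(Q_perm >= Q_obs)) / (B + 1)
print(f"Q = {Q_obs:.1f}, permutation p-value = {p_value:.3f}")
```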

    The U.S. Dynamic Taylor Rule With Multiple Breaks, 1984-2001.

    This paper combines two major strands of literature: structural breaks and Taylor rules. First, I propose a nonstandard t-test statistic for detecting multiple level and trend breaks of I(0) series, supplying theoretical and limit-distribution critical values obtained from Monte Carlo experimentation. Thereafter, I introduce a forward-looking Taylor rule expressed as a dynamic model which allows for multiple breaks and reaction-function coefficients on the leads of inflation, the output gap and an equity market index. Sequential GMM estimation of the model, applied to the Effective Federal Funds Rate for the period 1984:01-2001:06, produces three main results: the existence of significant structural breaks, the substantial role played by inflation in FOMC decisions, and a marked equity-targeting policy approach. Such results reveal departures from rationality, driven by structured and unstructured uncertainty, which the Fed systematically attempts to reduce by administering inflation scares and misinformation about the actual Phillips curve, in order to keep the output and equity markets under control.
    Keywords: Generalized Method of Moments; Monetary Policy Rules; Multiple Breaks
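    A bare-bones illustration of estimating a forward-looking Taylor rule by linear GMM with lagged instruments is sketched below. The simulated series, the one-period inflation lead, the instrument list and the absence of break dummies are all simplifying assumptions; the paper's sequential, multiple-break specification is richer.

```python
# Sketch: one-step linear GMM (2SLS) for i_t = c + rho*i_{t-1} + a*pi_{t+k}
# + b*x_t + e_t, instrumented with lags known at time t. Simulated data.
import numpy as np

rng = np.random.default_rng(5)
T = 210
pi = rng.normal(2.5, 1.0, T)            # inflation (stand-in series)
x = rng.normal(0.0, 1.0, T)             # output gap (stand-in series)
i = np.empty(T)
i[0] = 5.0
for t in range(1, T):
    i[t] = 0.5 + 0.8 * i[t-1] + 0.3 * pi[t] + 0.2 * x[t] + rng.normal(0, 0.2)

k = 1                                    # forecast horizon for inflation
t_idx = np.arange(2, T - k)              # usable sample
y = i[t_idx]
X = np.column_stack([np.ones_like(t_idx, dtype=float),
                     i[t_idx - 1], pi[t_idx + k], x[t_idx]])
# Instruments: constant plus lags dated t-1 and t-2.
Z = np.column_stack([np.ones_like(t_idx, dtype=float),
                     i[t_idx - 1], i[t_idx - 2],
                     pi[t_idx - 1], pi[t_idx - 2],
                     x[t_idx - 1], x[t_idx - 2]])

W = np.linalg.inv(Z.T @ Z)               # 2SLS weighting matrix
beta = np.linalg.solve(X.T @ Z @ W @ Z.T @ X, X.T @ Z @ W @ Z.T @ y)
print("Taylor-rule coefficients (const, rho, inflation, output gap):", beta)
```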