
    Non-uniqueness of deep parameters and shocks in estimated DSGE models: a health warning

    Estimation of dynamic stochastic general equilibrium (DSGE) models using state space methods implies vector autoregressive moving average (VARMA) representations of the observables. Following Lippi and Reichlin’s (1994) analysis of nonfundamentalness, this note highlights the potential dangers of non-uniqueness, both of estimates of deep parameters and of structural innovations.
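
    The intuition behind this warning can be illustrated with a minimal univariate sketch (our illustration, not part of the note): a fundamental and a non-fundamental MA(1) representation imply identical autocovariances, so second moments alone cannot distinguish the two shock sequences.

```python
# Illustrative sketch: observational equivalence of a fundamental and a
# non-fundamental MA(1) representation (same autocovariances, different shocks).
import numpy as np

def ma1_autocov(theta, sigma2):
    """Theoretical autocovariances at lags 0 and 1 of y_t = e_t + theta * e_{t-1}."""
    return sigma2 * (1.0 + theta ** 2), sigma2 * theta

theta, sigma2 = 0.5, 1.0                                # fundamental: |theta| < 1
theta_nf, sigma2_nf = 1.0 / theta, theta ** 2 * sigma2  # non-fundamental counterpart

print(ma1_autocov(theta, sigma2))        # (1.25, 0.5)
print(ma1_autocov(theta_nf, sigma2_nf))  # (1.25, 0.5): same observable moments
```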

    Consistency properties of a simulation-based estimator for dynamic processes

    This paper considers a simulation-based estimator for a general class of Markovian processes and explores some strong consistency properties of the estimator. The estimation problem is defined over a continuum of invariant distributions indexed by a vector of parameters. A key step in the method of proof is to show the uniform convergence (a.s.) of a family of sample distributions over the domain of parameters. This uniform convergence holds under mild continuity and monotonicity conditions on the dynamic process. The estimator is applied to an asset pricing model with technology adoption. A challenge for this model is to generate the observed high volatility of stock markets along with the much lower volatility of other real economic aggregates. Comment: Published at http://dx.doi.org/10.1214/09-AAP608 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
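
    A rough sketch of the general idea, matching a simulated invariant distribution to the data, is given below; the AR(1) law of motion, the Kolmogorov-type distance between empirical distribution functions, and the grid search are illustrative assumptions, not the paper's estimator or its asset pricing application.

```python
# Hedged sketch of a simulation-based estimator over invariant distributions:
# for each candidate parameter, simulate a long path, discard a burn-in, and
# compare the simulated empirical CDF with the data's empirical CDF.
import numpy as np

def invariant_sample(theta, n=50_000, burn=1_000, seed=0):
    """Approximate draws from the invariant distribution of x' = theta*x + shock."""
    rng = np.random.default_rng(seed)
    shocks = rng.standard_normal(n + burn)
    x = np.empty(n + burn)
    x[0] = shocks[0]
    for t in range(1, n + burn):
        x[t] = theta * x[t - 1] + shocks[t]
    return x[burn:]

def distance(data, theta, grid=np.linspace(-5.0, 5.0, 200)):
    """Sup distance between the empirical CDFs of the data and a simulated sample."""
    sim = invariant_sample(theta)
    F_data = np.searchsorted(np.sort(data), grid) / data.size
    F_sim = np.searchsorted(np.sort(sim), grid) / sim.size
    return np.max(np.abs(F_data - F_sim))

# "Observed" data generated at theta = 0.8 purely for illustration.
data = invariant_sample(0.8, seed=1)
candidates = np.linspace(0.1, 0.95, 18)
theta_hat = candidates[int(np.argmin([distance(data, th) for th in candidates]))]
print(theta_hat)   # should land near 0.8
```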

    Bounded Influence Approaches to Constrained Mixed Vector Autoregressive Models

    Clinical studies that repeatedly obtain multiple biophysical signals from several individuals over time are increasingly common, generating growth in statistical models for cross-sectional time series data. In general, these models try to answer two questions: (i) what are the intra-individual dynamics of the response and their relation to covariates; and (ii) how can these dynamics be aggregated consistently across a group. In response to the first question, we propose a covariate-adjusted constrained Vector Autoregressive model, a technique similar to the STARMAX model (Stoffer, JASA 81, 762-772), to describe serial dependence of observations. In this way, the number of parameters to be estimated is kept minimal while offering flexibility for the model to explore higher order dependence. In response to (ii), we use mixed effects analysis that accommodates modelling of heterogeneity among cross-sections arising from covariate effects that vary from one cross-section to another. Although estimation of the model can proceed using standard maximum likelihood techniques, we believe it is advantageous to use bounded influence procedures in the modelling (such as choosing constraints) and parameter estimation so that the effects of outliers can be controlled. In particular, we use M-estimation with a redescending bounding function because its influence function is always bounded. Furthermore, assuming consistency, this influence function is useful for obtaining the limiting distribution of the estimates. However, this distribution may not necessarily yield accurate inference in the presence of contamination, as the actual asymptotic distribution might have wider tails. This led us to investigate bootstrap approximation techniques. A sampling scheme based on IID innovations is modified to accommodate the cross-sectional structure of the data, and M-estimation is then applied naively to each bootstrap sample to obtain the asymptotic distribution of the estimates. We apply these strategies to BOLD activation extracted from several brain regions in a group of individuals to describe joint dynamic behaviour between these locations, and we use simulated data with both innovation and additive outliers to test whether the estimation procedure remains accurate despite contamination.
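
    As a rough illustration of the bounded-influence idea, the sketch below fits a plain VAR(1) by iteratively reweighted least squares with a redescending Tukey bisquare weight; the covariate adjustment, constraints, mixed-effects structure, and bootstrap step of the actual model are omitted.

```python
# Hedged sketch: robust (bounded-influence) M-estimation of a VAR(1) via
# iteratively reweighted least squares with a redescending bisquare weight.
import numpy as np

def bisquare_weight(u, c=4.685):
    """Redescending weight: observations with |u| >= c receive zero weight."""
    w = (1.0 - (u / c) ** 2) ** 2
    w[np.abs(u) >= c] = 0.0
    return w

def robust_var1(Y, n_iter=20):
    """Y is a (T, k) array; returns B such that y_t ~ y_{t-1} @ B."""
    X, Z = Y[:-1], Y[1:]
    B = np.linalg.lstsq(X, Z, rcond=None)[0]            # ordinary LS start
    for _ in range(n_iter):
        E = Z - X @ B                                    # current residuals
        scale = np.median(np.abs(E)) / 0.6745 + 1e-12    # crude robust scale
        r = np.linalg.norm(E, axis=1) / scale            # residual size per time point
        w = np.sqrt(bisquare_weight(r))[:, None]
        B = np.linalg.lstsq(w * X, w * Z, rcond=None)[0] # weighted LS update
    return B

# Simulated example with a few additive outliers.
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1], [0.0, 0.4]])
Y = np.zeros((300, 2))
for t in range(1, 300):
    Y[t] = Y[t - 1] @ A.T + rng.standard_normal(2)
Y[::50] += 15.0                                          # contamination
print(robust_var1(Y).round(2))                           # close to A.T despite outliers
```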

    International price discovery in the presence of microstructure noise

    This paper addresses and resolves the issue of microstructure noise when measuring the relative importance of the home and U.S. markets in the price discovery process of Canadian interlisted stocks. In order to avoid large bounds for information shares, previous studies applying the Cholesky decomposition within the Hasbrouck (1995) framework had to rely on high-frequency data. However, due to the considerable amount of microstructure noise inherent in return data at very high frequencies, these estimators are distorted. We offer a modified approach that identifies unique information shares based on distributional assumptions and thereby enables us to control for microstructure noise. Our results indicate that the role of the U.S. market in the price discovery process of Canadian interlisted stocks has been underestimated so far. Moreover, we suggest that market characteristics, rather than stock-specific factors, determine information shares.
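
    The wide Cholesky bounds that motivate the paper can be sketched as follows; here psi (the long-run impact weights of the efficient price) and Omega (the covariance of the price innovations) are placeholder inputs that would come from an estimated vector error correction model, which is not shown.

```python
# Hedged sketch: Hasbrouck (1995) information shares and their bounds across
# Cholesky orderings; psi and Omega below are purely illustrative numbers.
import numpy as np
from itertools import permutations

def information_shares(psi, Omega):
    """Information shares for one ordering, using the lower Cholesky factor."""
    C = np.linalg.cholesky(Omega)
    contrib = (psi @ C) ** 2
    return contrib / (psi @ Omega @ psi)

def information_share_bounds(psi, Omega):
    """Per-market lower and upper bounds over all variable orderings."""
    n = len(psi)
    all_shares = []
    for perm in permutations(range(n)):
        p = list(perm)
        s = information_shares(psi[p], Omega[np.ix_(p, p)])
        back = np.empty(n)
        back[p] = s                       # map shares back to the original ordering
        all_shares.append(back)
    all_shares = np.array(all_shares)
    return all_shares.min(axis=0), all_shares.max(axis=0)

psi = np.array([0.6, 0.4])                # e.g. home and U.S. market weights
Omega = np.array([[1.0, 0.7], [0.7, 1.2]])
print(information_share_bounds(psi, Omega))
```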

    Innovations orthogonalization: a solution to the major pitfalls of EEG/MEG "leakage correction"

    The problem of interest here is the study of brain functional and effective connectivity based on non-invasive EEG/MEG inverse solution time series. These signals generally have low spatial resolution, such that an estimated signal at any one site is an instantaneous linear mixture of the true, unobserved signals across all cortical sites. False connectivity can result from analysis of these low-resolution signals. Recent efforts toward "unmixing" have been developed under the name of "leakage correction". One recent noteworthy approach is that by Colclough et al (2015 NeuroImage, 117:439-448), which forces the inverse solution signals to have zero cross-correlation at lag zero. The first goal here is to show that Colclough's method produces false human connectomes under very broad conditions. The second goal is to develop a new solution that appropriately "unmixes" the inverse solution signals, based on innovations orthogonalization. The new method first fits a multivariate autoregression to the inverse solution signals, giving the mixed innovations. Second, the mixed innovations are orthogonalized. Third, the mixed and orthogonalized innovations allow the estimation of the "unmixing" matrix, which is then finally used to "unmix" the inverse solution signals. It is shown that under very broad conditions, the new method produces proper human connectomes, even when the signals are not generated by an autoregressive model. Comment: preprint, technical report, under license "Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)", https://creativecommons.org/licenses/by-nc-nd/4.0
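
    A minimal sketch of the three-step procedure described above is given below; the VAR order, the symmetric (Loewdin) orthogonalization, and the least-squares estimate of the unmixing matrix are illustrative choices under our assumptions, not necessarily the paper's exact construction.

```python
# Hedged sketch of innovations orthogonalization: fit a VAR, orthogonalize its
# innovations, estimate an unmixing matrix, and apply it to the signals.
import numpy as np

def var1_innovations(S):
    """Fit a VAR(1) to the (T, k) signals S by OLS and return its innovations."""
    X, Z = S[:-1], S[1:]
    B = np.linalg.lstsq(X, Z, rcond=None)[0]
    return Z - X @ B

def symmetric_orthogonalize(E):
    """Whiten with the inverse symmetric square root of the innovation covariance,
    so the result has (approximately) zero cross-correlation at lag zero."""
    vals, vecs = np.linalg.eigh(np.cov(E, rowvar=False))
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return E @ W

def unmix(S):
    """Estimate the unmixing matrix from mixed vs. orthogonalized innovations
    and apply it to the original inverse-solution signals."""
    E_mixed = var1_innovations(S)
    E_orth = symmetric_orthogonalize(E_mixed)
    U = np.linalg.lstsq(E_mixed, E_orth, rcond=None)[0]   # E_mixed @ U ~ E_orth
    return S @ U

# Usage: S is a (time points, cortical sites) array of inverse-solution signals.
rng = np.random.default_rng(0)
mixing = np.array([[1.0, 0.5, 0.2], [0.0, 1.0, 0.3], [0.0, 0.0, 1.0]])
S = rng.standard_normal((1000, 3)) @ mixing
print(np.cov(unmix(S), rowvar=False).round(2))            # near-identity covariance
```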

    Federal Reserve Policy viewed through a Money Supply Lens

    This paper examines whether the U.S. Federal Reserve has adjusted high-powered money supply in response to macroeconomic indicators. Applying ex-post and real-time data for the postwar period, we provide evidence that nonborrowed reserves responded to expected inflation and the output gap. While the output-gap feedback has always been negative, the response of money supply to changes in inflation varies considerably across time. The inflation feedback is negative in the post-1979 period and positive, albeit smaller than one, in the pre-1979 period. Applying a standard macroeconomic model, these properties are shown to be consistent with a welfare-maximizing policy and to ensure equilibrium determinacy. Viewed through the money supply lens, the Fed has thus never allowed for endogenous fluctuations, which contrasts with conclusions drawn from federal funds rate analyses. Keywords: nonborrowed reserves, monetary policy reaction functions, real-time data, determinacy.
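
    As a rough sketch of the kind of feedback rule being estimated here, the function below regresses the growth of nonborrowed reserves on expected inflation and the output gap; the variable names, data construction, and exact specification are placeholders rather than the paper's.

```python
# Hedged sketch: a money-supply feedback rule of the form
#   Delta m_t = const + b_pi * E_t[pi_{t+1}] + b_x * x_t + error,
# estimated by OLS. Inputs are assumed to be aligned 1-D arrays.
import numpy as np

def feedback_rule(reserves_growth, expected_inflation, output_gap):
    """Return (constant, inflation feedback b_pi, output-gap feedback b_x)."""
    X = np.column_stack([np.ones_like(output_gap), expected_inflation, output_gap])
    beta, *_ = np.linalg.lstsq(X, reserves_growth, rcond=None)
    return beta
```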

    On the Estimation of Cost of Capital and its Reliability

    Gordon and Shapiro (1956) first equated the price of a share with the present value of future dividends and derived the well-known "dividend yield plus growth" relationship. Since then, there have been many improvements on the theory. For example, Thompson (1985, 1987) combined the "dividend yield plus growth" method with Box-Jenkins time series analysis of past dividend experience to estimate the cost of capital and its "reliability" for individual firms. Thompson and Wong (1991, 1996) proved the existence and uniqueness of the cost of capital and provided formulas to estimate both the cost of capital and its reliability. However, their approaches cannot be used if the "reliability" does not exist or if there are multiple solutions for the "reliability". In this paper, we extend their theory by proving the existence and uniqueness of this reliability. In addition, we propose estimators for the reliability and prove that they converge to the true parameter. The estimation approach is further simplified, rendering computation easier. Finally, the properties of the cost of capital and its reliability are analyzed with illustrations of several commonly used Box-Jenkins models.
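
    The "dividend yield plus growth" relationship that this line of work builds on follows from Gordon and Shapiro's pricing equation P0 = D1 / (r - g), which rearranges to r = D1 / P0 + g; a minimal sketch with purely illustrative numbers:

```python
# Gordon growth model: cost of equity capital as dividend yield plus growth.
def cost_of_capital(price, next_dividend, growth):
    """r = D1 / P0 + g, assuming dividends grow at a constant rate g < r."""
    return next_dividend / price + growth

print(cost_of_capital(price=50.0, next_dividend=2.0, growth=0.03))  # 0.07
```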