The Tracy--Widom limit for the largest eigenvalues of singular complex Wishart matrices
This paper extends the work of El Karoui [Ann. Probab. 35 (2007) 663--714], which finds the Tracy--Widom limit for the largest eigenvalue of a nonsingular complex Wishart matrix, to the case of several of the largest eigenvalues of a possibly singular complex Wishart matrix.
As a byproduct, we extend all results of Baik,
Ben Arous and Péché [Ann. Probab. 33 (2005) 1643--1697] to the singular Wishart
matrix case. We apply our findings to obtain a 95% confidence set for the
number of common risk factors in excess stock returns.
Comment: Published at http://dx.doi.org/10.1214/07-AAP454 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
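As a rough illustration of the objects this abstract refers to (not the paper's own computations), the sketch below simulates the largest eigenvalue of a singular complex Wishart matrix and rescales it with the standard complex-case centering and scaling constants, under which the Tracy--Widom (beta = 2) limit is expected to apply. The dimensions, number of replications, and helper name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def largest_wishart_eigenvalue(p, n, rng):
    """Largest eigenvalue of a complex Wishart matrix W = X X* with X a p x n
    matrix of i.i.d. standard complex Gaussian entries.  For n < p the matrix
    is singular (rank at most n)."""
    X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
    W = X @ X.conj().T
    return np.linalg.eigvalsh(W)[-1]

# Standard centering and scaling constants for the complex Wishart case;
# the rescaled draws should be approximately Tracy--Widom (beta = 2).
p, n, n_sim = 200, 100, 500
mu = (np.sqrt(n) + np.sqrt(p)) ** 2
sigma = (np.sqrt(n) + np.sqrt(p)) * (1 / np.sqrt(n) + 1 / np.sqrt(p)) ** (1 / 3)
draws = np.array([largest_wishart_eigenvalue(p, n, rng) for _ in range(n_sim)])
rescaled = (draws - mu) / sigma
print("mean and std of rescaled largest eigenvalues:", rescaled.mean(), rescaled.std())
```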
Minimax Analysis of Monetary Policy Under Model Uncertainty
Recently there have been several studies that examined monetary policy under model uncertainty. These studies formulated uncertainty in a number of different ways. One of the prominent ways to formulate model uncertainty is to form a non-parametric set of perturbations around some nominal model, where the set is structured so that the uncertainty is focused on potentially important weaknesses of the model. Unfortunately, previous efforts were unable to compute exact optimal policy rules under this general formulation of uncertainty. Moreover, for those special cases when the robust rules were computed, the degree of their aggressiveness was often counterintuitive in light of the conventional Brainard/Bayesian wisdom that policy under uncertainty should be conservative. This paper, therefore, consists of three different exercises concerning minimax analysis of policy rules under model uncertainty. First, the minimax approach is compared with the Bayesian one in a stylized Brainard (1967) setting. Strong similarities between the recommendations of the two approaches are found. Next, a more realistic setting, such as in Onatski and Stock (1999), is considered. A characterization of the worst possible models corresponding to the max part of the minimax scheme is given. It is shown that the worst possible models for very aggressive rules, such as the H-infinity rule, have realistic economic structure, whereas those for passive rules, such as the actual Fed's policy, are not plausible. Thus, the results of minimax analysis presented in Onatski and Stock (1999) might be biased against the passive rules. Finally, exact optimal minimax policy rules are computed for the case of slowly time-varying uncertainty in the Rudebusch and Svensson (1998) model. The optimal rule under certainty turns out to be robust to moderate deviations from Rudebusch and Svensson's model.
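As a rough numerical companion to the Brainard comparison described above (not the paper's computations), the sketch below contrasts a certainty-equivalent, a Bayesian, and a minimax policy response in a stylized one-period setting with an uncertain policy multiplier; all parameter values are made up for illustration.

```python
import numpy as np

# Stylized Brainard (1967) setting: the policymaker chooses r to offset a
# shock x in y = b * r + x, with the policy multiplier b uncertain.
# Loss is y**2.  All numbers below are illustrative assumptions.
x = 1.0                      # shock to be offset
b_bar, sigma_b = 1.0, 0.5    # prior mean and std of the multiplier (Bayesian case)
b_lo, b_hi = 0.25, 1.75      # interval of multipliers (minimax case)

# Bayesian rule: minimize E[(b*r + x)^2] = (b_bar*r + x)^2 + sigma_b^2 * r^2,
# which gives the familiar attenuated response.
r_bayes = -b_bar * x / (b_bar**2 + sigma_b**2)

# Minimax rule: minimize the worst-case loss over b in [b_lo, b_hi] on a grid.
r_grid = np.linspace(-3.0, 0.0, 3001)
b_grid = np.linspace(b_lo, b_hi, 301)
worst_loss = np.max((b_grid[None, :] * r_grid[:, None] + x) ** 2, axis=1)
r_minimax = r_grid[np.argmin(worst_loss)]

r_certainty = -x / b_bar     # certainty-equivalent response, for comparison
print(f"certainty-equivalent r = {r_certainty:.3f}")
print(f"Bayesian (Brainard) r  = {r_bayes:.3f}")
print(f"minimax r              = {r_minimax:.3f}")
```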
Dynamics of Interest Rate Curve by Functional Auto-regression
The paper applies methods of functional data analysis (functional auto-regression, principal components, and canonical correlations) to the study of the dynamics of the interest rate curve. In addition, it introduces a novel statistical tool based on the singular value decomposition of the functional cross-covariance operator. This tool is better suited for prediction purposes than either principal components or canonical correlations. Based on this tool, the paper provides a consistent method for estimating the functional auto-regression of the interest rate curve. The theory is applied to estimating the dynamics of Eurodollar futures rates. The results suggest that future movements of interest rates are predictable only at very short and very long horizons.
Keywords: functional auto-regression, term structure dynamics, principal components, canonical correlations, singular value decomposition
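The following sketch illustrates, on simulated data, the general idea of estimating a functional auto-regression through the singular value decomposition of an empirical cross-covariance matrix on a discretization grid. It is a minimal stand-in for the paper's operator-level construction; the function name, truncation rank, and regularization are assumptions.

```python
import numpy as np

def fit_functional_ar(curves, rank):
    """Reduced-rank estimate of the lag-1 autoregression X_{t+1} ~ A X_t for
    curves observed on a common grid (rows of `curves` are dates, columns are
    grid points).  Uses the SVD of the empirical lag-1 cross-covariance matrix
    to pick the predictive directions."""
    X = curves - curves.mean(axis=0)
    X0, X1 = X[:-1], X[1:]
    T = X0.shape[0]
    C01 = X1.T @ X0 / T          # cross-covariance of X_{t+1} with X_t
    C00 = X0.T @ X0 / T          # covariance of X_t
    U, s, Vt = np.linalg.svd(C01)
    V = Vt[:rank].T              # leading right singular directions of C01
    ridge = 1e-8 * np.eye(rank)  # small regularization for numerical stability
    # Regress X_{t+1} on the projections V' X_t, then map back to the grid.
    B = np.linalg.solve(V.T @ C00 @ V + ridge, (C01 @ V).T).T
    A_hat = B @ V.T              # rank-`rank` estimate of the AR operator
    return A_hat, curves.mean(axis=0)

# Illustrative use with simulated "yield curves" on a 10-point maturity grid.
rng = np.random.default_rng(1)
grid, T = 10, 300
A_true = 0.8 * np.eye(grid) + 0.03 * rng.standard_normal((grid, grid))
X = np.zeros((T, grid))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(grid)
A_hat, mean_curve = fit_functional_ar(X, rank=3)
forecast = mean_curve + A_hat @ (X[-1] - mean_curve)   # one-step-ahead curve
```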
Determining the number of factors from empirical distribution of eigenvalues
We develop a new consistent and simple-to-compute estimator of the number of factors in the approximate factor models of Chamberlain and Rothschild (1983). Our setting requires both the time series and cross-sectional dimensions of the data to be large. The main theoretical advantage of our estimator relative to the previously proposed ones is that it works well even when the portion of the observed variance attributed to the factors is small relative to the variance due to the idiosyncratic term. This advantage arises because the estimator is based on a Law-of-Large-Numbers type regularity for the idiosyncratic components of the data, as opposed to estimators based on the assumption that a significant portion of the variance is explained by the systematic part. Extensive Monte Carlo analysis shows that our estimator outperforms the recently proposed Bai and Ng (2002) estimators in finite samples when the "signal-to-noise" ratio is relatively small. We apply the new estimation procedure to determine the number of pervasive factors driving stock returns for companies traded on the NYSE, AMEX, and NASDAQ in the period from 1983 to 2003. Our estimate is equal to 8.
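As a loose illustration of an eigenvalue-based rule for the number of factors (not the paper's estimator or its calibration), the sketch below counts factors by looking for a large gap between adjacent ordered eigenvalues of the sample covariance matrix; the threshold default and simulation design are assumptions.

```python
import numpy as np

def estimate_num_factors(data, k_max=8, delta=None):
    """Simplified sketch of an eigenvalue-gap rule for the number of factors.
    `data` is T x N (time by cross-section).  The paper's estimator calibrates
    its threshold from the edge of the empirical eigenvalue distribution; here
    `delta` is either user-supplied or set by a crude default."""
    T, N = data.shape
    X = data - data.mean(axis=0)
    eigvals = np.linalg.eigvalsh(X.T @ X / T)[::-1]      # descending order
    if delta is None:
        # Crude default: a multiple of the typical gap among eigenvalues just
        # below k_max (an assumption, not the paper's calibration).
        bulk_gaps = -np.diff(eigvals[k_max:2 * k_max + 1])
        delta = 3.0 * bulk_gaps.mean()
    gaps = -np.diff(eigvals[:k_max + 1])                 # lambda_k - lambda_{k+1}
    above = np.where(gaps >= delta)[0]
    return int(above.max() + 1) if above.size else 0

# Illustrative use: 3 factors plus idiosyncratic noise.
rng = np.random.default_rng(2)
T, N, r = 200, 100, 3
F = rng.standard_normal((T, r))
L = rng.standard_normal((N, r))
data = F @ L.T + rng.standard_normal((T, N))
print("estimated number of factors:", estimate_num_factors(data))
```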
Dynamics of Interest Rate Curve by Functional Auto-Regression
The paper uses functional auto-regression to predict the dynamics of the interest rate curve. It estimates the auto-regressive operator by extending methods of reduced-rank auto-regression to functional data. Such an estimation technique is better suited for prediction purposes than methods based on either principal components or canonical correlations. The consistency of the estimator is proved using methods of operator theory. The estimation method is used to analyze the dynamics of Eurodollar futures rates. The results suggest that future movements of interest rates are predictable at 1-year horizons.
Keywords: functional data analysis, term structure, principal components, canonical correlations, singular value decomposition
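A simple way to probe the horizon-dependence of predictability mentioned above is to project the curves on a few principal components, fit a first-order autoregression to the scores, and compare iterated forecasts with a no-change benchmark at several horizons. The sketch below does this on simulated curves; it is not the paper's reduced-rank procedure, and the component count, horizons, and sample split are illustrative.

```python
import numpy as np

def horizon_r2(curves, n_pc=3, horizons=(1, 4, 12)):
    """Rough out-of-sample check of how predictable curve movements are at
    different horizons: project curves on leading principal components, fit an
    AR(1) to the scores, iterate it h steps ahead, and compare the forecast
    error with that of a no-change (random walk) forecast."""
    X = curves - curves.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_pc].T                     # T x n_pc score series
    split = int(0.7 * len(scores))
    train, test = scores[:split], scores[split:]
    # AR(1) coefficient matrix for the scores, fitted on the training sample.
    A = np.linalg.lstsq(train[:-1], train[1:], rcond=None)[0].T
    out = {}
    for h in horizons:
        Ah = np.linalg.matrix_power(A, h)
        preds = test[:-h] @ Ah.T
        err_model = np.mean((test[h:] - preds) ** 2)
        err_rw = np.mean((test[h:] - test[:-h]) ** 2)   # no-change benchmark
        out[h] = 1.0 - err_model / err_rw
    return out

# Illustrative use on simulated persistent curves (10 maturities).
rng = np.random.default_rng(3)
T, grid = 400, 10
X = np.zeros((T, grid))
for t in range(1, T):
    X[t] = 0.9 * X[t - 1] + 0.1 * rng.standard_normal(grid)
print(horizon_r2(X))
```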
"Set Coverage and Robust Policy"
We show that confidence regions covering the identified set may be preferable to confidence regions covering each of its points in robust control applications.
Modeling model uncertainty
Recently there has been much interest in studying monetary policy under model uncertainty. We develop methods to analyze different sources of uncertainty in one coherent structure useful for policy decisions. We show how to estimate the size of the uncertainty based on time series data, and incorporate this uncertainty in policy optimization. We propose two different approaches to modeling model uncertainty. The first is model error modeling, which imposes additional structure on the errors of an estimated model and builds a statistical description of the uncertainty around a model. The second is set membership identification, which uses a deterministic approach to find a set of models consistent with data and prior assumptions. The center of this set becomes a benchmark model, and the radius measures model uncertainty. Using both approaches, we compute the robust monetary policy under different model uncertainty specifications in a small model of the US economy.
JEL Classification: E52, C32, D81
Keywords: estimation, model uncertainty, monetary policy
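The set membership idea can be made concrete in a scalar toy case: under a bounded-error assumption, the set of parameter values consistent with the data is an interval whose center can serve as the benchmark model and whose radius measures model uncertainty. The sketch below is such a toy version, not the paper's identification scheme; the error bound and data are made up.

```python
import numpy as np

def set_membership_interval(y, x, error_bound):
    """Set of slopes b consistent with |y_t - b * x_t| <= error_bound for all t
    (a bounded-error, scalar version of set membership identification).
    Returns (center, radius) of that interval, or None if the set is empty."""
    lo, hi = -np.inf, np.inf
    for yt, xt in zip(y, x):
        if xt > 0:
            lo = max(lo, (yt - error_bound) / xt)
            hi = min(hi, (yt + error_bound) / xt)
        elif xt < 0:
            lo = max(lo, (yt + error_bound) / xt)
            hi = min(hi, (yt - error_bound) / xt)
        elif abs(yt) > error_bound:
            return None          # no slope can explain this observation
    if lo > hi:
        return None              # assumptions inconsistent with the data
    return (lo + hi) / 2, (hi - lo) / 2

# Illustrative use: the center is the benchmark model, the radius measures
# model uncertainty to be fed into a robust policy exercise.
rng = np.random.default_rng(4)
x = rng.standard_normal(100)
y = 0.7 * x + rng.uniform(-0.2, 0.2, size=100)   # errors bounded by 0.2
print(set_membership_interval(y, x, error_bound=0.2))
```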
Modeling Model Uncertainty
Recently there has been a great deal of interest in studying monetary policy under model uncertainty. We point out that different assumptions about the uncertainty may result in drastically different 'robust' policy recommendations. Therefore, we develop new methods to analyze uncertainty about the parameters of a model, the lag specification, the serial correlation of shocks, and the effects of real-time data in one coherent structure. We consider both parametric and nonparametric specifications of this structure and use them to estimate the uncertainty in a small model of the US economy. We then use our estimates to compute robust Bayesian and minimax monetary policy rules, which are designed to perform well in the face of uncertainty. Our results suggest that the aggressiveness recently found in robust policy rules is likely to be caused by overemphasizing uncertainty about economic dynamics at low frequencies.
Searching for Prosperity
Quah's [1993a] transition matrix analysis of world income distribution based on annual data suggests an ergodic distribution with twin peaks at the rich and poor ends of the distribution. Since the ergodic distribution is a highly non-linear function of the underlying transition matrix, it is estimated extremely noisily. Estimates over the foreseeable future are more precise. The Markovian assumptions underlying the analysis are much better satisfied with an analysis based on five-year transitions than one based on one-year transitions. Such an analysis yields an ergodic distribution with 72% of the mass in the top income category, but a prolonged transition, during which some inequality measures increase. The rosy ergodic forecast and prolonged transition arise because countries' relative incomes move both up and down at moderate income levels, but once countries reach the highest income category, they rarely leave it. This is consistent with a model in which countries search among policies until they reach an income level at which further experimentation is too costly. If countries can learn from each other's experience, the future may be much brighter than would be predicted by projecting forward the historical transition matrix.
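For readers who want the mechanics of the ergodic calculation, the sketch below computes the stationary distribution of a small, made-up five-state transition matrix (the left eigenvector associated with eigenvalue 1, normalized to sum to one). The matrix only mimics the qualitative pattern described in the abstract and is not Quah's estimate.

```python
import numpy as np

def ergodic_distribution(P):
    """Stationary (ergodic) distribution of a row-stochastic transition matrix
    P: the left eigenvector of P with eigenvalue 1, normalized to sum to one."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(eigvals - 1.0))
    pi = np.real(eigvecs[:, k])
    return pi / pi.sum()

# Illustrative 5-state income-category chain (numbers made up): countries move
# up and down at moderate income levels but rarely leave the top category.
P = np.array([
    [0.85, 0.15, 0.00, 0.00, 0.00],
    [0.10, 0.75, 0.15, 0.00, 0.00],
    [0.00, 0.10, 0.75, 0.15, 0.00],
    [0.00, 0.00, 0.10, 0.75, 0.15],
    [0.00, 0.00, 0.00, 0.02, 0.98],
])
pi = ergodic_distribution(P)
print(np.round(pi, 3))          # most of the mass ends up in the top state
# Distribution after ten transitions, starting from the poorest category,
# shows how slow the convergence to the ergodic distribution can be.
print(np.round(np.linalg.matrix_power(P, 10)[0], 3))
```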