The Mysteries of Trend
Trends are ubiquitous in economic discourse, play a role in much economic theory, and have been intensively studied in econometrics over the last three decades. Yet the empirical economist, forecaster, and policy maker have little guidance from theory about the source and nature of trend behavior, even less guidance about practical formulations, and are heavily reliant on a limited class of stochastic trend, deterministic drift, and structural break models to use in applications. A vast econometric literature has emerged but the nature of trend remains elusive. In spite of being the dominant characteristic in much economic data, having a role in policy assessment that is often vital, and attracting intense academic and popular interest that extends well beyond the subject of economics, trends are little understood. This essay discusses some implications of these limitations, mentions some research opportunities, and briefly illustrates the extent of the difficulties in learning about trend phenomena even when the time series are far longer than those that are available in economics.
Keywords: Climate change, Etymology of trend, Paleoclimatology, Policy, Stochastic trend
Bootstrapping I(1) Data
A functional law for an I(1) sample data version of the continuous-path block bootstrap of Paparoditis and Politis (2001) is given. The results provide an alternative demonstration that continuous-path block bootstrap unit root tests are consistent under the null.
Keywords: Asymptotic theory, Block bootstrap, Bootstrap, Brownian motion, Continuous path bootstrap, Embedding, Unit root
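As a rough illustration of the mechanics, the sketch below resamples blocks of first differences and re-cumulates them so that the bootstrap series is I(1) under the null; the helper names, block length, and replication count are illustrative choices, and the continuous-path variant of Paparoditis and Politis (2001) additionally interpolates so that the bootstrap sample path is continuous.

```python
import numpy as np

def block_bootstrap_i1(y, block_len, rng):
    """Moving-block bootstrap on the first differences, re-cumulated so
    the resample is I(1) under the unit root null. A simplified sketch;
    the continuous-path version also interpolates between points."""
    dy = np.diff(y)                                   # stationary increments under the null
    n = dy.size
    n_blocks = -(-n // block_len)                     # ceiling division
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    db = np.concatenate([dy[s:s + block_len] for s in starts])[:n]
    return np.concatenate(([y[0]], y[0] + np.cumsum(db)))

def df_stat(y):
    """Dickey-Fuller coefficient statistic n*(rho_hat - 1) from the
    no-intercept regression of y_t on y_{t-1}."""
    y0, y1 = y[:-1], y[1:]
    return len(y0) * ((y0 @ y1) / (y0 @ y0) - 1.0)

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))               # simulated I(1) series
boot = [df_stat(block_bootstrap_i1(y, 25, rng)) for _ in range(999)]
p_value = np.mean(np.array(boot) <= df_stat(y))       # left-tailed bootstrap p-value
```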
A specification test for nonlinear nonstationary models
We provide a limit theory for a general class of kernel smoothed U-statistics that may be used for specification testing in time series regression with nonstationary data. The test framework allows for linear and nonlinear models with endogenous regressors that have autoregressive unit roots or near unit roots. The limit theory for the specification test depends on the self-intersection local time of a Gaussian process. A new weak convergence result is developed for certain partial sums of functions involving nonstationary time series that converge to the intersection local time process. This result is of independent interest and is useful in other applications. Simulations examine the finite sample performance of the test.
Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), http://dx.doi.org/10.1214/12-AOS975.
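For readers who want the shape of such a statistic, here is a minimal sketch of a kernel-smoothed U-statistic of residuals; the Gaussian kernel, the bandwidth choice, and the unnormalized form are assumptions for illustration, not the paper's exact construction or normalization.

```python
import numpy as np

def kernel_u_stat(x, u, h):
    """Kernel-smoothed U-statistic sum_{s != t} K((x_s - x_t)/h) u_s u_t
    with a Gaussian kernel: x is the (possibly near-integrated)
    regressor, u the residuals from the fitted model under test."""
    z = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(K, 0.0)                    # exclude the s == t terms
    return u @ K @ u

# Usage sketch: residuals from a linear fit with a unit-root regressor.
rng = np.random.default_rng(0)
n = 400
x = np.cumsum(rng.standard_normal(n))           # unit-root regressor
y = 0.5 * x + rng.standard_normal(n)
beta = (x @ y) / (x @ x)                        # least-squares slope
stat = kernel_u_stat(x, y - beta * x, h=n ** (-0.2))
```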
A Conversation with Eric Ghysels
Published in Econometric Theory, 2012, https://doi.org/10.1017/S026646661100017X</p
Dating the Timeline of Financial Bubbles during the Subprime Crisis
A new recursive regression methodology is introduced to analyze the bubble characteristics of various financial time series during the subprime crisis. The methods modify a technique proposed in Phillips, Wu and Yu (2010) and provide a technology for identifying bubble behavior and consistent dating of their origination and collapse. The tests also serve as an early warning diagnostic of bubble activity. Seven relevant financial series are investigated, including three financial assets (the Nasdaq index, home price index and asset-backed commercial paper), two commodities (the crude oil price and platinum price), one bond rate (Baa), and one exchange rate (Pound/USD). Statistically significant bubble characteristics are found in all of these series. The empirical estimates of the origination and collapse dates suggest an interesting migration mechanism among the financial variables: a bubble first emerged in the equity market during mid-1995 lasting to the end of 2000, followed by a bubble in the real estate market between January 2001 and July 2007 and in the mortgage market between November 2005 and August 2007. After the subprime crisis erupted, the phenomenon migrated selectively into the commodity market and the foreign exchange market, creating bubbles which subsequently burst at the end of 2008, just as the effects on the real economy and economic growth became manifest. Our empirical estimates of the origination and collapse dates match well with the general timeline of the crisis put forward in a recent study by Caballero, Farhi and Gourinchas (2008).
Keywords: Financial bubbles, Crashes, Date stamping, Explosive behavior, Mildly explosive process, Subprime crisis, Timeline
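The recursive idea is straightforward to sketch: compute ADF t-statistics on forward-expanding windows and date-stamp episodes where the sequence crosses a right-tailed critical value. The minimum window fraction, lag order, and threshold below are placeholders rather than the paper's calibration, and `adfuller` from statsmodels stands in for the ADF regression.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def sadf_sequence(y, r0=0.1, lags=1):
    """Forward recursive ADF t-statistics: run the ADF regression on
    expanding windows y[:k] for k from floor(r0 * n) to n and record
    each t-statistic. Explosive (bubble) episodes are date-stamped
    where the sequence exceeds a right-tailed critical value."""
    n = len(y)
    stats = np.full(n, np.nan)
    for k in range(int(np.floor(r0 * n)), n + 1):
        stats[k - 1] = adfuller(y[:k], maxlag=lags, autolag=None)[0]
    return stats

# Usage: origination/collapse dates as first/last crossings of a
# right-tailed critical value. The 1.5 below is a placeholder; in
# practice the critical value is simulated from the null distribution.
# crossings = sadf_sequence(np.log(prices)) > 1.5
```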
A Two-Stage Realized Volatility Approach to Estimation of Diffusion Processes with Discrete Observations
This paper motivates and introduces a two-stage method of estimating diffusion processes based on discretely sampled observations. In the first stage we make use of the feasible central limit theory for realized volatility, as developed in Jacod (1994) and Barndorff-Nielsen and Shephard (2002), to provide a regression model for estimating the parameters in the diffusion function. In the second stage the in-fill likelihood function is derived by means of the Girsanov theorem and then used to estimate the parameters in the drift function. Consistency and asymptotic distribution theory for these estimates are established in various contexts. The finite sample performance of the proposed method is compared with that of the approximate maximum likelihood method of Aït-Sahalia (2002).
Keywords: Maximum likelihood, Girsanov theorem, Discrete sampling, Continuous record, Realized volatility
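A constant-sigma toy version of the two-stage idea, for a Vasicek process dX = kappa(mu - X)dt + sigma dW, might look as follows; the paper's first stage is a regression that accommodates a parametric diffusion function, so this sketch only conveys the division of labor between realized volatility (stage 1) and the Girsanov in-fill likelihood for the drift (stage 2).

```python
import numpy as np
from scipy.optimize import minimize

def two_stage_vasicek(x, delta):
    """Two-stage sketch for dX = kappa*(mu - X) dt + sigma dW sampled at
    interval delta. Stage 1: quadratic variation (realized volatility)
    identifies sigma without reference to the drift. Stage 2: the
    in-fill (Girsanov) log-likelihood, which involves only the drift,
    is maximized over (kappa, mu) given the stage-1 sigma."""
    dx = np.diff(x)
    sigma2 = np.sum(dx**2) / (delta * dx.size)   # stage 1: RV-based sigma^2
    def neg_loglik(theta):                       # stage 2: Girsanov criterion
        kappa, mu = theta
        b = kappa * (mu - x[:-1])                # drift along the sample path
        return -(b @ dx - 0.5 * delta * (b @ b)) / sigma2
    kappa, mu = minimize(neg_loglik, x0=[0.5, float(x.mean())]).x
    return kappa, mu, np.sqrt(sigma2)
```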
Comment on “Realized Variance and Market Microstructure Noise” by Peter R. Hansen and Asger Lunde
We find ourselves very much in agreement with the thrust of HL’s message concerning the complexity induced by microstructure noise. In particular, we agree that noise is time dependent and correlated with the efficient price - features that in our view are a necessary consequence of the observed form of market transactions, as we have argued above - and that the properties of noise inevitably evolve over time, again just as the efficient price is itself evolutionary. We further agree that microstructure noise cannot be accommodated by simple specifications. Since microstructure noise at ultra-high infill sampling frequencies often offsets the observed transactions data from the latent efficient price, the complexity of microstructure noise includes local nonstationarity and perfect correlation with the efficient price. These are properties that are not permitted in the models and methods presently used in the literature. However, there are empirical procedures that are capable of addressing these additional complexities, as we have indicated in parts of our discussion. We join the authors in saying there is still much to do in this exciting field and we look forward to further developments that build on the work they and others have done recently.
Maximum Likelihood and Gaussian Estimation of Continuous Time Models in Finance
This paper overviews maximum likelihood and Gaussian methods of estimating continuous time models used in finance. Since the exact likelihood can be constructed only in special cases, much attention has been devoted to the development of methods designed to approximate the likelihood. These approaches range from crude Euler-type approximations and higher order stochastic Taylor series expansions to more complex polynomial-based expansions and infill approximations to the likelihood based on a continuous time data record. The methods are discussed, their properties are outlined and their relative finite sample performance compared in a simulation experiment with the nonlinear CIR diffusion model, which is popular in empirical finance. Bias correction methods are also considered and particular attention is given to jackknife and indirect inference estimators. The latter retains the good asymptotic properties of ML estimation while removing finite sample bias. This method demonstrates superior performance in finite samples.
Keywords: Maximum likelihood, Transition density, Discrete sampling, Continuous record, Realized volatility, Bias reduction, Jackknife, Indirect inference
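The jackknife idea is simple enough to sketch: the full-sample estimate is combined with subsample estimates so that the leading O(1/n) bias terms cancel. The weighting below follows the standard m-subsample formula; the AR(1) usage example and all numbers are our illustration, not the paper's experiment.

```python
import numpy as np

def jackknife(estimator, x, m=2):
    """Subsample jackknife: combine the full-sample estimate with the
    average of m consecutive non-overlapping subsample estimates so
    that the leading O(1/n) bias terms cancel."""
    theta_full = estimator(x)
    theta_bar = np.mean([estimator(s) for s in np.array_split(x, m)])
    return (m / (m - 1)) * theta_full - theta_bar / (m - 1)

def ar1_rho(y):
    """OLS autoregressive coefficient, downward biased in finite samples."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

rng = np.random.default_rng(1)
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):                     # simulated AR(1) with rho = 0.95
    y[t] = 0.95 * y[t - 1] + rng.standard_normal()
print(ar1_rho(y), jackknife(ar1_rho, y))    # jackknife moves the estimate toward 0.95
```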
Refined Inference on Long Memory in Realized Volatility
There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al. (2001), Martens et al. (2004)). The present paper provides some analytical explanations for this evidence and shows how recent results in Lieberman and Phillips (2004a, 2004b) can be used to refine statistical inference about d with little computational effort. In contrast to the standard asymptotic normal theory now used in the literature, which has an O(n^{-1/2}) error rate in rejection probabilities, the asymptotic approximation used here has an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, is simple to calculate and highly user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory.
Keywords: ARFIMA; Edgeworth expansion; Fourier integral expansion; Fractional differencing; Improved inference; Long memory; Pivotal statistic; Realized volatility; Singularity
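For context, the baseline estimator of d that such refinements improve upon can be sketched as a log-periodogram (GPH) regression; the bandwidth rule of thumb below is an assumption, and this is the standard estimator rather than the expansion-based inference the paper develops.

```python
import numpy as np

def gph_d(x, m=None):
    """Log-periodogram (GPH) regression: regress log I(lambda_j) on
    -log(4 sin^2(lambda_j / 2)) over the first m Fourier frequencies;
    the OLS slope estimates the memory parameter d."""
    n = len(x)
    m = int(n ** 0.5) if m is None else m          # common bandwidth rule of thumb
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = -np.log(4 * np.sin(lam / 2) ** 2)
    Xc, yc = X - X.mean(), np.log(I) - np.log(I).mean()
    return (Xc @ yc) / (Xc @ Xc)                   # OLS slope = d_hat
```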
Comments on “A selective overview of nonparametric methods in financial econometrics”
In recent years there has been increased interest in using nonparametric methods to deal with various aspects of financial data. The paper by Fan overviews some nonparametric techniques that have been used in the financial econometric literature, focusing on estimation and inference for diffusion models in continuous time and estimation of state price and transition density functions. Our comments on Fan’s paper will concentrate on two issues that relate in important ways to the paper’s focus on misspecification and discretization bias and the role of nonparametric methods in empirical finance. The first issue deals with the finite sample effects of various estimation methods and their implications for asset pricing. A good deal of recent attention in the econometric literature has focused on the benefits of full maximum likelihood (ML) estimation of diffusions and mechanisms for avoiding discretization bias in the construction of the likelihood. However, many of the problems of estimating dynamic models that are well known in discrete time series, such as the bias in ML estimation, also manifest in the estimation of continuous time systems and affect subsequent use of these estimates, for instance in derivative pricing. In consequence, a relevant concern is the relative importance of the estimation and discretization biases. As we will show below, the former often dominates the latter even when the sample size is large (at least 500 monthly observations, say). Moreover, it turns out that correction for the finite sample estimation bias continues to be more important when the diffusion component of the model is itself misspecified. Such corrections appear to be particularly important in models that are nonstationary or nearly nonstationary.
Keywords: Nonparametric methods, Financial data, Fan, Empirical finance, Discretization bias, Misspecification, ML estimation
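The relative size of the two biases is easy to see in a toy Monte Carlo (the model, parameter values, and sample size below are ours, not the comment's): simulate a Vasicek process exactly, then map the fitted AR(1) coefficient to the mean-reversion parameter by the Euler formula and by the exact formula. Both estimates inherit the large finite-sample bias in the autoregressive coefficient, while the discretization gap between the two mappings is comparatively negligible.

```python
import numpy as np

# Toy comparison for dX = kappa*(mu - X)dt + sigma dW: the Euler mapping
# (1 - r)/delta and the exact mapping -log(r)/delta from the AR(1)
# coefficient r to kappa differ by the discretization bias, while the
# finite-sample estimation bias enters through r itself.
rng = np.random.default_rng(2)
kappa, mu, sigma, delta, n = 0.1, 0.0, 0.2, 1 / 12, 500   # ~500 monthly obs
rho = np.exp(-kappa * delta)
sd = sigma * np.sqrt((1 - rho**2) / (2 * kappa))
euler, exact = [], []
for _ in range(2000):
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):                      # exact simulation of the diffusion
        x[t] = mu + rho * (x[t - 1] - mu) + sd * rng.standard_normal()
    xc = x - x.mean()
    r = (xc[:-1] @ xc[1:]) / (xc[:-1] @ xc[:-1])
    euler.append((1 - r) / delta)              # Euler-implied kappa
    exact.append(-np.log(r) / delta)           # exact-discretization kappa
print(np.mean(euler) - kappa, np.mean(exact) - kappa)
# Both means are biased upward by a similar, large amount: the
# estimation bias in r dominates the Euler discretization bias.
```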