Rational Pessimism, Rational Exuberance, and Asset Pricing Models
The paper estimates and examines the empirical plausibility of asset pricing models that attempt to explain features of financial markets such as the size of the equity premium and the volatility of the stock market. In one model, the long run risks model of Bansal and Yaron (2004), low frequency movements and time-varying uncertainty in aggregate consumption growth are the key channels for understanding asset prices. In another, as typified by Campbell and Cochrane (1999), habit formation, which generates time-varying risk aversion and consequently time variation in risk premia, is the key channel. These models are fitted to data using simulation estimators. Both models are found to fit the data equally well at conventional significance levels, and they can track quite closely a new measure of realized annual volatility. Further scrutiny using a rich array of diagnostics suggests that the long run risk model is preferred.
A New Class of Stochastic Volatility Models with Jumps: Theory and Estimation
The purpose of this paper is to propose a new class of jump diffusions which feature both stochastic volatility and random intensity jumps. Previous studies have focused primarily on pure jump processes with constant intensity and log-normal jumps, or on constant jump intensity combined with a one-factor stochastic volatility model. We introduce several generalizations which can better accommodate empirical features of returns data. In their most general form, we introduce a class of processes which nests jump-diffusions previously considered in empirical work, includes the affine class of random intensity models studied by Bates (1998) and Duffie, Pan and Singleton (1998), and also allows for non-affine random intensity jump components. We attain the generality of our specification through a generic Lévy process characterization of the jump component. The processes we introduce share with the affine class the desirable feature that they yield analytically tractable and explicit option pricing formulas. The non-affine processes we study include specifications where the random intensity jump component depends on the size of the previous jump; these represent an alternative to affine random intensity jump processes, which feature correlation between the stochastic volatility and jump components. We also allow for and experiment with different empirical specifications of the jump size distributions. We use two types of data sets. One involves the S&P500 and the other comprises 100 years of daily Dow Jones index data. The former is a return series often used in the literature and allows us to compare our results with previous studies. The latter has the advantage of providing a long time series, which enhances the possibility of estimating the jump component more precisely. The non-affine random intensity jump processes are more parsimonious than the affine class and appear to fit the data much better.
We present a new class of jump processes with stochastic volatility. This new class generalizes the affine models proposed by Duffie, Pan and Singleton (1998). Its generality rests on a generic Lévy-process representation of the jumps. The class of processes we present also yields option prices. An empirical application demonstrates the presence of jumps in financial series such as the S&P500 and the Dow Jones. Moreover, the processes do not have constant intensity. We analyze several empirical specifications. Keywords: efficient method of moments, Poisson processes, jump processes, stochastic volatility models, filtering, Lévy measures.
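As a concrete illustration of the benchmark case this paper generalizes, here is a minimal simulation sketch of a one-factor stochastic volatility model with a constant-intensity compound Poisson jump component. All parameter values are hypothetical, and the square-root variance dynamics are an assumed simple specification, not the paper's random-intensity Lévy extension.

```python
import numpy as np

# Hypothetical parameters: drift, variance mean reversion, long-run
# variance, vol-of-vol, jump intensity, jump mean, jump std.
rng = np.random.default_rng(2)
T, n = 1.0, 252                                # one year of daily steps
dt = T / n
mu, kappa, theta, xi = 0.05, 2.0, 0.04, 0.3
lam, mj, sj = 5.0, -0.02, 0.05

v = np.empty(n + 1); v[0] = theta              # variance path
logS = np.empty(n + 1); logS[0] = 0.0          # log-price path
for t in range(n):
    dW, dB = rng.normal(size=2) * np.sqrt(dt)
    # Square-root (Heston-type) variance dynamics, truncated at zero
    v[t + 1] = max(v[t] + kappa * (theta - v[t]) * dt
                   + xi * np.sqrt(v[t]) * dB, 0.0)
    # Compound Poisson jumps: random count per step, normal jump sizes
    njumps = rng.poisson(lam * dt)
    jump = rng.normal(mj, sj, njumps).sum()
    logS[t + 1] = logS[t] + (mu - 0.5 * v[t]) * dt \
                  + np.sqrt(v[t]) * dW + jump
```

The paper's generalization replaces the constant intensity `lam` with a random, possibly non-affine, intensity process.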
On the Bias in Flexible Functional Forms and an Essentially Unbiased Form: The Fourier Flexible Form
The Fourier flexible form and its derived expenditure system are introduced. Subject to smoothness conditions on the consumer's true indirect utility function, the consumer's true expenditure system must be of the Fourier form over the region of interest in an empirical investigation. Arbitrarily accurate finite parameter approximations of the consumer's true expenditure system are obtained by dropping all high-order terms of the Fourier expenditure system past an appropriate truncation point. The resulting finite parameter system is tractable in empirical studies. The reader who is primarily interested in applications need only read the second and fifth sections. The remainder of the article is concerned with the verification of these claims and an investigation of some aspects of the bias in Translog specifications.
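For reference, one common statement of the Fourier flexible form is a quadratic in x augmented with trigonometric terms along multi-index directions k_alpha; the notation and index conventions below are a sketch and may differ from the paper's:

```latex
g_K(x \mid \theta) = u_0 + b'x + \tfrac{1}{2}\,x'Cx
  + \sum_{\alpha=1}^{A} \sum_{j=1}^{J}
    \bigl[\, u_{j\alpha}\cos(j\,k_\alpha' x)
           - w_{j\alpha}\sin(j\,k_\alpha' x) \,\bigr],
\qquad
C = -\sum_{\alpha=1}^{A} u_{0\alpha}\, k_\alpha k_\alpha'.
```

Truncating at finite A and J gives the tractable finite parameter system the abstract describes.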
Does Smooth Ambiguity Matter for Asset Pricing?
We use the Bayesian method introduced by Gallant and McCulloch (2009) to estimate consumption-based asset pricing models featuring smooth ambiguity preferences. We rely on semi-nonparametric estimation of a flexible auxiliary model in our structural estimation. Based on the market and aggregate consumption data, our estimation provides statistical support for asset pricing models with smooth ambiguity. Statistical model comparison shows that models with ambiguity, learning and time-varying volatility are preferred to the long-run risk model. We analyze asset pricing implications of the estimated models.
Alternative Models for Stock Price Dynamics
This paper evaluates the role of various volatility specifications, such as multiple stochastic volatility (SV) factors and jump components, in appropriate modeling of equity return distributions. We use estimation technology that facilitates non-nested model comparisons and a long data set which provides rich information about the conditional and unconditional distribution of returns. We consider two broad families of models: (1) the multifactor loglinear family, and (2) the affine-jump family. Both classes of models have attracted much attention in the derivatives and econometrics literatures. There are various trade-offs in considering such diverse specifications. If pure diffusion SV models are chosen over jump diffusions, it has important implications for hedging strategies. If logarithmic models are chosen over affine ones, it may seriously complicate option pricing. Comparing many different specifications of pure diffusion multi-factor models and jump diffusion models, we find that (1) loglinear models have to be extended to two factors with feedback in the mean-reverting factor, and (2) affine models have to have jumps in returns, stochastic volatility, or probably both. Models (1) and (2) are observationally equivalent on the data set at hand. In either (1) or (2) the key is that the volatility can move violently. As we obtain models with comparable empirical fit, one must make a choice based on arguments other than statistical goodness-of-fit criteria. These considerations include the facility to price options, the facility to hedge, and parsimony. The affine specification with jumps in volatility might therefore be preferred because of its closed-form derivatives prices. We examine a set of diffusions with stochastic volatility and jumps in order to model the distribution of equity returns. Because some of the models are non-nested, we use the EMM method to study and compare the behavior of the different models. Keywords: efficient method of moments, Poisson jump processes, stochastic volatility models, diffusion processes.
Neural network representation and learning of mappings and their derivatives
Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.
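A minimal sketch of the theme: a single-hidden-layer tanh network can approximate a smooth map and, by differentiating the network, its derivative. Here the hidden weights are drawn at random and only the output layer is fit by ridge-regularized least squares; this random-feature simplification is an assumption for illustration, not the paper's learning procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 400)[:, None]
y = np.sin(x).ravel()                      # target map (hypothetical example)

# Random hidden layer: weights and biases are fixed, not trained
W = rng.normal(0.0, 2.0, (1, 50))
b = rng.uniform(-np.pi, np.pi, 50)

H = np.tanh(x @ W + b)                     # hidden activations
# Fit only the output weights (ridge least squares for stability)
coef = np.linalg.solve(H.T @ H + 1e-8 * np.eye(50), H.T @ y)

fit = H @ coef                             # network output
# Analytic derivative of the network w.r.t. x, via the chain rule
dH = (1.0 - np.tanh(x @ W + b) ** 2) * W
dfit = dH @ coef

err_f = np.max(np.abs(fit - y))                    # error in the map
err_df = np.max(np.abs(dfit - np.cos(x).ravel()))  # error in its derivative
```

Both errors shrink as the hidden layer grows, which is the substance of the approximation theorems the paper discusses.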
The dynamic spillovers of entry: an application to the generic drug industry
This paper examines whether experience from entry in one market can potentially enhance profitability at a future market opportunity for a related product. We formulate and estimate a dynamic game of entry in which forward-looking firms make decisions not just based on present benefits of past entry but also anticipating potential future benefits of current entry. Dynamic spillovers of entry are incorporated through a firm-specific cost, unobservable to the researcher, that depends on past entry decisions. The unobserved costs may also be serially persistent. Thus, the model allows for firm-specific unobserved heterogeneity that evolves based on firm actions. The challenge of estimating a dynamic game with serially correlated unobserved state variables subject to endogenous feedback is overcome by embedding a particle-filter-based technique in a nested fixed point algorithm. Using an application to a stylized model of entry in the generic pharmaceutical industry, we underscore the motivation for the model specification and the methodology developed. Our estimates imply positive spillover effects of entry. Moreover, these spillovers suggest heterogeneity not just across firms but also within firms over time based on their history of entry decisions. Our results illustrate that entry may potentially provide firms with additional strategic advantage in later markets, and that entry spillovers may be an important factor to consider in the equilibrium evolution of the generic drug industry. H. Hong acknowledges financial support from the National Science Foundation [Grant SES1459975] and the Stanford Institute for Economic Policy Research.
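The filtering step the paper embeds can be illustrated with a generic bootstrap particle filter for a serially correlated latent state. The AR(1)-plus-noise model and all parameters below are hypothetical stand-ins for illustration, not the paper's entry game.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma_x, sigma_y, N, T = 0.9, 0.5, 0.7, 2000, 50

# Simulate a serially persistent latent cost and noisy observations of it
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + sigma_x * rng.normal()
y = x + sigma_y * rng.normal(size=T)

particles = rng.normal(0.0, 1.0, N)
est = np.empty(T)
for t in range(T):
    # Propagate each particle through the state equation
    particles = rho * particles + sigma_x * rng.normal(size=N)
    # Reweight by the Gaussian observation likelihood
    w = np.exp(-0.5 * ((y[t] - particles) / sigma_y) ** 2)
    w /= w.sum()
    est[t] = w @ particles                 # filtered mean
    # Resample to avoid weight degeneracy
    particles = rng.choice(particles, size=N, p=w)
```

In the paper's setting, a filter of this kind tracks the serially correlated unobserved cost inside each evaluation of the nested fixed point algorithm.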
Tapping the Supercomputer Under Your Desk: Solving Dynamic Equilibrium Models with Graphics Processors
This paper shows how to build algorithms that use graphics processing units (GPUs) installed in most modern computers to solve dynamic equilibrium models in economics. In particular, we rely on the compute unified device architecture (CUDA) of NVIDIA GPUs. We illustrate the power of the approach by solving a simple real business cycle model with value function iteration. We document improvements in speed of around 200 times and suggest that even further gains are likely.
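A minimal CPU sketch of the value function iteration the paper accelerates on GPUs, for a deterministic growth model with log utility. The model and parameter values are a hypothetical simplification of the paper's RBC application; the paper's speedups come from running the maximization step over the grid in CUDA.

```python
import numpy as np

# Hypothetical parameters: capital share, discount factor, depreciation
alpha, beta, delta = 0.33, 0.96, 0.1
k_grid = np.linspace(0.5, 10.0, 500)       # capital grid

# Resources available at each capital level: f(k) + (1 - delta) k
resources = k_grid**alpha + (1.0 - delta) * k_grid

# Consumption for every (k today, k' tomorrow) pair; -inf if infeasible
c = resources[:, None] - k_grid[None, :]
util = np.full_like(c, -np.inf)
mask = c > 0
util[mask] = np.log(c[mask])

# Bellman iteration: V(k) = max_{k'} u(c) + beta V(k')
V = np.zeros(len(k_grid))
for _ in range(1000):
    V_new = np.max(util + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# Optimal next-period capital at each grid point
policy = k_grid[np.argmax(util + beta * V[None, :], axis=1)]
```

The inner `max` over the grid is embarrassingly parallel across today's capital states, which is exactly the structure a GPU exploits.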