Understanding Financial Market Volatility
__Abstract__
Volatility has been one of the most active and successful areas of research in time series
econometrics and economic forecasting in recent decades. Loosely speaking, volatility is
defined as the average magnitude of fluctuations observed in some phenomenon over time.
Within the area of economics, this definition narrows to the variability of an unpredictable
random component of a time series variable. Typical examples in finance are returns on
assets, such as individual stocks or a stock index like the S&P 500. As noted by
Campbell et al. (1997), (financial market) volatility is central to financial
economics. Since it is the most common measure of the risk involved in investments
in traded securities, it plays a crucial role in portfolio management, risk management,
and pricing of derivative securities including options and futures contracts. Volatility is
therefore closely tracked by private investors, institutional investors such as pension funds, central bankers, and policy makers. For example, the so-called Basel accords contain
regulations where banks are required to hold a certain amount of capital to cover the risks
involved in their consumer loans, mortgages and other assets. An estimate of the volatility
of these assets is a crucial input for determining these capital requirements. In addition,
the financial crisis of 2007-2008 demonstrated that the impact of financial market volatility
is not limited to the financial industry: volatility may be costly for
the economy as a whole. For example, extreme stock market volatility may negatively
affect aggregate investment behavior, in particular because companies often require equity
as a source of external financing.
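The notion of volatility as the average magnitude of fluctuations in an unpredictable return component can be illustrated with a minimal sketch: compute the sample standard deviation of demeaned daily returns and annualize it. The return series below is hypothetical, and the square-root-of-252 annualization assumes i.i.d. daily returns; this is an illustration, not any model from the thesis.

```python
import math

# Hypothetical daily returns (in decimal form) for an asset.
returns = [0.012, -0.008, 0.005, -0.015, 0.009, 0.003, -0.011, 0.007]

# Volatility as the standard deviation of the unpredictable component;
# here we simply demean the returns.
mean_r = sum(returns) / len(returns)
variance = sum((r - mean_r) ** 2 for r in returns) / (len(returns) - 1)
daily_vol = math.sqrt(variance)

# Annualize assuming 252 trading days and i.i.d. returns.
annual_vol = daily_vol * math.sqrt(252)
print(round(daily_vol, 4), round(annual_vol, 4))
```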
This thesis contributes to the volatility literature by investigating several relevant aspects
of volatility. First, we focus on the parameter estimation of multivariate volatility
models, which becomes problematic as the number of assets increases. Second, we
consider the question of what exactly causes financial market volatility. In this context,
we relate volatility to various types of information. In addition, we pay attention to
modeling volatility, adapting volatility models such that they allow for the inclusion of
exogenous variables. Finally, we turn to techniques for forecasting volatility, with a
focus on the combination of density forecasts.
Fractional integration and fat tails for realized covariance kernels
We introduce a new fractionally integrated model for covariance matrix dynamics based on the long-memory behavior of daily realized covariance matrix kernels. We account for fat tails in the data by an appropriate distributional assumption. The covariance matrix dynamics are formulated as a numerically efficient matrix recursion that ensures positive definiteness under simple parameter constraints. Using intraday stock data over the period 2001-2012, we construct realized covariance kernels and show that the new fractionally integrated model statistically and economically outperforms recent alternatives such as the multivariate HEAVY model and the multivariate HAR model. In addition, the long-memory behavior is more important during non-crisis periods.
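The realized covariance construction underlying this model can be sketched in its simplest form: sum the (cross) products of intraday returns over a trading day. The returns below are hypothetical, and this plain realized covariance estimator is a simplification of the kernel estimators used in the paper.

```python
# Hypothetical 5-minute returns for two assets over part of a trading day.
r1 = [0.0004, -0.0002, 0.0006, -0.0005, 0.0003, 0.0001, -0.0004, 0.0002]
r2 = [0.0003, -0.0001, 0.0004, -0.0006, 0.0002, 0.0002, -0.0003, 0.0001]

# Realized (co)variances: sums of products of intraday returns.
rv1 = sum(x * x for x in r1)
rv2 = sum(x * x for x in r2)
rcov = sum(x * y for x, y in zip(r1, r2))

# The implied 2x2 realized covariance matrix [[rv1, rcov], [rcov, rv2]] is
# positive semi-definite: its determinant is non-negative by Cauchy-Schwarz.
det = rv1 * rv2 - rcov ** 2
print(det >= 0)
```

Positive definiteness of the full matrix recursion in the paper is guaranteed by parameter constraints; here it already holds at the level of the raw estimator.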
On the Effects of Private Information on Volatility
We study the impact of private information on volatility in financial markets. We develop a comprehensive framework to investigate this link while controlling for the effects of both public information (such as macroeconomic news releases) and private information on prices and the effects of public information on volatility. Using a high-frequency 30-year U.S. Treasury bond futures data set, we find that private information variables, such as order flow and bid-ask spread, are statistically and economically significant explanatory variables for volatility. Private information is more important than public information, with the effect of a shock to order flow on volatility being four times larger than the effect of a surprise in the most influential macroeconomic news announcement. Moreover, we document an interaction between public and private information effects on volatility, with the impact of order flow on volatility depending positively on the dispersion of analysts' expectations about macroeconomic announcements. Finally, we find that the effect of private information on volatility is larger during contractions than during expansions.
A Class of Adaptive EM-based Importance Sampling Algorithms for Efficient and Robust Posterior and Predictive Simulation
A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method uses sequences of importance weighted Expectation Maximization steps to efficiently construct a mixture of Student-t densities that accurately approximates the target distribution (typically a posterior distribution, of which we only require a kernel), in the sense that the Kullback-Leibler divergence between target and mixture is minimized. We label this approach Mixture of t by Importance Sampling and Expectation Maximization (MitISEM). We also introduce three extensions of the basic MitISEM approach. First, we propose a method for applying MitISEM sequentially, so that the candidate distribution for posterior simulation is updated as new data become available; our results show that this reduces the computational effort enormously. This sequential approach can be combined with a tempering approach, which facilitates simulation from densities with multiple modes that are far apart. Second, we introduce a permutation-augmented MitISEM approach for importance sampling from posterior distributions in mixture models without imposing identification restrictions on the parameters of the model's mixture regimes. Third, we propose a partial MitISEM approach, which aims at approximating the marginal and conditional posterior distributions of subsets of model parameters rather than the joint distribution. This division can substantially reduce the dimension of the approximation problem.
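The core ingredient of this approach, importance sampling with a fat-tailed Student-t candidate for a target of which only a kernel is known, can be illustrated with a minimal sketch. The target kernel below is hypothetical, a single Student-t candidate is used rather than the adaptively fitted mixture, and no EM steps are performed; this shows the weighting mechanism only, not the MitISEM algorithm.

```python
import math
import random

random.seed(42)

def target_kernel(x):
    # Hypothetical unnormalized, non-Gaussian target density kernel.
    return math.exp(-abs(x) ** 1.5)

NU = 5  # degrees of freedom of the Student-t candidate

def t_pdf(x):
    # Density of a standard Student-t with NU degrees of freedom.
    c = math.gamma((NU + 1) / 2) / (math.sqrt(NU * math.pi) * math.gamma(NU / 2))
    return c * (1 + x * x / NU) ** (-(NU + 1) / 2)

def t_draw():
    # t = Z / sqrt(V / nu), with V ~ chi-squared(nu).
    z = random.gauss(0, 1)
    v = sum(random.gauss(0, 1) ** 2 for _ in range(NU))
    return z / math.sqrt(v / NU)

draws = [t_draw() for _ in range(20000)]
weights = [target_kernel(x) / t_pdf(x) for x in draws]

# Self-normalized importance sampling estimate of the target mean.
post_mean = sum(w * x for w, x in zip(weights, draws)) / sum(weights)
print(round(post_mean, 2))  # close to 0 for this symmetric target
```

Because the Student-t candidate has heavier tails than this target, the importance weights stay bounded, which is the robustness property the mixture-of-t construction is designed to preserve for less convenient targets.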
Improving Density Forecasts and Value-at-Risk Estimates by Combining Densities
__Abstract__
We investigate the added value of combining density forecasts for asset return prediction in a specific region of support. We develop a new technique that takes into account model uncertainty by assigning weights to individual predictive densities using a scoring rule based on the censored likelihood. We apply this approach in the context of recently developed univariate volatility models (including HEAVY and Realized GARCH models), using daily returns from the S&P 500, DJIA, FTSE and Nikkei stock market indexes from 2000 until 2013. The results show that combined density forecasts based on the censored likelihood scoring rule significantly outperform pooling based on the log scoring rule and individual density forecasts. The same result, albeit less strong, holds when compared to combined density forecasts based on equal weights. In addition, VaR estimates improve at the short horizon, in particular when compared to estimates based on equal weights or to the VaR estimates of the individual models.
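The censored likelihood idea can be sketched for a single observation: inside the region of interest the full log density is used, while outside it only the total probability mass of the complement enters the score. The sketch below assumes Gaussian predictive densities, a left-tail region below a hypothetical threshold, and hypothetical parameter values; the weighting scheme in the paper is more involved than this single-observation score.

```python
import math

def norm_pdf(y, mu, sigma):
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(y, mu, sigma):
    return 0.5 * (1 + math.erf((y - mu) / (sigma * math.sqrt(2))))

def censored_log_score(y, mu, sigma, threshold):
    # Censored likelihood score for the left-tail region A = (-inf, threshold):
    # full log density inside A, log of the complement's probability mass outside A.
    if y < threshold:
        return math.log(norm_pdf(y, mu, sigma))
    return math.log(1 - norm_cdf(threshold, mu, sigma))

# Hypothetical comparison: a wide (high-sigma) and a narrow (low-sigma)
# forecast density evaluated on a tail observation y = -2.5, threshold -2.
wide = censored_log_score(-2.5, 0.0, 1.5, -2.0)
narrow = censored_log_score(-2.5, 0.0, 0.8, -2.0)
print(wide > narrow)
```

The wider density assigns more mass to the realized tail event and therefore receives the better (higher) score, which is exactly the behavior a tail-focused combination scheme rewards.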
Predicting Covariance Matrices with Financial Conditions Indexes
We model the impact of financial conditions on asset market volatility and correlation. We propose extensions of (factor-)GARCH models for volatility and DCC models for correlation that allow for including indexes that measure financial conditions. In our empirical application we consider daily stock returns of US deposit banks during the period 1994-2011, and proxy financial conditions by the Bloomberg Financial Conditions Index (FCI), which comprises the money, bond, and equity markets. We find that worse financial conditions are associated with both higher volatility and higher average correlations between stock returns. Especially during crises, the additional impact of the FCI indicator is considerable, with an increase in correlations of 0.15. Moreover, including the FCI in volatility and correlation modeling improves Value-at-Risk forecasts, particularly at short horizons.
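The mechanics of augmenting a volatility recursion with a financial conditions index can be sketched with a GARCH(1,1)-type recursion that adds a lagged index term. All parameter values, returns, and index values below are hypothetical, and this univariate sketch stands in for the (factor-)GARCH and DCC extensions of the paper.

```python
# Hypothetical GARCH(1,1)-X recursion: conditional variance driven by past
# squared returns, past variance, and a lagged financial conditions index.
omega, alpha, beta, gamma = 0.00001, 0.08, 0.90, 0.00002

returns = [0.01, -0.02, 0.015, -0.005, 0.03]
fci = [0.2, 0.5, 1.1, 0.9, 0.4]  # hypothetical index, higher = worse conditions

sigma2 = [omega / (1 - alpha - beta)]  # start at the no-X unconditional variance
for t in range(1, len(returns)):
    s2 = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[-1] + gamma * fci[t - 1]
    sigma2.append(s2)

print([round(s, 6) for s in sigma2])
```

With gamma and the index both positive, worse financial conditions mechanically raise the conditional variance path relative to the plain GARCH(1,1) recursion.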
The R package MitISEM: Efficient and Robust Simulation Procedures for Bayesian Inference
This paper presents the R package MitISEM (mixture of t by importance sampling weighted expectation maximization), which provides an automatic and flexible two-stage method to approximate a non-elliptical target density kernel (typically a posterior density kernel) using an adaptive mixture of Student-t densities as approximating density. In the first stage, a mixture of Student-t densities is fitted to the target using an expectation maximization (EM) algorithm in which each step of the optimization procedure is weighted using importance sampling. In the second stage, this mixture density serves as a candidate density for efficient and robust application of importance sampling or the Metropolis-Hastings (MH) method to estimate properties of the target distribution. The package enables Bayesian inference and prediction on model parameters and probabilities, in particular for models where densities have multimodal or other non-elliptical shapes such as curved ridges. Such shapes occur in several scientific fields, for instance in the analysis of DNA data in bioinformatics, the modeling of loan acceptance by heterogeneous groups in financial economics, and the analysis of education's effect on earned income in labor economics. The package also provides an extended algorithm, 'sequential MitISEM', which substantially decreases computation time when the target density has to be approximated for increasing data samples.
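The second-stage use of a fitted candidate can be sketched with an independence Metropolis-Hastings sampler: proposals come from a fixed fat-tailed candidate rather than a random walk, so well-separated modes can be reached in one step. The bimodal target kernel, the single Student-t candidate (in place of the fitted mixture), and all tuning values below are hypothetical; this is a Python illustration of the mechanism, not the R package's implementation.

```python
import math
import random

random.seed(1)

NU = 5  # degrees of freedom of the Student-t candidate

def target_kernel(x):
    # Hypothetical bimodal target kernel: two well-separated normal kernels.
    return math.exp(-0.5 * (x - 3) ** 2) + math.exp(-0.5 * (x + 3) ** 2)

def t_pdf(x, scale=4.0):
    c = math.gamma((NU + 1) / 2) / (math.sqrt(NU * math.pi) * math.gamma(NU / 2))
    return (c / scale) * (1 + (x / scale) ** 2 / NU) ** (-(NU + 1) / 2)

def t_draw(scale=4.0):
    z = random.gauss(0, 1)
    v = sum(random.gauss(0, 1) ** 2 for _ in range(NU))
    return scale * z / math.sqrt(v / NU)

# Independence Metropolis-Hastings: accept/reject against the fixed candidate.
x = 0.0
chain = []
for _ in range(20000):
    prop = t_draw()
    ratio = (target_kernel(prop) * t_pdf(x)) / (target_kernel(x) * t_pdf(prop))
    if random.random() < min(1.0, ratio):
        x = prop
    chain.append(x)

# Both modes should be visited in roughly equal proportion.
share_right = sum(1 for v in chain if v > 0) / len(chain)
print(0.3 < share_right < 0.7)
```

A random-walk sampler started at one mode could stay trapped there; the wide candidate is what makes the sampler robust to this multimodality.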
Forecasting Value-at-Risk under temporal and portfolio aggregation
We examine the impact of temporal and portfolio aggregation on the quality of Value-at-Risk (VaR) forecasts over a horizon of 10 trading days for a well-diversified portfolio of stocks, bonds and alternative investments. The VaR forecasts are constructed based on daily, weekly, or biweekly returns of all constituent assets separately, gathered into portfolios based on asset class, or into a single portfolio. We compare the impact of aggregation with that of choosing a model for the conditional volatilities and correlations, the distribution for the innovations, and the method of forecast construction. We find that the level of temporal aggregation is most important: daily returns form the best basis for VaR forecasts. Modeling the portfolio at the asset or asset class level works better than complete portfolio aggregation, but the differences are smaller. The differences due to the choice of model, distribution, and method of forecast construction are also smaller than those due to temporal aggregation.
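Two simple routes to a 10-day VaR from daily data can be sketched side by side: aggregating daily returns into 10-day returns before taking an empirical quantile, versus scaling the daily VaR by the square root of time. The simulated i.i.d. Gaussian returns and the use of overlapping windows below are hypothetical simplifications; the paper's forecasts are model-based, not empirical quantiles.

```python
import math
import random

random.seed(7)

# Hypothetical i.i.d. daily portfolio returns.
daily = [random.gauss(0.0003, 0.01) for _ in range(2000)]

def empirical_var(returns, level=0.99):
    # Value-at-Risk as the negative of the (1 - level) empirical return quantile.
    s = sorted(returns)
    idx = int((1 - level) * len(s))
    return -s[idx]

# Direct approach: build overlapping 10-day returns from the daily data.
ten_day = [sum(daily[i:i + 10]) for i in range(len(daily) - 9)]
var_direct = empirical_var(ten_day)

# Scaling approach: square-root-of-time rule applied to the daily VaR.
var_scaled = empirical_var(daily) * math.sqrt(10)

print(round(var_direct, 4), round(var_scaled, 4))
```

Under i.i.d. Gaussian returns the two routes roughly agree; with volatility clustering or fat tails they can diverge, which is why the choice of temporal aggregation level matters.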