
    Accurate Yield Curve Scenarios Generation using Functional Gradient Descent

    We propose a multivariate nonparametric technique for generating reliable historical yield curve scenarios and confidence intervals. The approach is based on a Functional Gradient Descent (FGD) estimation of the conditional mean vector and volatility matrix of a multivariate interest rate series. It is computationally feasible in large dimensions and can account for non-linearities in the dependence of interest rates at all available maturities. Based on FGD, we apply filtered historical simulation to compute reliable out-of-sample yield curve scenarios and confidence intervals. We back-test our methodology on daily USD bond data for forecasting horizons from 1 to 10 days. Based on several statistical performance measures, we find significant evidence of a higher predictive power of our method when compared to scenario-generating techniques based on (i) factor analysis, (ii) a multivariate CCC-GARCH model, or (iii) an exponential smoothing volatility estimator as in the RiskMetrics approach.
    Keywords: Conditional mean and volatility estimation; Filtered Historical Simulation; Functional Gradient Descent; Term structure; Multivariate CCC-GARCH models
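
    A minimal sketch of the filtered-historical-simulation step, assuming conditional means and volatilities have already been fitted by some estimator (FGD in the paper) and simplifying to a diagonal volatility matrix; the function name and arguments below are illustrative, not the authors' code:

        import numpy as np

        def fhs_scenarios(returns, mu_fit, sigma_fit, mu_next, sigma_next,
                          n_scenarios=1000, seed=0):
            # returns, mu_fit, sigma_fit: (T, d) arrays of historical yield changes and
            # fitted conditional means/volatilities (diagonal-volatility simplification)
            # mu_next, sigma_next: (d,) one-step-ahead forecasts of mean and volatility
            rng = np.random.default_rng(seed)
            z = (returns - mu_fit) / sigma_fit            # devolatilized historical innovations
            idx = rng.integers(0, len(z), size=n_scenarios)
            return mu_next + sigma_next * z[idx]          # resampled, rescaled scenarios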

    A general multivariate threshold GARCH model with dynamic conditional correlations

    We propose a new multivariate GARCH model with Dynamic Conditional Correlations that extends previous models by admitting multivariate thresholds in conditional volatilities and correlations. The model estimation is feasible in large dimensions, and the positive definiteness of the conditional covariance matrix is easily ensured by the structure of the model. Thresholds in conditional volatilities and correlations are estimated from the data, together with all other model parameters. We study the performance of our model in three distinct applications to US stock and bond market data. Even though the conditional volatility functions of stock returns exhibit pronounced GARCH and threshold features, their conditional correlation dynamics depend on a very simple threshold structure with no local GARCH features. We obtain a similar result for the conditional correlations between government and corporate bond returns. On the contrary, we find both threshold and GARCH structures in the conditional correlations between stock and government bond returns. In all applications, our model significantly improves the in-sample and out-of-sample forecasting power for future conditional correlations with respect to other relevant multivariate GARCH models.
    Keywords: Multivariate GARCH models, Dynamic conditional correlations, Tree-structured GARCH models
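
    The building blocks can be illustrated with a single-threshold (GJR-type) variance recursion and a standard DCC(1,1) correlation update; the paper's data-driven multivariate thresholds and its estimation procedure are not reproduced here, and all function names below are illustrative:

        import numpy as np

        def threshold_garch_variance(eps, omega, alpha, beta, gamma):
            # h_t = omega + (alpha + gamma * 1{eps_{t-1} < 0}) * eps_{t-1}^2 + beta * h_{t-1}
            h = np.empty(len(eps))
            h[0] = np.var(eps)
            for t in range(1, len(eps)):
                asym = gamma if eps[t - 1] < 0 else 0.0
                h[t] = omega + (alpha + asym) * eps[t - 1] ** 2 + beta * h[t - 1]
            return h

        def dcc_correlations(z, a, b):
            # z: (T, d) standardized residuals; returns the (T, d, d) conditional correlations
            T, d = z.shape
            S = np.corrcoef(z, rowvar=False)              # unconditional correlation target
            Q = S.copy()
            R = np.empty((T, d, d))
            for t in range(T):
                if t > 0:
                    Q = (1 - a - b) * S + a * np.outer(z[t - 1], z[t - 1]) + b * Q
                scale = np.diag(1.0 / np.sqrt(np.diag(Q)))
                R[t] = scale @ Q @ scale                  # rescale Q_t to a correlation matrix
            return R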

    Optimal Conditionally Unbiased Bounded-Influence Inference in Dynamic Location and Scale Models

    This paper studies the local robustness of estimators and tests for the conditional location and scale parameters in a strictly stationary time series model. We first derive optimal bounded-influence estimators for such settings under a conditionally Gaussian reference model. Based on these results, optimal bounded-influence versions of the classical likelihood-based tests for parametric hypotheses are obtained. We propose a feasible and efficient algorithm for the computation of our robust estimators, which makes use of analytical Laplace approximations to estimate the auxiliary recentering vectors ensuring Fisher consistency in robust estimation. This strongly reduces the necessary computation time by avoiding the simulation of multidimensional integrals, a task that typically has to be addressed in the robust estimation of nonlinear models for time series. In Monte Carlo simulations of an AR(1)-ARCH(1) process, we show that our robust procedures maintain a very high efficiency under ideal model conditions and at the same time perform very satisfactorily under several forms of departure from conditional normality. On the contrary, classical Pseudo Maximum Likelihood inference procedures are found to be highly inefficient under such local model misspecifications. These patterns are confirmed by an application to robust testing for ARCH.
    Keywords: Time series models, M-estimators, influence function, robust estimation and testing
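
    The flavour of bounded-influence estimation can be conveyed with a Huber-type location-scale M-estimator under a Gaussian reference model, where the consistency constant plays the role of the recentering needed for Fisher consistency; this is a static sketch with illustrative function names, not the paper's dynamic AR-ARCH setting:

        import numpy as np
        from scipy.stats import norm

        def huber_psi(u, c=1.345):
            # bounded score: identity near zero, clipped at +/- c in the tails
            return np.clip(u, -c, c)

        def robust_location_scale(x, c=1.345, tol=1e-8, max_iter=200):
            # Solve sum psi((x - mu)/sigma) = 0 and mean(psi^2) = kappa, where
            # kappa = E[psi(Z)^2] for Z ~ N(0,1) keeps the scale Fisher-consistent.
            kappa = (2 * norm.cdf(c) - 1) - 2 * c * norm.pdf(c) + 2 * c ** 2 * norm.sf(c)
            mu, sigma = np.median(x), np.std(x)
            for _ in range(max_iter):
                psi = huber_psi((x - mu) / sigma, c)
                mu_new = mu + sigma * psi.mean()
                sigma_new = sigma * np.sqrt((psi ** 2).mean() / kappa)
                if abs(mu_new - mu) < tol and abs(sigma_new - sigma) < tol:
                    break
                mu, sigma = mu_new, sigma_new
            return mu, sigma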

    Essays in asset pricing

    My dissertation consists of three chapters, each of which focuses on a different area of research in asset pricing. The first chapter's focal point is the measurement of the premium for jump risks in index option markets. The second chapter is devoted to nonparametric measurement of pricing kernel dispersion. The third chapter contributes to the literature on latent state variable recovery in option pricing models. In the first chapter, "Big risk", I show how to replicate a large family of high-frequency measures of realised return variation using dynamically rebalanced option portfolios. With this technology, investors can generate optimal hedging payoffs for realised variance and several measures of realised jump variation in incomplete option markets. These trading strategies induce excess payoffs that are direct compensation for second- and higher-order risk exposure in the market for (index) options. Sample averages of these excess payoffs are natural estimates of the risk premia associated with second- and higher-order risk exposures. In an application to the market for short-maturity European options on the S&P500 index, I obtain new and important evidence about the pricing of variance and jump risk. I find that the variance risk premium is positive during daytime, when the hedging frequency is high enough, and negative during night-time. Similarly, for an investor taking long variance positions, daytime profits are greater in absolute value than night-time losses. Compensation for big risk is mostly available overnight. The premium for jump skewness risk is positive, while the premium for jump quarticity is negative (contrary to variance, also during the trading day). The risk premium for big risk is concentrated in states with large recent big risk realisations. In the second chapter, "Arbitrage free dispersion", co-authored with Andras Sali and Fabio Trojani, we develop a theory of arbitrage-free dispersion (AFD) which allows for direct insights into the dependence structure of the pricing kernel and stock returns, and which characterizes the testable restrictions of asset pricing models. Arbitrage-free dispersion arises as a consequence of Jensen's inequality and the convexity of the cumulant generating function of the pricing kernel and returns. It implies a wide family of model-free dispersion constraints, which extend the existing literature on dispersion and co-dispersion bounds. The new techniques are applicable within a unifying approach in multivariate and multiperiod settings. In an empirical application, we find that the dispersion of stationary and martingale pricing kernel components in a benchmark long-run risk model yields a counterfactual dependence of short- vs. long-maturity bond returns and is insufficient for pricing optimal portfolios of market equity and short-term bonds. In the third chapter, "State recovery from option data through variation swap rates in the presence of unspanned skewness", I show that a certain class of variance and skew swaps can be thought of as sufficient statistics of the implied volatility surface in the context of uncovering the conditional dynamics of second and third moments of index returns. I interpret the slope of the cumulant generating function of index returns in terms of tradable swap contracts, which nest the standard variance swap and share its fundamental linear pricing property in the class of Affine Jump Diffusion models. Equipped with variance- and skew-pricing contracts, I investigate the performance of a range of state variable filtering setups in the context of the stylized facts uncovered by the recent empirical option pricing literature, which underlines the importance of decoupling the drivers of stochastic volatility from those of stochastic (jump) skewness. The linear pricing structure of the contracts allows for an exact evaluation of the impact of state variables on the observed prices. This simple pricing structure allows me to design improved low-dimensional state-space filtering setups for estimating AJD models. In a simulated setting, I show that in the presence of unspanned skewness, a simple filtering setup which includes only prices of skew and variance swaps offers significant improvements over a high-dimensional filter which treats all observed option prices as observable inputs.
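
    A minimal sketch of the realized-variation and swap-payoff ingredients used in the first and third chapters, assuming an evenly spaced grid of intraday log prices and an externally given swap rate; the option-replication step itself is not shown and the function names are illustrative:

        import numpy as np

        def realized_measures(log_prices):
            # realized variance, bipower variation and a nonparametric jump proxy
            r = np.diff(log_prices)
            rv = np.sum(r ** 2)                                  # total quadratic variation
            bv = (np.pi / 2) * np.sum(np.abs(r[1:] * r[:-1]))    # continuous (diffusive) part
            return rv, bv, max(rv - bv, 0.0)                     # jump variation proxy

        def variance_swap_excess_payoff(realized_var, swap_rate, notional=1.0):
            # excess payoff of a long variance position: realized leg minus fixed swap rate
            return notional * (realized_var - swap_rate)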

    Limits of Learning about a Categorical Latent Variable under Prior Near-Ignorance

    In this paper, we consider Walley's coherent theory of (epistemic) uncertainty, in which beliefs are represented through sets of probability distributions, and we focus on the problem of modeling prior ignorance about a categorical random variable. In this setting, it is a known result that a state of prior ignorance is not compatible with learning. To overcome this problem, another state of beliefs, called near-ignorance, has been proposed. Near-ignorance resembles ignorance very closely, by satisfying some principles that can arguably be regarded as necessary in a state of ignorance, while still allowing learning to take place. This paper provides new and substantial evidence that near-ignorance, too, cannot really be regarded as a way out of the problem of starting statistical inference in conditions of very weak beliefs. The key to this result is focusing on a setting characterized by a variable of interest that is latent. We argue that such a setting is by far the most common case in practice, and we provide, for the case of categorical latent variables (and general manifest variables), a condition that, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied even in the most common statistical problems. We regard these results as a strong form of evidence against the possibility of adopting a condition of prior near-ignorance in real statistical problems. Comment: 27 LaTeX pages
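
    Near-ignorance about a categorical variable is commonly represented by Walley's imprecise Dirichlet model; a minimal sketch of its posterior probability bounds, assuming the variable is observed directly (manifest) rather than latent, is:

        def idm_posterior_bounds(counts, s=2.0):
            # Imprecise Dirichlet model: with observed counts n_j and prior strength s,
            # the posterior probability of category j lies in [n_j/(N+s), (n_j+s)/(N+s)].
            N = sum(counts)
            lower = [n / (N + s) for n in counts]
            upper = [(n + s) / (N + s) for n in counts]
            return lower, upper

        # Example: counts [7, 3] and s = 2 give P(category 1) in [7/12, 9/12]; the
        # interval shrinks as data accumulate, i.e. learning takes place when the
        # variable is manifest. The papers show this can fail when it is latent.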

    Learning about a Categorical Latent Variable under Prior Near-Ignorance

    It is well known that complete prior ignorance is not compatible with learning, at least in a coherent theory of (epistemic) uncertainty. What is less widely known is that there is a state similar to full ignorance, which Walley calls near-ignorance, that permits learning to take place. In this paper we provide new and substantial evidence that near-ignorance, too, cannot really be regarded as a way out of the problem of starting statistical inference in conditions of very weak beliefs. The key to this result is focusing on a setting characterized by a variable of interest that is latent. We argue that such a setting is by far the most common case in practice, and we show, for the case of categorical latent variables (and general manifest variables), that there is a sufficient condition that, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied in the most common statistical problems. Comment: 15 LaTeX pages

    Aligning capital with risk

    The interaction of capital and risk is of primary interest in the corporate governance of banks, as it links operational profitability and strategic risk management. Senior executives understand that their organization's monitoring system strongly affects the behaviour of managers and employees. Typical instruments used by senior executives to focus on strategy are balanced scorecards with objectives for performance and risk management, including a corresponding payroll process. A top-down capital-at-risk concept gives the executive board the desired control over the operative behaviour of all risk takers. It guarantees uniform compensation for business risks taken in any division or business area. The standard theory of cost of capital assumes standardized assets, with return distributions normalized to a one-year risk horizon. Risk measurement and management for any individual risk factor, however, has a bottom-up design: the typical risk horizon is 10 days for trading positions, 1 month for treasury positions, 1 year for operational risks and even longer for credit risks. My contribution to the discussion is as follows: in the classical theory, capital requirements and risk measurement are determined in a top-down fashion, without reference to market and regulatory standards. In my thesis I show how to close the gap between bottom-up risk modelling and top-down capital alignment. I dedicate a separate paper to each risk factor and its application in risk capital management.
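
    A minimal illustration of the horizon mismatch the thesis addresses is the crude square-root-of-time rule for stretching a short-horizon risk figure to a one-year capital horizon; it is exact only under i.i.d. zero-mean normal returns, which is precisely the simplification a careful bottom-up/top-down alignment must go beyond:

        import math

        def scale_var_sqrt_time(var_short, horizon_days, capital_horizon_days=250):
            # Square-root-of-time scaling of a Value-at-Risk figure to the capital horizon.
            # Valid only for i.i.d. zero-mean normal returns; shown as the naive benchmark.
            return var_short * math.sqrt(capital_horizon_days / horizon_days)

        # Example: a 10-day trading-book VaR of 5.0 scales to 5.0 * sqrt(25) = 25.0 at 250 days.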

    Essays on variance risk

    My PhD thesis consists of three papers which study the nature, structure, dynamics and price of variance risks. As a tool, I make use of multivariate affine jump-diffusion models with matrix-valued state spaces. The first chapter proposes a new three-factor model for index option pricing. Core features of the model are unspanned skewness and term-structure effects, i.e., the shape of the volatility surface can change without a change in the volatility level. The model reduces pricing errors compared to benchmark two-factor models by up to 22%. Using a decomposition of the latent state, I show that this superior performance is directly linked to a third volatility factor which is unrelated to the volatility level. The second chapter studies the price of the smile, defined as the premia for individual option risk factors. These risk factors are directly linked to the variance risk premium (VRP). I find that option risk premia are spanned by mid-run and long-run volatility factors, while the large high-frequency factor does not enter the price of the smile. I find the VRP to be unambiguously negative and decompose it into three components: diffusive risk, jump risk and jump intensity risk. The distinct term structure patterns of these components explain why the term structure of the VRP is downward sloping in normal times and upward sloping during market distress. In predictive regressions, I find an economically relevant predictive power over returns to volatility positions and S&P 500 index returns. The last chapter introduces several numerical methods necessary for estimating matrix-valued affine option pricing models, including the Matrix Rotation Count algorithm and a fast evaluation scheme for the likelihood function.
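
    The variance-risk-premium estimates rest on comparing risk-neutral (swap) and realized variance at each maturity; a minimal ex-post sketch, assuming both legs are observed and using purely illustrative numbers, is:

        def variance_risk_premium(swap_rates, realized_vars):
            # Ex-post VRP per maturity: realized variance minus the risk-neutral swap rate.
            # Negative values mean variance sellers earn a premium, the usual finding.
            return {m: realized_vars[m] - swap_rates[m] for m in swap_rates}

        # Example: swap rates {1: 0.030, 3: 0.032, 12: 0.035} against realized variances
        # {1: 0.026, 3: 0.029, 12: 0.033} give a term structure of premia
        # {1: -0.004, 3: -0.003, 12: -0.002}.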

    Robust inference with GMM estimators

    The local robustness properties of Generalized Method of Moments (GMM) estimators and of a broad class of GMM-based tests are investigated in a unified framework. GMM statistics are shown to have bounded influence if and only if the function defining the orthogonality restrictions imposed on the underlying model is bounded. Since in many applications this function is unbounded, it is useful to have procedures that modify the starting orthogonality conditions in order to obtain a robust version of a GMM estimator or test. We show how this can be done when a reference model for the data distribution can be assumed. We develop a flexible algorithm for constructing a robust GMM (RGMM) estimator leading to stable GMM test statistics. The amount of robustness can be controlled by an appropriate tuning constant. We relate, by an explicit formula, the choice of this constant to the maximal admissible bias on the level and/or the power of a GMM test and to the amount of contamination that one can reasonably assume given some information on the data. Finally, we illustrate the RGMM methodology with simulations of an application to RGMM testing for conditional heteroscedasticity in a simple linear autoregressive model. In this example we find significant instability of the size and the power of a classical GMM testing procedure under a non-normal conditional error distribution. In contrast, the RGMM testing procedures can control the size and the power of the test under nonstandard conditions while maintaining a satisfactory power under an approximately normal conditional error distribution.
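
    The core RGMM idea, bounding each observation's moment contribution with a tuning constant and recentering so the bounded moments stay consistent at the reference model, can be sketched as follows; the recentering vector here is a crude empirical placeholder, whereas the paper derives it at the reference model, and all names are illustrative:

        import numpy as np

        def bounded_moments(g, c, tau=None):
            # g: (n, k) matrix of per-observation moment contributions g(x_i, theta).
            # Shrink each row so its norm never exceeds the tuning constant c, then
            # recentre by tau so the bounded moments have mean zero at the reference model.
            w = np.minimum(1.0, c / np.maximum(np.linalg.norm(g, axis=1), 1e-12))
            g_b = g * w[:, None]
            if tau is None:
                tau = g_b.mean(axis=0)            # placeholder recentering (empirical)
            return g_b - tau

        def rgmm_objective(theta, moment_fn, data, W, c):
            # Quadratic-form GMM objective built on the bounded, recentred moments.
            gbar = bounded_moments(moment_fn(theta, data), c).mean(axis=0)
            return gbar @ W @ gbar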