
    Robust estimation of dimension reduction space

    Most dimension reduction methods based on nonparametric smoothing are highly sensitive to outliers and to data coming from heavy-tailed distributions. We show that the methods recently proposed by Xia et al. (2002) can be made robust in a way that preserves all advantages of the original approach. Their extension based on local one-step M-estimators is sufficiently robust to outliers and data from heavy-tailed distributions, is relatively easy to implement, and, surprisingly, performs as well as the original methods when applied to normally distributed data.

    Keywords: Dimension reduction, Nonparametric regression, M-estimation
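
    The local one-step M-estimator idea can be illustrated with a short sketch: start from a local least-squares fit and apply a single Huber reweighting step. This is an illustrative reconstruction in Python, not the authors' code; the function name, the Gaussian kernel, and the ridge term are assumptions.

        import numpy as np

        def local_huber_smoother(x0, X, y, h, c=1.345, ridge=1e-8):
            """One-step Huber M-version of a local linear fit at x0."""
            K = np.exp(-0.5 * ((X - x0) / h) ** 2)          # Gaussian kernel weights
            Z = np.column_stack([np.ones_like(X), X - x0])  # local linear design
            # Initial kernel-weighted least-squares fit
            W = K
            beta = np.linalg.solve(Z.T @ (W[:, None] * Z) + ridge * np.eye(2),
                                   Z.T @ (W * y))
            r = y - Z @ beta
            s = np.median(np.abs(r)) / 0.6745 + 1e-12       # robust MAD scale
            # One Huber step: downweight observations with large residuals
            u = r / (c * s)
            w = np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))
            W = K * w
            beta = np.linalg.solve(Z.T @ (W[:, None] * Z) + ridge * np.eye(2),
                                   Z.T @ (W * y))
            return beta[0]                                  # robust estimate at x0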

    Long Memory Persistence in the Factor of Implied Volatility Dynamics

    The volatility implied by observed market prices as a function of strike and time to maturity forms an implied volatility surface (IVS). Practical applications require reducing the dimension and characterizing its dynamics through a small number of factors. Such dimension reduction is achieved by a Dynamic Semiparametric Factor Model (DSFM) that characterizes the IVS itself and its movements across time by a multivariate time series of factor loadings. This paper investigates long range dependence in the factor loading series. Our results reveal that shocks to volatility persist for a very long time, significantly affecting stock prices. To represent the series dynamics appropriately and allow improved forecasting, we model the long memory in levels and absolute returns using the class of fractionally integrated volatility models, which provide a flexible structure that captures the slowly decaying autocorrelation function reasonably well.

    Keywords: Implied Volatility, Dynamic Semiparametric Factor Modeling, Long Memory, Fractional Integrated Volatility Models
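
    As a sketch of how long range dependence in a loading series can be detected, the Geweke-Porter-Hudak (GPH) log-periodogram regression estimates the memory parameter d. This is a standard diagnostic, not necessarily the paper's exact procedure; the bandwidth m = n**0.5 is a common but assumed default.

        import numpy as np

        def gph_estimate(z, frac=0.5):
            """GPH log-periodogram estimate of the memory parameter d.

            A value of d in (0, 0.5) indicates stationary long memory."""
            z = np.asarray(z, dtype=float)
            n = z.size
            m = int(n ** frac)                          # number of frequencies used
            lam = 2 * np.pi * np.arange(1, m + 1) / n   # Fourier frequencies
            fft = np.fft.fft(z - z.mean())
            I = np.abs(fft[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram
            xreg = -np.log(4 * np.sin(lam / 2) ** 2)
            return np.polyfit(xreg, np.log(I), 1)[0]          # slope = d-hat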

    Robust Econometrics

    Econometrics often deals with data under conditions that are non-standard from the statistical point of view, such as heteroscedasticity or measurement errors, and the estimation methods thus need either to be adapted to such conditions or to be at least insensitive to them. Methods insensitive to the violation of certain assumptions, for example to the presence of heteroscedasticity, are in a broad sense referred to as robust (e.g., to heteroscedasticity). There is also a more specific meaning of the word 'robust', which stems from the field of robust statistics. This latter notion defines robustness rigorously in terms of the behavior of an estimator both at the assumed (parametric) model and in its neighborhood in the space of probability distributions. Even though the methods of robust statistics were long used only in the simplest settings, such as the estimation of location, scale, or linear regression, they have recently motivated a range of new econometric methods, which are the focus of this chapter.
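
    As a minimal example of the robust-statistics building block mentioned here, the Huber M-estimator of location downweights outlying observations. The sketch below uses iterative reweighting with the usual tuning constant c = 1.345; all names are illustrative.

        import numpy as np

        def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
            """Huber M-estimate of location by iterative reweighting."""
            x = np.asarray(x, dtype=float)
            mu = np.median(x)
            s = np.median(np.abs(x - mu)) / 0.6745 + 1e-12   # robust MAD scale
            for _ in range(max_iter):
                u = (x - mu) / s
                # Full weight inside c scale units, downweight beyond
                w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))
                mu_new = np.average(x, weights=w)
                if abs(mu_new - mu) < tol:
                    break
                mu = mu_new
            return mu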

    Calibration Design of Implied Volatility Surfaces

    The calibration of option pricing models leads to the minimization of an error functional. We show that its usual specification as a root mean squared error implies fluctuating exotics prices and possibly wrong prices. We propose a simple and natural method to overcome these problems, illustrate the drawbacks of the usual approach, and show the advantages of our method. To this end, we calibrate the Heston model to a time series of DAX implied volatility surfaces and then price cliquet options.

    Keywords: calibration, data design, implied volatility surface, Heston model, cliquet option
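
    A sketch of the point at issue: the usual root mean squared error functional versus a weighted alternative. The weighting scheme is illustrative only and not the specific method proposed in the paper; model_price stands for any pricing function such as a Heston pricer.

        import numpy as np

        def rmse_loss(params, market_prices, model_price, grid):
            """Usual specification: root mean squared pricing error."""
            errs = [model_price(params, K, T) - P
                    for (K, T), P in zip(grid, market_prices)]
            return np.sqrt(np.mean(np.square(errs)))

        def weighted_loss(params, market_prices, model_price, grid, weights):
            """Weighted alternative: emphasize liquid quotes so the functional
            does not over-fit far in/out-of-the-money options."""
            errs = [model_price(params, K, T) - P
                    for (K, T), P in zip(grid, market_prices)]
            return np.sqrt(np.average(np.square(errs), weights=weights))

        # Either functional can be handed to scipy.optimize.minimize to
        # calibrate model parameters to an observed implied volatility surface.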

    The Stochastic Fluctuation of the Quantile Regression Curve

    Let $(X_1, Y_1), \ldots, (X_n, Y_n)$ be i.i.d. random variables and let $l(x)$ be the unknown $p$-quantile regression curve of $Y$ on $X$. A quantile smoother $l_n(x)$ is a localized, nonlinear estimator of $l(x)$. The strong uniform consistency rate is established under general conditions. In many applications it is necessary to know the stochastic fluctuation of the process $\{l_n(x) - l(x)\}$. Using strong approximations of the empirical process and extreme value theory allows us to consider the asymptotic maximal deviation $\sup_{0 \le x \le 1} |l_n(x) - l(x)|$. The derived result helps in the construction of a uniform confidence band for the quantile curve $l(x)$. This confidence band can be applied as a model check, e.g. in econometrics. An application considers a labour market discrimination effect.

    Keywords: Quantile Regression, Consistency Rate, Confidence Band, Check Function, Kernel Smoothing, Nonparametric Fitting
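
    A direct, unoptimized sketch of a localized p-quantile smoother l_n(x): at each point, minimize the kernel-weighted check-function loss. The Gaussian kernel and the bounded scalar search are assumptions made for illustration.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def check_function(u, p):
            """Quantile regression check function rho_p(u) = u * (p - 1{u<0})."""
            return u * (p - (u < 0))

        def quantile_smoother(x0, X, Y, h, p=0.5):
            """Local constant p-quantile estimate l_n(x0): minimize the
            kernel-weighted sum of check-function losses."""
            K = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
            obj = lambda theta: np.sum(K * check_function(Y - theta, p))
            res = minimize_scalar(obj, bounds=(Y.min(), Y.max()),
                                  method="bounded")
            return res.x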

    Statistics of Risk Aversion

    Information about investors' risk preferences is essential for modelling a wide range of quantitative finance applications. Valuable information related to preferences can be extracted from option prices through pricing kernels. In this paper, pricing kernels and their term structure are estimated in a time varying approach from DAX and ODAX data using a dynamic semiparametric factor model (DSFM). The DSFM smooths in time and space simultaneously, approximating complex dynamic structures by basis functions and a time series of loading coefficients. Contradicting standard risk aversion assumptions, the estimated pricing kernels indicate risk proclivity at certain levels of return. The analysis of the time series of loading coefficients allows a better understanding of the dynamic behaviour of investors' preferences towards risk.

    Keywords: Dynamic Semiparametric Estimation, Pricing Kernel, Risk Aversion
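
    The object being estimated can be sketched as the empirical pricing kernel, the ratio of the risk-neutral density q (backed out from option prices) to the historical density p of returns. Both density estimates are assumed given here; this is the textbook construction, not the paper's DSFM estimator.

        import numpy as np

        def empirical_pricing_kernel(q_density, p_density, eps=1e-12):
            """Pricing kernel K(r) = q(r) / p(r) on a common grid of returns.

            Regions where K increases in r indicate risk proclivity rather
            than uniform risk aversion."""
            q = np.asarray(q_density, dtype=float)
            p = np.asarray(p_density, dtype=float)
            return q / np.maximum(p, eps)   # guard against division by zero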

    Value-at-Risk Calculations with Time Varying Copulae

    The Value-at-Risk (VaR) of a portfolio is determined by the multivariate distribution of the risk factor increments. This distribution can be modelled through copulae, where the copula parameters are not necessarily constant over time. For an exchange rate portfolio, copulae with time varying parameters are estimated and the VaR simulated accordingly. Backtesting underlines the improved performance of the time varying copulae.

    Keywords: Value-at-Risk, VaR, portfolio, copulae
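
    A minimal sketch under strong simplifying assumptions: a Gaussian copula whose correlation parameter is refitted on a rolling window, with VaR simulated from the fitted model. Normal marginals are an assumption made here for brevity; the paper's copula families and marginal models may differ.

        import numpy as np

        def rolling_copula_var(returns, weights, window=250, alpha=0.01,
                               n_sim=10000, seed=0):
            """1-day portfolio VaR from a Gaussian copula with rolling
            (time varying) correlation; returns is a (T, d) array of
            risk factor increments."""
            rng = np.random.default_rng(seed)
            T, d = returns.shape
            var_series = []
            for t in range(window, T):
                win = returns[t - window:t]
                mu, sigma = win.mean(axis=0), win.std(axis=0, ddof=1)
                corr = np.corrcoef(win, rowvar=False)      # rolling parameter
                L = np.linalg.cholesky(corr + 1e-10 * np.eye(d))
                z = rng.standard_normal((n_sim, d)) @ L.T  # copula draws
                sims = mu + sigma * z                      # assumed normal marginals
                pnl = sims @ weights
                var_series.append(-np.quantile(pnl, alpha))
            return np.array(var_series)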

    Working with the XQC

    An enormous number of statistical methods have been developed in quantitative finance during the last decades. Nonparametric methods, bootstrapping time series, wavelets, and estimation of diffusion coefficients are now almost standard in statistical applications. To implement these new methods, the developer usually uses a programming environment he is familiar with. Thus, such methods are only available for preselected software packages, but not for widely used standard software packages like MS Excel. To apply these new methods to empirical data, a potential user faces a number of problems, or it may even be impossible for him to use the methods without rewriting them in a different programming language. Even someone who wants to apply a newly developed method to simulated data in order to understand the methodology is confronted with the drawbacks described above. A very similar problem occurs in teaching statistics at the undergraduate level. Since students usually have their preferred software and often do not have access to the same statistical software packages as their teacher, illustrating examples have to be executable with standard tools. In general, two statisticians are on either side of the distribution process of newly implemented methods: the provider (inventor) of a new technique (algorithm) and the user who wants to apply (understand) the new technique. The aim of the XploRe Quantlet client/server architecture is to bring these statisticians closer to each other. The XploRe Quantlet Client (XQC) represents the front end, the user interface (UI) of this architecture, allowing access to the XploRe server and its methods and data. The XQC is fully programmed in Java and does not depend on a specific computer platform. It runs on Windows and Mac platforms as well as on Unix and Linux machines.

    Keywords: XploRe Quantlet Client, quantitative finance, application, applet

    Common Functional Implied Volatility Analysis

    Trading, hedging and risk analysis of complex option portfolios depend on accurate pricing models. The modelling of implied volatilities (IV) plays an important role, since volatility is the crucial parameter in the Black-Scholes (BS) pricing formula. It is well known from empirical studies that the volatilities implied by observed market prices exhibit patterns known as volatility smiles or smirks that contradict the assumption of constant volatility in the BS pricing model. On the other hand, the IV is a function of two parameters, the strike price and the time to maturity, and it is desirable in practice to reduce the dimension of this object and characterize the IV surface through a small number of factors. Clearly, a dimension-reduced pricing model that should reflect the dynamics of the IV surface needs to contain factors and factor loadings that characterize the IV surface itself and its movements across time.

    Keywords: implied volatility, Black-Scholes, option portfolio, pricing
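
    A simple stand-in for the factor decomposition described here: PCA on vectorized IV surfaces yields eigen-surfaces (factors) and a time series of loadings. This is an illustrative simplification of the functional approach, not the paper's estimator.

        import numpy as np

        def iv_surface_factors(surfaces, n_factors=3):
            """PCA decomposition of a time series of IV surfaces.

            surfaces: (T, n_strikes * n_maturities) array, each row one
            vectorized IV surface. Returns the mean surface, the first
            n_factors eigen-surfaces and their loadings over time."""
            mean_surf = surfaces.mean(axis=0)
            centered = surfaces - mean_surf
            # SVD of the centered data: rows of Vt are the factor surfaces
            U, S, Vt = np.linalg.svd(centered, full_matrices=False)
            factors = Vt[:n_factors]            # eigen-surfaces
            loadings = centered @ factors.T     # time series of loadings
            return mean_surf, factors, loadings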

    Value-at-Risk and Expected Shortfall when there is long range dependence.

    Empirical studies have shown that a large number of financial asset returns exhibit fat tails and are often characterized by volatility clustering and asymmetry. Another stylized fact is long memory, or long range dependence, in market volatility, with significant impact on the pricing and forecasting of market volatility. The implication is that models that accommodate long memory hold the promise of improved long-run volatility forecasts as well as accurate pricing of long-term contracts. At the same time, recent focus is on whether long memory can affect the measurement of market risk in the context of Value-at-Risk (VaR). In this paper, we evaluate the Value-at-Risk (VaR) and Expected Shortfall (ESF) in financial markets under such conditions. We examine one equity portfolio, the British FTSE100, and three stocks of the German DAX index portfolio (Bayer, Siemens and Volkswagen). Classical VaR estimation methodology such as the exponential moving average (EMA), as well as extensions to cases where long memory is an inherent characteristic of the system, are investigated. In particular, we estimate two long memory models, the Fractional Integrated Asymmetric Power-ARCH and the Hyperbolic-GARCH, with different error distribution assumptions. Our results show that models that account for asymmetries in the volatility specification, as well as a fractionally integrated parametrization of the volatility process, perform better in predicting the one-step as well as five-step ahead VaR and ESF for short and long positions than short memory models. This suggests that for proper risk valuation of options, the degree of persistence should be investigated and appropriate models that incorporate the existence of such characteristics be taken into account.

    Keywords: Backtesting, Value-at-Risk, Expected Shortfall, Long Memory, Fractional Integrated Volatility Models
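
    The evaluation step can be sketched independently of the volatility model: compute VaR and Expected Shortfall from model-simulated returns, then backtest VaR violations with the Kupiec unconditional-coverage test. The fitted FIAPARCH/HYGARCH forecasts are assumed to be produced elsewhere.

        import numpy as np
        from scipy.stats import chi2

        def var_es(simulated_returns, alpha=0.01):
            """VaR and Expected Shortfall at level alpha for a long position."""
            q = np.quantile(simulated_returns, alpha)
            es = simulated_returns[simulated_returns <= q].mean()
            return -q, -es    # report both as positive loss numbers

        def kupiec_test(returns, var_forecasts, alpha=0.01):
            """Kupiec unconditional-coverage LR test of VaR violations."""
            hits = returns < -var_forecasts          # violation indicator
            n, x = hits.size, hits.sum()
            pi_hat = max(x / n, 1e-12)               # observed violation rate
            lr = -2 * (x * np.log(alpha) + (n - x) * np.log(1 - alpha)
                       - x * np.log(pi_hat)
                       - (n - x) * np.log(max(1 - pi_hat, 1e-12)))
            return lr, chi2.sf(lr, df=1)             # statistic and p-value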