
    GHICA - Risk Analysis with GH Distributions and Independent Components

    Over recent years, research on risk management has been spurred by the regulatory work of the Basel Committee on Banking Supervision. Many widely used risk management methods, however, either compute risk measures under a Gaussian distributional assumption or run into numerical difficulties. The primary aim of this paper is to present a realistic and fast method, GHICA, which overcomes these limitations in multivariate risk analysis. The idea is to first retrieve independent components (ICs) from the observed high-dimensional time series and then fit each resulting IC individually and adaptively within the generalized hyperbolic (GH) distributional framework. For the volatility estimation of each IC, a local exponential smoothing technique is used to achieve the best possible estimation accuracy. Finally, the fast Fourier transform is used to approximate the density of the portfolio returns. The proposed GHICA method is applicable to covariance estimation as well. It is compared with the dynamic conditional correlation (DCC) method on simulated data with d = 50 GH-distributed components. We further apply the GHICA method to calculate risk measures for 20-dimensional German DAX portfolios and a dynamic exchange rate portfolio, and compare its accuracy against several alternative methods.
    Keywords: Multivariate Risk Management, Independent Component Analysis, Generalized Hyperbolic Distribution, Local Exponential Estimation, Value at Risk, Expected Shortfall.
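    The final step of such a pipeline exploits the fact that the density of a sum of independent components is the convolution of the component densities. A minimal sketch, using two standard normal densities as stand-ins for fitted GH densities and a direct discrete convolution in place of the paper's FFT:

```python
import math

def discretize(pdf, lo, hi, n):
    """Evaluate a density on a grid and renormalize to a probability vector."""
    step = (hi - lo) / (n - 1)
    grid = [lo + i * step for i in range(n)]
    probs = [pdf(x) * step for x in grid]
    total = sum(probs)
    return [p / total for p in probs]

def convolve(p, q):
    """Density of the sum of two independent discretized components."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# Two standard normal ICs (illustrative stand-ins for fitted GH densities).
norm = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
p = discretize(norm, -6, 6, 201)
sum_density = convolve(p, p)  # density of IC1 + IC2 on the convolution grid
```

    The direct convolution above costs O(n^2); the FFT route used in the paper computes the same result in O(n log n), which matters once many components are combined.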

    Bayesian Estimation of a Stochastic Volatility Model Using Option and Spot Prices: Application of a Bivariate Kalman Filter

    In this paper Bayesian methods are applied to a stochastic volatility model using both the prices of the asset and the prices of options written on the asset. Posterior densities for all model parameters, latent volatilities and the market price of volatility risk are produced via a hybrid Markov Chain Monte Carlo sampling algorithm. Candidate draws for the unobserved volatilities are obtained by applying the Kalman filter and smoother to a linearization of a state-space representation of the model. The method is illustrated using the Heston (1993) stochastic volatility model applied to Australian News Corporation spot and option price data. Alternative models nested in the Heston framework are ranked via Bayes Factors and via fit, predictive and hedging performance.
    Keywords: Option Pricing; Volatility Risk; Markov Chain Monte Carlo; Nonlinear State Space Model; Kalman Filter and Smoother.
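    The Kalman filter at the heart of the proposal step can be illustrated on a generic scalar linear-Gaussian state-space model. This is a minimal sketch, not the Heston linearization from the paper; the model x_t = a*x_{t-1} + w_t, y_t = x_t + v_t and all parameter values are illustrative assumptions:

```python
def kalman_filter(ys, a=0.95, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """Return filtered state means and variances, one pair per observation.

    a: state transition coefficient; q: state noise variance;
    r: observation noise variance; x0, p0: prior mean and variance.
    """
    x, p = x0, p0
    means, variances = [], []
    for y in ys:
        # Predict step: propagate mean and variance through the dynamics.
        x_pred = a * x
        p_pred = a * a * p + q
        # Update step: blend prediction and observation via the Kalman gain.
        k = p_pred / (p_pred + r)
        x = x_pred + k * (y - x_pred)
        p = (1 - k) * p_pred
        means.append(x)
        variances.append(p)
    return means, variances

means, variances = kalman_filter([0.2, 0.5, 0.1, -0.3])
```

    In the paper's setting the same predict/update recursion runs over the linearized volatility state, and the resulting Gaussian filtering densities supply candidate draws for the MCMC sampler.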

    Improved Convergence Rate of Nested Simulation with LSE on Sieve

    Nested simulation encompasses the estimation of functionals linked to conditional expectations through simulation techniques. In this paper, we treat the conditional expectation as a function of the multidimensional conditioning variable and provide asymptotic analyses of general least squares estimators on sieve, without imposing specific assumptions on the function's form. Our study explores scenarios in which the convergence rate surpasses that of the standard Monte Carlo method and of a recently proposed method based on kernel ridge regression. We also delve into the conditions under which the best possible square-root convergence rate among all methods can be achieved. Numerical experiments are conducted to support our statements.
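    The idea of replacing brute-force inner averaging with a sieve regression can be seen in miniature. The sketch below is an assumption-laden toy, not the paper's estimator: the model Y | X ~ N(X, 1) with g(z) = z^2 gives E[Y|X] = X, so the target E[g(E[Y|X])] = E[X^2] = 1/3, and the "sieve" is just the linear basis {1, x}:

```python
import random

random.seed(0)

def nested_lse_estimate(n_outer=2000, n_inner=2):
    """Estimate E[g(E[Y|X])] by smoothing crude inner estimates on a sieve."""
    xs = [random.random() for _ in range(n_outer)]  # outer scenarios X ~ U(0,1)
    # Very noisy inner estimates of E[Y|X=x] from only n_inner samples each.
    ybars = [sum(random.gauss(x, 1.0) for _ in range(n_inner)) / n_inner
             for x in xs]
    # Least squares fit ybar ~ b0 + b1*x (closed-form simple regression).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ybars) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ybars))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    # Plug the smoothed conditional means into g(z) = z**2 and average.
    return sum((b0 + b1 * x) ** 2 for x in xs) / n

rho_hat = nested_lse_estimate()
```

    Even with only two inner samples per scenario, pooling information across scenarios through the regression recovers the conditional mean function well, which is the mechanism behind the improved convergence rates analyzed in the paper.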

    The Household Wealth Distribution in Spain: The Role of Housing and Financial Wealth

    We analyse the distribution of household wealth in Spain using the first wave of the Spanish Survey of Household Finances, conducted by the Bank of Spain in 2002. We study the distribution of the different wealth components and, using inequality decomposition techniques, we assess the contribution of each element to overall wealth inequality. We find that wealth is more unequally distributed than income, while housing wealth is much more evenly distributed than financial wealth. Moreover, we identify two groups of wealth components: a disequalizing group, which includes financial wealth, whose value and portfolio share increase with household wealth; and a second, more equalizing group, including housing wealth, whose value increases with wealth but whose portfolio share does not. Finally, we show that differences between age groups do not explain why wealth is much more unequally distributed than income. Instead, business and home ownership are factors that clearly help explain this feature.
    Keywords: Wealth, income, distribution, inequality decomposition.
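    A common factor-decomposition rule behind such exercises (a Shorrocks-type "natural" decomposition) assigns component k a share of total inequality equal to cov(w_k, W) / var(W), so the shares sum to one. A minimal sketch with invented household figures, not the survey data:

```python
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Population covariance of two equal-length lists."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# Hypothetical wealth components for five households (illustrative only).
housing   = [100, 150, 200, 250, 400]
financial = [  5,  20,  60, 150, 600]
total     = [h + f for h, f in zip(housing, financial)]

# Each component's share of total wealth inequality.
share_housing   = cov(housing, total) / cov(total, total)
share_financial = cov(financial, total) / cov(total, total)
```

    In this toy example the financial component, being concentrated at the top, accounts for the larger inequality share, mirroring the disequalizing/equalizing split described in the abstract.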

    Importance Sampling and its Optimality for Stochastic Simulation Models

    We consider the problem of estimating an expected outcome from a stochastic simulation model. Our goal is to develop a theoretical framework on importance sampling for such estimation. By investigating the variance of an importance sampling estimator, we propose a two-stage procedure that involves a regression stage and a sampling stage to construct the final estimator. We introduce a parametric and a nonparametric regression estimator in the first stage and study how the allocation between the two stages affects the performance of the final estimator. We analyze the variance reduction rates and derive oracle properties of both methods. We evaluate the empirical performances of the methods using two numerical examples and a case study on wind turbine reliability evaluation.
    Comment: 37 pages, 6 figures, 2 tables. Accepted to the Electronic Journal of Statistics.
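    The identity underlying any such estimator is E_p[h(X)] = E_q[h(X) p(X)/q(X)] for target density p and proposal density q. A minimal sketch on a classic toy problem, estimating a normal tail probability with a shifted proposal; the example is illustrative and unrelated to the paper's wind-turbine model:

```python
import math
import random

random.seed(1)

def norm_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-z * z / 2) / (sigma * math.sqrt(2 * math.pi))

def importance_sampling(h, n=50_000, shift=3.0):
    """Estimate E[h(X)] for X ~ N(0,1) by sampling from N(shift, 1)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(shift, 1.0)
        # Likelihood ratio p(x)/q(x) reweights the proposal draw.
        total += h(x) * norm_pdf(x) / norm_pdf(x, mu=shift)
    return total / n

# P(X > 3) for a standard normal; true value is about 0.00135.
p_tail = importance_sampling(lambda x: 1.0 if x > 3.0 else 0.0)
```

    Sampling from the shifted proposal places most draws in the rare-event region, so the weighted estimator attains far lower variance than naive Monte Carlo at the same budget; choosing a good proposal is exactly what the paper's regression stage is for.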

    CAViaR: Conditional Value at Risk by Quantile Regression

    Value at Risk has become the standard measure of market risk employed by financial institutions for both internal and regulatory purposes. Despite its conceptual simplicity, its measurement is a very challenging statistical problem and none of the methodologies developed so far give satisfactory solutions. Interpreting Value at Risk as a quantile of future portfolio values conditional on current information, we propose a new approach to quantile estimation which does not require any of the extreme assumptions invoked by existing methodologies (such as normality or i.i.d. returns). The Conditional Value at Risk or CAViaR model moves the focus of attention from the distribution of returns directly to the behavior of the quantile. We postulate a variety of dynamic processes for updating the quantile and use regression quantile estimation to determine the parameters of the updating process. Tests of model adequacy utilize the criterion that each period the probability of exceeding the VaR must be independent of all the past information. We use a differential evolutionary genetic algorithm to optimize an objective function which is non-differentiable and hence cannot be optimized using traditional algorithms. Applications to simulated and real data provide empirical support to our methodology and illustrate the ability of these algorithms to adapt to new risk environments.
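    One of the paper's dynamic specifications, the Symmetric Absolute Value model, updates the theta-quantile as q_t = b0 + b1*q_{t-1} + b2*|r_{t-1}| and fits the parameters by minimizing the quantile ("pinball") loss. A minimal sketch on simulated returns, with a crude grid search standing in for the paper's differential-evolution optimizer:

```python
import random

random.seed(2)
returns = [random.gauss(0, 1) for _ in range(500)]  # simulated daily returns
THETA = 0.05  # 5% VaR level, i.e. the lower tail quantile

def sav_quantiles(returns, b0, b1, b2, q0=-1.0):
    """Symmetric Absolute Value CAViaR recursion for the quantile path."""
    qs, q = [], q0
    for r in returns:
        qs.append(q)
        q = b0 + b1 * q + b2 * abs(r)
    return qs

def pinball_loss(returns, qs, theta=THETA):
    """Regression-quantile objective: minimized at the true quantile path."""
    loss = 0.0
    for r, q in zip(returns, qs):
        loss += (theta - (1.0 if r < q else 0.0)) * (r - q)
    return loss / len(returns)

# Grid search over a small, hypothetical parameter grid.
best = min(
    ((b0, b1, b2) for b0 in (-0.3, -0.1, 0.0)
                  for b1 in (0.7, 0.8, 0.9)
                  for b2 in (-0.3, -0.2, -0.1)),
    key=lambda b: pinball_loss(returns, sav_quantiles(returns, *b)),
)
var_path = sav_quantiles(returns, *best)
hit_rate = sum(r < q for r, q in zip(returns, var_path)) / len(returns)
```

    The hit rate (fraction of returns falling below the fitted quantile path) should land near theta; the paper's adequacy tests go further and check that these violations are also unpredictable from past information.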