160 research outputs found

    The asymptotic loss distribution in a fat-tailed factor model of portfolio credit risk

    This paper extends the standard asymptotic results concerning the percentage loss distribution in the Vasicek uniform model to a setup where the systematic risk factor is non-normally distributed. We show that the asymptotic density in this new setup can still be obtained in closed form; in particular, we derive the return distributions, the densities and the quantile functions when the common factor follows two types of normal mixture distributions (a two-population scale mixture and a jump mixture) and the Student’s t distribution. Finally, we present a real-data application of the technique to data from the Intesa - San Paolo credit portfolio. The numerical experiments show that the asymptotic loss density is highly flexible and provides the analyst with a VaR that takes into account the event risk incorporated in the fat-tailed distribution of the common factor.
    Keywords: factor model, asymptotic loss, Value at Risk.
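    The closed-form results are specific to the paper, but the underlying large-portfolio logic can be illustrated numerically. The sketch below is a Monte Carlo approximation in Python rather than the closed-form densities derived in the paper: it simulates the asymptotic loss fraction when the systematic factor is Student-t, and the parameter values, the unit-variance rescaling and the simulation-based calibration of the default threshold are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import norm, t
from scipy.optimize import brentq

# Illustrative Monte Carlo sketch of the large-homogeneous-portfolio (Vasicek-type)
# loss distribution when the systematic factor is Student-t rather than Gaussian.
# Not the paper's closed-form derivation; pd_, rho, nu and the calibration are assumptions.

rng = np.random.default_rng(0)
pd_, rho, nu = 0.02, 0.15, 4        # unconditional PD, asset correlation, t degrees of freedom
n_sim = 200_000

# Systematic factor: Student-t rescaled to unit variance (requires nu > 2)
z = t.rvs(nu, size=n_sim, random_state=rng) * np.sqrt((nu - 2) / nu)

def cond_loss(c):
    """Asymptotic percentage loss conditional on the factor, with Gaussian idiosyncratic risk."""
    return norm.cdf((c - np.sqrt(rho) * z) / np.sqrt(1 - rho))

# Calibrate the default threshold so that the simulated unconditional PD matches pd_
c = brentq(lambda thr: cond_loss(thr).mean() - pd_, -10, 10)

loss = cond_loss(c)
print("99.9% asymptotic loss quantile (VaR):", round(np.quantile(loss, 0.999), 4))
```

    As a sanity check, letting nu grow large drives the quantile back towards the usual Gaussian Vasicek value, since the Student-t factor then approaches a normal one.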

    Dynamic VaR models and the Peaks over Threshold method for market risk measurement: an empirical investigation during a financial crisis

    This paper presents a backtesting exercise involving several VaR models for measuring market risk in a dynamic context. The focus is on the comparison of standard dynamic VaR models, ad hoc fat-tailed models and the dynamic Peaks over Threshold (POT) procedure for VaR estimation with different volatility specifications. We introduce three different stochastic processes for the losses: two of the GARCH type and one of the EWMA type. To assess the performance of the models, we implement a backtesting procedure using the log-losses of a diversified sample of 15 financial assets. The backtesting analysis covers the period March 2004 - May 2009, thus including the turmoil corresponding to the subprime crisis. The results show that the POT approach and a Dynamic Historical Simulation method, both combined with the EWMA volatility specification, are particularly effective at high VaR coverage probabilities and outperform the other models under consideration. Moreover, VaR measures estimated with these models react quickly to the turmoil in the last part of the backtesting period, so they appear to remain effective in high-risk periods as well.
    Keywords: market risk, Extreme Value Theory, Peaks over Threshold, Value at Risk, fat tails.
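    As a rough illustration of the dynamic POT idea (volatility filtering followed by a GPD fit to the standardized exceedances), the following Python sketch combines an EWMA filter with a Peaks-over-Threshold quantile. It is not the paper's backtesting setup: the loss series is simulated, and the smoothing constant, threshold quantile and coverage level are assumptions.

```python
import numpy as np
from scipy.stats import genpareto

# Minimal sketch of a dynamic Peaks-over-Threshold VaR with an EWMA volatility filter,
# in the spirit of the approaches compared in the paper. Toy data and tuning constants
# are illustrative assumptions.

def ewma_vol(losses, lam=0.94):
    """RiskMetrics-style EWMA conditional volatility of a loss series."""
    var = np.empty_like(losses)
    var[0] = losses.var()                     # crude initialization
    for i in range(1, len(losses)):
        var[i] = lam * var[i - 1] + (1 - lam) * losses[i - 1] ** 2
    return np.sqrt(var)

def pot_quantile(z, p=0.99, u_quantile=0.90):
    """GPD-based p-quantile of standardized losses exceeding a high threshold u."""
    u = np.quantile(z, u_quantile)
    exceed = z[z > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0)
    n, n_u = len(z), len(exceed)
    return u + beta / xi * (((1 - p) / (n_u / n)) ** (-xi) - 1)

losses = np.random.default_rng(1).standard_t(df=5, size=2000) * 0.01   # toy log-losses
sigma = ewma_vol(losses)
z = losses / sigma                              # devolatilized losses
var_path = sigma * pot_quantile(z, p=0.99)      # dynamic 99% VaR path
print(var_path[-5:])
```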

    Testing the Profitability of Simple Technical Trading Rules: A Bootstrap Analysis of the Italian Stock Market.

    The aim of this paper is to test the profitability of simple technical trading rules in the Italian stock market. Using a recently developed bootstrap methodology, we assess whether technical rules based on moving averages are capable of producing excess returns with respect to the Buy-and-Hold strategy. We find that in most cases the rules are profitable and the excess return is statistically significant. However, the well-known problem of data snooping, for which our analysis finds some evidence, calls for caution in the application of these methods.
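    The following Python sketch illustrates the general idea of comparing a moving-average rule with Buy-and-Hold via a bootstrap. It uses a plain i.i.d. bootstrap on simulated returns; the paper's bootstrap methodology and data differ, and the return process, window length and sample size here are assumptions.

```python
import numpy as np

# Toy sketch: moving-average trading rule vs Buy-and-Hold, with an i.i.d. bootstrap
# of the mean return differential. Simulated data and rule settings are assumptions.

rng = np.random.default_rng(2)
returns = rng.normal(0.0003, 0.01, size=2500)            # simulated daily log-returns
prices = np.exp(np.cumsum(returns))

window = 50
ma = np.convolve(prices, np.ones(window) / window, mode="valid")
signal = (prices[window - 1:-1] > ma[:-1]).astype(float)  # long when price > its MA, else out

rule_ret = signal * returns[window:]                      # rule applied to next-day return
bh_ret = returns[window:]
diff = rule_ret - bh_ret

# i.i.d. bootstrap of the mean excess return of the rule over Buy-and-Hold
boot = np.array([rng.choice(diff, size=diff.size, replace=True).mean()
                 for _ in range(2000)])
print("mean excess return:", diff.mean())
print("bootstrap 95% CI:", np.quantile(boot, [0.025, 0.975]))
```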

    Spatial models for flood risk assessment

    The problem of computing risk measures associated with flood events is extremely important, not only from the point of view of civil protection systems but also because municipalities need to insure against the resulting damages. In this work we propose, in the framework of an integrated strategy, an operational solution that merges, within a conditional approach, the information usually available in this setting. First, we use a Logistic Auto-Logistic (LAM) model to estimate the univariate conditional probabilities of flood events. This approach has two fundamental advantages: it allows auxiliary information to be incorporated and it does not require the target variables to be independent. We then simulate the joint distribution of flood events by means of the Gibbs sampler. Finally, we propose an algorithm to increase ex post the spatial autocorrelation of the simulated events. The methodology is shown to be effective through an application to the estimation of the flood probability of Italian hydrographic regions.
    Keywords: flood risk, conditional approach, LAM model, pseudo-maximum likelihood estimation, spatial autocorrelation, Gibbs sampler.
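    A minimal Python sketch of the simulation step is given below: a Gibbs sampler for an autologistic (LAM-type) model on a lattice, where each site's conditional flood probability depends on a covariate and on the number of flooded neighbours. The grid, the coefficient values and the 4-neighbour structure are illustrative assumptions, not the fitted model of the paper.

```python
import numpy as np

# Gibbs-sampler sketch for an autologistic model: each site is a binary flood
# indicator whose conditional probability depends on a covariate and on the number
# of flooded neighbours. Grid size and coefficients are illustrative assumptions.

rng = np.random.default_rng(3)
n = 20                                   # n x n lattice of "regions"
beta0, beta1, gamma = -2.0, 1.0, 0.6     # intercept, covariate effect, spatial effect
x = rng.normal(size=(n, n))              # auxiliary covariate (e.g. a rainfall proxy)
y = rng.integers(0, 2, size=(n, n))      # initial binary configuration

def neighbour_sum(y, i, j):
    """Number of flooded sites among the (up to four) lattice neighbours."""
    s = 0
    if i > 0:     s += y[i - 1, j]
    if i < n - 1: s += y[i + 1, j]
    if j > 0:     s += y[i, j - 1]
    if j < n - 1: s += y[i, j + 1]
    return s

for sweep in range(500):                 # Gibbs sweeps over all sites
    for i in range(n):
        for j in range(n):
            eta = beta0 + beta1 * x[i, j] + gamma * neighbour_sum(y, i, j)
            p = 1.0 / (1.0 + np.exp(-eta))
            y[i, j] = rng.random() < p   # draw the site from its full conditional

print("simulated flood frequency:", y.mean())
```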

    Unsupervised Mixture Estimation via Approximate Maximum Likelihood based on the Cramér-von Mises distance

    Mixture distributions with dynamic weights are an efficient way of modeling loss data characterized by heavy tails. However, maximum likelihood estimation of this family of models is difficult, mostly because of the need to evaluate numerically an intractable normalizing constant. In such a setup, simulation-based estimation methods are an appealing alternative. The approximate maximum likelihood estimation (AMLE) approach is employed: it is a general method that can be applied to mixtures with any component densities, as long as simulation is feasible. The focus is on the dynamic lognormal-generalized Pareto distribution, and the Cramér-von Mises distance is used to measure the discrepancy between observed and simulated samples. After deriving the theoretical properties of the estimators, a hybrid procedure is developed, where standard maximum likelihood is first employed to determine the bounds of the uniform priors required as input for AMLE. Simulation experiments and two real-data applications suggest that this approach yields a major improvement with respect to standard maximum likelihood estimation.
    Comment: 31 pages, 7 figures, 14 tables.
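    A stripped-down sketch of the AMLE idea is shown below: parameters are drawn from uniform priors, samples are simulated from the model, and the draws whose simulated samples are closest to the data in Cramér-von Mises distance are retained. For simplicity the simulator is a static lognormal-GPD mixture rather than the dynamic-weight mixture studied in the paper, and the prior bounds, sample sizes and acceptance fraction are assumptions.

```python
import numpy as np
from scipy.stats import lognorm, genpareto, cramervonmises_2samp

# AMLE-style (accept/reject) sketch: uniform priors, model simulation, and selection
# of the parameter draws closest to the data in Cramer-von Mises distance.
# Static mixture used here for simplicity; all numeric settings are assumptions.

rng = np.random.default_rng(4)

def simulate(theta, n):
    """Simulate n losses from a static lognormal-GPD mixture with weight p."""
    p, mu, sig, xi, beta = theta
    comp = rng.random(n) < p
    return np.where(comp,
                    lognorm.rvs(s=sig, scale=np.exp(mu), size=n, random_state=rng),
                    genpareto.rvs(xi, scale=beta, size=n, random_state=rng))

# "Observed" data generated from known parameters, for illustration only
true_theta = (0.7, 0.0, 0.5, 0.3, 2.0)
data = simulate(true_theta, 500)

lower = np.array([0.3, -1.0, 0.1, 0.0, 0.5])   # uniform prior bounds (assumed)
upper = np.array([0.9,  1.0, 1.5, 0.8, 4.0])

n_draws, keep = 5000, 50
thetas = lower + (upper - lower) * rng.random((n_draws, 5))
dist = np.array([cramervonmises_2samp(simulate(th, 500), data).statistic
                 for th in thetas])
accepted = thetas[np.argsort(dist)[:keep]]      # draws with the smallest CvM distance
print("AMLE-style estimate:", accepted.mean(axis=0).round(3))
```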

    A note on maximum likelihood estimation of a Pareto mixture

    In this paper we study maximum likelihood estimation of the parameters of a Pareto mixture. Applying standard techniques to a mixture of Paretos is problematic, so we develop two alternative algorithms: the first is based on simulated annealing and the second on cross-entropy minimization. The Pareto distribution is a commonly used model for heavy-tailed data. It is a two-parameter distribution whose shape parameter determines the degree of heaviness of the tail, so it can be adapted to data with different features. This work is motivated by an application in the field of operational risk measurement: we fit a Pareto mixture to operational losses recorded by a bank in two different business lines. Losses below a threshold are discarded, so the observed data are truncated, and the thresholds used in the two business lines are unknown. Thus, under the assumption that each population follows a Pareto distribution, the appropriate model is a mixture of Paretos in which all the parameters have to be estimated.
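    The simulated-annealing idea can be sketched in a few lines of Python: perturb the parameter vector, always accept uphill moves, and accept downhill moves with a probability that shrinks as the temperature cools. The example below maximizes the log-likelihood of a two-component Pareto mixture on simulated losses; the data, proposal scales and cooling schedule are assumptions, and the truncation issue of the paper is not modelled.

```python
import numpy as np

# Toy simulated-annealing maximization of a two-component Pareto mixture
# log-likelihood with unknown scale (threshold) parameters.
# Data, proposal scales and cooling schedule are illustrative assumptions.

rng = np.random.default_rng(5)

def pareto_pdf(x, alpha, xm):
    return np.where(x >= xm, alpha * xm**alpha / x**(alpha + 1), 0.0)

def loglik(theta, x):
    pi_, a1, m1, a2, m2 = theta
    dens = pi_ * pareto_pdf(x, a1, m1) + (1 - pi_) * pareto_pdf(x, a2, m2)
    return -np.inf if np.any(dens <= 0) else np.log(dens).sum()

# Simulated losses from a known two-component Pareto mixture, for illustration
x = np.concatenate([(rng.pareto(2.5, 300) + 1) * 1.0,
                    (rng.pareto(1.2, 200) + 1) * 5.0])

theta = np.array([0.5, 2.0, 0.9, 1.5, 4.0])   # initial (pi, alpha1, xm1, alpha2, xm2)
cur_ll = loglik(theta, x)
best, best_ll = theta.copy(), cur_ll
T = 1.0
for it in range(20000):
    prop = theta + rng.normal(scale=[0.02, 0.05, 0.02, 0.05, 0.1])
    if not (0 < prop[0] < 1 and prop[1:].min() > 0):
        continue                               # reject infeasible proposals
    new_ll = loglik(prop, x)
    if new_ll > cur_ll or rng.random() < np.exp((new_ll - cur_ll) / T):
        theta, cur_ll = prop, new_ll
        if new_ll > best_ll:
            best, best_ll = prop.copy(), new_ll
    T *= 0.9995                                # geometric cooling
print("SA estimate:", best.round(3), "loglik:", round(best_ll, 2))
```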

    A framework for cut-off sampling in business survey design

    In sampling theory, the heavy concentration of the population with respect to most surveyed variables is a problem that is difficult to tackle with classical tools. One possible solution is cut-off sampling, which explicitly prescribes discarding part of the population; in particular, when the population consists of firms or establishments, the method results in the exclusion of the “smallest” firms. Whereas this sampling scheme is common among practitioners, its theoretical foundations tend to be considered weak, because the inclusion probability of some units is equal to zero. In this paper we propose a framework to justify cut-off sampling and to determine the census and cut-off thresholds. We use an estimation model that treats the weight of the discarded units with respect to each variable as known; we compute the variance of the estimator and the bias caused by violations of this assumption. We then develop an algorithm that minimizes the MSE as a function of multivariate auxiliary information at the population level. Given the combinatorial nature of the optimization problem, we resort to the theory of stochastic relaxation; in particular, we use the simulated annealing algorithm.
    Keywords: cut-off sampling, skewed populations, model-based estimation, optimal stratification, simulated annealing.
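    As a toy illustration of the variance-bias trade-off behind the choice of thresholds, the Python sketch below searches over candidate cut-off and census quantiles for a single skewed variable, minimizing the sampling variance plus the squared bias induced by the assumed weight of the discarded units. A grid search replaces the paper's simulated annealing over a multivariate problem, and the population, sample size and assumed weight are illustrative assumptions.

```python
import numpy as np

# Toy cut-off sampling design: choose cut-off and census thresholds minimizing an
# MSE that combines SRS sampling variance with the bias from the assumed weight of
# the discarded units. Grid search instead of simulated annealing; all settings assumed.

rng = np.random.default_rng(6)
y = np.sort(rng.lognormal(mean=2.0, sigma=1.5, size=5000))   # skewed "firm sizes"
total = y.sum()
n_sample = 200                       # budget for the take-some stratum (SRS)
w_assumed = 0.01                     # assumed share of the total held by discarded units

def mse(cut_q, census_q):
    t_cut, t_cen = np.quantile(y, [cut_q, census_q])
    take_some = y[(y >= t_cut) & (y < t_cen)]
    census = y[y >= t_cen]
    if take_some.size <= n_sample:
        return np.inf
    N_s = take_some.size
    # SRS expansion-estimator variance for the take-some stratum, scaled by the correction
    var_ts = N_s**2 * (1 - n_sample / N_s) * take_some.var(ddof=1) / n_sample
    # Expected value of the corrected estimator vs the true total gives the bias
    est_total = (take_some.sum() + census.sum()) / (1 - w_assumed)
    bias = est_total - total
    return var_ts / (1 - w_assumed) ** 2 + bias**2

grid = [(cq, sq) for cq in np.arange(0.0, 0.6, 0.05)
                 for sq in np.arange(0.90, 1.0, 0.01)]
best = min(grid, key=lambda g: mse(*g))
print("cut-off / census quantiles minimizing MSE:", best)
```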
