71 research outputs found

    Trend determination for non-stationary processes using orthogonal polynomials

    We propose a dynamic measure of extremal connectedness across investment styles of hedge funds. Using multivariate extreme value regression techniques, we estimate this measure conditional on factors reflecting the economic uncertainty and the state of the financial markets, and derive several systemic risk indicators. Empirically, we study the dynamics of tail dependencies between investment strategies in the HFR database. We show that during crisis periods, some pairs of strategies display an increase in their extremal connectedness. Our results highlight that a proactive regulatory framework should account for the dynamic nature of the tail dependence and its link with financial stress.

    Extremal connectedness of hedge funds

    We propose a dynamic measure of extremal connectedness tailored to the short reporting period and unbalanced nature of hedge fund data. Using multivariate extreme value regression techniques, we estimate this measure conditional on factors reflecting the economic uncertainty and the state of the financial markets, and derive risk indicators reflecting the likelihood of extreme spillovers. Empirically, we study the dynamics of tail dependencies between hedge funds grouped by investment strategy, as well as with the banking sector. We show that during crisis periods, some pairs of strategies display an increase in their extremal connectedness, revealing a higher likelihood of simultaneous extreme losses. We also find a sizable tail dependence between hedge funds and banks, indicating that banks are more likely to suffer extreme losses when the hedge fund sector does. Our results highlight that a proactive regulatory framework should account for the dynamic nature of the tail dependence and its link with financial stress.
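    The notion of tail dependence between two return series can be illustrated with a simple empirical estimator: the probability that one series suffers an extreme loss given that the other does. This is a minimal sketch on simulated returns with a common factor; the quantile-based estimator and all variable names are illustrative assumptions, not the paper's multivariate extreme value regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical return series for two strategies sharing a common factor;
# in the paper these would come from the HFR hedge fund database.
n = 5000
common = rng.standard_normal(n)
x = common + rng.standard_normal(n)
y = common + rng.standard_normal(n)

def empirical_tail_dependence(x, y, q=0.95):
    """Empirical lower-tail dependence proxy:
    P(Y below its loss threshold | X below its loss threshold)."""
    ux = np.quantile(x, 1 - q)   # loss threshold for x (left tail)
    uy = np.quantile(y, 1 - q)   # loss threshold for y (left tail)
    joint = np.mean((x < ux) & (y < uy))
    marginal = np.mean(x < ux)
    return joint / marginal

lam = empirical_tail_dependence(x, y)
print(f"estimated tail dependence: {lam:.3f}")
```

    Under independence this ratio would be close to 1 - q = 0.05; the common factor pushes it well above that, mimicking the simultaneous extreme losses described above.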

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: Non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.

    Understanding the Economic Determinants of the Severity of Operational Losses: A Regularized Generalized Pareto Regression Approach

    We investigate a novel database of 10,217 extreme operational losses from the Italian bank UniCredit, covering a period of 10 years and 7 different event types. Our goal is to shed light on the dependence between the severity distribution of these losses and a set of macroeconomic, financial and firm-specific factors. To do so, we use Generalized Pareto regression techniques, where both the scale and shape parameters are assumed to be functions of these explanatory variables. In this complex distributional regression framework, we perform the selection of the relevant covariates with a state-of-the-art penalized-likelihood estimation procedure relying on L1-norm penalties on the coefficients. A simulation study indicates that this approach efficiently selects covariates of interest but also tackles spurious regression issues encountered when dealing with integrated time series of covariates. The results of our empirical analysis have important implications in terms of risk management and regulatory policy. In particular, we find that a high unemployment rate and low economic growth are associated with smaller probabilities of extreme severities, whereas high volatility on the financial markets is associated with more extreme losses. Looking at firm-specific factors, a commercial strategy driven by non-interest income is associated with an increased likelihood of extreme severities. Last, we illustrate the impact of several economic scenarios on the requested capital for the total operational loss, and find important discrepancies across loss types and scenarios.
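    The core idea of Generalized Pareto regression with an L1 penalty can be sketched in a few lines: let the GPD scale depend on a covariate through a log link, and maximize a penalized likelihood. This is a minimal one-covariate sketch on simulated losses, with a Nelder-Mead optimizer; the data, the single constant shape parameter, and the penalty weight are assumptions for illustration, not the paper's estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

rng = np.random.default_rng(1)

# Simulated severities: GPD losses whose scale depends on one covariate
# (a stand-in for, e.g., market volatility in the paper's setting).
n = 2000
x = rng.standard_normal(n)
true_scale = np.exp(0.5 + 0.8 * x)
y = genpareto.rvs(c=0.2, scale=true_scale, random_state=rng)

def penalized_nll(theta, lam=0.01):
    """GPD negative log-likelihood with log-linked scale and an
    L1 penalty on the covariate coefficient."""
    b0, b1, xi = theta
    if xi <= 1e-6:              # keep the search in the heavy-tail region
        return np.inf
    sigma = np.exp(b0 + b1 * x)
    z = 1 + xi * y / sigma
    if np.any(z <= 0):
        return np.inf
    nll = np.sum(np.log(sigma) + (1 / xi + 1) * np.log(z))
    return nll + lam * abs(b1)  # L1 shrinkage toward b1 = 0

res = minimize(penalized_nll, x0=[0.0, 0.0, 0.1], method="Nelder-Mead")
b0_hat, b1_hat, xi_hat = res.x
print(b0_hat, b1_hat, xi_hat)
```

    With a larger penalty weight, irrelevant covariate coefficients are shrunk to zero, which is the variable-selection mechanism the abstract refers to.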

    On conditional dynamic skewness and directional forecast of currency exchange rates

    This paper studies dynamic skewness and kurtosis specifications for the purpose of directional forecasts of daily exchange rates. To do so, we formulate a GARCH-in-mean model where the innovations follow a non-Gaussian sinh-arcsinh distribution with time-varying asymmetry and shape parameters. The structural equations of these parameters allow for an effect of past stochastic shocks, autoregressive terms and the interest rate differential on the conditional dynamics. This model is used to predict the direction of change of three major currency pairs (USD/EUR, USD/GBP and USD/CHF) over the period 1999-2016. To account for structural breaks, we consider a state-of-the-art CUSUM test based on the probability integral transform.
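    The sinh-arcsinh innovation distribution used above has a simple sampling representation due to Jones and Pewsey (2009): transform a standard normal draw through sinh and arcsinh. A minimal sketch showing how the asymmetry parameter shifts skewness (the parameter values are arbitrary illustrations, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def sinh_arcsinh_sample(n, epsilon=0.0, delta=1.0):
    """Draw from the sinh-arcsinh distribution of Jones & Pewsey (2009):
    X = sinh((arcsinh(Z) + epsilon) / delta), Z standard normal.
    epsilon controls asymmetry; delta controls tail weight."""
    z = rng.standard_normal(n)
    return np.sinh((np.arcsinh(z) + epsilon) / delta)

def skewness(x):
    c = x - x.mean()
    return np.mean(c**3) / np.mean(c**2) ** 1.5

# epsilon = 0, delta = 1 recovers the standard normal exactly.
sym = sinh_arcsinh_sample(100_000, epsilon=0.0)
# A positive epsilon induces positive skewness.
skw = sinh_arcsinh_sample(100_000, epsilon=0.8)

print(skewness(sym), skewness(skw))
```

    In the paper's GARCH-in-mean setting, epsilon and delta would themselves be time-varying functions of past shocks rather than constants as here.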

    Comments on 'The Time-Inconsistency Factor: How Banks Adapt to Their Savers Mix' (C. Laureti and A. Szafarz, working paper, 2012)

    Comments about 'The Time-Inconsistency Factor: How Banks Adapt to Their Savers Mix' by C. Laureti and A. Szafarz (working paper, 2012).

    Modeling of rare events using non-normal distributions: an application in finance with the sinh-arcsinh distribution

    In 2008, the financial crisis highlighted the relative inaccuracy of the market risk forecasting models used in the financial industry. In particular, extreme events were shown to be regularly underestimated. This problem, first raised in the seminal work of Mandelbrot (1963), stems mainly from financial models relying on the normal law, while empirical evidence shows strong leptokurtosis in financial time series. This stylized fact is particularly damaging to the forecasting of indicators such as Value-at-Risk (VaR). In this study, we tackle this problem by testing a probability distribution new to finance: the sinh-arcsinh distribution. Creating different datasets from non-parametric and GARCH models, we fit common distributions (normal, t location-scale, GED, generalized hyperbolic) and the sinh-arcsinh distribution to the data. We show that, on the leptokurtic datasets extracted from the DJA and the NIKKEI 225, the sinh-arcsinh distribution provides a better fit than any other distribution tested. We also test simple VaR models based on the normal law, Student's t or the sinh-arcsinh distribution, to assess the operational usefulness of the latter. We show that models using the sinh-arcsinh distribution provide more accurate in-sample and out-of-sample VaR forecasts than models based on the normal law.
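    The underestimation of VaR under the normal law can be illustrated by comparing in-sample exceedance rates of a normal-based and a Student's t-based VaR on heavy-tailed data. This is a toy sketch on simulated Student's t returns standing in for leptokurtic index returns; the data, VaR level and fitting choices are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(3)

# Heavy-tailed "returns" (Student's t, 4 degrees of freedom) mimicking
# the leptokurtosis of equity index series.
returns = t.rvs(df=4, size=5000, random_state=rng)

alpha = 0.01  # 1% VaR level

# Normal-law VaR: fit mean and standard deviation, take the alpha-quantile.
var_norm = norm.ppf(alpha, loc=returns.mean(), scale=returns.std())

# Student's t VaR: fit all three parameters by maximum likelihood.
df_, loc_, scale_ = t.fit(returns)
var_t = t.ppf(alpha, df_, loc=loc_, scale=scale_)

# In-sample exceedance rates: close to alpha indicates a well-calibrated VaR.
hit_norm = np.mean(returns < var_norm)
hit_t = np.mean(returns < var_t)
print(hit_norm, hit_t)
```

    The t-based VaR sits further in the tail and keeps its exceedance rate near the nominal 1%, which is the calibration failure of the normal law that the abstract documents.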

    A Markov-switching Generalized additive model for compound Poisson processes, with applications to operational losses models

    This paper is concerned with modeling the behavior of random sums over time. Such models are particularly useful to describe the dynamics of operational losses, and to correctly estimate tail-related risk indicators. However, time-varying dependence structures make it a difficult task. To tackle these issues, we formulate a new Markov-switching generalized additive compound process combining Poisson and generalized Pareto distributions. This flexible model takes into account two important features: on the one hand, we allow all parameters of the compound loss distribution to depend on economic covariates in a flexible way. On the other hand, we allow this dependence to vary over time, via a hidden state process. A simulation study indicates that, even in the case of a short time series, this model is easily and well estimated with a standard maximum likelihood procedure. Relying on this approach, we analyze a novel dataset of 819 losses resulting from frauds at the Italian bank UniCredit. We show that our model improves the estimation of the total loss distribution over time, compared to standard alternatives. In particular, this model provides estimates of the 99.9% quantile that are never exceeded by the historical total losses, a feature particularly desirable for banking regulators.
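    The compound quantity at the heart of this abstract, the 99.9% quantile of a Poisson sum of generalized Pareto losses, is easy to approximate by Monte Carlo once the parameters are fixed. A minimal sketch with arbitrary illustrative parameters (the paper instead makes them covariate- and regime-dependent):

```python
import numpy as np
from scipy.stats import genpareto, poisson

rng = np.random.default_rng(4)

def total_loss_quantile(lam, xi, sigma, q=0.999, n_sim=200_000):
    """Monte Carlo estimate of the q-quantile of S = X_1 + ... + X_N,
    with N ~ Poisson(lam) and X_i ~ GPD(shape=xi, scale=sigma)."""
    counts = poisson.rvs(lam, size=n_sim, random_state=rng)
    # Draw all severities at once, then aggregate them per simulated period.
    sev = genpareto.rvs(c=xi, scale=sigma, size=counts.sum(), random_state=rng)
    labels = np.repeat(np.arange(n_sim), counts)
    totals = np.bincount(labels, weights=sev, minlength=n_sim)
    return np.quantile(totals, q)

q999 = total_loss_quantile(lam=5.0, xi=0.3, sigma=1.0)
print(q999)
```

    With a heavy tail (xi = 0.3 here), this 99.9% quantile sits far above the mean total loss, which is why the shape parameter drives regulatory capital so strongly.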

    Nonparametric and bootstrap techniques applied to financial risk modeling

    For the purpose of quantifying financial risks, risk managers need to model the behavior of financial variables. However, the construction of such mathematical models is a difficult task that requires careful statistical approaches. Among the important choices that must be addressed, we can list the error distribution, the structure of the variance process, and the relationship between parameters of interest and explanatory variables. In particular, one should avoid procedures that rely either on too rigid parametric assumptions or on inefficient estimation procedures. In this thesis, we develop statistical procedures that tackle some of these issues, in the context of three financial risk modelling applications. In the first application, we are interested in selecting the error distribution in a multiplicative heteroscedastic model without relying on a parametric volatility assumption. To avoid this uncertainty, we develop a set of model estimation and selection tests relying on nonparametric volatility estimators and focusing on the tails of the distribution. We illustrate this technique on UBS, BOVESPA and EUR/USD daily returns. In the second application, we are concerned with modeling the tail of the operational loss severity distribution, conditionally on several covariates. We develop a flexible conditional GPD model, where the shape parameter is an unspecified link function (nonparametric part) of a linear combination of covariates (single-index part), avoiding the curse of dimensionality. We successfully apply this technique on two original databases, using macroeconomic and firm-specific variables as covariates. In the last application, we provide an efficient way to estimate the predictive ability of trading algorithms. Instead of relying on subjective and noisy sample-splitting techniques, we propose an adaptation of the .632 bootstrap technique to the time series context. We apply these techniques on stock prices to compare 12,000 trading rule parametrizations and show that none can beat a simple buy-and-hold strategy.
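    The .632 bootstrap mentioned in the last application blends the optimistic in-sample (apparent) error with the pessimistic out-of-bag bootstrap error. A minimal sketch of the plain i.i.d. version of Efron's estimator, using a 1-nearest-neighbour rule as a stand-in predictor (the thesis adapts the scheme to time series, which this sketch does not do):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy regression data standing in for a trading-rule evaluation problem.
n = 200
x = rng.uniform(-3, 3, n)
y = np.sin(x) + 0.3 * rng.standard_normal(n)

def predict(train_x, train_y, test_x):
    """1-nearest-neighbour prediction."""
    idx = np.abs(train_x[:, None] - test_x[None, :]).argmin(axis=0)
    return train_y[idx]

# Apparent (training) error: zero for 1-NN, the textbook case of optimism.
err_app = np.mean((y - predict(x, y, x)) ** 2)

# Out-of-bag error over B bootstrap resamples.
B, oob_errs = 100, []
for _ in range(B):
    boot = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), boot)   # points left out of the resample
    if oob.size == 0:
        continue
    pred = predict(x[boot], y[boot], x[oob])
    oob_errs.append(np.mean((y[oob] - pred) ** 2))
err_oob = np.mean(oob_errs)

# Efron's .632 estimator blends the two error measures.
err_632 = 0.368 * err_app + 0.632 * err_oob
print(err_app, err_oob, err_632)
```

    Compared with a single train/test split, the bootstrap average reduces the noise of the error estimate, which is what makes the comparison of thousands of trading rules feasible.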