
    Asymptotic Conditional Distribution of Exceedance Counts: Fragility Index with Different Margins

    Let $\bm X=(X_1,\dots,X_d)$ be a random vector whose components are not necessarily independent, nor are they required to have identical distribution functions $F_1,\dots,F_d$. Denote by $N_s$ the number of exceedances among $X_1,\dots,X_d$ above a high threshold $s$. The fragility index, defined by $FI=\lim_{s\nearrow}E(N_s\mid N_s>0)$ if this limit exists, measures the asymptotic stability of the stochastic system $\bm X$ as the threshold increases. The system is called stable if $FI=1$ and fragile otherwise. In this paper we show that the asymptotic conditional distribution of exceedance counts (ACDEC) $p_k=\lim_{s\nearrow}P(N_s=k\mid N_s>0)$, $1\le k\le d$, exists if the copula of $\bm X$ is in the domain of attraction of a multivariate extreme value distribution, and if $\lim_{s\nearrow}(1-F_i(s))/(1-F_\kappa(s))=\gamma_i\in[0,\infty)$ exists for $1\le i\le d$ and some $\kappa\in\{1,\dots,d\}$. This enables the computation of the FI corresponding to $\bm X$ and of the extended FI, as well as of the asymptotic distribution of the exceedance cluster length, also in the case where the components of $\bm X$ are not identically distributed.
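The fragility index lends itself to a quick Monte Carlo illustration. A minimal sketch (not from the paper; the Gaussian margins, dimension, and threshold are all illustrative) contrasting a stable system (independent components, where exceedances almost never co-occur, so $E(N_s\mid N_s>0)\to 1$) with a fragile one (comonotone components, where every exceedance is joint, so $FI=d$):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, s = 4, 200_000, 3.5   # dimension, sample size, high threshold (illustrative)

# Independent components: exceedances almost never co-occur, so E(N_s | N_s > 0) ~ 1.
X_indep = rng.standard_normal((n, d))
N = (X_indep > s).sum(axis=1)
fi_indep = N[N > 0].mean()

# Comonotone components (all identical): any exceedance is a joint one, so FI = d.
Z = rng.standard_normal((n, 1))
X_dep = np.repeat(Z, d, axis=1)
N = (X_dep > s).sum(axis=1)
fi_dep = N[N > 0].mean()
```

At a finite threshold the independent case gives an estimate slightly above 1 (double exceedances have tiny but nonzero probability), while the comonotone case gives exactly d.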

    Theoretical Sensitivity Analysis for Quantitative Operational Risk Management

    We study the asymptotic behavior of the difference between the values at risk VaR(L) and VaR(L+S) for heavy-tailed random variables L and S, for application in sensitivity analysis of quantitative operational risk management within the framework of the advanced measurement approach of Basel II (and III). Here L describes the loss amount of the present risk profile and S describes the loss amount caused by an additional loss factor. We obtain different types of results according to the relative magnitudes of the thicknesses of the tails of L and S. In particular, if the tail of S is sufficiently thinner than the tail of L, then the difference between prior and posterior risk amounts VaR(L+S) - VaR(L) is asymptotically equivalent to the expectation (expected loss) of S. Comment: 21 pages, 1 figure, 4 tables; forthcoming in the International Journal of Theoretical and Applied Finance (IJTAF).
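The headline result — VaR(L+S) - VaR(L) approaching E[S] when the tail of S is sufficiently thinner — can be checked numerically. A Monte Carlo sketch, with distributions and parameters chosen for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 2_000_000, 0.999            # sample size and confidence level (illustrative)

# L has a Pareto tail (heavy); S is exponential, i.e. a sufficiently thinner tail.
L = (rng.pareto(2.0, n) + 1.0) * 10.0
S = rng.exponential(2.0, n)        # E[S] = 2

var_L = np.quantile(L, q)
var_LS = np.quantile(L + S, q)
diff = var_LS - var_L              # asymptotically ~ E[S]
```

Because the same L samples appear in both quantiles, most of the Monte Carlo noise cancels and `diff` lands close to E[S] = 2 already at the 99.9% level.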

    Bridging the ARCH model for finance and nonextensive entropy

    Engle's ARCH algorithm is a generator of stochastic time series for financial returns (and similar quantities) characterized by a time-dependent variance. It involves a memory parameter $b$ ($b=0$ corresponds to {\it no memory}), and the noise is currently chosen to be Gaussian. We assume here a generalized noise, namely $q_n$-Gaussian, characterized by an index $q_n\in{\cal R}$ ($q_n=1$ recovers the Gaussian case, and $q_n>1$ corresponds to tailed distributions). We then match the second and fourth moments of the ARCH return distribution with those associated with the $q$-Gaussian distribution obtained through optimization of the entropy $S_q=\frac{1-\sum_i p_i^q}{q-1}$, basis of nonextensive statistical mechanics. The outcome is an {\it analytic} distribution for the returns, where a unique $q\ge q_n$ corresponds to each pair $(b,q_n)$ ($q=q_n$ if $b=0$). This distribution is compared with numerical results and appears to be remarkably precise. This system constitutes a simple, low-dimensional, dynamical mechanism which accommodates well within the current nonextensive framework. Comment: 4 pages, 5 figures; Figure 4 fixed.
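The moment matching in the abstract rests on the fact that ARCH memory alone already fattens the return tails. A sketch of that mechanism for plain Gaussian noise (the $q_n=1$ case; the parameters are illustrative and the closed-form kurtosis $K=3(1-b^2)/(1-3b^2)$ is the standard ARCH(1) result, valid for $b^2<1/3$):

```python
import numpy as np

rng = np.random.default_rng(2)
a0, b, n = 0.5, 0.3, 1_000_000   # ARCH(1) parameters (illustrative; b is the memory)

z = rng.standard_normal(n)       # plain Gaussian noise, i.e. the q_n = 1 case
x = np.empty(n)
prev = 0.0
for t in range(n):
    sig2 = a0 + b * prev * prev  # conditional variance remembers the last return
    prev = np.sqrt(sig2) * z[t]
    x[t] = prev

# b > 0 fattens the tails even with Gaussian noise: the stationary kurtosis
# exceeds the Gaussian value 3, which the q-Gaussian fit captures via q > q_n.
kurt = np.mean(x**4) / np.mean(x**2) ** 2
k_theory = 3 * (1 - b**2) / (1 - 3 * b**2)
```

With $b=0.3$ the theoretical kurtosis is about 3.74, and the sample estimate should land nearby.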

    Identifying short motifs by means of extreme value analysis

    The problem of detecting a binding site -- a substring of DNA where transcription factors attach -- on a long DNA sequence requires the recognition of a small pattern in a large background. For short binding sites, the matching probability can display large fluctuations from one putative binding site to another. Here we use a self-consistent statistical procedure that accounts correctly for the large deviations of the matching probability to predict the location of short binding sites. We apply it in two distinct situations: (a) the detection of the binding sites for three specific transcription factors on a set of 134 estrogen-regulated genes; (b) the identification, in a set of 138 possible transcription factors, of the ones binding a specific set of nine genes. In both instances, experimental findings are reproduced (when available) and the number of false positives is significantly reduced with respect to the other methods commonly employed. Comment: 6 pages, 5 figures.
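The baseline such methods improve on is the naive matching probability of a short pattern in a large random background. A toy sketch of that baseline (the motif and lengths are made up; the paper's self-consistent large-deviation procedure is considerably more refined than this Poisson-style expected count):

```python
import numpy as np

rng = np.random.default_rng(7)
motif, N = "TATAAT", 10_000       # hypothetical motif and background length

seq = "".join(rng.choice(list("ACGT"), size=N))
k = len(motif)

# Count (possibly overlapping) exact matches in the random background.
hits = sum(seq[i:i + k] == motif for i in range(N - k + 1))

# Naive expected count under a uniform i.i.d. background: each of the
# N - k + 1 windows matches with probability 4^-k.
expected = (N - k + 1) * 0.25 ** k
```

For a 6-mer, even a 10 kb random background yields a couple of spurious exact matches on average, which is exactly why short-motif detection needs careful statistics.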

    A theory for long-memory in supply and demand

    Recent empirical studies have demonstrated long-memory in the signs of orders to buy or sell in financial markets [2, 19]. We show how this can be caused by delays in market clearing. Under the common practice of order splitting, large orders are broken up into pieces and executed incrementally. If the size of such large orders is power-law distributed, this gives rise to power-law decaying autocorrelations in the signs of executed orders. More specifically, we show that if the cumulative distribution of large orders of volume $v$ is proportional to $v^{-\alpha}$ and the size of executed orders is constant, the autocorrelation of order signs as a function of the lag $\tau$ is asymptotically proportional to $\tau^{-(\alpha-1)}$. This is a long-memory process when $\alpha<2$. With a few caveats, this gives a good match to the data. A version of the model also shows long-memory fluctuations in order execution rates, which may be relevant for explaining the long-memory of price diffusion rates. Comment: 12 pages, 7 figures.
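The order-splitting mechanism can be simulated directly. A minimal interpretation (a fixed number of concurrent hidden orders, executed one unit at a time and replaced when exhausted, in the spirit of the model; the number of orders, the exponent, and the size law are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
K, n, alpha = 5, 400_000, 1.5    # concurrent hidden orders, steps, size exponent

def new_order():
    # Volume with a power-law tail P(V > v) ~ v^-alpha, random buy/sell sign.
    size = int(np.ceil(rng.random() ** (-1.0 / alpha)))
    sign = 1 if rng.random() < 0.5 else -1
    return [size, sign]

book = [new_order() for _ in range(K)]
signs = np.empty(n)
for t in range(n):
    i = rng.integers(K)          # execute one unit of a randomly chosen hidden order
    book[i][0] -= 1
    signs[t] = book[i][1]
    if book[i][0] == 0:
        book[i] = new_order()    # exhausted orders are replaced by fresh ones

def autocorr(x, lag):
    x = x - x.mean()
    return float(np.mean(x[:-lag] * x[lag:]) / np.mean(x * x))

# Long memory: the sign autocorrelation stays positive and decays slowly,
# roughly like tau^-(alpha - 1) = tau^-0.5 here.
ac10, ac100 = autocorr(signs, 10), autocorr(signs, 100)
```

With alpha = 1.5 < 2 the autocorrelation at lag 100 is still clearly positive, the signature of long memory.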

    Continuous time volatility modelling: COGARCH versus Ornstein-Uhlenbeck models

    We compare the probabilistic properties of the non-Gaussian Ornstein-Uhlenbeck-based stochastic volatility model of Barndorff-Nielsen and Shephard (2001) with those of the COGARCH process. The latter is a continuous-time GARCH process introduced by the authors (2004). Many features are shown to be shared by both processes, but differences are pointed out as well. Furthermore, it is shown that the COGARCH process has Pareto-like tails under weak regularity conditions.

    Extreme statistics for time series: Distribution of the maximum relative to the initial value

    The extreme statistics of time signals is studied when the maximum is measured from the initial value. In the case of independent, identically distributed (iid) variables, we classify the limiting distribution of the maximum according to the properties of the parent distribution from which the variables are drawn. Then we turn to correlated periodic Gaussian signals with a 1/f^alpha power spectrum and study the distribution of the maximum relative height with respect to the initial height (MRH_I). The exact MRH_I distribution is derived for alpha=0 (iid variables), alpha=2 (random walk), alpha=4 (random acceleration), and alpha=infinity (single sinusoidal mode). For other, intermediate values of alpha, the distribution is determined from simulations. We find that the MRH_I distribution is markedly different from the previously studied distribution of the maximum height relative to the average height for all alpha. The two main distinguishing features of the MRH_I distribution are the much larger weight for small relative heights and the divergence at zero height for alpha>3. We also demonstrate that the boundary conditions affect the shape of the distribution by presenting exact results for some non-periodic boundary conditions. Finally, we show that, for signals arising from time-translationally invariant distributions, the density of near extreme states is the same as the MRH_I distribution. This is used in developing a scaling theory for the threshold singularities of the two distributions.Comment: 29 pages, 4 figures.
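For the alpha=2 (random walk) case mentioned in the abstract, a periodic signal measured from its initial value is a random-walk bridge, and the rescaled maximum follows the classical Brownian-bridge law P(max > m) = exp(-2m^2). A simulation sketch (walk length and sample count are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
N, trials = 1000, 10_000           # walk length and number of samples (illustrative)

# A periodic alpha = 2 signal is a random-walk bridge pinned to its initial value.
steps = rng.standard_normal((trials, N))
walks = np.cumsum(steps, axis=1)
t = np.arange(1, N + 1) / N
bridges = walks - walks[:, -1:] * t          # subtract drift so x(N) = x(0) = 0

M = bridges.max(axis=1) / np.sqrt(N)         # max relative to the initial value, rescaled

m = 1.0
p_emp = float((M > m).mean())
p_theory = float(np.exp(-2.0 * m * m))       # Brownian-bridge law P(max > m) = exp(-2 m^2)
```

The empirical exceedance probability sits slightly below the continuum value because the discrete walk misses the continuous maximum between steps; the gap shrinks as N grows.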

    Extreme times for volatility processes

    We present a detailed study of the mean first-passage time of volatility processes. We analyze the theoretical expressions based on the most common stochastic volatility models along with empirical results extracted from daily data of major financial indices. We find in all these data sets a very similar behavior that is far from being that of a simple Wiener process. It seems necessary to include a framework like the one provided by stochastic volatility models, with a reverting force driving volatility toward its normal level, to take into account memory and clustering effects in volatility dynamics. We also detect in the data a very different behavior in the mean first-passage time depending on whether the level is higher or lower than the normal level of volatility. For this reason, we discuss asymptotic approximations and confront them with empirical results, with good agreement, especially with the ExpOU model. Comment: 10 pages, 6 colored figures.
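The effect of a reverting force on first-passage times is easy to see in a toy simulation. A sketch using a plain Ornstein-Uhlenbeck-type process for the level itself (not one of the paper's stochastic volatility models; all parameters and levels are illustrative), estimating the mean first-passage time from the normal level to targets above it with an Euler scheme:

```python
import numpy as np

rng = np.random.default_rng(5)
k_rev, m, sig = 1.0, 1.0, 0.3     # reversion rate, normal level, noise (illustrative)
dt, trials, max_steps = 0.01, 2000, 50_000

def mfpt(level):
    """Mean first-passage time from the normal level m up to `level` (Euler scheme)."""
    x = np.full(trials, m)
    t_hit = np.zeros(trials)
    alive = np.ones(trials, dtype=bool)   # paths that have not reached `level` yet
    sdt = sig * np.sqrt(dt)
    for _ in range(max_steps):
        z = rng.standard_normal(trials)
        x = np.where(alive, x - k_rev * (x - m) * dt + sdt * z, x)
        t_hit += alive * dt
        alive &= x < level
        if not alive.any():
            break
    return float(t_hit.mean())

# The reverting force makes passage times grow sharply with distance above the
# normal level, far from the linear-in-distance^2 scaling of a free Wiener process.
t_near = mfpt(1.2)
t_far = mfpt(1.4)
```

Doubling the distance from the normal level multiplies the mean first-passage time several-fold, the qualitative fingerprint of mean reversion.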

    Value-at-risk forecasting of the CARBS Indices

    The purpose of this paper is to use calibrated univariate GARCH family models to forecast the volatility and value at risk (VaR) of the CARBS indices and of a global minimum variance portfolio (GMVP) constructed using the CARBS equity indices. The reliability of the different volatility forecasts is tested using the mean absolute error (MAE) and the mean squared error (MSE). The rolling forecast of VaR is tested using a back-testing procedure. The results indicate that the use of a rolling forecast from a GARCH model when estimating VaR for the CARBS indices and the GMVP is not a reliable method.
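The back-testing logic referred to here can be sketched end to end on simulated data, where the GARCH model is correct by construction so the 99% VaR violation rate should come out near 1% (the GARCH(1,1) parameters are illustrative, not the paper's CARBS fits):

```python
import numpy as np

rng = np.random.default_rng(6)
omega, a, b = 0.05, 0.08, 0.90    # GARCH(1,1) parameters (illustrative)
n, z99 = 50_000, 2.326            # sample size; 99% standard-normal quantile

# Simulate r_t = sigma_t z_t with sigma_t^2 = omega + a r_{t-1}^2 + b sigma_{t-1}^2.
r = np.empty(n)
sig2 = omega / (1.0 - a - b)      # start at the stationary variance
for t in range(n):
    r[t] = np.sqrt(sig2) * rng.standard_normal()
    sig2 = omega + a * r[t] ** 2 + b * sig2

# Rolling one-step-ahead 99% VaR forecasts from the same recursion, then a
# violation count in the spirit of a coverage back-test.
sig2_f = np.empty(n)
sig2_f[0] = omega / (1.0 - a - b)
for t in range(1, n):
    sig2_f[t] = omega + a * r[t - 1] ** 2 + b * sig2_f[t - 1]
var99 = z99 * np.sqrt(sig2_f)
viol_rate = float(np.mean(r < -var99))   # ~1% when the model is correct
```

On real index data, the paper's finding is that this violation rate drifts away from its nominal level, which is what the back-test flags as unreliable.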