
    Self-decomposability of Weak Variance Generalised Gamma Convolutions

    Weak variance generalised gamma convolution processes are multivariate Brownian motions weakly subordinated by multivariate Thorin subordinators. Within this class, we extend a result known for strong subordination to weak subordination: a driftless Brownian motion gives rise to a self-decomposable process. Under moment conditions on the underlying Thorin measure, we show that driftlessness is also necessary. We apply our results to prominent processes such as the weak variance alpha-gamma process, and illustrate the necessity of our moment conditions in some cases.
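    For reference, the property at the centre of this abstract can be stated as follows: a random vector X is self-decomposable when, for every shrinking factor c, it can be written as a scaled copy of itself plus an independent remainder.

    ```latex
    % Self-decomposability: for every c in (0,1) there exists a random
    % vector X_c, independent of X, such that
    X \;\overset{d}{=}\; cX + X_c, \qquad c \in (0,1).
    ```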

    Generalized Bhattacharyya and Chernoff upper bounds on Bayes error using quasi-arithmetic means

    Bayesian classification labels observations based on given prior information, namely class-prior and class-conditional probabilities. The Bayes risk is the minimum expected classification cost, achieved by the Bayes test, the optimal decision rule. When correct classification incurs no cost and misclassification incurs unit cost, the Bayes test reduces to the maximum a posteriori decision rule, and the Bayes risk simplifies to the Bayes error, the probability of error. Since calculating this probability of error is often intractable, several techniques have been devised to bound it with closed-form formulas, thereby introducing measures of similarity and divergence between distributions such as the Bhattacharyya coefficient and its associated Bhattacharyya distance. The Bhattacharyya upper bound can be further tightened using the Chernoff information, which relies on the notion of the best error exponent. In this paper, we first express the Bayes risk using the total variation distance on scaled distributions. We then elucidate and extend the Bhattacharyya and Chernoff upper-bound mechanisms using generalized weighted means. As a byproduct, we obtain novel notions of statistical divergences and affinity coefficients. We illustrate our technique by deriving new upper bounds for the univariate Cauchy and the multivariate t-distributions, and show experimentally that those bounds are not far from the computationally intractable Bayes error.
    Comment: 22 pages, includes R code. To appear in Pattern Recognition Letters.
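    As an illustration of the classical mechanism the paper generalizes (not the paper's quasi-arithmetic-mean bounds themselves), here is a minimal sketch computing the Bhattacharyya and Chernoff upper bounds on the Bayes error for two hypothetical equal-prior univariate Gaussians, by numerically evaluating the skewed affinity ∫ p^α q^(1−α) dx:

    ```python
    import numpy as np

    def skewed_affinity(mu1, s1, mu2, s2, alpha):
        """Numerically evaluate the skewed Bhattacharyya affinity
        int p(x)^alpha q(x)^(1-alpha) dx for two univariate Gaussians."""
        lo = min(mu1 - 10 * s1, mu2 - 10 * s2)
        hi = max(mu1 + 10 * s1, mu2 + 10 * s2)
        x = np.linspace(lo, hi, 20001)
        p = np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / (s1 * np.sqrt(2 * np.pi))
        q = np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / (s2 * np.sqrt(2 * np.pi))
        return np.sum(p ** alpha * q ** (1 - alpha)) * (x[1] - x[0])

    # Equal priors: P_e <= 0.5 * affinity(alpha). Taking alpha = 1/2 gives
    # the Bhattacharyya bound; minimizing over alpha gives the (tighter)
    # Chernoff bound.
    bc = skewed_affinity(0.0, 1.0, 2.0, 1.0, 0.5)
    bhatt_bound = 0.5 * bc
    chernoff_bound = 0.5 * min(
        skewed_affinity(0.0, 1.0, 2.0, 1.0, a) for a in np.linspace(0.01, 0.99, 99)
    )
    ```

    For these symmetric Gaussians the optimal exponent is α = 1/2, so the two bounds coincide; for asymmetric pairs (e.g. unequal variances) the Chernoff bound is strictly tighter.
    
    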

    Model comparison with Sharpe ratios

    We show how to conduct asymptotically valid tests of model comparison when the extent of model mispricing is gauged by the squared Sharpe ratio improvement measure. This is equivalent to ranking models on their maximum Sharpe ratios, effectively extending the Gibbons, Ross, and Shanken (1989) test to accommodate the comparison of nonnested models. Mimicking portfolios can be substituted for any nontraded model factors, and estimation error in the portfolio weights is taken into account in the statistical inference. A variant of the Fama and French (2018) 6-factor model, with a monthly updated version of the usual value spread, emerges as the dominant model.
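    The quantity being ranked can be sketched as the maximum squared Sharpe ratio μ'Σ⁻¹μ attainable from a model's factors. The simulated factor series and their parameters below are hypothetical, and the paper's asymptotic inference on the difference is not reproduced, only the point estimate:

    ```python
    import numpy as np

    def max_sq_sharpe(factor_returns):
        """Sample maximum squared Sharpe ratio mu' Sigma^{-1} mu attainable
        from a T x K matrix of (excess) factor returns."""
        mu = factor_returns.mean(axis=0)
        sigma = np.cov(factor_returns, rowvar=False)
        return float(mu @ np.linalg.solve(sigma, mu))

    rng = np.random.default_rng(0)
    T = 600
    f_a = rng.normal(0.5, 4.0, size=(T, 3))  # hypothetical model A factors
    f_b = rng.normal(0.2, 4.0, size=(T, 2))  # hypothetical model B factors

    # positive diff favours model A; the paper supplies a valid test for
    # whether such a difference is statistically distinguishable from zero
    diff = max_sq_sharpe(f_a) - max_sq_sharpe(f_b)
    ```
    
    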

    Weak convergence of the empirical copula process with respect to weighted metrics

    The empirical copula process plays a central role in the asymptotic analysis of many statistical procedures which are based on copulas or ranks. Among other applications, results regarding its weak convergence can be used to develop asymptotic theory for estimators of dependence measures or copula densities, they allow one to derive tests for stochastic independence or for specific copula structures, and they may serve as a fundamental tool for the analysis of multivariate rank statistics. In the present paper, we establish weak convergence of the empirical copula process (for observations that are allowed to be serially dependent) with respect to weighted supremum distances. The usefulness of our results is illustrated by applications to general bivariate rank statistics and to estimation procedures for the Pickands dependence function arising in multivariate extreme-value theory.
    Comment: 39 pages + 7 pages of supplementary material, 1 figure.
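    The object under study can be sketched concretely: the empirical copula evaluated at a point (u, v) is the fraction of normalized rank pairs (pseudo-observations) falling in the lower-left rectangle. The bivariate sample below is a hypothetical positively dependent pair, not data from the paper:

    ```python
    import numpy as np

    def empirical_copula(data, u, v):
        """Empirical copula C_n(u, v) of a bivariate sample, computed from
        normalized ranks (pseudo-observations) in (0, 1]."""
        n = len(data)
        ranks = (np.argsort(np.argsort(data, axis=0), axis=0) + 1) / n
        return np.mean((ranks[:, 0] <= u) & (ranks[:, 1] <= v))

    rng = np.random.default_rng(1)
    x = rng.normal(size=500)
    y = 0.8 * x + 0.6 * rng.normal(size=500)  # positively dependent pair

    # under independence C(0.5, 0.5) = 0.25; positive dependence pushes it up
    c = empirical_copula(np.column_stack([x, y]), 0.5, 0.5)
    ```

    Rank-based functionals like this are exactly why weak convergence of the empirical copula process, including near the boundary where weighted metrics matter, underpins the asymptotics of the rank statistics mentioned above.
    
    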

    Cross-Commodity Analysis and Applications to Risk Management

    The understanding of joint asset return distributions is an important ingredient for managing portfolio risks. While this is a well-discussed issue in fixed income and equity markets, it remains a challenge for energy commodities. In this paper we are concerned with describing the joint return distribution of energy-related commodity futures, namely power, oil, gas, coal and carbon. The objective of the paper is threefold. First, we conduct a careful analysis of empirical returns and show how the class of multivariate generalized hyperbolic distributions performs in this context. Second, we present how risk measures can be computed for commodity portfolios under generalized hyperbolic assumptions. And finally, we discuss the implications of our findings for risk management by analyzing the exposure of power plants, which represent typical energy portfolios. Our main findings are that risk estimates based on a normal distribution can, in the context of energy commodities, be statistically improved using generalized hyperbolic distributions. These distributions are flexible enough to incorporate many characteristics of commodity returns and yield more accurate risk estimates. Our analysis of the market suggests that carbon allowances can be a helpful tool for controlling the risk exposure of a typical energy portfolio representing a power plant.
    Keywords: Commodities; Risk.
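    The kind of comparison the paper makes, normal versus heavier-tailed risk estimates, can be sketched as follows. This uses a Student-t distribution as a simple heavier-tailed stand-in for the generalized hyperbolic class, on a simulated return series rather than market data:

    ```python
    import numpy as np
    from scipy import stats

    # Simulated heavy-tailed daily returns; Student-t serves here as a
    # stand-in for the generalized hyperbolic family used in the paper.
    rng = np.random.default_rng(2)
    returns = stats.t.rvs(df=4, loc=0.0, scale=0.02, size=2000, random_state=rng)

    level = 0.001  # 99.9% Value-at-Risk

    # VaR under a fitted normal distribution...
    mu, sig = stats.norm.fit(returns)
    var_normal = -stats.norm.ppf(level, loc=mu, scale=sig)

    # ...versus VaR under a fitted heavier-tailed t distribution, which
    # typically reports a larger (more conservative) tail-risk estimate
    df_, loc_, scale_ = stats.t.fit(returns)
    var_t = -stats.t.ppf(level, df_, loc=loc_, scale=scale_)
    ```

    The gap between the two estimates widens at more extreme confidence levels, which is the practical sense in which the normal assumption understates tail risk for fat-tailed commodity returns.
    
    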

    ecp: An R Package for Nonparametric Multiple Change Point Analysis of Multivariate Data

    There are many different ways in which change point analysis can be performed, from purely parametric methods to those that are distribution free. The ecp package is designed to perform multiple change point analysis while making as few assumptions as possible. While many other change point methods are applicable only to univariate data, this R package is suitable for both univariate and multivariate observations. Estimation can be based upon either a hierarchical divisive or a hierarchical agglomerative algorithm. Divisive estimation sequentially identifies change points via a bisection algorithm, while the agglomerative algorithm estimates change point locations by determining an optimal segmentation. Both approaches are able to detect any type of distributional change within the data. This provides an advantage over many existing change point algorithms, which are only able to detect changes within the marginal distributions.
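    The divisive bisection idea can be sketched in a few lines. For simplicity this uses a CUSUM-type mean-shift statistic in place of ecp's nonparametric energy statistic, so unlike ecp it only detects mean changes; the function name and data are hypothetical:

    ```python
    import numpy as np

    def best_split(x):
        """Locate a single change point by scanning all split points and
        maximizing a scaled mean-difference (CUSUM-type) statistic. A
        simplified stand-in for the energy statistic used by ecp."""
        n = len(x)
        best_tau, best_stat = None, -np.inf
        for tau in range(2, n - 1):
            left, right = x[:tau], x[tau:]
            stat = np.sqrt(tau * (n - tau) / n) * abs(left.mean() - right.mean())
            if stat > best_stat:
                best_tau, best_stat = tau, stat
        return best_tau, best_stat

    # series with a mean shift at index 100; divisive estimation would
    # recurse on the two resulting segments to find further change points
    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
    tau, stat = best_split(x)
    ```
    
    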