
    The Prediction value

    We introduce the prediction value (PV) as a measure of players' informational importance in probabilistic TU games. The latter combine a standard TU game and a probability distribution over the set of coalitions. Player $i$'s prediction value equals the difference between the conditional expectations of $v(S)$ when $i$ cooperates or not. We characterize the prediction value as a special member of the class of (extended) values which satisfy anonymity, linearity and a consistency property. Every $n$-player binomial semivalue coincides with the PV for a particular family of probability distributions over coalitions. The PV can thus be regarded as a power index in specific cases. Conversely, some semivalues -- including the Banzhaf but not the Shapley value -- can be interpreted in terms of informational importance. Comment: 26 pages, 2 tables
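    A minimal Python sketch (not taken from the paper) of this definition: for a hypothetical 3-player game with v(S) = |S|^2 and a uniform distribution over coalitions, player i's prediction value is the difference E[v(S) | i in S] - E[v(S) | i not in S]. Both the game and the distribution below are illustrative assumptions.

        # Sketch: prediction value for a small probabilistic TU game (illustrative only).
        from itertools import chain, combinations

        players = (1, 2, 3)
        coalitions = [frozenset(c) for c in chain.from_iterable(
            combinations(players, r) for r in range(len(players) + 1))]

        # Hypothetical characteristic function v and coalition distribution p.
        v = {S: len(S) ** 2 for S in coalitions}            # v(S) = |S|^2, for illustration
        p = {S: 1.0 / len(coalitions) for S in coalitions}  # uniform over all coalitions

        def prediction_value(i):
            """Difference of conditional expectations of v(S) given i in / not in S."""
            p_in = sum(p[S] for S in coalitions if i in S)
            p_out = 1.0 - p_in
            e_in = sum(p[S] * v[S] for S in coalitions if i in S) / p_in
            e_out = sum(p[S] * v[S] for S in coalitions if i not in S) / p_out
            return e_in - e_out

        for i in players:
            print(f"PV of player {i}: {prediction_value(i):.3f}")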

    Improving Value-at-Risk prediction under model uncertainty

    Several well-established benchmark predictors exist for Value-at-Risk (VaR), a major instrument for financial risk management. Hybrid methods combining AR-GARCH filtering with skewed-$t$ residuals and the extreme value theory-based approach are particularly recommended. This study introduces yet another VaR predictor, G-VaR, which follows a novel methodology. Inspired by the recent mathematical theory of sublinear expectation, G-VaR is built upon the concept of model uncertainty, which in the present case signifies that the inherent volatility of financial returns cannot be characterized by a single distribution but rather by infinitely many statistical distributions. By considering the worst scenario among these potential distributions, the G-VaR predictor is precisely identified. Extensive experiments on both the NASDAQ Composite Index and the S&P 500 Index demonstrate the excellent performance of the G-VaR predictor, which is superior to most existing benchmark VaR predictors. Comment: 42 pages, 7 figures, 7 tables
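    The worst-case idea can be illustrated, very loosely, with a toy sketch that is not the paper's G-VaR construction: if the return distribution is only known to be normal with a volatility lying somewhere in an interval, a conservative VaR takes the largest loss quantile over that family. The normal family, the volatility bounds and the confidence level below are all assumptions made for illustration.

        # Toy worst-case VaR over a family of candidate distributions (not the G-VaR algorithm).
        from scipy.stats import norm

        def worst_case_var(mu, sigma_lo, sigma_hi, alpha=0.05, n_grid=50):
            """Largest alpha-level loss quantile over a grid of candidate volatilities."""
            sigmas = [sigma_lo + k * (sigma_hi - sigma_lo) / (n_grid - 1) for k in range(n_grid)]
            # For N(mu, sigma^2) returns, the loss quantile at level alpha is -(mu + sigma * z_alpha).
            return max(-(mu + s * norm.ppf(alpha)) for s in sigmas)

        # Example: daily mean 0, volatility known only to lie between 1% and 2.5%.
        print(worst_case_var(mu=0.0, sigma_lo=0.01, sigma_hi=0.025, alpha=0.05))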

    An Infrared Divergence Problem in the cosmological measure theory and the anthropic reasoning

    An anthropic principle has made it possible to answer the difficult question of why the observed value of the cosmological constant ($\Lambda \sim 10^{-47}$ GeV$^4$) is so disconcertingly tiny compared to the predicted value of the vacuum energy density ($\rho_{SUSY} \sim 10^{12}$ GeV$^4$). Unfortunately, there is a darker side to this argument, as it leads to another absurd prediction: that the probability for a randomly selected observer to observe the value $\Lambda = 0$ exactly equals 1. We call this controversy the infrared divergence (IRD) problem. It is shown that the IRD prediction can be avoided with the help of a Linde-Vanchurin {\em singular runaway measure}, coupled with the calculation of relative Bayesian probabilities by means of the {\em doomsday argument}. Moreover, it is shown that while the IRD problem occurs at the {\em prediction stage} for the value of $\Lambda$, it disappears at the {\em explanatory stage}, when $\Lambda$ has already been measured by the observer. Comment: 9 pages, RevTeX

    Hard Colour Singlet Exchange at the Tevatron

    We have performed a detailed phenomenological investigation of the hard colour singlet exchange process observed at the Tevatron in events with a large rapidity gap between outgoing jets. We include the effects of multiple interactions to obtain a prediction for the gap survival factor. Comparing the data on the fraction of gap events with the prediction from BFKL pomeron exchange, we find agreement provided that a constant value of alpha_s is used in the BFKL calculation. Moreover, this value of alpha_s is in line with that extracted from measurements made at HERA. Comment: 22 pages, 19 figures

    Discretized conformal prediction for efficient distribution-free inference

    In regression problems where there is no known true underlying model, conformal prediction methods enable prediction intervals to be constructed without any assumptions on the distribution of the underlying data, except that the training and test data are assumed to be exchangeable. However, these methods bear a heavy computational cost: to be carried out exactly, the regression algorithm would need to be fitted infinitely many times. In practice, the conformal prediction method is run by considering only a finite grid of finely spaced values for the response variable. This paper develops discretized conformal prediction algorithms that are guaranteed to cover the target value with the desired probability, and that offer a tradeoff between computational cost and prediction accuracy.
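    A minimal sketch of the underlying idea, assuming a simple ridge regressor and absolute-residual nonconformity scores (the paper's specific discretization scheme and guarantees are not reproduced here): for each candidate response value on a finite grid, refit on the augmented data and keep the candidate if its nonconformity score ranks low enough. A finer grid gives a more accurate interval at a higher computational cost.

        # Sketch: grid-discretized full conformal prediction with a ridge regressor (illustrative assumptions).
        import numpy as np

        def conformal_interval(X, y, x_new, grid, alpha=0.1, lam=1.0):
            """Return the range of grid values y0 that pass the full-conformal test at level alpha."""
            n = len(y)
            accepted = []
            for y0 in grid:
                # Augment the data with the trial pair (x_new, y0) and refit ridge regression.
                Xa = np.vstack([X, x_new])
                ya = np.append(y, y0)
                beta = np.linalg.solve(Xa.T @ Xa + lam * np.eye(Xa.shape[1]), Xa.T @ ya)
                scores = np.abs(ya - Xa @ beta)  # nonconformity = absolute residual
                # Keep y0 if the trial point's score is among the smallest ceil((1-alpha)(n+1)) scores.
                rank = np.sum(scores <= scores[-1])
                if rank <= np.ceil((1 - alpha) * (n + 1)):
                    accepted.append(y0)
            return (min(accepted), max(accepted)) if accepted else None

        # Toy usage with synthetic data.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 2))
        y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.5, size=50)
        x_new = np.array([[0.3, -0.1]])
        grid = np.linspace(-5, 5, 201)
        print(conformal_interval(X, y, x_new, grid))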