
    From conformal to probabilistic prediction

    This paper proposes a new method of probabilistic prediction based on conformal prediction. The method is applied to the standard USPS data set and gives encouraging results. Comment: 12 pages, 2 tables.

    Sparse Conformal Predictors

    Conformal predictors, introduced by Vovk et al. (2005), serve to build prediction intervals by exploiting a notion of conformity of the new data point with previously observed data. In the present paper, we propose a novel method for constructing prediction intervals for the response variable in multivariate linear models. The main emphasis is on sparse linear models, where only a few of the covariates have a significant influence on the response variable even if their number is very large. Our approach is based on combining the principle of conformal prediction with the ℓ1-penalized least squares estimator (LASSO). The resulting confidence set depends on a parameter ϵ > 0 and has a coverage probability larger than or equal to 1 − ϵ. The numerical experiments reported in the paper show that the length of the confidence set is small. Furthermore, as a by-product of the proposed approach, we provide a data-driven procedure for choosing the LASSO penalty. The selection power of the method is illustrated on simulated data.
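The conformal construction this abstract builds on can be illustrated with a simplified sketch: a split (inductive) conformal interval around an ordinary least-squares fit, standing in for the paper's full conformal set built on the LASSO estimator. All names and choices below are illustrative, not taken from the paper.

```python
import numpy as np

def split_conformal_interval(X, y, x_new, eps=0.1, rng_seed=0):
    """Split-conformal prediction interval for a linear model.

    A minimal sketch: OLS stands in for the paper's penalized (LASSO)
    estimator, and the split variant replaces full conformal prediction.
    """
    rng = np.random.default_rng(rng_seed)
    n = len(y)
    idx = rng.permutation(n)
    train, calib = idx[: n // 2], idx[n // 2 :]

    # Fit the point predictor on the training half only.
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

    # Nonconformity scores: absolute residuals on the calibration half.
    scores = np.abs(y[calib] - X[calib] @ beta)

    # (1 - eps) empirical quantile with the finite-sample correction,
    # which yields coverage >= 1 - eps under exchangeability.
    k = int(np.ceil((len(calib) + 1) * (1 - eps)))
    q = np.sort(scores)[min(k, len(calib)) - 1]

    pred = x_new @ beta
    return pred - q, pred + q
```

On i.i.d. data the returned interval contains the true response of the new point with probability at least 1 − ϵ; the interval's length (twice the calibration quantile) is the efficiency criterion the paper seeks to keep small.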

    Criteria of efficiency for conformal prediction

    We study optimal conformity measures for various criteria of efficiency of classification in an idealised setting. This leads to an important class of criteria of efficiency that we call probabilistic; it turns out that the most standard criteria of efficiency used in the literature on conformal prediction are not probabilistic unless the problem of classification is binary. We consider both unconditional and label-conditional conformal prediction. Comment: 31 pages.

    Hedging predictions in machine learning

    Recent advances in machine learning make it possible to design efficient prediction algorithms for data sets with huge numbers of parameters. This paper describes a new technique for "hedging" the predictions output by many such algorithms, including support vector machines, kernel ridge regression, kernel nearest neighbours, and many other state-of-the-art methods. The hedged predictions for the labels of new objects include quantitative measures of their own accuracy and reliability. These measures are provably valid under the assumption of randomness, traditional in machine learning: the objects and their labels are assumed to be generated independently from the same probability distribution. In particular, it becomes possible to control (up to statistical fluctuations) the number of erroneous predictions by selecting a suitable confidence level. Validity being achieved automatically, the remaining goal of hedged prediction is efficiency: taking full account of the new objects' features and other available information to produce as accurate predictions as possible. This can be done successfully using the powerful machinery of modern machine learning. Comment: 24 pages; 9 figures; 2 tables; a version of this paper (with discussion and rejoinder) is to appear in "The Computer Journal".
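The hedging mechanism described above, p-values computed from nonconformity scores, with the chosen confidence level controlling the error rate, can be sketched with a 1-nearest-neighbour nonconformity measure. This score is one common choice in the conformal literature; the code below is an illustration of the general idea, not the paper's exact procedure.

```python
import numpy as np

def conformal_pvalues(X, y, x_new, labels):
    """Conformal p-value for each candidate label of a new object.

    Nonconformity score (illustrative): distance to the nearest example
    of the same label divided by distance to the nearest example of a
    different label (smaller = more conforming).
    """
    def score(Xa, ya, xi, yi, skip):
        d = np.linalg.norm(Xa - xi, axis=1)
        d[skip] = np.inf  # exclude the example's distance to itself
        same = d[ya == yi]
        other = d[ya != yi]
        return same.min() / other.min()

    pvals = {}
    for lab in labels:
        # Augment the data with the new object carrying the candidate label.
        Xa = np.vstack([X, x_new])
        ya = np.append(y, lab)
        a_new = score(Xa, ya, x_new, lab, skip=len(ya) - 1)
        a_old = [score(Xa, ya, Xa[i], ya[i], skip=i) for i in range(len(y))]
        # p-value: fraction of examples at least as nonconforming as the new one.
        pvals[lab] = (1 + sum(a >= a_new for a in a_old)) / len(ya)
    return pvals
```

At confidence level 1 − ϵ the hedged prediction set is {label : p-value > ϵ}; under the randomness assumption the true label is excluded with probability at most ϵ, which is the validity guarantee the abstract refers to.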

    Conformal Prediction: a Unified Review of Theory and New Challenges

    In this work we provide a review of the basic ideas and novel developments of Conformal Prediction -- an innovative distribution-free, non-parametric forecasting method, based on minimal assumptions -- that is able to yield, in a very straightforward way, prediction sets that are valid in a statistical sense also in the finite-sample case. The in-depth discussion provided in the paper covers the theoretical underpinnings of Conformal Prediction, and then proceeds to list the more advanced developments and adaptations of the original idea. Comment: arXiv admin note: text overlap with arXiv:0706.3188, arXiv:1604.04173, arXiv:1709.06233, arXiv:1203.5422 by other authors.
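The finite-sample validity the review refers to can be stated compactly (a standard formulation, not a quotation from the paper): under exchangeability, and for any nonconformity measure, the conformal set collects the candidate labels whose conformal p-value exceeds ϵ, and its coverage is guaranteed:

```latex
C_\epsilon(x_{n+1}) = \{\, y : p(y) > \epsilon \,\},
\qquad
\mathbb{P}\big( y_{n+1} \in C_\epsilon(x_{n+1}) \big) \ge 1 - \epsilon ,
```

where p(y) is the fraction of examples in the augmented sample whose nonconformity score is at least that of the new example provisionally labelled y. The guarantee holds for any sample size n, which is the "finite sample" claim in the abstract.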

    Extended Gravity Cosmography

    Cosmography can be considered as a sort of model-independent approach to tackle the dark energy/modified gravity problem. In this review, the successes and the shortcomings of the ΛCDM model, based on General Relativity and the standard model of particles, are discussed in view of the most recent observational constraints. The motivations for considering extensions and modifications of General Relativity are taken into account, with particular attention to f(R) and f(T) theories of gravity, where dynamics is represented by the curvature or torsion field, respectively. The features of f(R) models are explored in the metric and Palatini formalisms. We discuss the connection between f(R) gravity and scalar-tensor theories, highlighting the role of conformal transformations in the Einstein and Jordan frames. The cosmological dynamics of f(R) models is investigated through the corresponding viability criteria. Afterwards, the equivalent formulation of General Relativity in terms of torsion (Teleparallel Equivalent General Relativity) and its extension to f(T) gravity is considered. Finally, the cosmographic method is adopted to break the degeneracy among dark energy models. A novel approach, built upon rational Padé and Chebyshev polynomials, is proposed to overcome the limits of standard cosmography based on Taylor expansion. The approach provides accurate model-independent approximations of the Hubble flow. Numerical analyses, based on Markov Chain Monte Carlo integration of cosmic data, are presented to bound the coefficients of the cosmographic series. These techniques are then applied to reconstruct the f(R) and f(T) functions and to frame the late-time expansion history of the universe with no a priori assumptions on its equation of state. A comparison between the ΛCDM cosmological model and the f(R) and f(T) models is reported. Comment: 82 pages, 35 figures. Accepted for publication in IJMP.
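The standard Taylor cosmography that the Padé/Chebyshev approach is meant to improve upon expands the scale factor around the present epoch t₀ (the textbook series, sketched here; the paper's exact parameterisation may differ):

```latex
\frac{a(t)}{a_0} \simeq 1 + H_0\,\Delta t - \frac{q_0}{2} H_0^2\,\Delta t^2
+ \frac{j_0}{6} H_0^3\,\Delta t^3 + \frac{s_0}{24} H_0^4\,\Delta t^4 + \dots,
\qquad \Delta t = t - t_0,
```

with the Hubble, deceleration, jerk and snap parameters H₀, q₀, j₀, s₀ defined from successive derivatives of a(t) evaluated today. A Padé approximant is a ratio of polynomials built from these same Taylor coefficients, and it typically remains well-behaved at redshifts where the truncated Taylor series loses convergence, which is the stated motivation for the rational approach.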