
    A tutorial on conformal prediction

    Conformal prediction uses past experience to determine precise levels of confidence in new predictions. Given an error probability $\epsilon$, together with a method that makes a prediction $\hat{y}$ of a label $y$, it produces a set of labels, typically containing $\hat{y}$, that also contains $y$ with probability $1-\epsilon$. Conformal prediction can be applied to any method for producing $\hat{y}$: a nearest-neighbor method, a support-vector machine, ridge regression, etc. Conformal prediction is designed for an on-line setting in which labels are predicted successively, each one being revealed before the next is predicted. The most novel and valuable feature of conformal prediction is that if the successive examples are sampled independently from the same distribution, then the successive predictions will be right $1-\epsilon$ of the time, even though they are based on an accumulating dataset rather than on independent datasets. In addition to the model under which successive examples are sampled independently, other on-line compression models can also use conformal prediction. The widely used Gaussian linear model is one of these. This tutorial presents a self-contained account of the theory of conformal prediction and works through several numerical examples. A more comprehensive treatment of the topic is provided in "Algorithmic Learning in a Random World", by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005).
    Comment: 58 pages, 9 figures
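
    The tutorial treats the full on-line protocol; as a rough illustration of the coverage guarantee, here is a minimal sketch of the simpler split (inductive) variant, assuming a scikit-learn-style regressor. The function name and interface are illustrative, not taken from the tutorial.

```python
import numpy as np

def split_conformal_interval(model, X_fit, y_fit, X_cal, y_cal, x_new, eps=0.1):
    """Interval that covers the true label with probability >= 1 - eps,
    assuming the calibration and test examples are exchangeable."""
    model.fit(X_fit, y_fit)
    # Nonconformity scores: absolute residuals on a held-out calibration set.
    scores = np.sort(np.abs(y_cal - model.predict(X_cal)))
    n = len(scores)
    # Conformal quantile: the ceil((n + 1)(1 - eps))-th smallest score.
    k = int(np.ceil((n + 1) * (1 - eps)))
    q = scores[k - 1] if k <= n else np.inf
    y_hat = model.predict(x_new.reshape(1, -1))[0]
    return y_hat - q, y_hat + q
```

    With, e.g., sklearn.linear_model.Ridge() as the model (ridge regression being one of the underlying methods the abstract mentions), roughly a $1-\epsilon$ fraction of the returned intervals will contain the true labels, under the same exchangeability assumption the abstract describes.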

    Testing conformal mapping with kitchen aluminum foil

    We report an experimental verification of conformal mapping with kitchen aluminum foil. The experiment can be reproduced in any laboratory by undergraduate students, making it an ideal introduction to the concept of conformal mapping. The original problem is the distribution of the electric potential in a very long plate; the correct theoretical prediction was recently derived by A. Czarnecki (Can. J. Phys. 92, 1297 (2014)).
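
    The method rests on a standard fact: a harmonic potential remains harmonic under composition with a holomorphic (conformal) map, which is what allows an awkward geometry such as the long plate to be transformed into a simpler one. Below is a minimal numerical check of that fact in Python; the particular functions are arbitrary choices for illustration, not the paper's setup.

```python
import numpy as np

def laplacian(phi, x, y, h=1e-3):
    """Five-point finite-difference Laplacian of phi at (x, y)."""
    return (phi(x + h, y) + phi(x - h, y) + phi(x, y + h) + phi(x, y - h)
            - 4.0 * phi(x, y)) / h**2

phi = lambda x, y: x * y                 # harmonic: the Laplacian of xy is 0
f = lambda z: np.exp(z)                  # holomorphic, conformal where f' != 0
pulled_back = lambda x, y: phi(f(x + 1j * y).real, f(x + 1j * y).imag)

print(laplacian(phi, 0.3, 0.4))          # ~0
print(laplacian(pulled_back, 0.3, 0.4))  # also ~0: harmonicity is preserved
```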

    Transductive-Inductive Cluster Approximation Via Multivariate Chebyshev Inequality

    Approximating an adequate number of clusters in multidimensional data is an open area of research, given a level of compromise made on the quality of acceptable results. The manuscript addresses this issue by formulating a transductive-inductive learning algorithm that uses the multivariate Chebyshev inequality. Considering the clustering problem in imaging, theoretical proofs for a particular level of compromise are derived to show the convergence of the reconstruction error to a finite value with an increasing (a) number of unseen examples and (b) number of clusters, respectively. Upper bounds for these error rates are also proved. Non-parametric estimates of these errors from a random sample of sequences empirically point to a stable number of clusters. Lastly, the algorithm generalizes to multidimensional data sets from different fields.
    Comment: 16 pages, 5 figures
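
    For context, the multivariate Chebyshev inequality the manuscript builds on states that a $d$-dimensional random vector $X$ with mean $\mu$ and invertible covariance $\Sigma$ satisfies $P\big((X-\mu)^{\top}\Sigma^{-1}(X-\mu) \geq t^2\big) \leq d/t^2$. A minimal empirical check in Python (not the paper's algorithm; the sample distribution is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, t = 3, 100_000, 3.0
X = rng.standard_normal((n, d)) @ rng.normal(size=(d, d))  # correlated sample

mu = X.mean(axis=0)
Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
centered = X - mu
# Squared Mahalanobis distance of each sample from the mean.
m2 = np.einsum('ij,jk,ik->i', centered, Sigma_inv, centered)

empirical = np.mean(m2 >= t**2)
print(f"empirical tail {empirical:.4f} <= Chebyshev bound {d / t**2:.4f}")
```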