
    Metric Entropy and Minimax Risk in Classification

    We apply recent results on the minimax risk in density estimation to the related problem of pattern classification. The notion of loss we seek to minimize is an information-theoretic measure of how well we can predict the classification of future examples, given the classification of previously seen examples. We give an asymptotic characterization of the minimax risk in terms of the metric entropy properties of the class of distributions that might be generating the examples. We then use these results to characterize the minimax risk in the special case of noisy two-valued classification problems in terms of the Assouad density and the Vapnik-Chervonenkis dimension.

    1 Introduction

    The most basic problem in pattern recognition is the problem of classifying instances, consisting of vectors of measurements, into one of a finite number of types or classes. One standard example is the recognition of isolated capital characters, in which the instances are measurements on images of letters ...
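    The Vapnik-Chervonenkis dimension mentioned above is the size of the largest point set that a hypothesis class can label in all possible ways ("shatter"). As a minimal illustrative sketch (not from the paper), the following checks shattering by brute force for threshold classifiers on the real line, a class whose VC dimension is 1; the function and variable names here are hypothetical:

    ```python
    def shatters(points, hypotheses):
        """Check whether the hypothesis class realizes every labeling of `points`."""
        achievable = {tuple(h(x) for x in points) for h in hypotheses}
        return len(achievable) == 2 ** len(points)

    # Threshold classifiers on the real line: h_t(x) = 1 iff x >= t.
    thresholds = [lambda x, t=t: int(x >= t) for t in [-1e9, 0.5, 1.5, 2.5, 1e9]]

    print(shatters([1.0], thresholds))        # a single point is shattered
    print(shatters([1.0, 2.0], thresholds))   # two points are not: the labeling (1, 0) is unrealizable
    ```

    Since one point can be shattered but no two points can, the VC dimension of thresholds is 1.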