
Function learning from interpolation

By Martin Anthony and Peter L. Bartlett

Abstract

In this paper, we study a statistical property of classes of real-valued functions that we call approximation from interpolated examples. We derive a characterization of function classes that have this property, in terms of their ‘fat-shattering function’, a notion that has proved useful in computational learning theory. The property is central to a problem of learning real-valued functions from random examples in which we require satisfactory performance from every algorithm that returns a function which approximately interpolates the training examples.
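For readers unfamiliar with the fat-shattering function mentioned in the abstract, the following sketch illustrates the standard definition from the learning-theory literature (this code is illustrative only and does not appear in the paper): a set of points x_1, …, x_d is γ-fat-shattered by a function class F if there exist witness levels r_1, …, r_d such that every sign pattern b ∈ {−1, +1}^d is realized by some f ∈ F with margin γ, i.e. b_i (f(x_i) − r_i) ≥ γ for all i. A brute-force check over a finite class and a finite grid of candidate witnesses might look like this; the function name and grid-search approach are my own assumptions, not the authors'.

```python
from itertools import product

def fat_shattered(points, functions, gamma, witness_candidates):
    """Brute-force check: is `points` gamma-fat-shattered by `functions`?

    points             -- list of inputs x_1, ..., x_d
    functions          -- finite iterable of callables (the class F)
    gamma              -- the margin (width) gamma > 0
    witness_candidates -- finite grid over which witnesses r_i are searched
    """
    d = len(points)
    fns = list(functions)
    # Try every assignment of witnesses r_1, ..., r_d from the grid.
    for witnesses in product(witness_candidates, repeat=d):
        # The set is shattered with these witnesses if every sign
        # pattern b in {-1, +1}^d is achieved by some f with margin gamma.
        if all(
            any(
                all(b * (f(x) - r) >= gamma
                    for b, x, r in zip(signs, points, witnesses))
                for f in fns
            )
            for signs in product((-1, 1), repeat=d)
        ):
            return True
    return False
```

For example, the two constant functions x ↦ 0 and x ↦ 1 fat-shatter a single point with γ = 0.5 (witness 0.5), but cannot shatter two points, since the sign pattern (+1, −1) would require one constant function to be both above and below the witnesses. The fat-shattering dimension at scale γ is the largest d for which some d-point set passes this check.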

Topics: QA Mathematics
Publisher: Cambridge University Press
Year: 2000
OAI identifier: oai:eprints.lse.ac.uk:7623
Provided by: LSE Research Online

