
Uniform Glivenko-Cantelli theorems and concentration of measure in the mathematical modelling of learning

By Martin Anthony


This paper surveys certain developments in the use of probabilistic techniques for the modelling of generalization in machine learning. Building on ‘uniform convergence’ results in probability theory, a number of approaches to the problem of quantifying generalization have been developed in recent years. Initially these models addressed binary classification, and as such were applicable, for example, to binary-output neural networks. More recently, the analysis has been extended to regression problems, and to classification problems in which the classification is achieved using real-valued functions (where the concept of a large margin has proved useful). To obtain more useful and realistic bounds, and to analyse model selection, another development has been the derivation of data-dependent bounds. Here, we discuss some of the main probabilistic techniques and key results, particularly the use (and derivation) of uniform Glivenko-Cantelli theorems, and the use of concentration-of-measure results. Many details are omitted, the aim being to give a high-level overview of the types of approaches taken and the methods used.
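The classical Glivenko-Cantelli theorem underlying the uniform-convergence approaches mentioned above states that the empirical distribution function converges to the true distribution function uniformly (almost surely) as the sample size grows. The sketch below, not taken from the paper, illustrates this numerically for the uniform distribution on [0, 1]: it computes the Kolmogorov-Smirnov-style statistic sup_x |F_n(x) − F(x)| and shows it shrinking as n increases (the DKW inequality gives the concentration rate exp(−2nε²)). The function name `ecdf_sup_deviation` is an illustrative choice, not an identifier from the paper.

```python
import numpy as np

def ecdf_sup_deviation(sample, cdf):
    """Compute sup_x |F_n(x) - F(x)| for the empirical CDF F_n of `sample`.

    Because F_n is a step function jumping at the sorted sample points,
    the supremum is attained just before or just after one of those
    jumps, so it suffices to check the sample points themselves.
    """
    x = np.sort(sample)
    n = len(x)
    f = cdf(x)
    # F_n jumps from i/n to (i+1)/n at the (i+1)-th order statistic
    d_plus = np.max(np.arange(1, n + 1) / n - f)   # deviation just after each jump
    d_minus = np.max(f - np.arange(0, n) / n)       # deviation just before each jump
    return max(d_plus, d_minus)

# True CDF of the uniform distribution on [0, 1]
uniform_cdf = lambda x: np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(0)
for n in (100, 10_000):
    sample = rng.uniform(0.0, 1.0, size=n)
    print(n, ecdf_sup_deviation(sample, uniform_cdf))
```

For a fixed sample size the deviation is a random quantity, but by the DKW inequality it exceeds ε with probability at most 2·exp(−2nε²), so at n = 10 000 a deviation above 0.05 is vanishingly unlikely.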

Topics: QA Mathematics
Publisher: Centre for Discrete and Applicable Mathematics, London School of Economics and Political Science
Year: 2002
Provided by: LSE Research Online