Repository: TTI-Chicago

By Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro and Karthik Sridharan

Abstract

For supervised classification problems, it is well known that learnability is equivalent to uniform convergence of the empirical risks and thus to learnability by empirical minimization. Inspired by recent regret bounds for online convex optimization, we study stochastic convex optimization, and uncover a surprisingly different situation in the more general setting: although the stochastic convex optimization problem is learnable (e.g. using online-to-batch conversions), no uniform convergence holds in the general case, and empirical minimization might fail. Rather than being a difference between online methods and a global minimization approach, we show that the key ingredient is strong convexity and regularization. Our results demonstrate that the celebrated theorem of Alon et al. on the equivalence of learnability and uniform convergence does not extend to Vapnik’s General Setting of Learning, that in the General Setting considering only empirical minimization is not enough, and that despite Vapnik’s result on the equivalence of strict consistency and uniform convergence, uniform convergence is only a sufficient, but not necessary, condition for meaningful non-trivial learnability.
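
The setting described in the abstract can be sketched formally as follows (the notation here is ours, chosen for illustration rather than taken from the paper): stochastic convex optimization asks to minimize an expected convex loss over a convex domain, given only an i.i.d. sample,

    \min_{w \in \mathcal{W}} \; F(w) = \mathbb{E}_{z \sim \mathcal{D}}\,[f(w; z)], \qquad \text{given } z_1, \ldots, z_n \sim \mathcal{D} \text{ i.i.d.},

and empirical minimization replaces F(w) by the sample average \frac{1}{n} \sum_{i=1}^{n} f(w; z_i). The abstract's claim is that minimizing this empirical objective alone may fail in the general convex case, whereas a strongly convex regularized objective, for example

    \hat{w} \in \arg\min_{w \in \mathcal{W}} \; \frac{1}{n} \sum_{i=1}^{n} f(w; z_i) + \frac{\lambda}{2} \|w\|^2, \qquad \lambda > 0,

illustrates the kind of strongly convex regularization the abstract identifies as the key ingredient (the squared-norm regularizer and the parameter \lambda are our illustrative choices, not drawn from the paper's text).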

Year: 2011
OAI identifier: oai:CiteSeerX.psu:10.1.1.188.1206
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v...
  • http://www.cs.huji.ac.il/%7Eoh...
