Location of Repository: TTI-Chicago

By Shai Shalev-Shwartz, Ohad Shamir, and Karthik Sridharan

Abstract

A probabilistic classifier is a mapping h: X → [0, 1], where h(x) is interpreted as the probability of predicting the label 1 and 1 − h(x) as the probability of predicting the label 0. The error of h on a classification example (x, y) ∈ X × {0, 1} is |h(x) − y|, which is the expected 0-1 error. Given a distribution D on X × {0, 1}, the (generalization) error of h is err(h) = E_{(x,y)∼D}[|h(x) − y|]. Let X be the unit ℓ2 ball of a Hilbert space, let φ: R → [0, 1] be a transfer function, and consider the class of probabilistic classifiers H = {h_w(x) = φ(〈w, x〉) : ‖w‖_2 ≤ 1}, where 〈w, x〉 is the inner product between w and x. The 0-1 transfer function is φ_{0-1}(a) = (sgn(a) + 1)/2.
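As an illustration of the definitions above, here is a minimal sketch (not from the paper) that empirically estimates err(h) for a classifier of the form h_w(x) = φ(〈w, x〉). The sigmoid transfer function, the synthetic data, and all function names below are assumptions chosen for illustration only.

```python
import numpy as np

def sigmoid_transfer(a):
    """A smooth transfer function phi: R -> [0, 1].
    Illustrative choice only; not necessarily the one analyzed in the paper."""
    return 1.0 / (1.0 + np.exp(-a))

def probabilistic_error(w, phi, X, y):
    """Empirical estimate of err(h) = E[|h(x) - y|] for h_w(x) = phi(<w, x>),
    with labels y in {0, 1} and ||w||_2 <= 1."""
    h = phi(X @ w)                 # h_w(x) = phi(<w, x>) for each example
    return np.mean(np.abs(h - y))  # expected 0-1 error of the randomized prediction

# Synthetic illustration: points in the unit l2 ball, labels in {0, 1}.
rng = np.random.default_rng(0)
d, n = 5, 1000
X = rng.normal(size=(n, d))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # scale into the unit ball
w_true = rng.normal(size=d)
w_true /= np.linalg.norm(w_true)          # enforce ||w||_2 <= 1
y = (X @ w_true >= 0).astype(float)       # labels from the 0-1 transfer function

print(probabilistic_error(w_true, sigmoid_transfer, X, y))
```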

Year: 2011
OAI identifier: oai:CiteSeerX.psu:10.1.1.188.279
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text, but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://www.cs.huji.ac.il/%7Eoh... (external link)

