A probabilistic classifier is a mapping h: X → [0, 1], where we interpret h(x) as the probability of predicting the label 1 and 1 − h(x) as the probability of predicting the label 0. The error of h on a classification example (x, y) ∈ X × {0, 1} is |h(x) − y|, which is the expected 0-1 error. Given a distribution D on X × {0, 1}, the (generalization) error of h is err(h) = E_(x,y)∼D[|h(x) − y|]. Let X be the unit ℓ2 ball of a Hilbert space, let φ: ℝ → [0, 1] be a transfer function, and consider the class of probabilistic classifiers H = {h_w(x) = φ(⟨w, x⟩) : ‖w‖2 ≤ 1}, where ⟨w, x⟩ is the inner product between the vectors w and x. For the 0-1 transfer function, φ_0−1(a) = (sgn(a) + 1)/2.
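The definitions above can be sketched numerically. The following is a minimal Python/NumPy illustration, not code from the paper: it implements the 0-1 transfer function φ_0−1(a) = (sgn(a) + 1)/2, the classifier h_w(x) = φ(⟨w, x⟩), and the empirical analogue of err(h), i.e., an average of |h(x) − y| over a finite sample instead of the expectation over D. The function names `phi_01`, `classify`, and `err` are illustrative choices.

```python
import numpy as np

def phi_01(a):
    # 0-1 transfer function: (sgn(a) + 1) / 2, mapping R into [0, 1]
    # (note: sgn(0) = 0 here, so phi_01(0) = 0.5)
    return (np.sign(a) + 1) / 2

def classify(w, X, phi=phi_01):
    # probabilistic classifier h_w(x) = phi(<w, x>), applied row-wise
    return phi(X @ w)

def err(w, X, y, phi=phi_01):
    # empirical analogue of err(h) = E_{(x,y)~D}[|h(x) - y|],
    # averaging over a finite sample (X, y) drawn from D
    return np.mean(np.abs(classify(w, X, phi) - y))
```

For example, with X = [[1, 0], [−1, 0]], y = [1, 0], and w = [1, 0] (a unit-norm vector, as required by ‖w‖2 ≤ 1), every example is classified correctly and the empirical error is 0; flipping the sign of w gives error 1.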

Year: 2011

OAI identifier:
oai:CiteSeerX.psu:10.1.1.188.279

Provided by:
CiteSeerX
