
Boosting Bayesian MAP Classification

By Paolo Piro, Richard Nock, Frank Nielsen and Michel Barlaud


Instance-based classification

Instance-based methods (e.g., k-nearest neighbors) have been shown to be very effective for image classification. Such methods can be viewed as primers for improving the estimation of class-membership probabilities. We generalize k-NN to a new supervised instance-based rule:
• Framework: supervised Bayesian maximum-a-posteriori (MAP) classification.
• Annotated examples are used to estimate pointwise class probabilities in the feature space.
• The local class-density estimation is boosted via global minimization of a multiclass exponential risk.
• Prototype learning: a strong classifier is induced from a combination of sparse training examples ("prototypes").

Leveraged MAP classification

  P̂_ℓ^T(c|x) = ( ∑_{t: y_tc = 1} α_t f̂_t(x) ) / ( ∑_{t=1}^T α_t f̂_t(x) )

• Subset of T prototypes: the most relevant training examples.
• Leveraging coefficient α_t: measures the "confidence" of prototype t.

The prototypes serve as weak classifiers in the leveraged MAP classification rule:

  ĉ = arg max_c ∑_{t=1}^T α_t y_tc f̂_t(x).

Objectives:
1. Learn the prototypes and their weights α_t.
2. Estimate f̂_t(x) from those prototypes.

Convex optimization problem
• Iterative minimization procedure.
• A new prototype j is selected at each iteration t.
• Classifier update: h_c^(t)(x_i) = h_c^(t−1)(x_i) + α_t y_jc f̂_j(x_i).
• At each iteration, compute the unique solution of

  arg min_{α_t} ∑_{i=1}^m w_i · exp{−α_t r̂_ij},

  where:
  ⋆ w_i > 0 are weights defined over the training data (repeatedly updated);
  ⋆ r̂_ij is the edge matrix (constant along iterations): r̂_ij = f̂_j(x_i) …
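The leveraged classification rule and the per-iteration α_t minimization above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weak classifier `weak_response` is assumed here to be a Gaussian kernel centered on each prototype (the paper's f̂_t comes from local k-NN class-density estimation), and the edge values `r` are taken as given inputs. The prototype matrix, labels Y (with y_tc ∈ {−1, +1}), and all numeric values are illustrative.

```python
import numpy as np

def weak_response(x, prototype, sigma=1.0):
    """Stand-in weak classifier f_t(x): a Gaussian kernel centered on the
    prototype. This is an assumption for illustration; the paper estimates
    f_t from local class densities."""
    return np.exp(-np.sum((x - prototype) ** 2) / (2.0 * sigma ** 2))

def solve_alpha(w, r, iters=30):
    """Solve arg min_alpha sum_i w_i * exp(-alpha * r_i) by Newton's method.
    The objective is convex in alpha, so the stationary point (when the
    edges r_i have mixed signs) is the unique minimizer."""
    alpha = 0.0
    for _ in range(iters):
        e = np.exp(-alpha * r)
        grad = -np.sum(w * r * e)       # d/d(alpha) of the exponential risk
        hess = np.sum(w * r ** 2 * e)   # second derivative, always >= 0
        if hess == 0.0:
            break
        alpha -= grad / hess
    return alpha

def predict(x, prototypes, alphas, Y):
    """Leveraged MAP rule: c_hat = argmax_c sum_t alpha_t * y_tc * f_t(x).
    Y[t, c] in {-1, +1} encodes the multiclass membership of prototype t."""
    f = np.array([weak_response(x, p) for p in prototypes])
    return int(np.argmax((alphas * f) @ Y))
```

A toy usage, with two prototypes acting as weak classifiers for two classes:

```python
prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])
Y = np.array([[1.0, -1.0], [-1.0, 1.0]])   # prototype t votes +1 for its class
alphas = np.array([1.0, 1.0])
predict(np.array([0.2, -0.1]), prototypes, alphas, Y)   # class 0
```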

Year: 2010
DOI identifier: 10.1109/icpr.2010.167
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX
