
Limits To Performance Gains In Combined Neural Classifiers

By Kagan Tumer and Joydeep Ghosh


Abstract: The performance of a single classifier is often inadequate in difficult classification problems. In such cases, several researchers have combined the outputs of multiple classifiers to obtain better performance. However, the amount of improvement possible through such combination techniques is generally not known. This article presents two approaches to estimating performance limits in hybrid networks. First, we present a framework that estimates Bayes error rates when linear combiners are used. Then we discuss a more general method that provides decision confidences and error bounds based on error types arising from the training data. The methods are illustrated on a difficult four-class problem involving underwater acoustic data. For this data, we compute the single-classifier and combiner classification performances, as well as the Bayes error rate and an error bound.

INTRODUCTION: In difficult classification problems with a limited number of training samples, high dime..
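The linear combiners mentioned in the abstract average the class-posterior estimates produced by several classifiers and then pick the highest-scoring class. The following is a minimal sketch of such an unweighted averaging combiner; the function name, the toy posterior values, and the use of NumPy are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def average_combiner(classifier_outputs):
    """Combine class-posterior estimates from several classifiers by
    simple (equal-weight) averaging, then predict via argmax."""
    # Shape after stacking: (n_classifiers, n_samples, n_classes)
    stacked = np.stack(classifier_outputs)
    averaged = stacked.mean(axis=0)       # equal-weight linear combination
    return averaged.argmax(axis=1)        # predicted class index per sample

# Hypothetical posteriors from three classifiers for two samples and
# four classes (echoing the four-class problem the abstract mentions).
outputs = [
    np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.2, 0.3, 0.4]]),
    np.array([[0.5, 0.3, 0.1, 0.1], [0.2, 0.1, 0.3, 0.4]]),
    np.array([[0.4, 0.4, 0.1, 0.1], [0.1, 0.1, 0.2, 0.6]]),
]
print(average_combiner(outputs))  # [0 3]
```

Weighted variants simply replace `mean` with a weighted sum whose weights might reflect each classifier's validation accuracy.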

OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX
Download PDF: Sorry, we are unable to provide the full text, but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • ftp://ftp.lans.ece.utexas.edu/... (external link)
