The performance of a single classifier is often inadequate in difficult classification problems. In such cases, several researchers have combined the outputs of multiple classifiers to obtain better performance. However, the amount of improvement achievable through such combination techniques is generally not known. This article presents two approaches to estimating performance limits in hybrid networks. First, we present a framework that estimates Bayes error rates when linear combiners are used. Then we discuss a more general method that provides decision confidences and error bounds based on error types arising from the training data. The methods are illustrated on a difficult four-class problem involving underwater acoustic data. For these data, we compute the single-classifier and combiner classification performances, as well as the Bayes error rate and an error bound.

INTRODUCTION

In difficult classification problems with a limited number of training data, high dime..
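To make the notion of a linear combiner concrete, the following is a minimal sketch of fusing per-classifier class-posterior estimates by a weighted average. The function name, weights, and the example numbers are illustrative assumptions, not taken from the article.

```python
import numpy as np

def linear_combiner(posteriors, weights=None):
    """Fuse per-classifier posterior estimates with a linear (weighted-average) combiner.

    posteriors: array of shape (n_classifiers, n_classes)
    weights:    optional per-classifier weights summing to 1 (uniform by default)
    Returns the fused posterior vector and the predicted class index.
    """
    posteriors = np.asarray(posteriors, dtype=float)
    if weights is None:
        # Uniform weights: a plain average over classifiers.
        weights = np.full(posteriors.shape[0], 1.0 / posteriors.shape[0])
    fused = np.asarray(weights) @ posteriors  # weighted average per class
    return fused, int(np.argmax(fused))

# Three hypothetical classifiers, each giving posteriors over four classes.
p = [[0.6, 0.2, 0.1, 0.1],
     [0.3, 0.4, 0.2, 0.1],
     [0.5, 0.3, 0.1, 0.1]]
fused, label = linear_combiner(p)  # fused posteriors still sum to 1
```

In this sketch the combiner averages out disagreements among the individual classifiers; the article's question is how far such fusion can go, i.e., how close the combined decision can get to the Bayes error rate.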