
What are the Differences between Bayesian Classifiers and Mutual-Information Classifiers?

By Bao-Gang Hu

Abstract

In this study, both Bayesian classifiers and mutual-information classifiers are examined for binary classifications with or without a reject option. General decision rules, distinguishing error types and reject types, are derived for Bayesian classifiers. A formal analysis reveals the parameter redundancy of the cost terms when abstaining classifications are enforced. This redundancy implies an intrinsic "non-consistency" problem in interpreting the cost terms. When no data are available for setting the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. In contrast, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples using the two types of classifiers are given to confirm the theoretical differences, including extremely class-imbalanced cases. Finally, we briefly summarize the Bayesian classifiers and mutual-information classifiers in terms of their respective application advantages.

Comment: 2nd version: 19 pages, 5 figures, 7 tables. Theorems on Bayesian classifiers are extended to multiple variables. Appendix B, "Tighter bounds between the conditional entropy and Bayesian error in binary classifications," is added, in which Fano's bound is shown numerically to be very tight.
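The contrast in the abstract can be illustrated with toy decision rules: a Bayesian classifier with a reject option (Chow's rule) thresholds the posterior against a fixed rejection cost, whereas a mutual-information classifier can pick its thresholds by maximizing I(T; Y) computed from the resulting confusion matrix, with rejection counted as a third decision outcome. The sketch below is an illustrative assumption of this idea (function names, the grid search, and the threshold parameterization are mine), not the paper's exact formulation.

```python
import numpy as np

def bayes_decide(p1, reject_cost=0.2):
    """Chow's rule for 0/1 loss with a reject option: reject (-1) when the
    larger posterior falls below 1 - reject_cost, else pick the MAP class."""
    if max(p1, 1.0 - p1) < 1.0 - reject_cost:
        return -1  # reject
    return int(p1 >= 0.5)

def mutual_information(conf):
    """I(T; Y) in bits from a count confusion matrix (rows: true class,
    columns: decision outcomes, one column holding rejections)."""
    joint = conf / conf.sum()
    pt = joint.sum(axis=1, keepdims=True)        # marginal of true labels
    py = joint.sum(axis=0, keepdims=True)        # marginal of decisions
    mask = joint > 0                             # skip zero cells (0 log 0 = 0)
    return float((joint[mask] * np.log2(joint[mask] / (pt @ py)[mask])).sum())

def mi_decide_thresholds(posteriors, labels, grid=np.linspace(0.05, 0.95, 19)):
    """Hypothetical mutual-information classifier: grid-search a threshold
    pair (t0 <= t1) that maximizes I(T; Y); posteriors strictly between the
    thresholds are rejected. Returns the best pair and its I(T; Y)."""
    best, best_mi = (0.5, 0.5), -1.0
    for t0 in grid:
        for t1 in grid:
            if t0 > t1:
                continue
            y = np.where(posteriors < t0, 0,
                         np.where(posteriors > t1, 1, -1))
            conf = np.zeros((2, 3))
            for t, d in zip(labels, y):
                conf[t, d + 1] += 1              # column 0 holds rejections
            mi = mutual_information(conf)
            if mi > best_mi:
                best_mi, best = mi, (t0, t1)
    return best, best_mi
```

Note that `bayes_decide` needs the rejection cost supplied by the user, while `mi_decide_thresholds` derives its operating point from the data alone, which is the objectivity the abstract attributes to mutual-information classifiers.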

Topics: Computer Science - Information Theory
Year: 2012
DOI identifier: 10.1109/TNNLS.2013.2274799
OAI identifier: oai:arXiv.org:1105.0051
Full text: http://arxiv.org/abs/1105.0051

