
    What are the Differences between Bayesian Classifiers and Mutual-Information Classifiers?

    In this study, both Bayesian classifiers and mutual-information classifiers are examined for binary classifications with or without a reject option. General decision rules that distinguish between error types and reject types are derived for Bayesian classifiers. A formal analysis reveals the parameter redundancy of the cost terms when abstaining classification is enforced. This redundancy implies an intrinsic "non-consistency" problem in interpreting the cost terms. When no data are available for setting the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. In contrast, mutual-information classifiers can derive an objective solution from the given data, one that shows a reasonable balance among error types and reject types. Numerical examples using both types of classifiers are given to confirm the theoretical differences, including extremely class-imbalanced cases. Finally, we briefly summarize the respective application advantages of Bayesian classifiers and mutual-information classifiers.

    Comment: 2nd version: 19 pages, 5 figures, 7 tables. Theorems on Bayesian classifiers are extended to multiple variables. Appendix B, "Tighter bounds between the conditional entropy and Bayesian error in binary classifications", is added, in which Fano's bound is shown numerically to be very tight.
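
    To make the contrast concrete, the following is a minimal Python sketch, not taken from the paper: the Chow-style confidence threshold, the grid search, the function names, and the toy imbalanced data are all illustrative assumptions. It contrasts a cost-parameterized Bayesian reject rule with an alternative that selects the rejection thresholds by maximizing the empirical mutual information between the true labels and the classifier's three-valued output (class 0, class 1, reject), i.e. an objective choice driven by the data rather than by hand-tuned costs.

    ```python
    import numpy as np

    def bayes_reject_rule(posterior, cost_reject=0.2):
        """Chow-style Bayesian rule: reject when max posterior falls below 1 - cost_reject."""
        conf = np.maximum(posterior, 1.0 - posterior)
        decision = (posterior >= 0.5).astype(int)    # 0 or 1
        decision[conf < 1.0 - cost_reject] = -1      # -1 denotes "reject"
        return decision

    def mutual_information(labels, decisions):
        """Empirical mutual information I(T; Y) between true labels and decisions (incl. reject)."""
        mi = 0.0
        for t in np.unique(labels):
            for y in np.unique(decisions):
                p_ty = np.mean((labels == t) & (decisions == y))
                if p_ty > 0:
                    p_t, p_y = np.mean(labels == t), np.mean(decisions == y)
                    mi += p_ty * np.log2(p_ty / (p_t * p_y))
        return mi

    def mi_reject_rule(posterior, labels, grid=np.linspace(0.05, 0.95, 19)):
        """Search (lower, upper) threshold pairs; keep the pair maximizing I(T; Y)."""
        best_mi, best_pair = -np.inf, (0.5, 0.5)
        for lo in grid:
            for hi in grid:
                if lo > hi:
                    continue
                decision = np.where(posterior < lo, 0, np.where(posterior > hi, 1, -1))
                mi = mutual_information(labels, decision)
                if mi > best_mi:
                    best_mi, best_pair = mi, (lo, hi)
        return best_pair, best_mi

    # Toy class-imbalanced example: ~5% positives, noisy confidence scores.
    rng = np.random.default_rng(0)
    labels = (rng.random(2000) < 0.05).astype(int)
    posterior = np.clip(0.7 * labels + 0.3 * rng.random(2000), 0.0, 1.0)

    print(bayes_reject_rule(posterior)[:10])
    print(mi_reject_rule(posterior, labels))
    ```

    In this sketch, the Bayesian rule's behavior hinges entirely on the hand-chosen `cost_reject`, whereas the mutual-information criterion picks its thresholds from the labeled data alone, which is the kind of objective, data-driven balance among error and reject types the abstract attributes to mutual-information classifiers.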