The CM Algorithm for the Maximum Mutual Information Classifications of Unseen Instances
The Maximum Mutual Information (MMI) criterion is different from the Least
Error Rate (LER) criterion: it can reduce the failure to report small-probability
events. This paper introduces the Channels Matching (CM) algorithm for the MMI
classifications of unseen instances. It also introduces some semantic
information methods on which the CM algorithm is based. In the CM algorithm, label
learning is to let the semantic channel match the Shannon channel (Matching I)
whereas classifying is to let the Shannon channel match the semantic channel
(Matching II). We can achieve the MMI classifications by repeating Matching I
and II. For low-dimensional feature spaces, we use parameters only to construct
n likelihood functions for n different classes (rather than to construct
partitioning boundaries, as gradient descent does) and express the boundaries as
numerical values. Without searching in parameter spaces, the computation of the
CM algorithm for low-dimensional feature spaces is very simple and fast. Using
a two-dimensional example, we test the speed and reliability of the CM
algorithm with different initial partitions. For most initial partitions, two
iterations can make the mutual information surpass 99% of the convergent MMI.
The analysis indicates that for high-dimensional feature spaces, we may combine
the CM algorithm with neural networks to improve the MMI classifications for
faster and more reliable convergence.

Comment: 6 pages, 4 figures
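The alternation of Matching I (fit likelihood functions to the current partition) and Matching II (re-partition using the fitted likelihoods) can be illustrated with a minimal one-dimensional sketch. This is an assumption-laden stand-in, not the paper's actual method: it uses two hypothetical Gaussian classes, hard maximum-posterior reassignment in place of the paper's information criterion, and arbitrary made-up parameters (means 0 and 3, initial threshold 1.9).

```python
import math
import random

random.seed(0)
# Two hypothetical 1-D classes (illustrative data, not from the paper)
data = [random.gauss(0.0, 1.0) for _ in range(500)] + \
       [random.gauss(3.0, 1.0) for _ in range(500)]

def gauss_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Arbitrary initial partition at an assumed threshold of 1.9
labels = [0 if x < 1.9 else 1 for x in data]

for it in range(20):
    # Matching I: let the likelihood functions (semantic channel) match the
    # current partition (Shannon channel) by fitting per-class parameters
    params = []
    for y in (0, 1):
        xs = [x for x, l in zip(data, labels) if l == y]
        mu = sum(xs) / len(xs)
        sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs)) or 1e-9
        params.append((mu, sigma, len(xs) / len(data)))
    # Matching II: re-partition by a maximum-posterior rule implied by the
    # fitted likelihoods (a simplified stand-in for the MMI criterion)
    new_labels = [
        max((0, 1), key=lambda y: params[y][2] * gauss_pdf(x, params[y][0], params[y][1]))
        for x in data
    ]
    if new_labels == labels:  # boundary has stopped moving: converged
        break
    labels = new_labels

print(sorted(set(labels)))
```

In this sketch convergence typically takes only a few iterations, consistent with the abstract's observation that two iterations usually reach over 99% of the convergent MMI; the boundary is carried as a numerical assignment of instances rather than a parametric surface.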