In this paper, we present a local information-theoretic approach to
explicitly learning a probabilistic clustering of a discrete random variable. Our
formulation yields a convex maximization problem for which it is NP-hard to
find the global optimum. In order to algorithmically solve this optimization
problem, we propose two relaxations that are solved via gradient ascent and
alternating maximization. Experiments on the MSR Sentence Completion Challenge,
MovieLens 100K, and Reuters-21578 datasets demonstrate that our approach is
competitive with existing techniques and worthy of further investigation.

Comment: Presented at the 56th Annual Allerton Conference on Communication,
Control, and Computing, 2018
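To illustrate one of the two solution strategies named above, the following is a minimal, generic sketch of gradient ascent on a differentiable objective. It is not the authors' algorithm or objective; the toy function `f(x) = -(x - 3)^2` and all names here are illustrative assumptions.

```python
import numpy as np

def gradient_ascent(grad, x0, lr=0.1, steps=100):
    """Plain gradient ascent: repeatedly step in the direction of the gradient.

    grad  -- function returning the gradient of the objective at x
    x0    -- starting point
    lr    -- step size (assumed fixed here for simplicity)
    steps -- number of ascent iterations
    """
    x = x0
    for _ in range(steps):
        x = x + lr * grad(x)  # ascent step: move uphill on the objective
    return x

# Toy example (not from the paper): maximize f(x) = -(x - 3)^2,
# whose gradient is -2 * (x - 3); the maximizer is x = 3.
x_star = gradient_ascent(lambda x: -2.0 * (x - 3.0), x0=0.0)
```

In the paper's setting the iterate would be a probability distribution rather than a scalar, so each step would additionally require a projection (or a reparameterization such as softmax) to keep the iterate on the simplex.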