Modeling document structure is of great importance for discourse analysis and
related applications. The goal of this research is to capture the document
intent structure by modeling documents as a mixture of topic words and
rhetorical words. While the topics remain relatively unchanged throughout a
document, the rhetorical functions of sentences usually change in a certain
order over the course of the discourse. We propose GMM-LDA, a
topic-modeling-based unsupervised Bayesian model, to analyze document intent
structure in combination with order information. Our model is flexible in that
it can incorporate annotations and perform supervised learning. Additionally, entropic
regularization can be introduced to model the significant divergence between
topics and intents. We perform experiments in both unsupervised and supervised
settings; the results show the superiority of our model over several
state-of-the-art baselines.
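
The abstract does not spell out the form of the entropic regularization. The following is a minimal, hypothetical sketch assuming a Jensen-Shannon-style entropy term that rewards divergence between topic-word and intent-word distributions; the names phi, psi, and entropic_divergence_reward are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: vocabulary, number of topics, number of intent (rhetorical) states.
V, K_TOPICS, K_INTENTS = 500, 10, 5

# Hypothetical word distributions: rows of phi are topics, rows of psi are intents.
phi = rng.dirichlet(np.ones(V), size=K_TOPICS)
psi = rng.dirichlet(np.ones(V), size=K_INTENTS)

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def entropic_divergence_reward(phi, psi):
    """Average Jensen-Shannon divergence over all topic/intent pairs.

    A regularizer could add a term proportional to this quantity to the
    training objective so that topic-word and intent-word distributions
    are pushed apart (one plausible reading of the entropic regularization
    mentioned in the abstract)."""
    total = 0.0
    for p in phi:
        for q in psi:
            m = 0.5 * (p + q)
            total += entropy(m) - 0.5 * (entropy(p) + entropy(q))
    return total / (len(phi) * len(psi))

print(entropic_divergence_reward(phi, psi))
```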