MetaLDA: a Topic Model that Efficiently Incorporates Meta information
Besides the text content, documents and their associated words usually come
with rich sets of meta information, such as document categories and
semantic/syntactic word features, like those encoded in word embeddings.
Incorporating such meta information directly into the generative process of
topic models can improve modelling accuracy and topic quality, especially in
the case where the word-occurrence information in the training data is
insufficient. In this paper, we present a topic model, called MetaLDA, which is
able to leverage either document or word meta information, or both of them
jointly. With two data augmentation techniques, we can derive an efficient
Gibbs sampling algorithm, which benefits from the full local conjugacy of the
model. Moreover, the algorithm exploits the sparsity of the meta
information. Extensive experiments on several real-world datasets demonstrate
that our model achieves comparable or improved performance in terms of both
perplexity and topic quality, particularly in handling sparse texts. In
addition, compared with other models using meta information, our model runs
significantly faster.
Comment: To appear in ICDM 201
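The paper's MetaLDA sampler itself is not shown in the abstract; as a rough illustration of the collapsed Gibbs sampling it builds on, here is a plain-LDA sketch without the meta-information priors. All function names, hyperparameter defaults, and the toy corpus are illustrative assumptions, not the authors' implementation:

```python
import random

def gibbs_lda(docs, n_topics, vocab_size, alpha=0.1, beta=0.01,
              n_iters=50, seed=0):
    """Collapsed Gibbs sampling for vanilla LDA (a simplified stand-in
    for MetaLDA's sampler, which additionally ties the Dirichlet priors
    to document/word meta information)."""
    rng = random.Random(seed)
    # Random initial topic assignment for every token, plus count tables.
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [[0] * vocab_size for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # tokens per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                # Remove the token's current assignment from the counts.
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # Resample proportional to (doc-topic) * (topic-word) mass.
                weights = [(ndk[d][t] + alpha) *
                           (nkw[t][w] + beta) / (nk[t] + vocab_size * beta)
                           for t in range(n_topics)]
                r = rng.random() * sum(weights)
                k, acc = n_topics - 1, 0.0
                for t in range(n_topics):
                    acc += weights[t]
                    if acc >= r:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw
```

MetaLDA's point is that when the priors are informed by sparse meta features, the corresponding sampler updates stay cheap, which is why the algorithm scales with the sparsity of the meta information.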
Contextualizing Citations for Scientific Summarization using Word Embeddings and Domain Knowledge
Citation texts are sometimes uninformative, or even inaccurate, on their own;
they need the appropriate context from the referenced paper to
reflect its exact contributions. To address this problem, we propose an
unsupervised model that uses distributed representation of words as well as
domain knowledge to extract the appropriate context from the reference paper.
Evaluation results show the effectiveness of our model by significantly
outperforming the state of the art. We further demonstrate how an effective
contextualization method improves citation-based summarization of
scientific articles.
Comment: SIGIR 201
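The abstract describes ranking reference-paper sentences against a citation text using distributed word representations. A minimal sketch of that idea, scoring sentences by cosine similarity of averaged word vectors (the tiny embedding table, tokenization by whitespace, and all names are illustrative assumptions, not the authors' model):

```python
import math

def avg_vec(words, emb):
    """Average the embedding vectors of the words found in the table."""
    vecs = [emb[w] for w in words if w in emb]
    if not vecs:
        return [0.0] * len(next(iter(emb.values())))
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a)) *
           math.sqrt(sum(y * y for y in b)))
    return num / den if den else 0.0

def extract_context(citation, ref_sentences, emb, k=1):
    """Rank reference-paper sentences by embedding similarity to the
    citation text and return the top-k as its context."""
    cvec = avg_vec(citation.split(), emb)
    scored = sorted(ref_sentences,
                    key=lambda s: cosine(cvec, avg_vec(s.split(), emb)),
                    reverse=True)
    return scored[:k]
```

In the paper, domain knowledge supplements the embeddings; here only the embedding-similarity half of the ranking is sketched.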