6,343 research outputs found
Ask the GRU: Multi-Task Learning for Deep Text Recommendations
In a variety of application domains the content to be recommended to users is
associated with text. This includes research papers, movies with associated
plot summaries, news articles, blog posts, etc. Recommendation approaches based
on latent factor models can be extended naturally to leverage text by employing
an explicit mapping from text to factors. This enables recommendations for new,
unseen content, and may generalize better, since the factors for all items are
produced by a compactly-parametrized model. Previous work has used topic models
or averages of word embeddings for this mapping. In this paper we present a
method leveraging deep recurrent neural networks to encode the text sequence
into a latent vector, specifically gated recurrent units (GRUs) trained
end-to-end on the collaborative filtering task. For the task of scientific
paper recommendation, this yields models with significantly higher accuracy. In
cold-start scenarios, we beat the previous state-of-the-art methods, all of
which ignore word order. Performance is further improved by multi-task learning,
where the text encoder network is trained for a combination of content
recommendation and item metadata prediction. This regularizes the collaborative
filtering model, ameliorating the problem of sparsity of the observed rating
matrix.
Comment: 8 pages
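The text-to-factor mapping this abstract describes can be sketched as a toy NumPy forward pass: a GRU consumes the word embeddings of an item's text, and its final hidden state serves as the item's latent factor vector, which is scored against a user's factors. This is an illustrative sketch with made-up dimensions and random weights, not the paper's trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_encode(embeddings, params):
    """Run a single-layer GRU over a sequence of word embeddings and
    return the final hidden state as the item's latent text vector."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    h = np.zeros(Uz.shape[0])
    for x in embeddings:
        z = sigmoid(Wz @ x + Uz @ h)             # update gate
        r = sigmoid(Wr @ x + Ur @ h)             # reset gate
        h_cand = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
        h = (1 - z) * h + z * h_cand
    return h

# Toy dimensions: 8-dim word embeddings, 5-dim latent factors.
rng = np.random.default_rng(0)
emb_dim, factor_dim, seq_len = 8, 5, 12
params = tuple(rng.normal(scale=0.1, size=(factor_dim, d))
               for d in (emb_dim, factor_dim) * 3)

abstract = rng.normal(size=(seq_len, emb_dim))  # stand-in word embeddings
item_factors = gru_encode(abstract, params)     # text -> item factors
user_factors = rng.normal(size=factor_dim)
score = user_factors @ item_factors             # predicted preference
```

In the paper's setup the GRU weights would be trained end-to-end on the collaborative filtering objective (plus the metadata-prediction auxiliary task), so unseen items get factors directly from their text.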
Joint Modeling of Topics, Citations, and Topical Authority in Academic Corpora
Much of scientific progress stems from previously published findings, but
searching through the vast sea of scientific publications is difficult. We
often rely on metrics of scholarly authority to find prominent authors, but
these authority indices do not differentiate authority by research
topics. We present Latent Topical-Authority Indexing (LTAI) for jointly
modeling the topics, citations, and topical authority in a corpus of academic
papers. Compared to previous models, LTAI differs in two main aspects. First,
it explicitly models the generative process of the citations, rather than
treating the citations as given. Second, it models each author's influence on
citations of a paper based on the topics of the cited papers, as well as the
citing papers. We fit LTAI to four academic corpora: CORA, Arxiv Physics, PNAS,
and Citeseer. We compare the performance of LTAI against baselines ranging
from latent Dirichlet allocation to more advanced models, including the
author-link topic model and the dynamic author-citation topic model. The
results show that LTAI achieves higher accuracy than these models
when predicting words, citations, and authors of publications.
Comment: Accepted by Transactions of the Association for Computational Linguistics (TACL); to appear
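As a rough illustration of what topic-differentiated authority means, a citation-weighted summary over per-paper topic proportions might look like the sketch below. This closed-form average is purely illustrative; LTAI itself is a joint generative model of topics, citations, and authority fit by approximate inference, and all names here are hypothetical.

```python
import numpy as np

def topical_authority(doc_topics, citations, authors, n_authors):
    """Toy topical-authority score: for each author, average the topic
    proportions of their papers weighted by citation counts, so that
    authority varies by topic rather than being a single scalar index.
    (Illustrative only; not the LTAI inference procedure.)"""
    n_topics = doc_topics.shape[1]
    auth = np.zeros((n_authors, n_topics))
    total = np.zeros(n_authors)
    for topics, cites, author_ids in zip(doc_topics, citations, authors):
        for a in author_ids:
            auth[a] += cites * topics
            total[a] += cites
    nonzero = total > 0
    auth[nonzero] /= total[nonzero, None]
    return auth

# Toy corpus: 3 papers, 2 topics; per-paper topic proportions,
# citation counts, and author lists (author ids 0 and 1).
doc_topics = np.array([[0.9, 0.1],
                      [0.5, 0.5],
                      [0.1, 0.9]])
citations = np.array([10.0, 1.0, 5.0])
authors = [[0], [0, 1], [1]]
authority = topical_authority(doc_topics, citations, authors, n_authors=2)
# Author 0 comes out authoritative on topic 0, author 1 on topic 1.
```

Unlike this after-the-fact average, LTAI models the generative process of the citations themselves, so authority estimates and topic assignments inform one another during inference.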