Large-Scale User Modeling with Recurrent Neural Networks for Music Discovery on Multiple Time Scales
The amount of content on online music streaming platforms is immense, and
most users only access a tiny fraction of this content. Recommender systems are
the application of choice to open up the collection to these users.
Collaborative filtering has the disadvantage that it relies on explicit
ratings, which are often unavailable, and generally disregards the temporal
nature of music consumption. On the other hand, item co-occurrence algorithms,
such as the recently introduced word2vec-based recommenders, are typically left
without an effective user representation. In this paper, we present a new
approach that models users with recurrent neural networks which sequentially
process the items a user has consumed, each represented by an embedding of any
type together with other context features. In this way we obtain semantically
rich user representations,
which capture a user's musical taste over time. Our experimental analysis on
large-scale user data shows that our model can be used to predict future songs
a user will likely listen to, in both the short and the long term.
Comment: Author pre-print version, 20 pages, 6 figures, 4 tables
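The abstract leaves the architecture unspecified; as a rough, hypothetical sketch of the general idea (a recurrent network consuming item embeddings plus context features and emitting a user representation), one could write something like the following in PyTorch. All layer sizes and names are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class RNNUserModel(nn.Module):
    """Sketch: encode a user's listening history into a taste vector.

    Assumed dimensions; the abstract does not specify the architecture.
    """
    def __init__(self, item_dim=128, context_dim=16, hidden_dim=256,
                 num_items=100_000):
        super().__init__()
        # One step per consumed item: a pre-trained item embedding
        # (e.g. word2vec-style) concatenated with context features.
        self.rnn = nn.GRU(item_dim + context_dim, hidden_dim, batch_first=True)
        # Score all candidate items against the final user state.
        self.out = nn.Linear(hidden_dim, num_items)

    def forward(self, item_embs, context):
        # item_embs: (batch, seq, item_dim); context: (batch, seq, context_dim)
        x = torch.cat([item_embs, context], dim=-1)
        _, h = self.rnn(x)            # h: (1, batch, hidden_dim)
        user_repr = h.squeeze(0)      # the user's taste representation
        return self.out(user_repr)    # logits over possible next songs
```

The final hidden state here doubles as the "semantically rich user representation"; training it to predict the next consumed item is one plausible way to realize the short- and long-term prediction task the abstract describes.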
Recurrent Latent Variable Networks for Session-Based Recommendation
In this work, we attempt to ameliorate the impact of data sparsity in the
context of session-based recommendation. Specifically, we seek to devise a
machine learning mechanism capable of extracting subtle and complex underlying
temporal dynamics in the observed session data, so as to inform the
recommendation algorithm. To this end, we improve upon systems that utilize
deep learning techniques with recurrently connected units; we do so by adopting
concepts from the field of Bayesian statistics, namely variational inference.
Our proposed approach consists in treating the network recurrent units as
stochastic latent variables with a prior distribution imposed over them. On
this basis, we proceed to infer corresponding posteriors; these can be used for
prediction and recommendation generation, in a way that accounts for the
uncertainty in the available sparse training data. To allow our approach to
scale easily to large real-world datasets, we perform inference under an
approximate amortized variational inference (AVI) setup, whereby the learned
posteriors are parameterized via (conventional) neural networks. We perform an
extensive experimental evaluation of our approach using challenging benchmark
datasets, and illustrate its superiority over existing state-of-the-art
techniques.
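As a concrete illustration of the pattern described above (recurrent units treated as stochastic latent variables, with posteriors amortized by conventional networks), here is a minimal PyTorch-flavoured sketch. It shows only the generic reparameterized-Gaussian mechanism, not the paper's actual model; all structure and names are assumptions.

```python
import torch
import torch.nn as nn

class VariationalRecurrentCell(nn.Module):
    """Sketch: a recurrent cell whose state carries a stochastic latent
    variable z_t, trained with amortized variational inference."""
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        self.cell = nn.GRUCell(input_dim + latent_dim, latent_dim)
        # Amortized posterior q(z_t | h_t, x_t): a conventional network
        # outputting the mean and log-variance of a diagonal Gaussian.
        self.post = nn.Linear(latent_dim + input_dim, 2 * latent_dim)

    def forward(self, x_t, h_prev, z_prev):
        h_t = self.cell(torch.cat([x_t, z_prev], dim=-1), h_prev)
        mu, logvar = self.post(torch.cat([h_t, x_t], dim=-1)).chunk(2, dim=-1)
        # Reparameterization trick: sample z_t while keeping gradients.
        z_t = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL divergence to a standard-normal prior; summed into the training
        # loss so predictions reflect uncertainty in the sparse session data.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return h_t, z_t, kl
```

At recommendation time one can either sample z_t or use the posterior mean; the accumulated KL terms are what force the recurrent state to behave as a proper latent variable rather than a deterministic hidden unit.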
Character-level Recurrent Neural Networks in Practice: Comparing Training and Sampling Schemes
Recurrent neural networks are nowadays successfully used in an abundance of
applications, going from text, speech and image processing to recommender
systems. Backpropagation through time is the algorithm that is commonly used to
train these networks on specific tasks. Many deep learning frameworks have
their own implementation of training and sampling procedures for recurrent
neural networks, while there are in fact multiple other possibilities to choose
from and other parameters to tune. The existing literature very often
overlooks these choices. In this paper we therefore give an overview of possible
training and sampling schemes for character-level recurrent neural networks to
solve the task of predicting the next token in a given sequence. We test these
different schemes on a variety of datasets, neural network architectures and
parameter settings, and formulate a number of take-home recommendations. The
choice of training and sampling scheme turns out to be subject to a number of
trade-offs, such as training stability, sampling time, model performance and
implementation effort, but is largely independent of the data. Perhaps the most
surprising result is that transferring hidden states for correctly initializing
the model on subsequences often leads to unstable training behavior depending
on the dataset.
Comment: 23 pages, 11 figures, 4 tables
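The hidden-state transfer mentioned in the last sentence is usually implemented as truncated backpropagation through time with the state detached and carried across consecutive subsequences. A minimal, self-contained PyTorch sketch of that scheme follows; the toy data and sizes are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

# Toy character-level setup; only the state-transfer pattern matters here.
vocab_size, hidden_dim, seq_len = 100, 256, 64
emb = nn.Embedding(vocab_size, hidden_dim)
rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
head = nn.Linear(hidden_dim, vocab_size)
opt = torch.optim.Adam([*emb.parameters(), *rnn.parameters(),
                        *head.parameters()])

data = torch.randint(0, vocab_size, (1, 10 * seq_len + 1))  # stand-in corpus
h = None  # hidden state transferred between consecutive subsequences
for i in range(0, data.size(1) - seq_len, seq_len):
    x = data[:, i:i + seq_len]          # current subsequence
    y = data[:, i + 1:i + seq_len + 1]  # next-token targets
    out, h = rnn(emb(x), h)             # initialize from the carried state
    loss = nn.functional.cross_entropy(head(out).transpose(1, 2), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    h = h.detach()  # keep the state, truncate the gradient across chunks
```

The alternative scheme resets h to None (or zeros) at every subsequence; the trade-off between the two, and its dependence on the dataset, is exactly what the instability result above refers to.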