339 research outputs found
Recurrent Neural Networks with Top-k Gains for Session-based Recommendations
RNNs have been shown to be excellent models for sequential data and in
particular for data that is generated by users in a session-based manner. The
use of RNNs provides impressive performance benefits over classical methods in
session-based recommendations. In this work we introduce novel ranking loss
functions tailored to RNNs in the recommendation setting. The improved
performance of these losses over alternatives, along with further tricks and
refinements described in this work, allows for an overall improvement of up to
35% in terms of MRR and Recall@20 over previous session-based RNN solutions and
up to 53% over classical collaborative filtering approaches. Unlike data
augmentation-based improvements, our method does not increase training times
significantly. We further demonstrate the performance gain of the RNN over
baselines in an online A/B test. Comment: CIKM'18, authors' version
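
To make the loss design concrete, here is a minimal sketch of a BPR-style pairwise ranking loss with softmax weighting over sampled negative items, in the spirit of the losses described above; the tensor shapes, the regularization term, and the function name are assumptions for illustration, not the paper's exact formulation:

    import torch

    def bpr_max_loss(pos_scores, neg_scores, reg=1.0):
        # pos_scores: (batch,) scores of the target (next) items
        # neg_scores: (batch, n_neg) scores of sampled negative items
        # Softmax over negatives concentrates the loss on the
        # highest-scoring (hardest) negatives.
        weights = torch.softmax(neg_scores, dim=1)
        diff = torch.sigmoid(pos_scores.unsqueeze(1) - neg_scores)
        loss = -torch.log((weights * diff).sum(dim=1) + 1e-10)
        # Penalize large negative scores so they do not drift upward.
        reg_term = reg * (weights * neg_scores ** 2).sum(dim=1)
        return (loss + reg_term).mean()
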
Top-N Recommendation on Graphs
Recommender systems play an increasingly important role in online
applications to help users find what they need or prefer. Collaborative
filtering algorithms that generate predictions by analyzing the user-item
rating matrix perform poorly when the matrix is sparse. To alleviate this
problem, this paper proposes a simple recommendation algorithm that fully
exploits the similarity information among users and items and the intrinsic
structural information of the user-item matrix. The proposed method constructs
a new representation which preserves affinity and structure information in the
user-item rating matrix and then performs the recommendation task. To capture
proximity information about users and items, two graphs are constructed.
The manifold learning idea is used to constrain the new representation to be
smooth on these graphs, so as to enforce user and item proximities. Our model
is formulated as a convex optimization problem, which requires solving only
the well-known Sylvester equation. We carry out extensive empirical
evaluations on six benchmark datasets to show the effectiveness of this
approach. Comment: CIKM 2016
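
Since the optimization reduces to a Sylvester equation AX + XB = C, the solve is a single library call. A minimal sketch with toy data, assuming the graph Laplacians Lu and Li come from the user and item similarity graphs and assuming an objective of the stated smoothness-regularized form:

    import numpy as np
    from scipy.linalg import solve_sylvester

    R = np.random.rand(5, 4)   # toy user-item rating matrix
    Lu = np.eye(5)             # placeholder user-graph Laplacian (assumption)
    Li = np.eye(4)             # placeholder item-graph Laplacian (assumption)

    # For an objective of the form
    #   min_X ||X - R||_F^2 + a*tr(X^T Lu X) + b*tr(X Li X^T),
    # setting the gradient to zero gives (I + a*Lu) X + X (b*Li) = R,
    # which is a Sylvester equation A X + X B = C.
    a, b = 0.1, 0.1
    X = solve_sylvester(np.eye(5) + a * Lu, b * Li, R)
    print(X.shape)  # (5, 4): the new representation used for recommendation
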
Dynamic Matrix Factorization with Priors on Unknown Values
Advanced and effective collaborative filtering methods based on explicit
feedback assume that unknown ratings do not follow the same model as the
observed ones (not missing at random). In this work, we build on this
assumption, and introduce a novel dynamic matrix factorization framework that
allows setting an explicit prior on unknown values. When new ratings, users, or
items enter the system, we can update the factorization in time independent of
the size of the data (number of users, items, and ratings). Hence, we can quickly
recommend items even to very recent users. We test our methods on three large
datasets, including two very sparse ones, in static and dynamic conditions. In
each case, we outperform state-of-the-art matrix factorization methods that do
not use a prior on unknown ratings. Comment: in the Proceedings of the 21st ACM
SIGKDD Conference on Knowledge Discovery and Data Mining 2015
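
A minimal sketch of the core idea, written as weighted ALS where unobserved cells are pulled toward an explicit prior value with low confidence; the function name, the prior value, and the update scheme are illustrative assumptions rather than the paper's exact algorithm:

    import numpy as np

    def als_with_prior(R, mask, k=10, prior=2.5, alpha=0.1, reg=0.1, iters=10):
        # R: ratings matrix, mask: 1 where a rating is observed.
        # Unknown cells are treated as the prior value with confidence
        # alpha instead of being ignored.
        n_users, n_items = R.shape
        U = 0.1 * np.random.rand(n_users, k)
        V = 0.1 * np.random.rand(n_items, k)
        T = mask * R + (1 - mask) * prior       # targets: rating or prior
        W = mask + alpha * (1 - mask)           # confidence per cell
        for _ in range(iters):
            for u in range(n_users):
                Wu = np.diag(W[u])
                U[u] = np.linalg.solve(V.T @ Wu @ V + reg * np.eye(k),
                                       V.T @ Wu @ T[u])
            for i in range(n_items):
                Wi = np.diag(W[:, i])
                V[i] = np.linalg.solve(U.T @ Wi @ U + reg * np.eye(k),
                                       U.T @ Wi @ T[:, i])
        return U, V
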
Fast Matrix Factorization for Online Recommendation with Implicit Feedback
This paper contributes improvements to both the effectiveness and efficiency
of Matrix Factorization (MF) methods for implicit feedback. We highlight two
critical issues of existing works. First, due to the large space of unobserved
feedback, most existing works resort to assigning a uniform weight to the missing
data to reduce computational complexity. However, such a uniform assumption is
invalid in real-world settings. Second, most methods are also designed in an
offline setting and fail to keep up with the dynamic nature of online data. We
address the above two issues in learning MF models from implicit feedback. We
first propose to weight the missing data based on item popularity, which is
more effective and flexible than the uniform-weight assumption. However, such a
non-uniform weighting poses an efficiency challenge in learning the model. To
address this, we specifically design a new learning algorithm based on the
element-wise Alternating Least Squares (eALS) technique, for efficiently
optimizing an MF model with variably-weighted missing data. We exploit this
efficiency to then seamlessly devise an incremental update strategy that
instantly refreshes an MF model given new feedback. Through comprehensive
experiments on two public datasets in both offline and online protocols, we
show that our eALS method consistently outperforms state-of-the-art implicit MF
methods. Our implementation is available at
https://github.com/hexiangnan/sigir16-eals. Comment: 10 pages, 8 figures
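
A minimal sketch of popularity-aware weighting for the missing data, the first of the two contributions above; the weight formula and the hyperparameter names (c0, alpha) are assumptions for illustration:

    import numpy as np

    def popularity_weights(item_counts, c0=512, alpha=0.5):
        # Confidence weights for *missing* entries, per item: an
        # unobserved popular item is more likely a true non-preference,
        # so it receives a larger weight as a negative example.
        f = item_counts.astype(float) ** alpha
        return c0 * f / f.sum()

    counts = np.array([100, 10, 1, 50])   # toy interaction counts
    print(popularity_weights(counts))     # larger weight for popular items
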
Habitat stability, predation risk and 'memory syndromes'
Habitat stability and predation pressure are thought to be major drivers in the evolutionary maintenance of behavioural syndromes, with trait covariance only occurring within specific habitats. However, animals also exhibit behavioural plasticity, often through memory formation. Memory formation across traits may be linked, with covariance in memory traits (memory syndromes) selected under particular environmental conditions. This study tests whether the pond snail, Lymnaea stagnalis, demonstrates consistency among memory traits (‘memory syndrome’) related to threat avoidance and foraging. We used eight populations originating from three different habitat types: i) laboratory populations (stable habitat, predator-free); ii) river populations (fairly stable habitat, fish predation); and iii) ditch populations (unstable habitat, invertebrate predation). At a population level, there was a negative relationship between memories related to threat avoidance and food selectivity, but no consistency within habitat type. At an individual level, covariance between memory traits was dependent on habitat. Laboratory populations showed no covariance among memory traits, whereas river populations showed a positive correlation between food memories, and ditch populations demonstrated a negative relationship between threat memory and food memories. Therefore, selection pressures among habitats appear to act independently on memory trait covariation at an individual level and the average response within a population.
Exploring Deep Space: Learning Personalized Ranking in a Semantic Space
Recommender systems leverage both content and user interactions to generate
recommendations that fit users' preferences. The recent surge of interest in
deep learning presents new opportunities for exploiting these two sources of
information. To recommend items we propose to first learn a user-independent
high-dimensional semantic space in which items are positioned according to
their substitutability, and then learn a user-specific transformation function
to transform this space into a ranking according to the user's past
preferences. An advantage of the proposed architecture is that it can be used
to effectively recommend items using either content that describes the items or
user-item ratings. We show that this approach significantly outperforms
state-of-the-art recommender systems on the MovieLens 1M dataset. Comment: 6 pages, RecSys 2016 RSDL workshop
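
A minimal sketch of the two-stage idea, with a ridge-regression stand-in for the learned user-specific transformation; the embedding source and all names are assumptions, not the paper's architecture:

    import numpy as np

    rng = np.random.default_rng(0)
    n_items, dim = 1000, 64
    item_vecs = rng.normal(size=(n_items, dim))  # stand-in for the pretrained,
                                                 # user-independent semantic space

    def fit_user_transform(liked, disliked, reg=1.0):
        # Fit a user-specific weight vector w so that item_vecs @ w is
        # high for liked items and low for disliked ones.
        X = item_vecs[np.concatenate([liked, disliked])]
        y = np.concatenate([np.ones(len(liked)), -np.ones(len(disliked))])
        return np.linalg.solve(X.T @ X + reg * np.eye(dim), X.T @ y)

    w = fit_user_transform(np.arange(10), np.arange(10, 20))
    top10 = np.argsort(item_vecs @ w)[::-1][:10]  # personalized top-N ranking
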
Attentive Neural Architecture Incorporating Song Features For Music Recommendation
Recommender Systems are an integral part of music sharing platforms. Often
the aim of these systems is to increase the time a user spends on the
platform, and hence they have high commercial value. The systems which aim at
increasing the average time a user spends on the platform often need to
recommend songs which the user might want to listen to next at each point in
time. This is different from recommendation systems which try to predict the
item which might be of interest to the user at some point in the user's lifetime
but not necessarily in the very near future. Prediction of the next song the
user might like requires modeling the user's interests at a given point in
time. Attentive neural networks have exploited the
sequence in which the items were selected by the user to model the implicit
short-term interests of the user for the task of next item prediction.
However, we argue that the features of the songs occurring in the sequence
could also convey important information about the user's short-term interest
that the item identities alone cannot. In this direction, we propose a novel attentive neural
architecture which in addition to the sequence of items selected by the user,
uses the features of these items to better learn the user's short-term
preferences and recommend the next song to the user. Comment: Accepted as a paper
at the 12th ACM Conference on Recommender Systems (RecSys 18)
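
A minimal sketch of an attention layer that mixes item-ID embeddings with song feature vectors before scoring the next song; layer sizes and names are assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class FeatureAwareAttention(nn.Module):
        def __init__(self, n_items, feat_dim, emb_dim=64):
            super().__init__()
            self.item_emb = nn.Embedding(n_items, emb_dim)
            self.feat_proj = nn.Linear(feat_dim, emb_dim)  # song features
            self.attn = nn.Linear(emb_dim, 1)
            self.out = nn.Linear(emb_dim, n_items)         # next-song scores

        def forward(self, item_ids, feats):
            # item_ids: (batch, seq); feats: (batch, seq, feat_dim)
            h = self.item_emb(item_ids) + self.feat_proj(feats)
            scores = self.attn(torch.tanh(h)).squeeze(-1)
            weights = torch.softmax(scores, dim=1).unsqueeze(-1)
            session = (weights * h).sum(dim=1)   # short-term interest vector
            return self.out(session)             # logits over candidate songs
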
CoNet: Collaborative Cross Networks for Cross-Domain Recommendation
The cross-domain recommendation technique is an effective way of alleviating
the data sparsity issue in recommender systems by leveraging knowledge from
relevant domains. Transfer learning is a class of algorithms underlying these
techniques. In this paper, we propose a novel transfer learning approach for
cross-domain recommendation by using neural networks as the base model. In
contrast to matrix-factorization-based cross-domain techniques, our method
performs deep transfer learning, which can learn complex user-item interaction
relationships. We assume that hidden layers in two base networks are connected
by cross mappings, leading to the collaborative cross networks (CoNet). CoNet
enables dual knowledge transfer across domains by introducing cross connections
from one base network to another and vice versa. CoNet is realized in
multi-layer feedforward networks by adding dual connections and joint loss
functions, which can be trained efficiently by back-propagation. The proposed
model is thoroughly evaluated on two large real-world datasets. It outperforms
baselines by relative improvements of 7.84% in NDCG. We demonstrate the
necessity of adaptively selecting representations to transfer. Our model needs
tens of thousands fewer training examples than non-transfer methods while
maintaining competitive performance. Comment: Deep transfer learning for recommender systems
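
A minimal sketch of one pair of coupled hidden layers with dual cross connections; the layer widths and module names are assumptions, and the real model stacks several such layers and trains both domains with a joint loss via back-propagation:

    import torch
    import torch.nn as nn

    class CrossLayer(nn.Module):
        # One layer of two coupled feedforward networks: each domain's
        # next hidden state mixes its own transformation with a linear
        # mapping of the other domain's hidden state (dual transfer).
        def __init__(self, dim):
            super().__init__()
            self.fc_a = nn.Linear(dim, dim)                  # within domain A
            self.fc_b = nn.Linear(dim, dim)                  # within domain B
            self.cross_ab = nn.Linear(dim, dim, bias=False)  # A -> B transfer
            self.cross_ba = nn.Linear(dim, dim, bias=False)  # B -> A transfer

        def forward(self, h_a, h_b):
            new_a = torch.relu(self.fc_a(h_a) + self.cross_ba(h_b))
            new_b = torch.relu(self.fc_b(h_b) + self.cross_ab(h_a))
            return new_a, new_b
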