Conformative Filtering for Implicit Feedback Data
Implicit feedback is the simplest form of user feedback that can be used for
item recommendation. It is easy to collect and is domain independent. However,
there is a lack of negative examples. Previous work tackles this problem by
assuming that users are not interested, or less interested, in the unconsumed
items. Those assumptions are often severely violated, since non-consumption can
be due to factors such as unawareness or lack of resources. Therefore,
non-consumption by a user does not always indicate disinterest or
irrelevance. In this paper, we propose a novel method called Conformative
Filtering (CoF) to address the issue. The motivating observation is that if
there is a large group of users who share the same taste and none of them have
consumed an item before, then it is likely that the item is not of interest to
the group. We perform multidimensional clustering on implicit feedback data
using hierarchical latent tree analysis (HLTA) to identify user `taste' groups
and make recommendations for a user based on her memberships in the groups and
on the past behavior of the groups. Experiments on two real-world datasets from
different domains show that CoF has superior performance compared to several
common baselines.
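The group-based scoring idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper name `cof_scores`, the toy data, and the use of a precomputed soft-membership matrix are all assumptions (in the paper the taste groups come from HLTA, which is not reproduced here).

```python
import numpy as np

def cof_scores(feedback, memberships):
    """Score items for each user from taste-group behavior (illustrative sketch).

    feedback:    (n_users, n_items) binary implicit-feedback matrix.
    memberships: (n_users, n_groups) soft user-to-group memberships,
                 a stand-in for the HLTA posteriors the paper uses.
    """
    # Aggregate each group's past behavior: the fraction of the group's
    # (membership-weighted) mass that consumed each item.
    group_mass = memberships.sum(axis=0, keepdims=True).T            # (n_groups, 1)
    group_pref = memberships.T @ feedback / np.maximum(group_mass, 1e-9)

    # A user's score for an item combines the preferences of the groups
    # she belongs to; an item that none of her groups has consumed
    # scores near zero, acting as an inferred negative.
    return memberships @ group_pref                                  # (n_users, n_items)

# Toy example (hypothetical data): two hard taste groups of two users each.
M = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])   # user-group memberships
F = np.array([[1., 0., 0.], [1., 0., 0.],
              [0., 1., 0.], [0., 1., 0.]])               # implicit feedback
scores = cof_scores(F, M)
```

Here item 2, consumed by no one in either group, scores zero for every user, while each group's consumed item scores highly for that group's members.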
Learning the Structure of Auto-Encoding Recommenders
Autoencoder recommenders have recently shown state-of-the-art performance in
the recommendation task due to their ability to model non-linear item
relationships effectively. However, existing autoencoder recommenders use
fully-connected neural network layers and do not employ structure learning.
This can lead to inefficient training, especially when the data is sparse as
commonly found in collaborative filtering. This in turn lowers generalization
ability and reduces performance. In this paper, we introduce
structure learning for autoencoder recommenders by taking advantage of the
inherent item groups present in the collaborative filtering domain. Due to the
nature of items in general, we know that certain items are more related to each
other than to other items. Based on this, we propose a method that first learns
groups of related items and then uses this information to determine the
connectivity structure of an auto-encoding neural network. This results in a
network that is sparsely connected. This sparse structure can be viewed as a
prior that guides the network training. Empirically we demonstrate that the
proposed structure learning enables the autoencoder to converge to a local
optimum with a much smaller spectral norm and generalization error bound than
the fully-connected network. The resultant sparse network considerably
outperforms the state-of-the-art methods like \textsc{Mult-vae/Mult-dae} on
multiple benchmark datasets even when the same number of parameters and FLOPs
are used. It also has better cold-start performance.

Comment: Proceedings of The Web Conference 202
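The group-to-connectivity step of the second abstract can be sketched as a binary mask over an encoder's weight matrix. This is a hedged illustration under stated assumptions: the function name `group_connectivity_mask`, the hard item groups supplied directly, and the fixed number of hidden units per group are all hypothetical (the paper learns the groups first; that step is not reproduced here).

```python
import numpy as np

def group_connectivity_mask(item_groups, n_hidden_per_group):
    """Build a sparse encoder connectivity mask from item groups (sketch).

    item_groups: list of item-index lists, standing in for the learned
                 groups of related items.
    Returns a binary (n_items, n_hidden) mask in which each block of
    hidden units connects only to the items of its own group.
    """
    n_items = max(i for g in item_groups for i in g) + 1
    n_hidden = n_hidden_per_group * len(item_groups)
    mask = np.zeros((n_items, n_hidden))
    for g_idx, items in enumerate(item_groups):
        cols = slice(g_idx * n_hidden_per_group, (g_idx + 1) * n_hidden_per_group)
        mask[items, cols] = 1.0  # wire this group's items to its hidden block
    return mask

# Toy example: items {0,1} and {2,3} form two groups, 2 hidden units each.
mask = group_connectivity_mask([[0, 1], [2, 3]], n_hidden_per_group=2)
```

During training, the mask would be applied elementwise to the dense weight matrix (e.g. `h = relu(x @ (W * mask))`), so the network trains as if it were sparsely connected, which is the prior the abstract describes.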