
    Neural Networks for Collaborative Filtering

    Recommender systems are an integral part of almost all modern e-commerce companies. They contribute significantly to overall customer satisfaction by helping the user discover new and relevant items, which consequently leads to higher sales and stronger customer retention. It is, therefore, not surprising that large e-commerce shops like Amazon or streaming platforms like Netflix and Spotify even use multiple recommender systems to further increase user engagement. Finding the most relevant items for each user is a difficult task that depends critically on the available user feedback information. However, most users typically interact with products only through noisy implicit feedback, such as clicks or purchases, rather than providing explicit information about their preferences, such as product ratings. This usually makes large amounts of behavioural user data necessary to infer accurate user preferences. One popular approach to making the most of both forms of feedback is collaborative filtering, whose main idea is to compare individual user behaviour with the behaviour of all known users. Although there are many different collaborative filtering techniques, matrix factorization models are among the most successful ones. In contrast, while neural networks are nowadays the state-of-the-art method for tasks such as image recognition or natural language processing, they are still not very popular for collaborative filtering tasks. The main focus of this thesis is therefore the derivation of multiple wide neural network architectures that mimic and extend matrix factorization models for various collaborative filtering problems, and the insights this yields into the connection between these models. The basis of the proposed architectures is a wide and shallow feedforward neural network, which is first established for rating prediction tasks on explicit feedback datasets. These networks consist of large input and output layers, which allow them to capture user and item representations similar to matrix factorization models. By deriving all weight updates and comparing the structure of both models, it is proven that a simplified version of the proposed network can mimic common matrix factorization models: a result that, as far as we know, has not been shown in this form before. Additionally, various extensions are thoroughly evaluated, and the new findings of this evaluation can easily be transferred to other matrix factorization models. This neural network architecture can be extended to personalized ranking tasks on implicit feedback datasets, where products must be ranked according to individual preferences using only the provided implicit feedback. One of the most successful and influential approaches for personalized ranking is Bayesian Personalized Ranking, which attempts to learn pairwise item rankings and can also be used in combination with matrix factorization models. It is shown how the introduction of an additional ranking layer forces the network to learn pairwise item rankings. In addition, similarities between this novel neural network architecture and a matrix factorization model trained with Bayesian Personalized Ranking are proven. To the best of our knowledge, this is the first time that these connections have been shown. The state-of-the-art performance of this network is demonstrated in a detailed evaluation.
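    As an illustration of the equivalence sketched above, the following is a minimal, hypothetical PyTorch sketch (not the thesis code; all class and variable names are placeholders) of a wide and shallow feedforward network whose prediction for a one-hot user input reduces to a dot product of user and item factors plus an item bias, i.e. the same functional form as a biased matrix factorization model.

        import torch
        import torch.nn as nn

        class WideShallowCF(nn.Module):
            def __init__(self, num_users: int, num_items: int, latent_dim: int = 32):
                super().__init__()
                # "Wide" input layer: one row of latent factors per user.
                self.user_factors = nn.Linear(num_users, latent_dim, bias=False)
                # "Wide" output layer: one set of latent factors (plus bias) per item.
                self.item_scores = nn.Linear(latent_dim, num_items, bias=True)

            def forward(self, one_hot_users: torch.Tensor) -> torch.Tensor:
                # A one-hot user vector turns the first layer into an embedding lookup,
                # so each output unit computes u_user . v_item + b_item, as in MF.
                return self.item_scores(self.user_factors(one_hot_users))

        # Usage: scores over all 7 items for user 3 out of 5 users.
        model = WideShallowCF(num_users=5, num_items=7)
        x = torch.zeros(1, 5)
        x[0, 3] = 1.0
        print(model(x).shape)  # torch.Size([1, 7])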
    The most comprehensive feedback datasets consist of a mixture of explicit as well as implicit feedback information. Here, the goal is to predict whether a user will like an item, similar to rating prediction tasks, even if this user has never given any explicit feedback at all: a problem that has not yet been covered in the collaborative filtering literature. The network for this task is composed of two networks: one for the explicit and one for the implicit feedback. Additional item features, which capture all the information necessary to rank items, are learned from the implicit feedback and are then used to improve the explicit feedback predictions. Both parts of this combined network have different optimization goals, are trained simultaneously and, therefore, influence each other. A detailed evaluation shows that this approach improves the network's overall predictive performance, especially for ranking metrics.
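    One way such a combined network could be organized is sketched below; this is only an illustration under assumed names and layer sizes, not the architecture from the thesis. Two heads share the same item features: one is trained on explicit ratings with a squared-error loss, the other on implicit feedback with a BPR-style pairwise ranking loss, and both losses are optimized simultaneously.

        import torch
        import torch.nn as nn

        class JointFeedbackCF(nn.Module):
            def __init__(self, num_users, num_items, dim=32):
                super().__init__()
                self.user_emb = nn.Embedding(num_users, dim)
                self.item_emb = nn.Embedding(num_items, dim)   # item features shared by both heads
                self.rating_head = nn.Linear(2 * dim, 1)       # explicit-feedback part
                self.rank_head = nn.Linear(2 * dim, 1)         # implicit-feedback part

            def forward(self, users, items):
                h = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
                return self.rating_head(h).squeeze(-1), self.rank_head(h).squeeze(-1)

        model = JointFeedbackCF(num_users=100, num_items=500)
        users = torch.tensor([0, 1])
        pos, neg = torch.tensor([10, 20]), torch.tensor([30, 40])   # observed vs. unobserved items
        ratings = torch.tensor([4.0, 2.5])

        pred_rating, pos_score = model(users, pos)
        _, neg_score = model(users, neg)
        explicit_loss = nn.functional.mse_loss(pred_rating, ratings)
        # BPR-style pairwise loss: observed items should score higher than unobserved ones.
        implicit_loss = -nn.functional.logsigmoid(pos_score - neg_score).mean()
        (explicit_loss + implicit_loss).backward()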

    Neural Collaborative Filtering

    In recent years, deep neural networks have yielded immense success on speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks on recommender systems has received relatively less scrutiny. In this work, we strive to develop techniques based on neural networks to tackle the key problem in recommendation -- collaborative filtering -- on the basis of implicit feedback. Although some recent work has employed deep learning for recommendation, it primarily used deep learning to model auxiliary information, such as textual descriptions of items and acoustic features of music. When it comes to modeling the key factor in collaborative filtering -- the interaction between user and item features -- these methods still resorted to matrix factorization and applied an inner product on the latent features of users and items. By replacing the inner product with a neural architecture that can learn an arbitrary function from data, we present a general framework named NCF, short for Neural network-based Collaborative Filtering. NCF is generic and can express and generalize matrix factorization under its framework. To supercharge NCF modelling with non-linearities, we propose to leverage a multi-layer perceptron to learn the user-item interaction function. Extensive experiments on two real-world datasets show significant improvements of our proposed NCF framework over the state-of-the-art methods. Empirical evidence shows that using deeper layers of neural networks offers better recommendation performance. Comment: 10 pages, 7 figures.
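    A condensed sketch of the core NCF idea follows (written from the abstract above, not taken from the authors' released code; layer sizes and names are assumptions): the inner product between user and item latent vectors is replaced by a multi-layer perceptron that learns the interaction function from data.

        import torch
        import torch.nn as nn

        class NCFSketch(nn.Module):
            def __init__(self, num_users, num_items, dim=16, hidden=(32, 16, 8)):
                super().__init__()
                self.user_emb = nn.Embedding(num_users, dim)
                self.item_emb = nn.Embedding(num_items, dim)
                layers, in_dim = [], 2 * dim
                for h in hidden:
                    layers += [nn.Linear(in_dim, h), nn.ReLU()]
                    in_dim = h
                self.mlp = nn.Sequential(*layers)
                self.out = nn.Linear(in_dim, 1)

            def forward(self, users, items):
                # MLP over concatenated embeddings instead of a fixed inner product.
                x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
                return torch.sigmoid(self.out(self.mlp(x))).squeeze(-1)

        model = NCFSketch(num_users=1000, num_items=2000)
        print(model(torch.tensor([1, 2]), torch.tensor([3, 4])))  # interaction probabilities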

    Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking

    This paper proposes a new neural architecture for collaborative ranking with implicit feedback. Our model, LRML (Latent Relational Metric Learning), is a novel metric learning approach for recommendation. More specifically, instead of simple push-pull mechanisms between user and item pairs, we propose to learn latent relations that describe each user-item interaction. This helps to alleviate the potential geometric inflexibility of existing metric learning approaches. It enables not only better performance but also a greater extent of modeling capability, allowing our model to scale to a larger number of interactions. To do so, we employ an augmented memory module and learn to attend over these memory blocks to construct latent relations. The memory-based attention module is controlled by the user-item interaction, making the learned relation vector specific to each user-item pair. Hence, this can be interpreted as learning an exclusive and optimal relational translation for each user-item interaction. The proposed architecture demonstrates state-of-the-art performance across multiple recommendation benchmarks. LRML outperforms other metric learning models by 6%-7.5% in terms of Hits@10 and nDCG@10 on large datasets such as Netflix and MovieLens 20M. Moreover, qualitative studies demonstrate evidence that our proposed model is able to infer and encode explicit sentiment, temporal and attribute information despite being trained only on implicit feedback. As such, this ascertains the ability of LRML to uncover hidden relational structure within implicit datasets. Comment: WWW 2018.
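    A rough sketch of the scoring mechanism described above (the memory size, dimensions and names are assumptions, not the authors' implementation): the user-item pair attends over a shared memory to build a latent relation vector r, and the pair is scored by the translational distance between p + r and q, so smaller distances mean higher relevance.

        import torch
        import torch.nn as nn

        class LRMLSketch(nn.Module):
            def __init__(self, num_users, num_items, dim=32, num_memories=20):
                super().__init__()
                self.user_emb = nn.Embedding(num_users, dim)
                self.item_emb = nn.Embedding(num_items, dim)
                self.keys = nn.Parameter(torch.randn(num_memories, dim))
                self.memories = nn.Parameter(torch.randn(num_memories, dim))

            def forward(self, users, items):
                p, q = self.user_emb(users), self.item_emb(items)
                # The joint user-item signal controls the attention over memory slots.
                attn = torch.softmax((p * q) @ self.keys.t(), dim=-1)   # (batch, num_memories)
                r = attn @ self.memories                                # latent relation vector
                return -((p + r - q) ** 2).sum(dim=-1)                  # higher = more relevant

        model = LRMLSketch(num_users=100, num_items=500)
        print(model(torch.tensor([0, 1]), torch.tensor([5, 6])))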

    NAIS: Neural Attentive Item Similarity Model for Recommendation

    Item-to-item collaborative filtering (a.k.a. item-based CF) has long been used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile from her historically interacted items and recommends new items that are similar to the user's profile. As such, the key to an item-based CF method is the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and the Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively little work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM), our NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt to design neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems.
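    The following is a simplified sketch of that design (layer sizes are assumptions, and the paper's smoothed softmax is replaced here by a plain softmax): the score of a target item aggregates its similarities to the user's historical items, with a small attention network deciding how much each historical item counts.

        import torch
        import torch.nn as nn

        class NAISSketch(nn.Module):
            def __init__(self, num_items, dim=16, attn_dim=8):
                super().__init__()
                self.hist_emb = nn.Embedding(num_items, dim)    # items in the role of history entries
                self.target_emb = nn.Embedding(num_items, dim)  # items in the role of prediction targets
                self.attn = nn.Sequential(nn.Linear(2 * dim, attn_dim), nn.ReLU(),
                                          nn.Linear(attn_dim, 1))

            def forward(self, history, target):
                # history: (num_hist,) item ids of the user profile; target: scalar item id.
                p = self.hist_emb(history)                      # (num_hist, dim)
                q = self.target_emb(target)                     # (dim,)
                pair = torch.cat([p, q.expand_as(p)], dim=-1)
                weights = torch.softmax(self.attn(pair).squeeze(-1), dim=0)
                sims = p @ q                                    # item-item similarities
                return (weights * sims).sum()

        model = NAISSketch(num_items=1000)
        print(model(torch.tensor([3, 7, 42]), torch.tensor(99)))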

    Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation

    The rapid proliferation of new users and items on the social web has aggravated the gray-sheep user/long-tail item challenge in recommender systems. Historically, cross-domain co-clustering methods have successfully leveraged shared users and items across dense and sparse domains to improve inference quality. However, they rely on shared rating data and cannot scale to multiple sparse target domains (i.e., the one-to-many transfer setting). This, combined with the increasing adoption of neural recommender architectures, motivates us to develop scalable neural layer-transfer approaches for cross-domain learning. Our key intuition is to guide neural collaborative filtering with domain-invariant components shared across the dense and sparse domains, improving the user and item representations learned in the sparse domains. We leverage contextual invariances across domains to develop these shared modules, and demonstrate that with user-item interaction context, we can learn-to-learn informative representation spaces even with sparse interaction data. We show the effectiveness and scalability of our approach on two public datasets and a massive transaction dataset from Visa, a global payments technology company (19% Item Recall, 3x faster vs. training separate models for each domain). Our approach is applicable to both implicit and explicit feedback settings. Comment: SIGIR 2020.
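    A highly simplified sketch of the layer-transfer idea (the actual modules in the paper differ; all names and sizes here are assumptions): a context-conditioned interaction module is shared across domains and can be transferred from the dense source domain to each sparse target domain, while user and item embeddings remain domain-specific.

        import torch
        import torch.nn as nn

        class DomainRecommender(nn.Module):
            def __init__(self, num_users, num_items, context_dim, shared_module, dim=16):
                super().__init__()
                self.user_emb = nn.Embedding(num_users, dim)   # domain-specific
                self.item_emb = nn.Embedding(num_items, dim)   # domain-specific
                self.shared = shared_module                    # domain-invariant, transferred
                self.ctx_proj = nn.Linear(context_dim, dim)

            def forward(self, users, items, context):
                x = torch.cat([self.user_emb(users), self.item_emb(items),
                               self.ctx_proj(context)], dim=-1)
                return self.shared(x).squeeze(-1)

        dim = 16
        shared = nn.Sequential(nn.Linear(3 * dim, 32), nn.ReLU(), nn.Linear(32, 1))
        dense = DomainRecommender(10000, 5000, context_dim=8, shared_module=shared)
        # ... train `dense` on the data-rich source domain ...
        # Reuse the same shared module (optionally frozen) in each sparse target domain.
        sparse = DomainRecommender(300, 200, context_dim=8, shared_module=shared)
        for p in shared.parameters():
            p.requires_grad = False
        print(sparse(torch.tensor([0]), torch.tensor([1]), torch.randn(1, 8)).shape)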

    Deep Item-based Collaborative Filtering for Top-N Recommendation

    Item-based Collaborative Filtering (ICF for short) has been widely adopted in recommender systems in industry, owing to its strength in user interest modeling and ease of online personalization. By constructing a user's profile from the items that the user has consumed, ICF recommends items that are similar to the user's profile. With the prevalence of machine learning in recent years, significant progress has been made for ICF by learning item similarity (or representation) from data. Nevertheless, we argue that most existing works have only considered linear and shallow relationships between items, which are insufficient to capture the complicated decision-making process of users. In this work, we propose a more expressive ICF solution by accounting for the nonlinear and higher-order relationships among items. Going beyond modeling only the second-order interaction (e.g., similarity) between two items, we additionally consider the interactions among all interacted item pairs by using nonlinear neural networks. In this way, we can effectively model the higher-order relationships among items, capturing more complicated effects in user decision-making. For example, it can differentiate which historical itemsets in a user's profile are more important in affecting the user to make a purchase decision on an item. We treat this solution as a deep variant of ICF and thus term it DeepICF. To justify our proposal, we perform empirical studies on two public datasets from MovieLens and Pinterest. Extensive experiments verify the highly positive effect of higher-order item interaction modeling with nonlinear neural networks. Moreover, we demonstrate that with more fine-grained second-order interaction modeling with an attention network, the performance of our DeepICF method can be further improved. Comment: 25 pages, submitted to TOIS.
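    A sketch of this higher-order modeling idea follows (a deep variant of item-based CF written from the description above; layer sizes and the plain average pooling are assumptions, whereas the paper also uses attention-based pooling): pairwise interactions between the historical items and the target item are pooled and then passed through nonlinear layers, so effects beyond second-order similarity can be captured.

        import torch
        import torch.nn as nn

        class DeepICFSketch(nn.Module):
            def __init__(self, num_items, dim=16):
                super().__init__()
                self.hist_emb = nn.Embedding(num_items, dim)
                self.target_emb = nn.Embedding(num_items, dim)
                self.deep = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(),
                                          nn.Linear(32, 16), nn.ReLU(),
                                          nn.Linear(16, 1))

            def forward(self, history, target):
                p = self.hist_emb(history)                 # (num_hist, dim)
                q = self.target_emb(target)                # (dim,)
                pairwise = p * q                           # second-order item-item interactions
                pooled = pairwise.mean(dim=0)              # pool over the user's history
                return self.deep(pooled)                   # nonlinear layers model higher-order effects

        model = DeepICFSketch(num_items=1000)
        print(model(torch.tensor([3, 7, 42]), torch.tensor(99)))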