Quantitative analysis of Matthew effect and sparsity problem of recommender systems
Recommender systems have achieved great commercial success and are widely used
in areas such as e-commerce, online music streaming, and online news portals.
However, several problems related to the structure of the input data pose
serious challenges to recommender system performance. Two of these problems are
the Matthew effect and the sparsity problem. The Matthew effect heavily skews
recommender system output towards popular items, while data sparsity directly
limits the coverage of recommendation results. Collaborative filtering is a
simple benchmark ubiquitously adopted in industry as the baseline for
recommender system design, and understanding its underlying mechanism is
crucial for further optimization. In this paper, we conduct a thorough
quantitative analysis of the Matthew effect and the sparsity problem in the
specific context of collaborative filtering. We compare the underlying
mechanisms of user-based and item-based collaborative filtering and offer
insights to industrial recommender system builders.
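The two mechanisms compared in the abstract can be sketched in a few lines: user-based collaborative filtering compares rows of the rating matrix, item-based compares columns. The toy matrix and cosine similarity below are illustrative assumptions, not data or formulas from the paper.

```python
from math import sqrt

# Toy user-item rating matrix (rows: users, cols: items); 0 = unrated.
# Illustrative data only.
R = [
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
]

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# User-based CF: similarity between user rows.
user_sim = cosine(R[0], R[1])

# Item-based CF: similarity between item columns.
def col(j):
    return [row[j] for row in R]

item_sim = cosine(col(0), col(3))
```

Popularity skew enters exactly here: heavily rated (popular) items dominate the norms and dot products, which is one way the Matthew effect propagates through both variants.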
Biases in scholarly recommender systems: impact, prevalence, and mitigation
We create a simulated financial market and examine the effect of different levels of active and passive investment on fundamental market efficiency. In our simulated market, active, passive, and random investors interact with each other by issuing orders. Active and passive investors select their portfolio weights by optimizing Markowitz-based utility functions. We find that higher fractions of active investment within a market lead to increased fundamental market efficiency, and that the marginal increase in fundamental market efficiency per additional active investor is lower in markets with higher levels of active investment. Furthermore, we find that a large fraction of passive investors within a market may facilitate technical price bubbles, resulting in market failure. By examining the effect of specific parameters on market outcomes, we find that lower transaction costs, lower individual forecasting errors of active investors, and less restrictive portfolio constraints tend to increase fundamental market efficiency in the market.
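A minimal sketch of how an active investor in such a simulation might pick portfolio weights: maximize the standard Markowitz mean-variance utility U(w) = wᵀμ − (γ/2) wᵀΣw, whose unconstrained optimum is w* = (1/γ) Σ⁻¹μ. The two-asset numbers below are assumed toy values, not parameters from the paper.

```python
def inv2(S):
    """Inverse of a 2x2 covariance matrix."""
    (a, b), (c, d) = S
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def optimal_weights(mu, S, gamma):
    """Unconstrained Markowitz optimum w* = (1/gamma) * S^-1 mu."""
    Si = inv2(S)
    return [sum(Si[i][j] * mu[j] for j in range(2)) / gamma for i in range(2)]

mu = [0.08, 0.05]                  # expected returns (assumed)
S = [[0.04, 0.01], [0.01, 0.02]]   # return covariance matrix (assumed)
w = optimal_weights(mu, S, gamma=3.0)
```

Transaction costs and portfolio constraints, which the abstract identifies as efficiency drivers, would enter as penalty terms or bounds on `w` in this objective.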
MatRec: Matrix Factorization for Highly Skewed Dataset
Recommender systems are among the most successful AI technologies deployed by
internet corporations. Popular internet products such as TikTok, Amazon, and
YouTube have all integrated recommender systems as a core product feature.
Although recommender systems have achieved great success, they are known to
struggle with highly skewed datasets: engineers and researchers need to adjust
their methods to this specific problem to obtain good results. An inability to
deal with highly skewed datasets usually creates hard computational problems
for big-data clusters and unsatisfactory results for customers. In this paper,
we propose a new algorithm that addresses the problem within the framework of
matrix factorization. We model the data skewness factors in the theoretical
formulation of the approach, using formulas that are easy to interpret and easy
to implement. Our experiments show that our method produces results comparable
to those of popular recommender system algorithms such as Learning to Rank,
Alternating Least Squares, and Deep Matrix Factorization.
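The MatRec formulas themselves are in the paper; as a hedged sketch of the general idea, classic bias-aware matrix factorization already shows how a skewness term can be absorbed into the model: the item bias b_i soaks up popularity skew so the latent factors model genuine preference. Data, hyperparameters, and the SGD loop below are illustrative assumptions.

```python
import random

random.seed(0)

# (user, item, rating) triples -- toy data, not a benchmark.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 5.0)]
n_users, n_items, k = 3, 3, 2

mu = sum(r for _, _, r in ratings) / len(ratings)  # global mean
bu = [0.0] * n_users                               # user biases
bi = [0.0] * n_items                               # item biases (absorb popularity skew)
P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

lr, reg = 0.05, 0.02
for _ in range(200):  # SGD epochs
    for u, i, r in ratings:
        pred = mu + bu[u] + bi[i] + sum(P[u][f] * Q[i][f] for f in range(k))
        e = r - pred
        bu[u] += lr * (e - reg * bu[u])
        bi[i] += lr * (e - reg * bi[i])
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (e * qi - reg * pu)
            Q[i][f] += lr * (e * pu - reg * qi)

rmse = (sum((r - (mu + bu[u] + bi[i] + sum(P[u][f] * Q[i][f] for f in range(k)))) ** 2
            for u, i, r in ratings) / len(ratings)) ** 0.5
```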
Disentangled Variational Auto-encoder Enhanced by Counterfactual Data for Debiasing Recommendation
Recommender systems suffer from various recommendation biases, which seriously
hinder their development. In this light, a series of debiasing methods have
been proposed, especially for the two most common biases, i.e., popularity bias
and amplified subjective bias. However, existing debiasing methods usually
concentrate on correcting a single bias. Such single-functionality approaches
neglect the bias-coupling issue, in which recommended items are collectively
attributed to multiple biases. Besides, previous work cannot handle the lack of
supervised signals caused by sparse data, which has become commonplace in
recommender systems. In this work, we introduce a disentangled debiasing
variational auto-encoder framework (DB-VAE) to address the single-functionality
issue, as well as a counterfactual data enhancement method to mitigate the
adverse effects of data sparsity. Specifically, DB-VAE first extracts two types
of extreme items, each affected by only a single bias, based on collider
theory; these are respectively employed to learn the latent representation of
the corresponding bias, thereby realizing bias decoupling. In this way, an
exact unbiased user representation can be learned from these decoupled bias
representations. Furthermore, the data generation module employs Pearl's
framework to produce massive amounts of counterfactual data, making up for the
supervised signals missing due to sparse data. Extensive experiments on three
real-world datasets demonstrate the effectiveness of our proposed model.
Moreover, the counterfactual data can further improve DB-VAE, especially on the
dataset with low sparsity.
Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction
Explaining recommendations enables users to understand whether recommended
items are relevant to their needs and has been shown to increase their trust in
the system. More generally, while designing explainable machine learning models
is key to checking the sanity and robustness of a decision process and
improving its efficiency, it remains a challenge for complex architectures,
especially deep neural networks, which are often deemed "black boxes". In this
paper, we propose a novel formulation of interpretable deep neural networks for
the attribution task. Unlike popular post-hoc methods, our approach is
interpretable by design. Using masked weights, hidden features can be deeply
attributed, split into several input-restricted sub-networks, and trained as a
boosted mixture of experts. Experimental results on synthetic data and
real-world recommendation tasks demonstrate that our method enables building
models that achieve predictive performance close to their non-interpretable
counterparts, while providing informative attribution interpretations.
Comment: 14th ACM Conference on Recommender Systems (RecSys '20)
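The "interpretable by design" idea of input-restricted sub-networks can be sketched very simply: split the input into feature groups, give each group its own expert, and sum the expert outputs, so each expert's contribution is itself the attribution. The feature grouping, the fixed linear experts, and all weights below are illustrative assumptions, not the paper's architecture.

```python
# Each expert sees only its own slice of the input vector.
feature_groups = {"user": [0, 1], "item": [2, 3], "context": [4]}

# One tiny linear "expert" per group (dummy fixed weights for the sketch;
# in a real model these would be trained sub-networks).
weights = {"user": [0.4, -0.2], "item": [0.3, 0.1], "context": [0.5]}

def predict_with_attribution(x):
    """Return (prediction, per-group contributions).

    The prediction is the sum of expert outputs, so the contributions
    are exact attributions by construction, not post-hoc estimates.
    """
    contributions = {}
    for name, idx in feature_groups.items():
        w = weights[name]
        contributions[name] = sum(w[j] * x[i] for j, i in enumerate(idx))
    return sum(contributions.values()), contributions

score, attr = predict_with_attribution([1.0, 2.0, 0.5, 1.0, 1.0])
```

Because the model is additive over experts, the attributions always sum exactly to the prediction, which is the structural property post-hoc methods can only approximate.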
Learning the Structure of Auto-Encoding Recommenders
Autoencoder recommenders have recently shown state-of-the-art performance in
the recommendation task due to their ability to model non-linear item
relationships effectively. However, existing autoencoder recommenders use
fully-connected neural network layers and do not employ structure learning.
This can lead to inefficient training, especially when the data is sparse, as
is common in collaborative filtering, and results in lower generalization
ability and reduced performance. In this paper, we introduce structure learning
for autoencoder recommenders by taking advantage of the inherent item groups
present in the collaborative filtering domain: by the nature of items in
general, certain items are more related to each other than to others. Based on
this, we propose a method that first learns groups of related items and then
uses this information to determine the connectivity structure of an
auto-encoding neural network. The result is a sparsely connected network whose
sparse structure can be viewed as a prior that guides training. Empirically, we
demonstrate that the proposed structure learning enables the autoencoder to
converge to a local optimum with a much smaller spectral norm and
generalization error bound than the fully-connected network. The resulting
sparse network considerably outperforms state-of-the-art methods such as
\textsc{Mult-vae/Mult-dae} on multiple benchmark datasets, even when the same
number of parameters and FLOPs are used. It also has better cold-start
performance.
Comment: Proceedings of The Web Conference 202
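The connectivity idea can be sketched as a binary mask derived from item groups: each hidden unit connects only to the items in "its" group, so cross-group weights are zeroed out. The groups, sizes, and dummy weights below are assumed toy values, not those learned by the paper's method.

```python
n_items = 6
item_group = [0, 0, 1, 1, 2, 2]   # e.g. output of an item-clustering step
n_hidden = 3                      # one hidden unit per group (simplified)

# mask[g][i] == 1 iff item i belongs to hidden unit g's group
mask = [[1 if item_group[i] == g else 0 for i in range(n_items)]
        for g in range(n_hidden)]

W = [[0.5] * n_items for _ in range(n_hidden)]  # dense weights (dummy values)

def masked_encode(x):
    """Encoder layer where the mask zeroes out cross-group connections."""
    return [sum(W[g][i] * mask[g][i] * x[i] for i in range(n_items))
            for g in range(n_hidden)]

hidden = masked_encode([1, 1, 0, 1, 0, 0])
```

Only 6 of the 18 possible connections survive the mask, which is the sparse structure the abstract describes acting as a prior during training.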
Neural Networks for Personalized Recommender Systems
The recommender system is an essential tool for companies and users. A successful recommender system not only helps companies promote their products and services, but also benefits users by filtering out unwanted information. Thus, recommender systems are becoming indispensable in a wide range of industries. Moreover, because neural networks have proven to be efficient and scalable, they are widely studied and applied in various fields. This thesis aims to develop methods for recommender systems by adapting neural networks. In exploring how to adapt neural networks to recommender systems, it investigates the challenges recommender systems face and presents approaches to them. Specifically, these challenges include: (1) data sparsity, (2) the complex relationships between users and items, and (3) dynamic user preferences.
To address data sparsity, this thesis proposes learning both collaborative features and content representations to generate recommendations when data is sparse, and proposes a training architecture that further improves recommendation quality. To learn users' preferences dynamically, it proposes learning temporal features that capture changes in users' preferences, so that both a user's general preferences and their latest interactions are considered. To model the complex relationships, it also proposes a geometric method with a nonlinear metric for learning the complex relationships among users and items. The relationships between items are also considered, to avoid potential problems.
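The geometric, metric-based idea mentioned in the summary can be sketched as scoring a user-item pair by (negative) distance between their embeddings rather than by an inner product, so that closeness in the learned space means preference. The embeddings below are toy assumed values, not ones learned by the thesis's method.

```python
from math import dist  # Euclidean distance, Python 3.8+

user_vec = [0.2, 0.9]
item_vecs = {"a": [0.1, 1.0], "b": [0.9, 0.1]}

# Smaller distance => stronger recommendation, so negate for a score.
scores = {name: -dist(user_vec, v) for name, v in item_vecs.items()}
best = max(scores, key=scores.get)
```

Unlike a dot product, a metric obeys the triangle inequality, which also constrains item-item distances, matching the summary's point that relationships between items are modeled as well.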