MLI: An API for Distributed Machine Learning
MLI is an Application Programming Interface designed to address the
challenges of building Machine Learning algorithms in a distributed setting
based on data-centric computing. Its primary goal is to simplify the
development of high-performance, scalable, distributed algorithms. Our initial
results show that, relative to existing systems, this interface can be used to
build distributed implementations of a wide variety of common Machine Learning
algorithms with minimal complexity and highly competitive performance and
scalability.
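The abstract does not show MLI's interface itself, so purely as a hypothetical illustration of the data-centric style it describes (all names below are invented for illustration and are not MLI's actual API), a partitioned table abstraction with a per-partition map is one shape such an API could take:

# Hypothetical sketch only: these names are invented for illustration and
# are not MLI's actual API, which the abstract does not show.
from typing import Callable, List

class DistributedTable:
    """Stand-in for a partitioned, table-like dataset abstraction."""
    def __init__(self, partitions: List[List[float]]):
        self.partitions = partitions

    def map_partitions(self, fn: Callable[[List[float]], float]) -> List[float]:
        # In a real system, each call would run on a separate worker.
        return [fn(p) for p in self.partitions]

# Example: per-partition partial sums, as a distributed gradient step might compute.
table = DistributedTable([[1.0, 2.0], [3.0, 4.0]])
print(table.map_partitions(sum))  # [3.0, 7.0]

The point of such a design is that the algorithm author writes ordinary per-partition functions while the API owns data placement and distribution.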
Neural Collaborative Filtering
In recent years, deep neural networks have yielded immense success in speech
recognition, computer vision, and natural language processing. However, the
exploration of deep neural networks on recommender systems has received
relatively less scrutiny. In this work, we strive to develop techniques based
on neural networks to tackle the key problem in recommendation -- collaborative
filtering -- on the basis of implicit feedback. Although some recent works have
employed deep learning for recommendation, they primarily used it to model
auxiliary information, such as textual descriptions of items and acoustic
features of music. When it comes to modeling the key factor in collaborative
filtering -- the interaction between user and item features -- they still
resorted to matrix factorization and applied an inner product on the latent
features of users and items. By replacing the inner product with a neural
architecture that can learn an arbitrary function from data, we present a
general framework named NCF, short for Neural network-based Collaborative
Filtering. NCF is generic and can express and generalize matrix factorization
under its framework. To supercharge NCF modelling with non-linearities, we
propose to leverage a multi-layer perceptron to learn the user-item interaction
function. Extensive experiments on two real-world datasets show significant
improvements of our proposed NCF framework over the state-of-the-art methods.
Empirical evidence shows that using deeper layers of neural networks offers
better recommendation performance.
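To make the substitution concrete, here is a minimal sketch of the idea in PyTorch. It is not the authors' implementation; the embedding dimension, layer widths, and activation choices are assumptions for illustration. User and item embeddings are concatenated and passed through an MLP, which takes the place of matrix factorization's inner product on the latent features.

# Minimal NCF-style sketch (assumed sizes, not the paper's implementation):
# an MLP over concatenated user/item embeddings replaces the inner product.
import torch
import torch.nn as nn

class NeuralCF(nn.Module):
    def __init__(self, n_users, n_items, dim=32, hidden=(64, 32, 16)):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        layers, width = [], 2 * dim
        for h in hidden:
            layers += [nn.Linear(width, h), nn.ReLU()]
            width = h
        self.mlp = nn.Sequential(*layers)
        self.out = nn.Linear(width, 1)  # scalar interaction score

    def forward(self, users, items):
        x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        return torch.sigmoid(self.out(self.mlp(x))).squeeze(-1)

# Example: score two (user, item) pairs.
model = NeuralCF(n_users=1000, n_items=500)
scores = model(torch.tensor([0, 1]), torch.tensor([10, 20]))

Trained with a binary cross-entropy loss on observed pairs versus sampled unobserved ones, this is one standard way to fit such a model to implicit feedback.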
FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction
Advertising and feed ranking are essential to many Internet companies such as
Facebook and Sina Weibo. Among many real-world advertising and feed ranking
systems, click-through rate (CTR) prediction plays a central role. There are
many proposed models in this field, such as logistic regression, tree-based
models, factorization machine-based models, and deep learning-based CTR models.
However, many current works compute feature interactions in a simple way, such
as the Hadamard product or inner product, and pay little attention to the
importance of features. In this paper, a new model named FiBiNET, short for
Feature Importance and Bilinear feature Interaction NETwork, is
proposed to dynamically learn the feature importance and fine-grained feature
interactions. On the one hand, FiBiNET can dynamically learn the importance
of features via the Squeeze-Excitation network (SENET) mechanism; on the other
hand, it is able to effectively learn the feature interactions via a bilinear
function. We conduct extensive experiments on two real-world datasets and show
that our shallow model outperforms other shallow models such as the
factorization machine (FM) and the field-aware factorization machine (FFM). To
improve performance further, we combine a classical deep neural network (DNN)
component with the shallow model to form a deep model. The deep FiBiNET
consistently outperforms other state-of-the-art deep models such as DeepFM and
the extreme deep factorization machine (xDeepFM).
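As a rough PyTorch sketch of the two ingredients named above, not the authors' code: the SENET-style layer squeezes each field embedding to a scalar, passes the result through two fully connected layers, and reweights the fields; the bilinear layer forms pairwise interactions as (v_i W) * v_j. The field count, embedding size, reduction ratio, and the single shared transform W are assumptions for illustration.

# Sketch of SENET reweighting plus bilinear interactions (assumed sizes,
# one shared W across field pairs); not the paper's implementation.
import itertools
import torch
import torch.nn as nn

class SENETLayer(nn.Module):
    """Learn per-field importance weights, squeeze-and-excitation style."""
    def __init__(self, n_fields, reduction=2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_fields, n_fields // reduction), nn.ReLU(),
            nn.Linear(n_fields // reduction, n_fields), nn.ReLU(),
        )

    def forward(self, embs):           # embs: (batch, n_fields, dim)
        z = embs.mean(dim=-1)          # squeeze each field to a scalar
        a = self.fc(z)                 # excitation -> per-field weights
        return embs * a.unsqueeze(-1)  # reweight the field embeddings

class BilinearInteraction(nn.Module):
    """Pairwise interactions (v_i W) * v_j with one shared transform W."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, embs):
        fields = embs.unbind(dim=1)
        pairs = [self.W(vi) * vj
                 for vi, vj in itertools.combinations(fields, 2)]
        return torch.stack(pairs, dim=1)  # (batch, n_pairs, dim)

# Example: reweight 5 field embeddings, then form the 10 pairwise interactions.
embs = torch.randn(4, 5, 8)
out = BilinearInteraction(8)(SENETLayer(5)(embs))
print(out.shape)  # torch.Size([4, 10, 8])

In the deep variant the abstract describes, these interaction vectors would feed a DNN component before the final prediction; the sketch stops at the interaction layer.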