Ensembled CTR Prediction via Knowledge Distillation
Recently, deep learning-based models have been widely studied for
click-through rate (CTR) prediction, leading to improved prediction accuracy
in many industrial applications. However, current research focuses primarily on
building complex network architectures to better capture sophisticated feature
interactions and dynamic user behaviors. The increased model complexity may
slow down online inference and hinder its adoption in real-time applications.
Instead, our work targets a new model training strategy based on knowledge
distillation (KD). KD is a teacher-student learning framework to transfer
knowledge learned from a teacher model to a student model. The KD strategy not
only allows us to simplify the student model as a vanilla DNN model but also
achieves significant accuracy improvements over the state-of-the-art teacher
models. The benefits thus motivate us to further explore the use of a powerful
ensemble of teachers for more accurate student model training. We also propose
some novel techniques to facilitate ensembled CTR prediction, including teacher
gating and early stopping by distillation loss. We conduct comprehensive
experiments against 12 existing models and across three industrial datasets.
Both offline and online A/B testing results show the effectiveness of our
KD-based training strategy.
Comment: Published in CIKM'202
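The distillation objective described above can be sketched as a weighted combination of the usual hard-label loss and a soft-label term that pulls the student toward the teacher's predictions. The blending weight `alpha` and the exact loss form below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def bce(target, pred, eps=1e-7):
    """Binary cross-entropy, averaged over examples."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def kd_loss(y_true, p_student, p_teacher, alpha=0.5):
    """Student objective: hard-label BCE blended with a distillation term
    that pulls the student's CTR estimates toward the teacher's soft labels."""
    return alpha * bce(y_true, p_student) + (1 - alpha) * bce(p_teacher, p_student)

y = np.array([1.0, 0.0, 1.0])            # observed clicks
p_teacher = np.array([0.9, 0.2, 0.7])    # teacher (or ensemble-averaged) predictions
p_student = np.array([0.8, 0.3, 0.6])    # student (vanilla DNN) predictions
loss = kd_loss(y, p_student, p_teacher)
```

With `alpha=1.0` the objective reduces to ordinary supervised training; the abstract's "early stopping by distillation loss" would monitor the second term on validation data.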
FinalMLP: An Enhanced Two-Stream MLP Model for CTR Prediction
Click-through rate (CTR) prediction is one of the fundamental tasks for
online advertising and recommendation. While multi-layer perceptron (MLP)
serves as a core component in many deep CTR prediction models, it has been
widely recognized that applying a vanilla MLP network alone is inefficient in
learning multiplicative feature interactions. As such, many two-stream
interaction models (e.g., DeepFM and DCN) have been proposed by integrating an
MLP network with another dedicated network for enhanced CTR prediction. As the
MLP stream learns feature interactions implicitly, existing research focuses
mainly on enhancing explicit feature interactions in the complementary stream.
In contrast, our empirical study shows that a well-tuned two-stream MLP model
that simply combines two MLPs can achieve surprisingly good performance, a
result not previously reported by existing work. Based on this
observation, we further propose feature gating and interaction aggregation
layers that can be easily plugged in to build an enhanced two-stream MLP model,
FinalMLP. In this way, it not only enables differentiated feature inputs but
also effectively fuses stream-level interactions across two streams. Our
evaluation results on four open benchmark datasets as well as an online A/B
test in our industrial system show that FinalMLP achieves better performance
than many sophisticated two-stream CTR models. Our source code will be
available at MindSpore/models.
Comment: Accepted by AAAI 2023. Code available at
https://xpai.github.io/FinalML
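The two-stream design described above can be illustrated with a toy forward pass: each stream is an MLP over gated (re-weighted) feature inputs, and the two stream outputs are fused with a bilinear term. All dimensions and random weights below are illustrative stand-ins for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    """A one-hidden-layer ReLU MLP, standing in for each stream."""
    return np.maximum(x @ w1, 0) @ w2

d, h, k = 8, 16, 4                       # input, hidden, output sizes (toy)
x = rng.normal(size=(2, d))              # a batch of concatenated feature embeddings

# Feature gating: each stream re-weights the shared input differently
g1 = 1 / (1 + np.exp(-rng.normal(size=d)))
g2 = 1 / (1 + np.exp(-rng.normal(size=d)))
o1 = mlp(x * g1, rng.normal(size=(d, h)), rng.normal(size=(h, k)))  # stream 1
o2 = mlp(x * g2, rng.normal(size=(d, h)), rng.normal(size=(h, k)))  # stream 2

# Interaction aggregation: bilinear fusion of stream-level outputs
W = rng.normal(size=(k, k))
logit = np.einsum('bi,ij,bj->b', o1, W, o2)
ctr = 1 / (1 + np.exp(-logit))
```

The bilinear fusion lets the model capture multiplicative interactions *between* the two streams rather than only summing or concatenating their outputs.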
Deep Character-Level Click-Through Rate Prediction for Sponsored Search
Predicting the click-through rate of an advertisement is a critical component
of online advertising platforms. In sponsored search, the click-through rate
estimates the probability that a displayed advertisement is clicked by a user
after she submits a query to the search engine. Commercial search engines
typically rely on machine learning models trained with a large number of
features to make such predictions. This inevitably requires substantial
engineering effort to define, compute, and select the appropriate features. In
this paper, we propose two novel approaches (one working at character level and
the other working at word level) that use deep convolutional neural networks to
predict the click-through rate of a query-advertisement pair. Specifically, the
proposed architectures only consider the textual content appearing in a
query-advertisement pair as input, and produce as output a click-through rate
prediction. By comparing the character-level model with the word-level model,
we show that language representation can be learnt from scratch at character
level when trained on enough data. Through extensive experiments using billions
of query-advertisement pairs of a popular commercial search engine, we
demonstrate that both approaches significantly outperform a baseline model
built on well-selected text features and a state-of-the-art word2vec-based
approach. Finally, by combining the predictions of the deep models introduced
in this study with the prediction of the model in production of the same
commercial search engine, we significantly improve the accuracy and the
calibration of the click-through rate prediction of the production system.
Comment: SIGIR 2017, 10 pages
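A character-level model of the kind described above can be sketched as: embed each character, slide convolution filters over the sequence, max-pool over positions, and map the pooled features to a click probability. The vocabulary, dimensions, and random (untrained) weights are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB = "abcdefghijklmnopqrstuvwxyz "
EMB = rng.normal(size=(len(VOCAB), 8))       # character embedding table
FILT = rng.normal(size=(4, 3 * 8))           # 4 conv filters of width 3
W_OUT = rng.normal(size=4)

def char_cnn_score(text, width=3):
    """Toy character-level CTR scorer: embed chars, 1-D convolve,
    max-pool over positions, logistic output."""
    x = np.stack([EMB[VOCAB.index(c)] for c in text])               # (T, 8)
    windows = np.stack([x[i:i + width].ravel()
                        for i in range(len(text) - width + 1)])     # (T-2, 24)
    feat = np.max(windows @ FILT.T, axis=0)                         # max-pool per filter
    return 1 / (1 + np.exp(-feat @ W_OUT))                          # CTR estimate

p = char_cnn_score("cheap flights to rome")
```

In the paper's setting the input would be the concatenated query and ad text; here a single string stands in for that pair.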
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Learning sophisticated feature interactions behind user behaviors is critical
in maximizing CTR for recommender systems. Despite great progress, existing
methods seem to have a strong bias towards low- or high-order interactions, or
require expert feature engineering. In this paper, we show that it is
possible to derive an end-to-end learning model that emphasizes both low- and
high-order feature interactions. The proposed model, DeepFM, combines the power
of factorization machines for recommendation and deep learning for feature
learning in a new neural network architecture. Compared to the latest Wide &
Deep model from Google, DeepFM has a shared input to its "wide" and "deep"
parts, with no need for feature engineering besides raw features. Comprehensive
experiments are conducted to demonstrate the effectiveness and efficiency of
DeepFM over the existing models for CTR prediction, on both benchmark data and
commercial data.
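The key architectural point of the abstract, a shared embedding input feeding both an FM component (low-order interactions) and a deep component (high-order interactions), can be sketched as follows. Sizes and random weights are illustrative, and the FM second-order term uses the standard sum-of-squares identity:

```python
import numpy as np

rng = np.random.default_rng(2)
n_fields, vocab, k = 3, 10, 4
V = rng.normal(size=(vocab, k))      # embedding table shared by both parts
w = rng.normal(size=vocab)           # first-order ("wide") weights
W1 = rng.normal(size=(n_fields * k, 8))
W2 = rng.normal(size=8)

def deepfm_logit(feat_ids):
    e = V[feat_ids]                                      # (fields, k), shared input
    first = float(w[feat_ids].sum())                     # first-order term
    s = e.sum(axis=0)
    fm2 = 0.5 * float((s * s - (e * e).sum(axis=0)).sum())   # FM pairwise term
    deep = float(np.maximum(e.ravel() @ W1, 0) @ W2)     # deep part on same embeddings
    return first + fm2 + deep

logit = deepfm_logit(np.array([1, 5, 8]))   # one feature id per field
ctr = 1 / (1 + np.exp(-logit))
```

Because both components read the same embeddings, no separate feature engineering is needed for the "wide" part, which is the contrast the abstract draws with Wide & Deep.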
Ask the GRU: Multi-Task Learning for Deep Text Recommendations
In a variety of application domains the content to be recommended to users is
associated with text. This includes research papers, movies with associated
plot summaries, news articles, blog posts, etc. Recommendation approaches based
on latent factor models can be extended naturally to leverage text by employing
an explicit mapping from text to factors. This enables recommendations for new,
unseen content, and may generalize better, since the factors for all items are
produced by a compactly-parametrized model. Previous work has used topic models
or averages of word embeddings for this mapping. In this paper we present a
method leveraging deep recurrent neural networks to encode the text sequence
into a latent vector, specifically gated recurrent units (GRUs) trained
end-to-end on the collaborative filtering task. For the task of scientific
paper recommendation, this yields models with significantly higher accuracy. In
cold-start scenarios, we beat the previous state-of-the-art, all of which
ignore word order. Performance is further improved by multi-task learning,
where the text encoder network is trained for a combination of content
recommendation and item metadata prediction. This regularizes the collaborative
filtering model, ameliorating the problem of sparsity of the observed rating
matrix.
Comment: 8 pages
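The text-to-factor mapping described above can be sketched with a minimal GRU cell: the recurrent state after reading the word sequence serves as the item's latent factor, scored against a user factor by a dot product, as in standard latent factor models. Dimensions and random weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
d, h = 5, 6                                   # word-vector and state sizes (toy)
Wz, Uz = rng.normal(size=(h, d)), rng.normal(size=(h, h))
Wr, Ur = rng.normal(size=(h, d)), rng.normal(size=(h, h))
Wn, Un = rng.normal(size=(h, d)), rng.normal(size=(h, h))
sig = lambda a: 1 / (1 + np.exp(-a))

def gru_encode(word_vecs):
    """Run a GRU over the word sequence; the final state is the item factor."""
    s = np.zeros(h)
    for x in word_vecs:
        z = sig(Wz @ x + Uz @ s)                # update gate
        r = sig(Wr @ x + Ur @ s)                # reset gate
        n = np.tanh(Wn @ x + Un @ (r * s))      # candidate state
        s = (1 - z) * s + z * n
    return s

item_vec = gru_encode(rng.normal(size=(7, d)))  # e.g. a 7-word paper title
user_vec = rng.normal(size=h)
score = float(user_vec @ item_vec)              # collaborative-filtering score
```

Because the item factor is produced from text, a brand-new paper can be scored without any observed ratings, which is the cold-start advantage the abstract highlights.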
FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction
Advertising and feed ranking are essential to many Internet companies such as
Facebook and Sina Weibo. Among many real-world advertising and feed ranking
systems, click through rate (CTR) prediction plays a central role. There are
many proposed models in this field such as logistic regression, tree based
models, factorization machine based models and deep learning based CTR models.
However, many current works compute feature interactions in a simple way, such
as the Hadamard product or inner product, and pay little attention to the
importance of features. In this paper, a new model named FiBiNET, short for
Feature Importance and Bilinear feature Interaction NETwork, is
proposed to dynamically learn the feature importance and fine-grained feature
interactions. On the one hand, the FiBiNET can dynamically learn the importance
of features via the Squeeze-and-Excitation network (SENET) mechanism; on the other
hand, it is able to effectively learn the feature interactions via bilinear
function. We conduct extensive experiments on two real-world datasets and show
that our shallow model outperforms other shallow models such as the factorization
machine (FM) and field-aware factorization machine (FFM). To further improve
performance, we combine a classical deep neural network (DNN) component
with the shallow model to form a deep model. The deep FiBiNET consistently
outperforms the other state-of-the-art deep models such as DeepFM and extreme
deep factorization machine (xDeepFM).
Comment: 8 pages, 5 figures
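The two mechanisms the abstract names, SENET-style re-weighting of field embeddings followed by bilinear pairwise interactions, can be sketched in a few lines. The reduction ratio, shared bilinear matrix, and random weights are illustrative choices; FiBiNET also offers per-field and per-pair bilinear variants:

```python
import numpy as np

rng = np.random.default_rng(4)
f, k = 4, 5                        # number of fields, embedding size (toy)
E = rng.normal(size=(f, k))        # one embedding per feature field

# SENET-style gating: squeeze each field to a scalar, then excite
z = np.abs(E).mean(axis=1)                              # squeeze: (f,)
W1, W2 = rng.normal(size=(f, f // 2)), rng.normal(size=(f // 2, f))
a = 1 / (1 + np.exp(-(np.maximum(z @ W1, 0) @ W2)))     # excitation weights
E_se = E * a[:, None]                                   # re-weighted fields

# Bilinear interaction for every field pair via a shared matrix W
W = rng.normal(size=(k, k))
pairs = [(E_se[i] @ W) * E_se[j] for i in range(f) for j in range(i + 1, f)]
interactions = np.concatenate(pairs)                    # fed to the output layer
```

The bilinear form `(e_i W) ∘ e_j` sits between the plain Hadamard product (`W = I`) and a full inner product, which is the "fine-grained" middle ground the abstract argues for.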