Enhancing CTR Prediction with Context-Aware Feature Representation Learning
Click-through rate (CTR) prediction is widely used in real-world applications, and many methods model feature interactions to improve performance. However, most methods learn only a fixed representation for each feature, ignoring the varying importance of a feature under different contexts, which results in inferior performance. Recently, several methods have tried to learn vector-level weights for feature representations to address this fixed-representation issue. However, they apply only linear transformations to refine the fixed feature representations, which is still not flexible enough to capture a feature's varying importance across contexts. In this paper, we propose a novel module named the Feature Refinement Network (FRNet), which learns context-aware feature representations at the bit level for each feature in different contexts. FRNet consists of two key components: 1) an Information Extraction Unit (IEU), which captures contextual information and cross-feature relationships to guide context-aware feature refinement; and 2) a Complementary Selection Gate (CSGate), which adaptively integrates the original feature representations with the complementary representations learned in the IEU using bit-level weights. Notably, FRNet is orthogonal to existing CTR methods and can therefore be plugged into many of them to boost their performance. Comprehensive experiments verify the effectiveness, efficiency, and compatibility of FRNet.
Comment: SIGIR 202
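The core integration step described above — bit-level gating between original and complementary representations — can be sketched as follows. This is a minimal NumPy illustration of the gating idea only, with toy sizes and random matrices standing in for the paper's trained IEU parameters; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_fields, dim = 4, 8                    # toy sizes (assumptions)
E = rng.normal(size=(n_fields, dim))    # original feature embeddings

# IEU-style step: pool all fields into a shared contextual summary,
# then use it to produce complementary representations and gates.
ctx = E.mean(axis=0)
W_c = rng.normal(size=(dim, dim)) * 0.1   # hypothetical projection weights
comp = np.tanh(E @ W_c + ctx)             # complementary representations
W_g = rng.normal(size=(dim, dim)) * 0.1
gate = sigmoid(E @ W_g + ctx)             # bit-level weights in (0, 1)

# CSGate-style integration: each bit blends the original embedding
# with its complement according to the learned gate.
refined = gate * E + (1.0 - gate) * comp
```

Because the gate is computed from a context summary of all fields, the same feature can receive a different refined representation in different input instances, which is the paper's central point.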
Learning from Multi-View Multi-Way Data via Structural Factorization Machines
Real-world relations among entities can often be observed and determined by
different perspectives/views. For example, the decision made by a user on
whether to adopt an item relies on multiple aspects such as the contextual
information of the decision, the item's attributes, the user's profile and the
reviews given by other users. Different views may exhibit multi-way
interactions among entities and provide complementary information. In this
paper, we introduce a multi-tensor-based approach that can preserve the
underlying structure of multi-view data in a generic predictive model.
Specifically, we propose structural factorization machines (SFMs) that learn
the common latent spaces shared by multi-view tensors and automatically adjust
the importance of each view in the predictive model. Furthermore, the complexity of SFMs is linear in the number of parameters, which makes SFMs suitable for large-scale problems. Extensive experiments on real-world datasets demonstrate that the proposed SFMs outperform several state-of-the-art methods in terms of prediction accuracy and computational cost.
Comment: 10 page
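The idea of shared latent spaces plus per-view importance weights can be illustrated with a CP-style multilinear score. This is a hedged NumPy sketch of that general idea, not the paper's exact SFM formulation; all sizes and factors are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

k = 5                                   # latent dimension (assumption)
n_views = 3
# Latent factors shared across views for the two entities
# (e.g. a user and an item).
u = rng.normal(size=k)
v = rng.normal(size=k)
# One latent factor per view, plus a learnable importance weight per view.
view_factors = rng.normal(size=(n_views, k))
alpha = np.ones(n_views) / n_views      # view weights, initialized uniform

# Multilinear score <u, v, w_view> for each view, combined by importance.
scores = view_factors @ (u * v)
y_hat = float(alpha @ scores)
```

Training would learn `u`, `v`, `view_factors`, and `alpha` jointly, so views that carry more predictive signal receive larger `alpha` entries; the parameter count grows linearly with `k` and the number of views, matching the linear-complexity claim.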
AutoAttention: Automatic Field Pair Selection for Attention in User Behavior Modeling
In click-through rate (CTR) prediction models, a user's interest is usually represented as a fixed-length vector derived from her behavior history. Recently, several methods have been proposed that learn an attention weight for each user behavior and apply weighted sum pooling. However, these methods manually select only a few fields from the target-item side as the query to interact with the behaviors, neglecting the remaining target-item fields as well as user and context fields. Directly including all of these fields in the attention may introduce noise and degrade performance. In this paper, we propose a novel model named AutoAttention, which includes all item-, user-, and context-side fields as the query and assigns a learnable weight to each field pair between behavior fields and query fields. Pruning these field pairs via the learnable weights leads to automatic field-pair selection, identifying and removing noisy pairs. Although it includes more fields, AutoAttention's computation cost remains low thanks to a simple attention function and field-pair selection. Extensive experiments on a public dataset and Tencent's production dataset demonstrate the effectiveness of the proposed approach.
Comment: Accepted by ICDM 202
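The field-pair weighting and pruning mechanism described above can be sketched as follows. This is a minimal NumPy illustration under assumed toy sizes, with random embeddings and a hypothetical magnitude threshold for pruning; the paper's actual attention function and pruning criterion may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

dim = 8
n_beh, n_qry, seq_len = 2, 3, 5         # toy field counts / history length
# Behavior-field embeddings per history item, and query-field embeddings
# (all item/user/context fields serve as queries).
B = rng.normal(size=(seq_len, n_beh, dim))
Q = rng.normal(size=(n_qry, dim))
# One learnable weight per (behavior field, query field) pair.
pair_w = rng.normal(size=(n_beh, n_qry))
pair_w[np.abs(pair_w) < 0.3] = 0.0      # pruning: drop near-zero pairs

# Attention logit per history item: weighted sum of field-pair
# dot products (a simple attention function).
logits = np.einsum('tpd,qd,pq->t', B, Q, pair_w)
attn = np.exp(logits - logits.max())
attn /= attn.sum()
# Weighted sum pooling over the behavior sequence.
pooled = np.einsum('t,tpd->pd', attn, B)
```

Pruned pairs contribute zero to the logits, so after selection the cost per history item scales with the number of surviving field pairs rather than all `n_beh * n_qry` combinations.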