Customer Lifetime Value Prediction Using Embeddings
We describe the Customer LifeTime Value (CLTV) prediction system deployed at ASOS.com, a global online fashion retailer. CLTV prediction is an important problem in e-commerce, where an accurate estimate of future value allows retailers to effectively allocate marketing spend, identify and nurture high-value customers, and mitigate exposure to losses. The system at ASOS provides daily estimates of the future value of every customer and is one of the cornerstones of the personalised shopping experience. The state of the art in this domain uses large numbers of handcrafted features and ensemble regressors to forecast value, predict churn and evaluate customer loyalty. Recently, domains including language, vision and speech have shown dramatic advances by replacing handcrafted features with features that are learned automatically from data. We detail the system deployed at ASOS and show that learning feature representations is a promising extension to the state of the art in CLTV modelling. We propose a novel way to generate embeddings of customers, which addresses the issue of the ever-changing product catalogue, and obtain a significant improvement over an exhaustive set of handcrafted features.
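The idea of replacing handcrafted features with learned representations can be illustrated with a minimal sketch (not the ASOS production system; all data, dimensions, and names here are invented): average pretrained product embeddings over a customer's purchase history to form a customer embedding, then fit a ridge regressor on it. Because new products only add rows to the embedding table, the ever-changing catalogue needs no feature re-engineering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pretrained product embeddings (e.g. learned from co-purchase
# sequences): 50 products, 8 dimensions. New products simply add rows here.
product_emb = rng.normal(size=(50, 8))

def customer_embedding(purchased_ids):
    """Average the embeddings of the products a customer bought."""
    return product_emb[purchased_ids].mean(axis=0)

# Toy purchase histories and synthetic CLTV targets.
histories = [rng.integers(0, 50, size=rng.integers(1, 10)) for _ in range(200)]
X = np.stack([customer_embedding(h) for h in histories])
w_true = rng.normal(size=8)
y = X @ w_true + rng.normal(scale=0.1, size=200)

# Ridge regression on the learned representation in place of handcrafted features.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ y)
print(X.shape)  # one fixed-size feature vector per customer
```

A production system would learn the product embeddings from behavioural sequences and use a stronger regressor; the point of the sketch is only the feature pipeline.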
CASPR: Customer Activity Sequence-based Prediction and Representation
Tasks critical to enterprise profitability, such as customer churn
prediction, fraudulent account detection or customer lifetime value estimation,
are often tackled by models trained on features engineered from customer data
in tabular format. Application-specific feature engineering adds development,
operationalization and maintenance costs over time. Recent advances in
representation learning present an opportunity to simplify and generalize
feature engineering across applications. When applying these advances to
tabular data, researchers must contend with data heterogeneity, variations in
customer engagement history, and the sheer volume of enterprise datasets. In this paper,
we propose a novel approach to encode tabular data containing customer
transactions, purchase history and other interactions into a generic
representation of a customer's association with the business. We then evaluate
these embeddings as features to train multiple models spanning a variety of
applications. CASPR, Customer Activity Sequence-based Prediction and
Representation, applies Transformer architecture to encode activity sequences
to improve model performance and avoid bespoke feature engineering across
applications. Our experiments at scale validate CASPR for both small and large
enterprise applications.
Comment: Presented at the Table Representation Learning Workshop, NeurIPS 2022, New Orleans. Authors listed in random order.
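The core mechanism the abstract describes, encoding a customer's activity sequence with a Transformer-style encoder into a generic fixed-size representation, can be sketched as follows. This is a hedged toy version, not the CASPR implementation: a single self-attention head in plain NumPy, mean-pooled into one customer vector; all sizes and the event vocabulary are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # embedding width

# Hypothetical vocabulary of 100 activity event types
# (purchase, return, page view, support contact, ...).
event_emb = rng.normal(size=(100, d)) / np.sqrt(d)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode_sequence(event_ids, Wq, Wk, Wv):
    """Single-head self-attention over one customer's activity sequence,
    mean-pooled into a fixed-size customer representation."""
    X = event_emb[event_ids]                    # (T, d) event embeddings
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (T, T) attention weights
    return (A @ V).mean(axis=0)                 # (d,) pooled embedding

Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
seq = rng.integers(0, 100, size=12)             # one customer's event history
z = encode_sequence(seq, Wq, Wk, Wv)
print(z.shape)
```

Sequences of any length map to the same 16-dimensional vector, which is what lets one representation feed churn, fraud, and LTV models alike without bespoke feature engineering.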
Out of the Box Thinking: Improving Customer Lifetime Value Modelling via Expert Routing and Game Whale Detection
Customer lifetime value (LTV) prediction is essential for mobile game
publishers trying to optimize the advertising investment for each user
acquisition based on the estimated worth. In mobile games, deploying
microtransactions is a simple yet effective monetization strategy, which
attracts a tiny group of game whales who splurge on in-game purchases. The
presence of such game whales may impede the practicality of existing LTV
prediction models, since game whales' purchase behaviours follow a markedly
different distribution from those of general users. Consequently, identifying game whales can open
up new opportunities to improve the accuracy of LTV prediction models. However,
little attention has been paid to applying game whale detection in LTV
prediction, and existing works mainly specialize in long-term LTV prediction
under the assumption that high-quality user features are available, which does
not hold at the user acquisition (UA) stage. In this paper, we propose
ExpLTV, a novel multi-task framework to perform LTV prediction and game whale
detection in a unified way. In ExpLTV, we first design a novel deep
neural network-based game whale detector that not only infers the intrinsic
ordering of users by monetary value, but also precisely identifies high
spenders (i.e., game whales) and low spenders. Then, by treating the game whale
detector as a gating network that decides how the LTV experts are mixed, we
fully leverage both shared information and scenario-specific information
(i.e., game-whale modelling and low-spender modelling). Finally, instead of
designing a separate purchase rate estimator for each of the two tasks, we
design a shared estimator that preserves the inter-task relationships. The
superiority of ExpLTV is further validated via extensive experiments on three
industrial datasets.
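The gating mechanism the abstract describes, a whale detector whose output decides how expert predictions are mixed, can be sketched in miniature. This is a hedged illustration, not the ExpLTV architecture: the detector and both experts are reduced to linear models, and all weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6  # number of (hypothetical) user acquisition-stage features

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in linear "experts": one tuned to whale-like heavy spenders, one to
# general low spenders. The real framework uses deep networks for all three.
w_whale = rng.normal(size=d) * 10.0  # whale expert predicts large LTVs
w_low = rng.normal(size=d) * 0.1     # low-spender expert predicts small LTVs
w_gate = rng.normal(size=d)          # whale detector reused as gating network

def predict_ltv(x):
    """Mix expert outputs, weighted by the detector's whale probability."""
    p_whale = sigmoid(x @ w_gate)
    return p_whale * (x @ w_whale) + (1.0 - p_whale) * (x @ w_low)

x = rng.normal(size=d)  # one user's early features
print(float(predict_ltv(x)))
```

Because the gate outputs a probability in (0, 1), the mixed prediction always lies between the two experts' outputs, sliding toward the whale expert exactly when the detector is confident the user is a whale.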
A Meta-learning based Stacked Regression Approach for Customer Lifetime Value Prediction
Companies across the globe are keen to target potential high-value
customers in an attempt to expand revenue, and this can be achieved only by
understanding their customers better. Customer Lifetime Value (CLV) is the total
monetary value of transactions/purchases made by a customer with the business
over an intended period of time and is used as means to estimate future
customer interactions. CLV finds application in a number of distinct business
domains such as Banking, Insurance, Online-entertainment, Gaming, and
E-Commerce. Existing distribution-based models and basic recency-frequency-monetary
(RFM) models are limited in their ability to handle a wide variety of
input features. Moreover, more advanced deep learning approaches can be
superfluous and add undesirable complexity in certain application
areas. We therefore propose a system that is both effective and
comprehensive, yet simple and interpretable. With that in mind,
we develop a meta-learning-based stacked regression model which combines the
predictions from bagging and boosting models, each of which is found to perform well
individually. Empirical tests have been carried out on an openly available
Online Retail dataset to evaluate various models and show the efficacy of the
proposed approach.
Comment: 11 pages, 7 figures.
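The stacking scheme the abstract describes can be sketched as follows. This is a hedged toy version, not the paper's model: the bagging and boosting base learners are replaced by two ridge regressors with different regularisation, and the data are synthetic; the structural point, a meta-learner fit on out-of-fold base predictions, is the same.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data standing in for engineered customer features and CLV targets.
X = rng.normal(size=(300, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=300)

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Two stand-in base learners (the paper uses bagging and boosting models).
lams = [0.1, 10.0]

# Out-of-fold predictions, so the meta-learner never sees in-sample fits.
folds = np.array_split(np.arange(300), 5)
Z = np.zeros((300, len(lams)))
for idx in folds:
    mask = np.ones(300, dtype=bool)
    mask[idx] = False
    for j, lam in enumerate(lams):
        w = ridge_fit(X[mask], y[mask], lam)
        Z[idx, j] = X[idx] @ w

# Meta-learner: a linear regression stacked on the base predictions.
w_meta, *_ = np.linalg.lstsq(Z, y, rcond=None)
stacked = Z @ w_meta
print(round(float(np.mean((stacked - y) ** 2)), 4))
```

Because the meta-learner minimises squared error over linear combinations of the base predictions, the stacked model can never do worse (on the stacking data) than the best single base learner, which is what makes stacking a cheap, interpretable ensemble.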