
    Personalized Purchase Prediction of Market Baskets with Wasserstein-Based Sequence Matching

    Personalization in marketing aims to improve the shopping experience of customers by tailoring services to individuals. In order to achieve this, businesses must be able to make personalized predictions regarding the next purchase. That is, one must forecast the exact list of items that will comprise the next purchase, i.e., the so-called market basket. Despite its relevance to firm operations, this problem has received surprisingly little attention in prior research, largely due to its inherent complexity. In fact, state-of-the-art approaches are limited to intuitive decision rules for pattern extraction. However, the simplicity of the pre-coded rules impedes performance, since decision rules operate in an autoregressive fashion: the rules can only make inferences from the past purchases of a single customer, without taking into account the knowledge transfer that takes place between customers. In contrast, our research overcomes the limitations of pre-set rules by contributing a novel predictor of market baskets from sequential purchase histories: our predictions are based on similarity matching in order to identify similar purchase habits across the complete shopping histories of all customers. Our contributions are as follows: (1) We propose similarity matching based on subsequential dynamic time warping (SDTW) as a novel predictor of market baskets, which allows us to effectively identify cross-customer patterns. (2) We leverage the Wasserstein distance for measuring the similarity among embedded purchase histories. (3) We develop a fast approximation algorithm for computing a lower bound of the Wasserstein distance in our setting. An extensive series of computational experiments demonstrates the effectiveness of our approach: it outperforms state-of-the-art decision rules from the literature by a factor of 4.0 in the accuracy of identifying the exact market basket.
    Comment: Accepted for oral presentation at the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2019).
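
    To make the Wasserstein-based similarity concrete, the following Python sketch (not the authors' implementation) computes the 1-Wasserstein distance between two market baskets represented as sets of item embeddings, together with a simple centroid-based lower bound that can cheaply prune candidate matches. The basket contents, embedding dimensions, and the specific bound are illustrative assumptions; the sketch relies on the POT library (pip install pot) and SciPy.

```python
# Minimal sketch (under assumptions, not the paper's code): Wasserstein
# distance between two baskets of item embeddings, plus a cheap lower bound.
import numpy as np
import ot  # Python Optimal Transport (POT)
from scipy.spatial.distance import cdist


def basket_wasserstein(basket_a: np.ndarray, basket_b: np.ndarray) -> float:
    """Exact 1-Wasserstein distance between two baskets.

    Each basket is an (n_items, embedding_dim) array of item embeddings,
    treated as a uniform distribution over its items.
    """
    a = np.full(len(basket_a), 1.0 / len(basket_a))  # uniform item weights
    b = np.full(len(basket_b), 1.0 / len(basket_b))
    cost = cdist(basket_a, basket_b)                 # Euclidean ground cost
    return float(ot.emd2(a, b, cost))                # optimal transport cost


def centroid_lower_bound(basket_a: np.ndarray, basket_b: np.ndarray) -> float:
    """Distance between basket centroids.

    By Jensen's inequality this never exceeds the true 1-Wasserstein
    distance, so it can filter candidates before the exact computation
    (an illustrative bound, not necessarily the one used in the paper).
    """
    return float(np.linalg.norm(basket_a.mean(axis=0) - basket_b.mean(axis=0)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    basket_a = rng.normal(size=(5, 16))   # 5 items, 16-dim embeddings
    basket_b = rng.normal(size=(7, 16))   # 7 items, 16-dim embeddings
    lb = centroid_lower_bound(basket_a, basket_b)
    wd = basket_wasserstein(basket_a, basket_b)
    print(f"lower bound = {lb:.3f} <= Wasserstein = {wd:.3f}")
```

    Because the centroid bound is a guaranteed underestimate, it can be used to discard clearly dissimilar purchase histories before running the more expensive exact transport computation.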

    More Money in Their Pockets: Pragmatism, Politics and Poverty in Alberta

    This paper explains why the ESPC challenges Alberta to adopt the Market Basket Measure (MBM) to set social assistance and minimum wage rates. The MBM is an equitable and practical tool for ensuring that low-income Albertans can have their basic needs met. An executive summary of the report is also available.

    A Product Affinity Segmentation Framework

    Product affinity segmentation discovers links between customers and products in order to reveal cross-selling and promotion opportunities that increase sales and profits. However, conventional approaches face several challenges. The most straightforward approach is to use product-level data for customer segmentation, but it yields less meaningful solutions. Moreover, customer segmentation becomes difficult on massive datasets due to the computational complexity of traditional clustering methods. Market basket analysis, an alternative, may produce association rules too general to be relevant for important segments. In this paper, we propose to partition customers and discover associated products simultaneously by detecting communities in the customer-product bipartite graph using the Louvain algorithm, which has good interpretability in this context. Through post-clustering analysis, we show that this framework generates statistically distinct clusters and identifies associated products relevant to each cluster. Our analysis provides greater insight into customer purchase behavior, potentially informing personalization strategies (e.g., customized product recommendations) and increasing profitability, and our case study of a large U.S. retailer provides useful management insights. Moreover, the graph-based approach processed almost 800,000 sales transactions in 7.5 seconds on a standard PC, demonstrating its computational efficiency and its suitability for big data.
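
    The bipartite community-detection framework lends itself to a short illustration. The following Python sketch, assuming networkx 2.8 or later and a toy list of (customer, product, quantity) transactions rather than the retailer's data, builds a weighted customer-product bipartite graph and partitions it with the Louvain algorithm so that each community contains both customers and the products they tend to buy.

```python
# Minimal sketch (illustrative, not the paper's implementation): Louvain
# community detection on a customer-product bipartite graph.
import networkx as nx

# Hypothetical transaction records: (customer_id, product_id, quantity).
transactions = [
    ("c1", "milk", 3), ("c1", "bread", 2),
    ("c2", "milk", 1), ("c2", "butter", 1),
    ("c3", "beer", 4), ("c3", "chips", 2),
    ("c4", "beer", 2), ("c4", "chips", 1),
]

# Build the weighted customer-product bipartite graph.
G = nx.Graph()
for customer, product, qty in transactions:
    G.add_node(customer, bipartite=0)   # customer side
    G.add_node(product, bipartite=1)    # product side
    if G.has_edge(customer, product):
        G[customer][product]["weight"] += qty
    else:
        G.add_edge(customer, product, weight=qty)

# Louvain community detection: each community mixes customers and the
# products they buy together, which is what makes it interpretable here.
communities = nx.community.louvain_communities(G, weight="weight", seed=42)

for i, members in enumerate(communities):
    customers = sorted(m for m in members if G.nodes[m]["bipartite"] == 0)
    products = sorted(m for m in members if G.nodes[m]["bipartite"] == 1)
    print(f"segment {i}: customers={customers} products={products}")
```

    Detecting communities on the bipartite graph directly, rather than clustering customers first and profiling products afterwards, is what ties each customer segment to its associated products in a single step.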