Defection Detection: Measuring and Understanding the Predictive Accuracy of Customer Churn Models

Abstract

This article provides a descriptive analysis of how methodological factors contribute to the accuracy of customer churn predictive models. The study is based on a tournament in which both academics and practitioners downloaded data from a publicly available Web site, estimated a model, and made predictions on two validation databases. The results suggest several important findings. First, methods do matter. The differences observed in predictive accuracy across submissions could change the profitability of a churn management campaign by hundreds of thousands of dollars. Second, models have staying power. They suffer very little decrease in performance if they are used to predict churn for a database compiled three months after the calibration data. Third, researchers use a variety of modeling "approaches," characterized by variables such as estimation technique, variable selection procedure, number of variables included, and time allocated to steps in the model-building process. The authors find important differences in performance among these approaches and discuss implications for both researchers and practitioners.

The authors express their gratitude to Sanyin Siang (Managing Director, Teradata Center for Customer Relationship Management at the Fuqua School of Business, Duke University); research assistants Sarwat Husain, Michael Kurima, and Emilio del Rio; and an anonymous wireless telephone carrier that provided the data for this study. The authors also thank participants in the Marketing Workshop at the Tuck School of Business, Dartmouth College, for their comments, and the two anonymous JMR reviewers for their constructive suggestions. Finally, the authors express their appreciation to the late Dick Wittink, former editor, for his invaluable insights and guidance.
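To make the "hundreds of thousands of dollars" claim concrete, the sketch below shows one simple way that differences in targeting accuracy can translate into campaign profit. This is not the authors' profit framework; the function, parameter names, and every numeric value are hypothetical assumptions chosen only to illustrate the order of magnitude involved.

```python
# Illustrative sketch only: how a difference in churn-model lift can change
# the expected profit of a retention campaign. Not the authors' formula;
# all parameter values are hypothetical.

def campaign_profit(n_targeted, churn_rate_in_target, success_rate,
                    customer_value, incentive_cost, contact_cost):
    """Expected profit of contacting n_targeted customers.

    churn_rate_in_target: fraction of targeted customers who would churn
    success_rate: fraction of contacted would-be churners who are retained
    customer_value: value of retaining one customer
    incentive_cost: incentive paid per retained customer (simplification)
    contact_cost: cost of contacting one customer
    """
    saved = n_targeted * churn_rate_in_target * success_rate
    revenue = saved * customer_value
    costs = n_targeted * contact_cost + saved * incentive_cost
    return revenue - costs


if __name__ == "__main__":
    # Two hypothetical models that differ only in top-decile lift, i.e. in
    # how concentrated true churners are among the customers they target.
    base_churn = 0.02      # overall churn rate (hypothetical)
    n_target = 50_000      # customers contacted in the campaign

    better = campaign_profit(n_target, churn_rate_in_target=base_churn * 3.0,
                             success_rate=0.3, customer_value=500,
                             incentive_cost=20, contact_cost=1)
    weaker = campaign_profit(n_target, churn_rate_in_target=base_churn * 2.0,
                             success_rate=0.3, customer_value=500,
                             incentive_cost=20, contact_cost=1)
    print(f"better model: ${better:,.0f}, weaker model: ${weaker:,.0f}, "
          f"difference: ${better - weaker:,.0f}")
```

Under these assumed numbers, the model with the higher lift earns roughly $144,000 more from the same campaign, which is consistent in scale with the profitability differences the abstract describes.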
