    Feature Selection Methods for Uplift Modeling

    Uplift modeling is a predictive modeling technique that estimates the user-level incremental effect of a treatment using machine learning models. It is often used for targeting promotions and advertisements, as well as for the personalization of product offerings. In these applications, there are often hundreds of features available to build such models. Keeping all the features in a model can be costly and inefficient. Feature selection is an essential step in the modeling process for multiple reasons: improving the estimation accuracy by eliminating irrelevant features, accelerating model training and prediction speed, reducing the monitoring and maintenance workload for feature data pipelines, and providing better model interpretation and diagnostics capability. However, feature selection methods for uplift modeling have rarely been discussed in the literature. Although there are various feature selection methods for standard machine learning models, we will demonstrate that those methods are sub-optimal for solving the feature selection problem for uplift modeling. To address this problem, we introduce a set of feature selection methods designed specifically for uplift modeling, including both filter methods and embedded methods. To evaluate the effectiveness of the proposed feature selection methods, we use different uplift models and measure the accuracy of each model with a different number of selected features. We use both synthetic and real data to conduct these experiments. We also implemented the proposed filter methods in an open-source Python package (CausalML).
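A filter method of the kind the abstract describes scores each feature by how strongly it modulates the treatment effect, rather than by how well it predicts the outcome. The sketch below is an illustrative, assumed formulation (not necessarily the paper's exact score): for each feature, fit a least-squares model of the outcome on the treatment indicator, the feature, and their interaction, and rank features by the magnitude of the interaction term.

```python
import numpy as np

def uplift_filter_scores(X, treatment, y):
    """Illustrative filter-style feature score for uplift modeling.

    For each feature x_j, fit y ~ 1 + t + x_j + t*x_j by least squares
    and use the absolute interaction coefficient, scaled by the feature's
    standard deviation, as the importance score. Features that change the
    treatment effect score high; purely prognostic features score low.
    """
    t = treatment.astype(float)
    scores = []
    for j in range(X.shape[1]):
        x = X[:, j]
        # Design matrix: intercept, treatment, feature, interaction.
        A = np.column_stack([np.ones_like(x), t, x, t * x])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        scores.append(abs(beta[3]) * x.std())
    return np.array(scores)

# Synthetic check: feature 0 drives the uplift, feature 1 is only prognostic.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
t = rng.integers(0, 2, size=n)
y = 0.5 * X[:, 0] * t + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=n)
scores = uplift_filter_scores(X, t, y)
```

On this synthetic data, the interaction-driven feature 0 receives a much higher score than the purely prognostic feature 1, which a standard (non-uplift) feature importance would not distinguish as sharply. CausalML's own filter implementations differ in detail; this is a minimal stand-in for the idea.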

    Normal Causes for Normal Effects

    Halpern and Hitchcock have used normality considerations in order to provide an analysis of actual causation. Their methodology is that of taking a set of causal scenarios and showing how their account of actual causation accords with typical judgments about those scenarios. Accordingly, Halpern and Hitchcock have recently demonstrated that their theory deals with an impressive number of problem cases discussed in the literature. However, in this paper I first show that the way in which they rule out certain cases of bogus prevention leaves their account susceptible to counterexamples. I then sketch an alternative approach to prevention scenarios which draws on the observation that, in addition to abnormal causes, people tend to focus on abnormal effects.

    Mutual Manipulability and Causal Inbetweenness

    Carl Craver's mutual manipulability criterion aims to pick out all and only those components of a mechanism that are constitutively relevant with respect to a given phenomenon. In devising his criterion, Craver has made heavy use of the notion of an ideal intervention, which is a tool for illuminating causal concepts in causal models. The problem is that typical mechanistic models contain non-causal relations in addition to causal ones, and so the question arises as to the applicability of ideal interventions. In this paper, I first show why top-down interventions in mechanistic models are likely to violate the standard conditions for ideal interventions under two familiar metaphysics of mechanistic models: those based on supervenience and realization. Drawing from recent developments in the causal exclusion literature, I then argue for the appropriateness of an extended notion of an ideal intervention. Finally, I show why adopting such an extended notion leads to the surprising consequence that an important subset of mechanistic interlevel relations come out as causal. I call the resulting metaphysical account 'causal inbetweenness'.