Feature Selection Methods for Uplift Modeling
Uplift modeling is a predictive modeling technique that estimates the
user-level incremental effect of a treatment using machine learning models. It
is often used for targeting promotions and advertisements, as well as for the
personalization of product offerings. In these applications, there are often
hundreds of features available to build such models. Keeping all the features
in a model can be costly and inefficient. Feature selection is an essential
step in the modeling process for multiple reasons: improving the estimation
accuracy by eliminating irrelevant features, accelerating model training and
prediction speed, reducing the monitoring and maintenance workload for feature
data pipelines, and providing better model interpretation and diagnostics
capability. However, feature selection methods for uplift modeling have been
rarely discussed in the literature. Although there are various feature
selection methods for standard machine learning models, we will demonstrate
that those methods are sub-optimal for solving the feature selection problem
for uplift modeling. To address this problem, we introduce a set of feature
selection methods designed specifically for uplift modeling, including both
filter methods and embedded methods. To evaluate the effectiveness of the
proposed feature selection methods, we use different uplift models and measure
the accuracy of each model with different numbers of selected features. We use
both synthetic and real data to conduct these experiments. We have also implemented
the proposed filter methods in an open-source Python package (CausalML).
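To illustrate the kind of filter method the abstract describes, the sketch below scores each feature by how strongly the treatment-control outcome gap varies across its value range: each feature is quantile-binned, the KL divergence between treated and control outcome rates is computed per bin, and bins are averaged weighted by population. The function name, binning scheme, and weighting here are illustrative assumptions, not the exact implementation in the paper or in CausalML.

```python
import numpy as np

def kl_filter_scores(X, treatment, y, n_bins=10, eps=1e-6):
    """Rank features for uplift modeling with a simple filter method:
    bin each feature, compare treated vs. control outcome rates per bin
    via KL divergence, and sum over bins weighted by bin population.
    Illustrative sketch only."""
    n, d = X.shape
    scores = np.zeros(d)
    for j in range(d):
        # quantile-based bin edges for feature j
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
        # assign each sample to a bin using the interior edges
        idx = np.clip(np.searchsorted(edges[1:-1], X[:, j]), 0, n_bins - 1)
        for b in range(n_bins):
            in_bin = idx == b
            t_mask = in_bin & (treatment == 1)
            c_mask = in_bin & (treatment == 0)
            if t_mask.sum() == 0 or c_mask.sum() == 0:
                continue  # cannot compare rates in an unpopulated arm
            p = np.clip(y[t_mask].mean(), eps, 1 - eps)  # treated outcome rate
            q = np.clip(y[c_mask].mean(), eps, 1 - eps)  # control outcome rate
            # KL divergence between two Bernoulli distributions
            kl = p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
            scores[j] += in_bin.mean() * kl
    return scores
```

A feature whose values modulate the treatment effect (e.g., uplift only occurs where the feature is positive) receives a high score, while a feature unrelated to the treatment effect scores near zero, which is exactly the ranking behavior a filter method for uplift modeling needs.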
Normal Causes for Normal Effects
Halpern and Hitchcock have used normality considerations to provide an analysis of actual causation. Their methodology is to take a set of causal scenarios and show how their account of actual causation accords with typical judgments about those scenarios. Following this methodology, Halpern and Hitchcock have recently demonstrated that their theory handles an impressive number of problem cases discussed in the literature. However, in this paper I first show that the way in which they rule out certain cases of bogus prevention leaves their account susceptible to counterexamples. I then sketch an alternative approach to prevention scenarios which draws on the observation that, in addition to abnormal causes, people tend to focus on abnormal effects.
Mutual Manipulability and Causal Inbetweenness
Carl Craver's mutual manipulability criterion aims to pick out all and only those components of a mechanism that are constitutively relevant with respect to a given phenomenon. In devising his criterion, Craver has made heavy use of the notion of an ideal intervention, which is a tool for illuminating causal concepts in causal models. The problem is that typical mechanistic models contain non-causal relations in addition to causal ones, and so the question arises as to whether ideal interventions are applicable to them. In this paper, I first show why top-down interventions in mechanistic models are likely to violate the standard conditions for ideal interventions under two familiar metaphysics of mechanistic models: those based on supervenience and realization. Drawing from recent developments in the causal exclusion literature, I then argue for the appropriateness of an extended notion of an ideal intervention. Finally, I show why adopting such an extended notion leads to the surprising consequence that an important subset of mechanistic interlevel relations come out as causal. I call the resulting metaphysical account 'causal inbetweenness'.
Can Behavioral Experts Predict Outcome Heterogeneity?
Recently, Milkman et al. (2021) reported that heterogeneity of outcomes is extremely difficult to predict in advance and that behavioral experts, despite their educational background or their extensive applied research experience, might not be better at such predictions than non-experts. Such findings are rather concerning and deserve more attention. In the current project we present three studies aimed at assessing the quality of human predictions with respect to the outcome heterogeneity of behavioral interventions. We borrow methods from anthropology and cognitive psychology to assess both the consensus and the accuracy of such predictions, as well as to evaluate the potential role of behavioral expertise. Our results are more optimistic than previous findings and reveal that experts' predictions are significantly better than chance. These findings have implications for applied behavioral research and further advance our theoretical understanding of expertise.
Familiarity plays a unique role in increasing preferences for battery electric vehicle adoption
Battery electric vehicles (BEVs) play an important role in efforts to reduce carbon emissions, but widespread adoption is hindered by people's perceptions of BEVs. Here we examine the role of familiarity in shaping preferences for BEVs. Using a US-based survey, we measured people's familiarity with BEVs, their BEV beliefs, belief uncertainty, and perceived barriers, and examined how these cognitive factors influence preferences. We first find that familiarity increases BEV preferences independently of its effect through other factors. Second, exploratory mediation analyses find that familiarity also indirectly increases BEV preferences by increasing positive BEV beliefs. Third, although familiarity reduces belief uncertainty, the influence of uncertainty on preferences depends on belief valence. Taken together, these results suggest that familiarity plays a unique role in improving people's perceptions of and attitudes towards BEVs. We situate our findings within the broader cognitive science literature and highlight a familiarity-targeted intervention aimed at promoting more widespread BEV adoption.