False Discovery Rate Controlled Heterogeneous Treatment Effect Detection for Online Controlled Experiments
Online controlled experiments (a.k.a. A/B tests) have become the standard tool
for data-driven decisions about feature changes and product launches at many
Internet companies. However, it remains a great challenge to systematically
measure how every code or feature change impacts millions of users with great
heterogeneity (e.g., countries, ages, devices). The A/B testing framework most
commonly used in industry is based on the Average Treatment Effect (ATE),
which cannot detect how the treatment effect varies across users with
different characteristics. In this paper, we propose statistical methods that
systematically and accurately identify the Heterogeneous Treatment Effect
(HTE) of any user cohort of interest (e.g., mobile device type, country), and
determine which factors (e.g., age, gender) contribute to the heterogeneity of
the treatment effect in an A/B test. Applying these methods to both simulated
and real-world experimentation data, we show that they perform robustly with
the False Discovery Rate (FDR) controlled at a low level while also providing
useful insights into the heterogeneity of the identified user groups. We have
deployed a toolkit based on these methods and have used it to measure the
Heterogeneous Treatment Effect of many A/B tests at Snap.
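The paper's methods are more involved, but the core idea of controlling the FDR across many cohort-level treatment effect tests can be sketched with the standard Benjamini-Hochberg procedure; the cohort names, effect sizes, and sample sizes below are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical user cohorts; only "mobile" has a true treatment effect.
cohorts = ["web", "ios", "android", "tablet", "mobile"]
true_effect = {c: 0.0 for c in cohorts}
true_effect["mobile"] = 0.5

# One p-value per cohort from a two-sample t-test on the metric.
p_values = {}
for c in cohorts:
    control = rng.normal(0.0, 1.0, 2000)
    treated = rng.normal(true_effect[c], 1.0, 2000)
    p_values[c] = stats.ttest_ind(treated, control).pvalue

# Benjamini-Hochberg step-up procedure at FDR level q: reject the k
# smallest p-values, where k is the largest rank with p_(k) <= q * k / m.
q = 0.05
ranked = sorted(p_values.items(), key=lambda kv: kv[1])
m = len(ranked)
k_max = max((k for k, (_, p) in enumerate(ranked, 1) if p <= q * k / m),
            default=0)
discoveries = [c for c, _ in ranked[:k_max]]
print(discoveries)  # the "mobile" cohort is detected
```

Testing each cohort separately at level q without this correction would inflate the fraction of false positives as the number of cohorts grows.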
Interpretable Subgroup Discovery in Treatment Effect Estimation with Application to Opioid Prescribing Guidelines
The dearth of prescribing guidelines for physicians is one key driver of the
current opioid epidemic in the United States. In this work, we analyze medical
and pharmaceutical claims data to draw insights on characteristics of patients
who are more prone to adverse outcomes after an initial synthetic opioid
prescription. Toward this end, we propose a generative model that allows
discovery from observational data of subgroups that demonstrate an enhanced or
diminished causal effect due to treatment. Our approach models these
sub-populations as a mixture distribution, using sparsity to enhance
interpretability, while jointly learning nonlinear predictors of the potential
outcomes to better adjust for confounding. The approach leads to
human-interpretable insights on the discovered subgroups, improving its
practical utility for decision support.
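The paper's generative mixture model is more sophisticated, but the underlying task (estimating how a causal effect varies across subgroups) can be illustrated with a simple T-learner on simulated data. This is a substitute technique, not the authors' method, and all features and effect sizes below are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 4000
# Hypothetical patient features: scaled age and a comorbidity flag.
age = rng.uniform(0.0, 1.0, n)
comorbid = rng.integers(0, 2, n)
X = np.column_stack([age, comorbid])
t = rng.integers(0, 2, n)  # randomized treatment assignment (simulation)

# Ground truth of the simulation: treatment is harmful only for the
# comorbid subgroup.
tau = -1.0 * comorbid
y = 0.5 * age + t * tau + rng.normal(0.0, 0.1, n)

# T-learner: fit separate outcome models for treated and control arms,
# then estimate the per-patient effect as the difference of predictions.
m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
cate = m1.predict(X) - m0.predict(X)

# Simple subgroup readout: average estimated effect by comorbidity status.
print(cate[comorbid == 1].mean())  # roughly -1 in this simulation
print(cate[comorbid == 0].mean())  # roughly 0 in this simulation
```

The paper instead models the subgroups directly as a sparse mixture distribution, which yields interpretable group definitions rather than a per-patient effect estimate that must be summarized afterwards.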
Online Model Evaluation in a Large-Scale Computational Advertising Platform
Online media provides opportunities for marketers through which they can
deliver effective brand messages to a wide range of audiences. Advertising
technology platforms enable advertisers to reach their target audience by
delivering ad impressions to online users in real time. In order to identify
the best marketing message for a user and to purchase impressions at the right
price, we rely heavily on bid prediction and optimization models. Even though
bid prediction models are well studied in the literature, the equally
important subject of model evaluation is usually overlooked. Effective and
reliable evaluation of an online bidding model is crucial for making faster
model improvements as well as for utilizing the marketing budgets more
efficiently. In this paper, we present an experimentation framework for bid
prediction models where our focus is on the practical aspects of model
evaluation. Specifically, we outline the unique challenges we encounter in our
platform due to a variety of factors such as heterogeneous goal definitions,
varying budget requirements across different campaigns, high seasonality and
the auction-based environment for inventory purchasing. Then, we introduce
return on investment (ROI) as a unified model performance (i.e., success)
metric and explain its merits over more traditional metrics such as
click-through rate (CTR) or conversion rate (CVR). Most importantly, we discuss
commonly used evaluation and metric summarization approaches in detail and
propose a more accurate method for online evaluation of new experimental models
against the baseline. Our meta-analysis-based approach addresses various
shortcomings of other methods and yields statistically robust conclusions that
allow us to conclude experiments more quickly and reliably. We demonstrate the
effectiveness of our evaluation strategy on real campaign data through
experiments.

Comment: Accepted to ICDM201
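As a minimal sketch of the meta-analysis idea (not the paper's exact method), a fixed-effect, inverse-variance-weighted pooling of per-campaign lift estimates looks like this; the lift values and standard errors are made up:

```python
import numpy as np

# Hypothetical per-campaign ROI lifts (experiment minus baseline) and
# their standard errors; campaigns differ widely in budget and volume.
lifts = np.array([0.04, 0.10, -0.02, 0.06, 0.03])
ses = np.array([0.02, 0.08, 0.05, 0.03, 0.01])

# Fixed-effect meta-analysis: inverse-variance weighting pools the
# per-campaign estimates so that noisy campaigns do not dominate.
w = 1.0 / ses**2
pooled = np.sum(w * lifts) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
z = pooled / pooled_se
print(round(pooled, 4), round(pooled_se, 4), round(z, 2))  # → 0.0335 0.0084 3.98
```

Pooling across campaigns in this way gives a single statistically grounded verdict on the experimental model, rather than a collection of per-campaign comparisons of mixed reliability.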