We propose strategies to estimate and make inference on key features of
heterogeneous effects in randomized experiments. These key features include
best linear predictors of the effects using machine learning proxies, average
effects sorted by impact groups, and average characteristics of most and least
impacted units. The approach is valid in high-dimensional settings, where the
effects are proxied by machine learning methods. We post-process these proxies
into estimates of the key features. Our approach is generic: it can be used
in conjunction with penalized methods, deep and shallow neural networks,
canonical and new random forests, boosted trees, and ensemble methods. It does
not rely on strong assumptions. In particular, we do not require conditions for
consistency of the machine learning methods. Estimation and inference rely on
repeated data splitting to avoid overfitting and achieve validity. For
inference, we take medians of p-values and medians of confidence intervals,
resulting from many different data splits, and then adjust their nominal level
to guarantee uniform validity. This variational inference method is shown to be
uniformly valid and quantifies the uncertainty coming from both parameter
estimation and data splitting. We illustrate the use of the approach with two
randomized experiments in development on the effects of microcredit and nudges
to stimulate immunization demand.

Comment: 53 pages, 6 figures, 15 tables
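The aggregation step described above, taking medians of p-values and confidence bounds across many data splits and then adjusting the nominal level, can be sketched as follows. This is a minimal illustration, assuming the level adjustment takes the form of doubling the median p-value (equivalently, building each per-split interval at level 1 - alpha/2); function names are illustrative, not from the paper:

```python
from statistics import median

def adjusted_median_pvalue(split_pvalues):
    """Aggregate p-values from repeated data splits.

    Doubling the median p-value is the nominal-level adjustment
    that accounts for the extra randomness of sample splitting.
    """
    return min(1.0, 2.0 * median(split_pvalues))

def adjusted_median_ci(split_lowers, split_uppers):
    """Take medians of the lower and upper confidence bounds
    across splits.

    To retain coverage 1 - alpha after taking medians, each
    per-split interval should be built at the tighter level
    1 - alpha/2.
    """
    return median(split_lowers), median(split_uppers)

# Example with p-values from 5 hypothetical data splits
pvals = [0.012, 0.020, 0.031, 0.008, 0.025]
print(adjusted_median_pvalue(pvals))  # 2 * median = 0.04
```

The adjustment quantifies uncertainty from both parameter estimation within each split and the choice of split itself, which is the sense in which the resulting inference is uniformly valid.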