Understanding the effect of a particular treatment or policy is relevant to
many areas of interest, ranging from political economy and marketing to
health care and personalized treatment studies. In this paper, we develop a
non-parametric, model-free test for detecting treatment effects over
time that extends widely used Synthetic Control tests. The test is built on
counterfactual predictions arising from many learning algorithms. In the
Neyman-Rubin potential outcome framework with possible carry-over effects, we
show that the proposed test is asymptotically consistent for stationary,
beta-mixing processes. We do not assume that the class of learners
necessarily captures the correct model. We also discuss estimates of the
average treatment effect,
and we provide regret bounds on the predictive performance. To the best of our
knowledge, these are the first results that allow, for example, any random
forest to be used for provably valid statistical inference in the Synthetic
Control setting. In experiments, we show that our Synthetic Learner is
substantially more powerful than classical methods based on Synthetic Control
or Difference-in-Differences, especially in the presence of non-linear outcome
models.
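As a rough illustration of the counterfactual-prediction idea described above, the sketch below fits a learner on pre-treatment data to predict a treated unit's outcome from control units, then compares post-treatment prediction errors against the pre-treatment error distribution. This is a hypothetical toy, not the paper's actual test statistic or theory: the least-squares learner, the simulated data, and the placebo-style p-value are all illustrative stand-ins (any learner, e.g. a random forest, could be plugged in for the prediction step).

```python
# Toy sketch (NOT the paper's procedure): predict the treated unit's
# counterfactual from control units, then flag a treatment effect when
# post-treatment residuals are large relative to pre-treatment residuals.
import numpy as np

rng = np.random.default_rng(0)
T0, T1, J = 80, 20, 5                      # pre-periods, post-periods, controls

X = rng.normal(size=(T0 + T1, J))          # control-unit outcomes over time
w = rng.uniform(0.2, 1.0, J)               # hidden relation to treated unit
y = X @ w + 0.1 * rng.normal(size=T0 + T1) # treated unit's observed outcome
y[T0:] += 2.0                              # treatment effect after time T0

# Stand-in learner: least squares on pre-treatment data; a random forest or
# any other learner could take this role.
coef, *_ = np.linalg.lstsq(X[:T0], y[:T0], rcond=None)

resid_pre = y[:T0] - X[:T0] @ coef         # pre-treatment prediction errors
resid_post = y[T0:] - X[T0:] @ coef        # post-treatment prediction errors

# Placebo-style p-value: how often pre-treatment errors are as extreme as
# the mean post-treatment error.
stat = resid_post.mean()
p_value = np.mean(np.abs(resid_pre) >= abs(stat))
print(stat, p_value)
```

With the injected effect of 2.0, the mean post-treatment residual is far outside the pre-treatment error distribution, so the placebo p-value is essentially zero.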