Private Learning Implies Online Learning: An Efficient Reduction
We study the relationship between the notions of differentially private
learning and online learning in games. Several recent works have shown that
differentially private learning implies online learning, but an open problem of
Neel, Roth, and Wu \cite{NeelAaronRoth2018} asks whether this implication is
{\it efficient}. Specifically, does an efficient differentially private learner
imply an efficient online learner? In this paper we resolve this open question
in the context of pure differential privacy. We derive an efficient black-box
reduction from differentially private learning to online learning from expert
advice.
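Since the target of the reduction is the classical setting of prediction with expert advice, a minimal sketch of the standard Hedge (multiplicative weights) learner may help fix ideas. This is the textbook experts algorithm, not the paper's reduction; the function name and interface are illustrative assumptions.

```python
import math

def hedge(experts, losses, eta=0.5):
    """Classical Hedge (multiplicative weights) over `experts` experts.

    losses: iterable of per-round loss vectors, each of length `experts`,
            with entries in [0, 1]
    eta:    learning-rate parameter
    Returns the algorithm's cumulative expected loss and the final weights.
    """
    weights = [1.0] * experts
    total_loss = 0.0
    for loss in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        # expected loss of sampling an expert according to `probs`
        total_loss += sum(p * l for p, l in zip(probs, loss))
        # exponentially reweight each expert by its incurred loss
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, loss)]
    return total_loss, weights
```

With losses bounded in [0, 1], Hedge guarantees regret O(sqrt(T log N)) against the best expert in hindsight, which is the kind of online guarantee the reduction is meant to achieve efficiently.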
Generalization Error Bounds via mth Central Moments of the Information Density
We present a general approach to deriving bounds on the generalization error
of randomized learning algorithms. Our approach can be used to obtain bounds on
the average generalization error as well as bounds on its tail probabilities,
both for the case in which a new hypothesis is randomly generated every time
the algorithm is used - as often assumed in the probably approximately correct
(PAC)-Bayesian literature - and in the single-draw case, where the hypothesis
is extracted only once. For this last scenario, we present a novel bound that
is explicit in the central moments of the information density. The bound
reveals that the higher the order of the information density moment that can be
controlled, the milder the dependence of the generalization bound on the
desired confidence level. Furthermore, we use tools from binary hypothesis
testing to derive a second bound, which is explicit in the tail of the
information density. This bound confirms that a fast decay of the tail of the
information density yields a more favorable dependence of the generalization
bound on the confidence level.Comment: ISIT 2020. Corrected Corollary 7 and the discussion in section II-
Validation of massively-parallel adaptive testing using dynamic control matching
A/B testing is a widely-used paradigm within marketing optimization because
it promises identification of causal effects and because it is implemented out
of the box in most messaging delivery software platforms. Modern businesses,
however, often run many A/B/n tests at the same time and in parallel, and
package many content variations into the same messages, not all of which are
part of an explicit test. Whether as the result of many teams testing at the
same time, or as part of a more sophisticated reinforcement learning (RL)
approach that continuously adapts tests and test condition assignment based on
previous results, dynamic parallel testing cannot be evaluated the same way
traditional A/B tests are evaluated. This paper presents a method for
disentangling the causal effects of the various tests under conditions of
continuous test adaptation, using a matched-synthetic control group that adapts
alongside the tests.
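As an illustrative sketch only (not the paper's implementation), a matched control group of this kind can be maintained by pairing each treated unit with its nearest still-unmatched control unit on pre-treatment covariates, re-running the matching as assignment shifts. The function below shows the greedy nearest-neighbor core; names and interface are assumptions.

```python
def match_controls(treated, pool):
    """Greedy nearest-neighbor matching on pre-treatment covariates.

    treated: list of covariate vectors for treated units
    pool:    list of covariate vectors for candidate control units
    Returns, for each treated unit, the index of its matched control in
    `pool`; each control is used at most once (matching without replacement).
    """
    available = set(range(len(pool)))
    matches = []
    for x in treated:
        # squared Euclidean distance to each still-available control
        best = min(
            available,
            key=lambda j: sum((a - b) ** 2 for a, b in zip(x, pool[j])),
        )
        matches.append(best)
        available.remove(best)
    return matches
```

Under continuous adaptation, the matching step would be re-invoked whenever test-condition assignment changes, so the synthetic control pool tracks the evolving treated population.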
- …