Unknown Examples & Machine Learning Model Generalization
Over the past decades, researchers and ML practitioners have come up with
better and better ways to build, understand and improve the quality of ML
models, but mostly under the key assumption that the training data is
distributed identically to the testing data. In many real-world applications,
however, some potential training examples are unknown to the modeler, due to
sample selection bias or, more generally, covariate shift, i.e., a distribution
shift between the training and deployment stage. The resulting discrepancy
between training and testing distributions leads to poor generalization
performance of the ML model and hence biased predictions. We provide novel
algorithms that estimate the number and properties of these unknown training
examples---unknown unknowns. This information can then be used to correct the
training set, prior to seeing any test data. The key idea is to combine
species-estimation techniques with data-driven methods for estimating the
feature values for the unknown unknowns. Experiments on a variety of ML models
and datasets indicate that taking the unknown examples into account can yield a
more robust ML model that generalizes better.
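
To make the species-estimation idea concrete, here is a minimal sketch using one standard choice, the Chao1 estimator (an assumption on our part: the abstract does not name a specific estimator), which estimates the number of never-observed classes from the counts of rarely observed ones:

```python
from collections import Counter

def chao1_unseen_estimate(observations):
    """Estimate the number of unseen classes via the Chao1 formula:
    S_unseen ~= f1^2 / (2 * f2), where f1 and f2 count the classes
    observed exactly once and exactly twice. Illustrative only; the
    paper's actual estimator may differ."""
    counts = Counter(observations)                  # class -> frequency
    f1 = sum(1 for c in counts.values() if c == 1)  # singletons
    f2 = sum(1 for c in counts.values() if c == 2)  # doubletons
    if f2 == 0:
        return f1 * (f1 - 1) / 2.0  # standard fallback when no doubletons
    return f1 * f1 / (2.0 * f2)

# Many rare classes in the sample suggest a sizable unseen population:
sample = ["a", "a", "b", "c", "d", "d", "e", "f"]
print(chao1_unseen_estimate(sample))  # f1 = 4, f2 = 2 -> 4.0 unseen classes
```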
Uncertainty about Uncertainty: Optimal Adaptive Algorithms for Estimating Mixtures of Unknown Coins
Given a mixture between two populations of coins, "positive" coins that each
have -- unknown and potentially different -- bias $\geq \tfrac{1}{2}+\Delta$ and
"negative" coins with bias $\leq \tfrac{1}{2}-\Delta$, we consider the task of
estimating the fraction $\rho$ of positive coins to within additive error
$\epsilon$. We achieve an upper and lower bound of
$\Theta\left(\frac{\rho}{\epsilon^2 \Delta^2} \log \frac{1}{\delta}\right)$ samples for a
$1-\delta$ probability of success, where crucially, our lower bound applies to
all fully-adaptive algorithms. Thus, our sample complexity bounds have tight
dependence for every relevant problem parameter. A crucial component of our
lower bound proof is a decomposition lemma (see Lemmas 17 and 18) showing how
to assemble partially-adaptive bounds into a fully-adaptive bound, which may be
of independent interest: though we invoke it for the special case of Bernoulli
random variables (coins), it applies to general distributions. We present
simulation results to demonstrate the practical efficacy of our approach for
realistic problem parameters for crowdsourcing applications, focusing on the
"rare events" regime where is small. The fine-grained adaptive flavor of
both our algorithm and lower bound contrasts with much previous work in
distributional testing and learning.

Comment: Full paper updated to reflect the new result in our SODA 2021
proceedings version: our new sample complexity lower bound includes
dependence on the failure probability, and hence is simultaneously tight in
all of the problem parameters up to a constant multiplicative factor.
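
For intuition about the estimation task, the sketch below implements a deliberately naive, non-adaptive baseline: flip every sampled coin a fixed number of times and classify it by majority vote. This is not the paper's fully-adaptive algorithm, and the mixture parameters and flip budget are illustrative assumptions; the point of the paper is precisely that deciding adaptively how many flips to invest per coin does better, especially when $\rho$ is small.

```python
import random

def estimate_positive_fraction(draw_coin, num_coins, flips_per_coin):
    """Naive non-adaptive estimator: flip each sampled coin a fixed
    number of times and call it positive if a majority of flips land
    heads. A fully-adaptive algorithm would instead decide, coin by
    coin, how many more flips to invest."""
    positives = 0
    for _ in range(num_coins):
        bias = draw_coin()  # sample a coin's hidden heads probability
        heads = sum(random.random() < bias for _ in range(flips_per_coin))
        if heads * 2 > flips_per_coin:  # strict majority of heads
            positives += 1
    return positives / num_coins

# Illustrative mixture (all values assumed): rho = 0.1 positive coins
# with bias 1/2 + Delta, Delta = 0.2; negatives have bias 1/2 - Delta.
rho, delta = 0.1, 0.2
draw = lambda: 0.5 + delta if random.random() < rho else 0.5 - delta
print(estimate_positive_fraction(draw, num_coins=2000, flips_per_coin=25))
```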