2 research outputs found

    Unknown Examples & Machine Learning Model Generalization

    Over the past decades, researchers and ML practitioners have developed increasingly effective ways to build, understand, and improve the quality of ML models, but mostly under the key assumption that the training data is distributed identically to the test data. In many real-world applications, however, some potential training examples are unknown to the modeler, due to sample selection bias or, more generally, covariate shift, i.e., a distribution shift between the training and deployment stages. The resulting discrepancy between training and testing distributions leads to poor generalization performance of the ML model and hence biased predictions. We provide novel algorithms that estimate the number and properties of these unknown training examples---unknown unknowns. This information can then be used to correct the training set, prior to seeing any test data. The key idea is to combine species-estimation techniques with data-driven methods for estimating the feature values for the unknown unknowns. Experiments on a variety of ML models and datasets indicate that taking the unknown examples into account can yield a more robust ML model that generalizes better.
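    The abstract does not spell out the estimator it uses, so the sketch below is only a hedged illustration of the species-estimation idea it mentions: treat distinct (discretized) feature patterns as "species" and apply the classic Chao1 richness estimator to gauge how many patterns are likely missing from the training sample. The function name and toy data are assumptions for illustration, not the paper's algorithm.

        # Hypothetical sketch of species estimation for "unknown unknowns": the Chao1
        # estimator uses the counts of singleton and doubleton patterns to lower-bound
        # how many distinct patterns were never observed. Not the paper's algorithm.
        from collections import Counter

        def chao1_unseen_estimate(observations):
            """Return (observed_richness, estimated_unseen) for a sample of hashable items."""
            counts = Counter(observations)
            f1 = sum(1 for c in counts.values() if c == 1)  # patterns seen exactly once
            f2 = sum(1 for c in counts.values() if c == 2)  # patterns seen exactly twice
            observed = len(counts)
            # Bias-corrected Chao1 estimate of the number of unseen patterns.
            unseen = f1 * (f1 - 1) / (2 * (f2 + 1)) if f2 == 0 else f1 * f1 / (2 * f2)
            return observed, unseen

        if __name__ == "__main__":
            # Toy "training set": each tuple stands for a discretized feature vector.
            sample = [("a", 0), ("a", 0), ("b", 1), ("c", 0), ("d", 1), ("d", 1), ("e", 0)]
            obs, unseen = chao1_unseen_estimate(sample)
            print(f"observed patterns: {obs}, estimated unseen patterns: {unseen:.1f}")

    In a pipeline like the one the abstract describes, such an estimate of how many examples are missing would then be combined with a separate, data-driven model of their likely feature values before correcting the training set.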

    Uncertainty about Uncertainty: Optimal Adaptive Algorithms for Estimating Mixtures of Unknown Coins

    Given a mixture between two populations of coins, "positive" coins that each have -- unknown and potentially different -- bias $\geq\frac{1}{2}+\Delta$ and "negative" coins with bias $\leq\frac{1}{2}-\Delta$, we consider the task of estimating the fraction $\rho$ of positive coins to within additive error $\epsilon$. We achieve an upper and lower bound of $\Theta(\frac{\rho}{\epsilon^2\Delta^2}\log\frac{1}{\delta})$ samples for a $1-\delta$ probability of success, where, crucially, our lower bound applies to all fully-adaptive algorithms. Thus, our sample complexity bounds have tight dependence on every relevant problem parameter. A crucial component of our lower bound proof is a decomposition lemma (see Lemmas 17 and 18) showing how to assemble partially-adaptive bounds into a fully-adaptive bound, which may be of independent interest: though we invoke it for the special case of Bernoulli random variables (coins), it applies to general distributions. We present simulation results to demonstrate the practical efficacy of our approach for realistic problem parameters for crowdsourcing applications, focusing on the "rare events" regime where $\rho$ is small. The fine-grained adaptive flavor of both our algorithm and lower bound contrasts with much previous work in distributional testing and learning.
    Comment: Full paper updated to reflect the new result in our SODA 2021 proceedings version: our new sample complexity lower bound includes dependence on the failure probability, and hence is simultaneously tight in all of the problem parameters up to a constant multiplicative factor.
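    As a rough point of reference for the setting above, the sketch below simulates a simple non-adaptive baseline, not the paper's optimal adaptive algorithm: draw coins from the mixture, flip each a fixed number of times chosen from the gap $\Delta$, classify each coin by its empirical bias, and report the fraction classified positive. All names and constants are illustrative assumptions; the paper's algorithm achieves the stated $\Theta(\frac{\rho}{\epsilon^2\Delta^2}\log\frac{1}{\delta})$ sample complexity by adapting the number of flips per coin, which matters most in the rare-events regime where $\rho$ is small.

        # Hypothetical non-adaptive baseline (not the paper's algorithm): estimate the
        # fraction rho of "positive" coins (bias >= 1/2 + Delta) in a mixture of coins
        # whose biases are either >= 1/2 + Delta or <= 1/2 - Delta.
        import math
        import random

        def estimate_rho(draw_bias, num_coins, delta_gap, fail_prob=0.05):
            """draw_bias(): samples a coin's true bias from the mixture.
            num_coins: coins drawn; controls the additive error of the estimate of rho.
            delta_gap: the gap Delta separating positive from negative coins."""
            # Hoeffding bound: enough flips so each coin is misclassified w.p. << fail_prob.
            flips = math.ceil(math.log(2 * num_coins / fail_prob) / (2 * delta_gap ** 2))
            positives = 0
            for _ in range(num_coins):
                bias = draw_bias()
                heads = sum(random.random() < bias for _ in range(flips))
                if heads / flips > 0.5:
                    positives += 1
            return positives / num_coins

        if __name__ == "__main__":
            rho_true, delta = 0.1, 0.2  # rare positives, bias gap of 0.2
            mixture = lambda: 0.5 + delta if random.random() < rho_true else 0.5 - delta
            print("estimated rho:", estimate_rho(mixture, num_coins=2000, delta_gap=delta))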