
    Conditional Linear Regression

    Work in machine learning and statistics commonly focuses on building models that capture the vast majority of the data, possibly ignoring a segment of the population as outliers. However, a good, simple model may not exist for the full distribution, so we instead seek a small subset on which such a model does exist. We give a computationally efficient algorithm, with theoretical analysis, for the conditional linear regression task: the joint task of identifying a significant portion of the data distribution, described by a k-DNF, together with a linear predictor on that portion achieving small loss. In contrast to work in robust statistics on small subsets, our loss bounds do not depend on the density of the portion we fit, and compared to previous work on conditional linear regression, our algorithm's running time scales polynomially with the sparsity of the linear predictor. We also demonstrate empirically that our algorithm can leverage this advantage to obtain a k-DNF with a better linear predictor in practice.
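
    As a rough sketch of the task itself (not the authors' algorithm), the snippet below brute-forces conjunctions of at most k literals over Boolean attributes B, greedily assembles a DNF that covers at least a mu-fraction of the data, and fits a sparse (lasso) predictor on the covered subset. All helper names and the parameters k, mu, n_terms, and alpha are illustrative assumptions.

```python
# Illustrative brute-force sketch of conditional linear regression:
# find a small DNF over Boolean attributes whose covered subset admits
# a sparse linear predictor with low squared loss. NOT the paper's algorithm.
from itertools import combinations
import numpy as np
from sklearn.linear_model import Lasso

def terms_up_to_k(n_attrs, k):
    """Yield all conjunctions of at most k literals over n_attrs Boolean attributes."""
    for size in range(1, k + 1):
        for idxs in combinations(range(n_attrs), size):
            for signs in np.ndindex(*([2] * size)):
                yield list(zip(idxs, signs))   # (attribute index, required value)

def covers(term, B):
    """Boolean mask over rows of the 0/1 attribute matrix B satisfying the conjunction."""
    mask = np.ones(len(B), dtype=bool)
    for i, s in term:
        mask &= (B[:, i] == s)
    return mask

def conditional_linear_regression(B, X, y, k=2, mu=0.3, n_terms=3, alpha=0.1):
    """Greedily add terms to a DNF until it covers a mu-fraction of the data,
    then fit a sparse (lasso) linear predictor on the covered subset."""
    chosen, covered = [], np.zeros(len(B), dtype=bool)
    while covered.mean() < mu and len(chosen) < n_terms:
        best = None
        for term in terms_up_to_k(B.shape[1], k):
            m = covered | covers(term, B)
            if m.sum() < 10:                   # need enough covered points to fit
                continue
            reg = Lasso(alpha=alpha).fit(X[m], y[m])
            loss = np.mean((reg.predict(X[m]) - y[m]) ** 2)
            if best is None or loss < best[0]:
                best = (loss, term, m)
        if best is None:
            break
        _, term, covered = best
        chosen.append(term)
    model = Lasso(alpha=alpha).fit(X[covered], y[covered])
    return chosen, model, covered
```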

    Semi-verified PAC Learning from the Crowd with Pairwise Comparisons

    We study the problem of crowdsourced PAC learning of threshold functions with pairwise comparisons. This is a challenging problem, and only recently have query-efficient algorithms been established in the scenario where the majority of the crowd are perfect. In this work, we investigate the significantly more challenging case in which the majority are incorrect, which in general renders learning impossible. We show that under the semi-verified model of Charikar et al. (2017), where we have (limited) access to a trusted oracle that always returns the correct annotation, it is possible to PAC learn the underlying hypothesis class while drastically mitigating the labeling cost via the more easily obtained comparison queries. Orthogonal to recent developments in semi-verified or list-decodable learning that crucially rely on data distributional assumptions, our PAC guarantee holds by exploiting the wisdom of the crowd.
    Comment: v2 incorporates a simpler Filter algorithm, so the technical assumption in v1 is no longer needed; v2 also reorganizes the presentation and emphasizes the new algorithmic components.
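
    The labeling-cost saving from comparisons is easiest to see in one dimension: once points are ordered by comparisons, a threshold can be located with O(log n) label queries to the trusted oracle. The toy sketch below shows only that reduction; it deliberately ignores crowd noise and the paper's Filter step, and all function names are illustrative assumptions.

```python
# Toy illustration (not the paper's algorithm): with points sorted via pairwise
# comparisons, binary search finds a threshold using O(log n) trusted labels.
import functools

def learn_threshold(points, compare, trusted_label):
    """points: items; compare(a, b) -> -1/0/1; trusted_label(x) -> 0 or 1,
    assumed monotone: labels switch from 0 to 1 at an unknown threshold."""
    ordered = sorted(points, key=functools.cmp_to_key(compare))
    lo, hi = 0, len(ordered)
    label_queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        label_queries += 1
        if trusted_label(ordered[mid]) == 1:
            hi = mid                  # boundary is at or before mid
        else:
            lo = mid + 1              # boundary is strictly after mid
    boundary = ordered[lo] if lo < len(ordered) else None
    return boundary, label_queries

# Example: unknown threshold at 0.42 on the real line.
xs = [0.91, 0.13, 0.55, 0.77, 0.02, 0.48, 0.30]
boundary, used = learn_threshold(
    xs,
    compare=lambda a, b: (a > b) - (a < b),
    trusted_label=lambda x: int(x >= 0.42),
)
print(boundary, used)  # first point labeled 1 in sorted order, ~log2(7) label queries
```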

    Asymptotic Characterisation of Robust Empirical Risk Minimisation Performance in the Presence of Outliers

    We study robust linear regression in high dimensions, when both the dimension d and the number of data points n diverge at a fixed ratio α = n/d, under a data model that includes outliers. We provide exact asymptotics for the performance of empirical risk minimisation (ERM) using ℓ2-regularised ℓ2, ℓ1, and Huber losses, which are the standard approaches to such problems. We focus on two performance metrics: the generalisation error on similar datasets with outliers, and the estimation error with respect to the original, unpolluted function. Our results are compared with the information-theoretic Bayes-optimal estimation bound. For the generalisation error, we find that optimally-regularised ERM is asymptotically consistent in the large sample complexity limit if one performs a simple calibration, and we compute the rates of convergence. For the estimation error, however, we show that, due to a norm-calibration mismatch, consistency of the estimator requires an oracle estimate of the optimal norm, or the presence of a cross-validation set not corrupted by the outliers. We examine in detail how performance depends on the loss function and on the degree of outlier corruption in the training set, and identify a region of parameters where the optimal performance of the Huber loss is identical to that of the ℓ2 loss, offering insights into the use cases of different loss functions.
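
    A minimal simulation of this setup (assuming synthetic Gaussian data, a 10% outlier fraction, and illustrative regularisation and Huber parameters) compares ℓ2-regularised ERM under the three losses and reports the estimation error relative to the clean model; it mirrors the setting described above, not the paper's exact asymptotic formulas.

```python
# Minimal sketch: l2-regularised ERM with l2, l1, and Huber losses on data
# with outliers, measuring the estimation error w.r.t. the clean model w_star.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 800, 100                        # sample ratio alpha = n/d = 8
w_star = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)
out = rng.random(n) < 0.1              # 10% of responses replaced by outliers
y[out] = 5.0 * rng.normal(size=out.sum())

def huber(r, delta=1.0):
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

# loss(residual) and its (sub)derivative, used to build the regularised objective
losses = {
    "l2":    (lambda r: 0.5 * r ** 2, lambda r: r),
    "l1":    (np.abs,                 np.sign),
    "huber": (huber,                  lambda r: np.clip(r, -1.0, 1.0)),
}

lam = 1.0                              # l2 regularisation strength (illustrative)
for name, (loss, dloss) in losses.items():
    def objective(w, loss=loss, dloss=dloss):
        r = y - X @ w
        value = loss(r).sum() + lam * np.sum(w ** 2)
        grad = -X.T @ dloss(r) + 2 * lam * w
        return value, grad
    w_hat = minimize(objective, np.zeros(d), jac=True, method="L-BFGS-B").x
    est_err = np.linalg.norm(w_hat - w_star) / np.linalg.norm(w_star)
    print(f"{name:>5}: relative estimation error = {est_err:.3f}")
```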

    A Stress-Free Sum-Of-Squares Lower Bound for Coloring
