
    Anisotropic oracle inequalities in noisy quantization

    The effect of errors-in-variables in quantization is investigated. We prove general exact and non-exact oracle inequalities with fast rates for an empirical minimization based on a noisy sample $Z_i = X_i + \epsilon_i$, $i = 1, \ldots, n$, where the $X_i$ are i.i.d. with density $f$ and the $\epsilon_i$ are i.i.d. with density $\eta$. These rates depend on the geometry of the density $f$ and the asymptotic behaviour of the characteristic function of $\eta$. This general study can be applied to the problem of $k$-means clustering with noisy data. For this purpose, we introduce a deconvolution $k$-means stochastic minimization which reaches fast rates of convergence under Pollard's standard regularity assumptions.
    Comment: 30 pages. arXiv admin note: text overlap with arXiv:1205.141
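
    As a concrete illustration of the deconvolution idea, the sketch below estimates the density of $X$ from the noisy sample by Fourier inversion of the empirical characteristic function divided by the noise characteristic function, then runs weighted Lloyd iterations on a grid. This is a minimal 1-D sketch assuming Gaussian noise of known variance and a sinc (Fourier-cutoff) kernel; it is not the authors' implementation, and the bandwidth and grid are illustrative choices.

        # Minimal 1-D sketch of deconvolution k-means (illustrative, not the paper's code).
        import numpy as np

        def deconv_density(z, grid, sigma_eps, h):
            """Deconvolution kernel density estimate of f on `grid`.

            Uses the sinc kernel, whose characteristic function is the
            indicator of [-1, 1], so integration is cut off at |t| = 1/h.
            """
            t = np.linspace(-1.0 / h, 1.0 / h, 2048)
            phi_Z = np.exp(1j * np.outer(t, z)).mean(axis=1)   # empirical CF of Z
            phi_eps = np.exp(-0.5 * (sigma_eps * t) ** 2)      # CF of N(0, sigma^2)
            integrand = phi_Z / phi_eps                        # deconvolved CF
            dt = t[1] - t[0]
            # Inverse Fourier transform back to the x-domain.
            f_hat = np.real(np.exp(-1j * np.outer(grid, t)) @ integrand) * dt / (2 * np.pi)
            return np.clip(f_hat, 0.0, None)                   # clip negative lobes

        def weighted_kmeans(grid, weights, k, iters=50, seed=0):
            """Lloyd iterations for the weighted distortion sum_x w(x) min_j (x - c_j)^2."""
            rng = np.random.default_rng(seed)
            centers = rng.choice(grid, size=k, replace=False)
            for _ in range(iters):
                labels = np.argmin((grid[:, None] - centers[None, :]) ** 2, axis=1)
                for j in range(k):
                    w = weights[labels == j]
                    if w.sum() > 0:
                        centers[j] = np.average(grid[labels == j], weights=w)
            return np.sort(centers)

        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(-3, 0.5, 500), rng.normal(3, 0.5, 500)])
        z = x + rng.normal(0, 1.0, x.size)                     # noisy sample Z = X + eps
        grid = np.linspace(-8, 8, 400)
        f_hat = deconv_density(z, grid, sigma_eps=1.0, h=0.3)
        print(weighted_kmeans(grid, f_hat, k=2))               # centers near -3 and 3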

    Noise Tolerance under Risk Minimization

    In this paper we explore noise-tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an unobservable training set which is noise-free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example, where the probability that the class label of an example is corrupted is a function of the feature vector of the example. This accounts for most kinds of noisy data one encounters in practice. We say that a learning method is noise tolerant if the classifiers learnt from the ideal noise-free data and from the noisy data both have the same classification accuracy on the noise-free data. In this paper we analyze the noise tolerance properties of risk minimization (under different loss functions), which is a generic method for learning classifiers. We show that risk minimization under the 0-1 loss function has impressive noise tolerance properties, that under the squared error loss it is tolerant only to uniform noise, and that risk minimization under other loss functions is not noise tolerant. We conclude the paper with some discussion on the implications of these theoretical results.
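
    The uniform-noise tolerance of 0-1 loss minimization is easy to check numerically in one dimension, where ERM over threshold classifiers can be done by brute force. The following toy sketch flips labels at a uniform rate below 1/2 and compares the thresholds learnt from clean and noisy labels; the classifier family, flip rate, and data are assumptions for illustration only.

        # Toy check of uniform label-noise tolerance of 0-1 loss ERM (illustrative).
        import numpy as np

        def fit_01_threshold(x, y):
            """Brute-force ERM under 0-1 loss over threshold classifiers sign(x - b)."""
            candidates = np.sort(x)
            errs = [np.mean(np.sign(x - b) != y) for b in candidates]
            return candidates[int(np.argmin(errs))]

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-1, 1, 2000), rng.normal(1, 1, 2000)])
        y = np.where(x + rng.normal(0, 0.3, x.size) > 0, 1, -1)  # "clean" labels

        rho = 0.3                                      # uniform flip rate < 1/2
        flip = rng.random(x.size) < rho
        y_noisy = np.where(flip, -y, y)

        b_clean = fit_01_threshold(x, y)
        b_noisy = fit_01_threshold(x, y_noisy)
        acc = lambda b: np.mean(np.sign(x - b) == y)
        # The two thresholds, and hence the clean accuracies, should be close.
        print(b_clean, b_noisy, acc(b_clean), acc(b_noisy))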

    Learning From Noisy Singly-labeled Data

    Supervised learning depends on annotated examples, which are taken to be the \emph{ground truth}. But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk. Practitioners typically collect multiple labels per example and aggregate the results to mitigate noise (the classic crowdsourcing problem). Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples. This raises two fundamental questions: (1) How can we best learn from noisy workers? (2) How should we allocate our labeling budget to maximize the performance of a classifier? We propose a new algorithm for jointly modeling labels and worker quality from noisy crowdsourced data. The alternating minimization proceeds in rounds, estimating worker quality from disagreement with the current model and then updating the model by optimizing a loss function that accounts for the current estimate of worker quality. Unlike previous approaches, our algorithm can estimate worker quality even with only one annotation per example. We establish a generalization error bound for models learned with our algorithm and show theoretically that it is better to label many examples once (rather than fewer examples multiply) when worker quality is above a threshold. Experiments conducted on both ImageNet (with simulated noisy workers) and MS-COCO (using the real crowdsourced labels) confirm our algorithm's benefits.
    Comment: 18 pages, 3 figures
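
    A minimal sketch of the alternating scheme: worker quality is estimated from agreement with the current model's predictions, and the model is then retrained with labels weighted by those estimates. Logistic regression and the agreement-rate quality estimator below are illustrative stand-ins, not the paper's exact objective.

        # Alternating worker-quality estimation and model fitting (illustrative sketch).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def train_with_workers(X, labels, workers, n_workers, rounds=5):
            weights = np.ones(len(labels))                 # start by trusting everyone
            model = LogisticRegression().fit(X, labels, sample_weight=weights)
            for _ in range(rounds):
                pred = model.predict(X)
                # Worker quality = agreement rate with the current model's predictions.
                quality = np.array([
                    (pred[workers == w] == labels[workers == w]).mean()
                    for w in range(n_workers)
                ])
                weights = quality[workers]                 # weight labels by worker quality
                model = LogisticRegression().fit(X, labels, sample_weight=weights)
            return model, quality

        rng = np.random.default_rng(0)
        X = rng.normal(size=(3000, 5))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)            # unobserved true labels
        n_workers = 10
        workers = rng.integers(0, n_workers, y.size)       # one annotation per example
        true_quality = np.linspace(0.55, 0.95, n_workers)  # some workers are noisy
        flip = rng.random(y.size) > true_quality[workers]
        labels = np.where(flip, 1 - y, y)

        model, quality = train_with_workers(X, labels, workers, n_workers)
        print(np.round(quality, 2))                        # tracks true_quality ordering
        print((model.predict(X) == y).mean())              # accuracy against true labels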

    REALM: Robust Entropy Adaptive Loss Minimization for Improved Single-Sample Test-Time Adaptation

    Fully test-time adaptation (F-TTA) can mitigate performance loss due to distribution shifts between train and test data (1) without access to the training data, and (2) without knowledge of the model training procedure. In online F-TTA, a pre-trained model is adapted using a stream of test samples by minimizing a self-supervised objective, such as entropy minimization. However, models adapted online using entropy minimization are unstable, especially in single-sample settings, leading to degenerate solutions and limiting the adoption of TTA inference strategies. Prior works identify noisy, or unreliable, samples as a cause of failure in online F-TTA. One solution is to ignore these samples, but this can lead to bias in the update procedure, slow adaptation, and poor generalization. In this work, we present a general framework for improving the robustness of F-TTA to these noisy samples, inspired by self-paced learning and robust loss functions. Our proposed approach, Robust Entropy Adaptive Loss Minimization (REALM), achieves better adaptation accuracy than previous approaches throughout the adaptation process on corruptions of CIFAR-10 and ImageNet-1K, demonstrating its effectiveness.
    Comment: Accepted at WACV 2024, 17 pages, 7 figures, 11 tables
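
    The idea can be sketched as a single-sample adaptation step in which the entropy loss is passed through a smooth, self-paced robust weight rather than a hard sample-rejection rule, so unreliable (high-entropy) samples contribute less to the update instead of being dropped. The Gaussian-style weight below is an assumption for the sketch, not the exact REALM objective.

        # Single-sample TTA step with a robustly weighted entropy loss (illustrative).
        import torch

        def robust_entropy_step(model, optimizer, x, tau=1.0):
            model.train()
            logits = model(x)                              # x: (1, ...) single test sample
            p = torch.softmax(logits, dim=-1)
            entropy = -(p * torch.log(p.clamp_min(1e-12))).sum(dim=-1)
            # Smooth self-paced weight: near 1 for confident samples, -> 0 for noisy ones.
            w = torch.exp(-(entropy / tau) ** 2).detach()
            loss = (w * entropy).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()

        # Usage with a tiny stand-in model and a fake test stream:
        model = torch.nn.Linear(16, 10)
        opt = torch.optim.SGD(model.parameters(), lr=1e-3)
        for _ in range(5):
            x = torch.randn(1, 16)                         # one test sample at a time
            robust_entropy_step(model, opt, x)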

    Imaging with highly incomplete and corrupted data

    We consider the problem of imaging sparse scenes from a few noisy data using an L1-minimization approach. This problem can be cast as a linear system of the form $Ap = b$, where $A$ is an $N \times K$ measurement matrix. We assume that the dimension of the unknown sparse vector $p \in \mathbb{C}^K$ is much larger than the dimension of the data vector $b \in \mathbb{C}^N$, i.e. $K \gg N$. We provide a theoretical framework that allows us to examine under what conditions the L1-minimization problem admits a solution that is close to the exact one in the presence of noise. Our analysis shows that L1-minimization is not robust for imaging with noisy data when high resolution is required. To improve the performance of L1-minimization we propose to solve instead the augmented linear system $[A\,|\,C]p = b$, where the $N \times \Sigma$ matrix $C$ is a noise collector. It is constructed so that its column vectors provide a frame on which the noise of the data, a vector of dimension $N$, can be well approximated. Theoretically, the dimension $\Sigma$ of the noise collector should be $e^N$, which would make its use impractical. However, our numerical results illustrate that robust results in the presence of noise can be obtained with a large enough number of columns $\Sigma \sim 10K$.
    Part of this material is based upon work supported by the National Science Foundation under Grant No. DMS-1439786 while the authors were in residence at the Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, RI, during the Fall 2017 semester. The work of M. Moscoso was partially supported by Spanish MICINN grant FIS2016-77892-R. The work of A. Novikov was partially supported by NSF grants DMS-1515187 and DMS-1813943. The work of C. Tsogka was partially supported by AFOSR FA9550-17-1-0238.
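
    A small numerical sketch of the augmented system $[A\,|\,C]p = b$: random Gaussian columns play the role of the noise collector, absorbing the data noise so that the recovered image part stays sparse. ISTA on the l1-regularized least-squares relaxation is an illustrative solver choice here, not the authors' algorithm, and all dimensions and parameters are assumptions for the sketch.

        # Sparse recovery with a noise-collector-augmented system (illustrative sketch).
        import numpy as np

        def ista(M, b, lam=0.05, iters=2000):
            """Iterative soft-thresholding for min 0.5*||M p - b||^2 + lam*||p||_1."""
            L = np.linalg.norm(M, 2) ** 2              # Lipschitz constant of the gradient
            p = np.zeros(M.shape[1])
            for _ in range(iters):
                g = M.T @ (M @ p - b)
                p = p - g / L
                p = np.sign(p) * np.maximum(np.abs(p) - lam / L, 0.0)  # soft threshold
            return p

        rng = np.random.default_rng(0)
        N, K, Sigma = 64, 256, 10 * 256                # Sigma ~ 10K columns in C
        A = rng.normal(size=(N, K)) / np.sqrt(N)
        x_true = np.zeros(K)
        x_true[rng.choice(K, 3, replace=False)] = 1.0  # sparse scene
        b = A @ x_true + 0.05 * rng.normal(size=N)     # noisy data

        C = rng.normal(size=(N, Sigma)) / np.sqrt(N)   # noise-collector columns
        p = ista(np.hstack([A, C]), b)
        x_hat, noise_part = p[:K], p[K:]
        print(np.flatnonzero(np.abs(x_hat) > 0.1))     # support of the recovered image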

    Making Risk Minimization Tolerant to Label Noise

    In many applications, the training data from which one needs to learn a classifier is corrupted with label noise. Many standard algorithms, such as SVM, perform poorly in the presence of label noise. In this paper we investigate the robustness of risk minimization to label noise. We prove a sufficient condition on a loss function for risk minimization under that loss to be tolerant to uniform label noise. We show that the 0-1 loss, sigmoid loss, ramp loss and probit loss satisfy this condition, though none of the standard convex loss functions satisfy it. We also prove that, by choosing a sufficiently large value of a parameter in the loss function, the sigmoid loss, ramp loss and probit loss can be made tolerant to non-uniform label noise as well, if we can assume the classes to be separable under the noise-free data distribution. Through extensive empirical studies, we show that risk minimization under the 0-1 loss, the sigmoid loss and the ramp loss has much better robustness to label noise when compared to the SVM algorithm.
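
    The kind of empirical comparison reported can be sketched with gradient descent on linear classifiers under uniform label noise, using the sigmoid loss against the hinge loss. The setup below is a toy illustration only; the data, parameters, and the size of the accuracy gap between the losses are assumptions of the sketch, and the contrast is typically sharper on the harder noise settings studied in the paper.

        # ERM under sigmoid loss vs. hinge loss with noisy labels (illustrative sketch).
        import numpy as np

        def erm(X, y, loss_grad, lr=0.1, iters=500):
            """Gradient descent on the empirical risk for a linear classifier."""
            w = np.zeros(X.shape[1])
            for _ in range(iters):
                m = y * (X @ w)                        # margins
                w -= lr * (loss_grad(m) * y) @ X / len(y)
            return w

        beta = 2.0
        # d/dm of the sigmoid loss 1 / (1 + exp(beta * m)):
        sigmoid_grad = lambda m: -beta * np.exp(-beta * m) / (1 + np.exp(-beta * m)) ** 2
        # d/dm of the hinge loss max(0, 1 - m):
        hinge_grad = lambda m: np.where(m < 1, -1.0, 0.0)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(4000, 2))
        y = np.where(X[:, 0] > 0, 1.0, -1.0)           # separable clean labels
        y_noisy = np.where(rng.random(y.size) < 0.35, -y, y)  # uniform flips

        for name, g in [("sigmoid", sigmoid_grad), ("hinge", hinge_grad)]:
            w = erm(X, y_noisy, g)
            print(name, (np.sign(X @ w) == y).mean())  # accuracy on clean labels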