
    Efficient Learning of Linear Separators under Bounded Noise

    We study the learnability of linear separators in $\mathbb{R}^d$ in the presence of bounded (a.k.a. Massart) noise. This is a realistic generalization of the random classification noise model, where the adversary can flip the label of each example $x$ with probability $\eta(x) \leq \eta$. We provide the first polynomial-time algorithm that can learn linear separators to arbitrarily small excess error in this noise model under the uniform distribution over the unit ball in $\mathbb{R}^d$, for some constant value of $\eta$. While widely studied in the statistical learning theory community in the context of obtaining faster convergence rates, computationally efficient algorithms in this model had remained elusive. Our work provides the first evidence that one can indeed design algorithms achieving arbitrarily small excess error in polynomial time under this realistic noise model, and thus opens up a new and exciting line of research. We additionally provide lower bounds showing that popular algorithms such as hinge loss minimization and averaging cannot achieve arbitrarily small excess error under Massart noise, even under the uniform distribution. Our work instead makes use of a margin-based technique developed in the context of active learning. As a result, our algorithm is also an active learning algorithm whose label complexity is only logarithmic in the desired excess error $\epsilon$.
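    As a rough illustration of the Massart (bounded) noise model described in this abstract, the sketch below (not the paper's algorithm) samples points uniformly from the unit ball, labels them with a linear separator, and flips each label with an instance-dependent probability bounded by $\eta$. The helper names and the particular flip-probability shape are illustrative assumptions.

```python
import numpy as np

def sample_unit_ball(n, d, rng):
    """Draw n points uniformly from the unit ball in R^d."""
    x = rng.normal(size=(n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform on the sphere
    r = rng.uniform(size=(n, 1)) ** (1.0 / d)       # radial correction for the ball
    return x * r

def massart_labels(X, w_star, eta, rng):
    """Label by sign(<w*, x>), then flip each label with probability
    eta(x) <= eta (here eta(x) shrinks with the margin, one arbitrary choice)."""
    clean = np.sign(X @ w_star)
    margin = np.abs(X @ w_star)
    flip_prob = eta * np.exp(-5.0 * margin)          # instance-dependent, bounded by eta
    flips = rng.uniform(size=len(X)) < flip_prob
    return np.where(flips, -clean, clean)

rng = np.random.default_rng(0)
d, n, eta = 10, 5000, 0.3
w_star = np.zeros(d); w_star[0] = 1.0
X = sample_unit_ball(n, d, rng)
y = massart_labels(X, w_star, eta, rng)
print("fraction of flipped labels:", np.mean(y != np.sign(X @ w_star)))
```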

    Probabilistic Fisher discriminant analysis: A robust and flexible alternative to Fisher discriminant analysis

    Fisher discriminant analysis (FDA) is a popular and powerful method for dimensionality reduction and classification. Unfortunately, the optimality of the dimension reduction provided by FDA is only proved in the homoscedastic case. In addition, FDA is known to perform poorly in the presence of label noise and sparse labeled data. To overcome these limitations, this work proposes a probabilistic framework for FDA that relaxes the homoscedastic assumption on the class covariance matrices and adds a term to explicitly model the non-discriminative information. This allows the proposed method to be robust to label noise and to be used in the semi-supervised context. Experiments on real-world datasets show that the proposed approach works at least as well as FDA in standard situations and outperforms it in the label noise and sparse label cases.
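    To make the label-noise sensitivity of standard FDA concrete, here is a minimal sketch (using scikit-learn's LinearDiscriminantAnalysis as a stand-in for classical FDA, not the probabilistic model proposed in the paper) that flips a fraction of training labels and reports the resulting drop in test accuracy. The dataset and noise rates are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for noise_rate in [0.0, 0.2, 0.4]:
    y_noisy = y_tr.copy()
    flip = rng.uniform(size=len(y_noisy)) < noise_rate
    # Replace each flipped label with a uniformly chosen different class.
    y_noisy[flip] = (y_noisy[flip] + rng.integers(1, 3, size=flip.sum())) % 3
    acc = LinearDiscriminantAnalysis().fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"label noise {noise_rate:.0%}: test accuracy {acc:.3f}")
```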

    Dense prediction of label noise for learning building extraction from aerial drone imagery

    Label noise is a commonly encountered problem in learning building extraction tasks; its presence can reduce performance and increase learning complexity. This is especially true when high-resolution aerial drone imagery is used, as the labels may not perfectly correspond or align with the actual objects in the imagery. In the general machine learning and computer vision context, labels refer to the associated class of the data; in remote-sensing-based building extraction they refer to pixel-level classes. Dense label noise in building extraction tasks has rarely been formalized and assessed. We formulate a taxonomy of label noise models for building extraction tasks which incorporates both pixel-wise and dense models. When learning dense prediction under label noise, the differences between the clean ground-truth label and the observed noisy label can be encoded by error matrices indicating the locations and types of noisy pixel-level labels. In this work, we explicitly learn to approximate error matrices to improve building extraction performance; essentially, learning dense prediction of label noise as a subtask of the larger building extraction task. We propose two new model frameworks for learning building extraction under dense real-world label noise, and consequently two new network architectures, which approximate the error matrices as intermediate predictions. The first model learns the general error matrix as an intermediate step, and the second model learns the false-positive and false-negative error matrices independently as intermediate steps. Approximating intermediate error matrices can generate label noise saliency maps for identifying labels with a higher chance of being mislabelled. We used ultra-high-resolution aerial images, noisy observed labels from OpenStreetMap, and clean labels obtained after careful annotation by the authors. Compared to the baseline model trained and tested using clean labels, our intermediate false-positive/false-negative error matrix model provides an Intersection-over-Union gain of 2.74% and an F1-score gain of 1.75% on the independent test set. Furthermore, our proposed models provide much higher recall than currently used deep learning models for building extraction, while providing comparable precision. We show that intermediate false-positive/false-negative error matrix approximation can improve performance under label noise.
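    For intuition about the error matrices used as intermediate targets, the short sketch below compares an observed noisy building mask with a clean mask and derives the general, false-positive, and false-negative error matrices. The function name and the tiny binary masks are illustrative, not the paper's data or architecture.

```python
import numpy as np

def error_matrices(noisy_mask, clean_mask):
    """Given binary building masks (H x W) with values {0, 1}, return the
    general error matrix and its false-positive / false-negative decomposition."""
    noisy = noisy_mask.astype(bool)
    clean = clean_mask.astype(bool)
    false_positive = noisy & ~clean                    # labelled building, actually background
    false_negative = ~noisy & clean                    # labelled background, actually building
    general_error = false_positive | false_negative    # any disagreement
    return general_error, false_positive, false_negative

# Tiny example: 3x3 patch with one false-positive pixel and one false-negative pixel.
noisy = np.array([[1, 1, 0],
                  [0, 1, 0],
                  [0, 0, 0]])
clean = np.array([[1, 1, 0],
                  [0, 0, 0],
                  [0, 0, 1]])
err, fp, fn = error_matrices(noisy, clean)
print("FP pixels:", fp.sum(), "FN pixels:", fn.sum(), "total errors:", err.sum())
```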

    Label Propagation for Graph Label Noise

    Label noise is a common challenge in large datasets, as it can significantly degrade the generalization ability of deep neural networks. Most existing studies focus on noisy labels in computer vision; however, graph models take both node features and graph topology as input, and become more susceptible to label noise through message-passing mechanisms. Only a few works have recently been proposed to tackle label noise on graphs. One major limitation is that they assume the graph is homophilous and the labels are smoothly distributed. Nevertheless, real-world graphs may contain varying degrees of heterophily or even be heterophily-dominated, leading to the inadequacy of current methods. In this paper, we study graph label noise in the context of arbitrary heterophily, with the aim of rectifying noisy labels and assigning labels to previously unlabeled nodes. We begin by conducting two empirical analyses to explore the impact of graph homophily on graph label noise. Following these observations, we propose a simple yet efficient algorithm, denoted LP4GLN. Specifically, LP4GLN is an iterative algorithm with three steps: (1) reconstruct the graph to recover the homophily property, (2) use label propagation to rectify the noisy labels, and (3) select high-confidence labels to retain for the next iteration. By iterating these steps, we obtain a set of correct labels, ultimately achieving high accuracy in the node classification task. A theoretical analysis is also provided to demonstrate its remarkable denoising effect. Finally, we conduct experiments on 10 benchmark datasets under varying graph heterophily levels and noise types, comparing the performance of LP4GLN with 7 typical baselines. Our results illustrate the superior performance of the proposed LP4GLN.
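    A bare-bones sketch of the generic propagate-then-select loop described in steps (2) and (3) is given below; it omits LP4GLN's homophily-recovering graph reconstruction, and names such as `propagate` and `rectify_labels`, as well as the confidence threshold, are illustrative assumptions.

```python
import numpy as np

def propagate(A, Y, alpha=0.9, iters=20):
    """Standard label propagation: F <- alpha * S @ F + (1 - alpha) * Y,
    where S is the symmetrically normalized adjacency matrix."""
    d = A.sum(axis=1)
    S = A / np.sqrt(np.outer(d, d) + 1e-12)
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

def rectify_labels(A, Y_noisy, n_rounds=3, conf_threshold=0.6):
    """Iteratively propagate labels and keep only high-confidence predictions
    as the (pseudo-)labels for the next round. Y_noisy is a one-hot matrix."""
    Y = Y_noisy.astype(float).copy()
    for _ in range(n_rounds):
        F = propagate(A, Y)
        probs = F / (F.sum(axis=1, keepdims=True) + 1e-12)
        conf = probs.max(axis=1)
        hard = np.eye(Y.shape[1])[probs.argmax(axis=1)]
        keep = conf >= conf_threshold
        Y = np.where(keep[:, None], hard, 0.0)   # drop low-confidence rows
    return probs.argmax(axis=1)                   # final rectified labels
```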

    Effect of Label Noise on Robustness of Deep Neural Network Object Detectors

    Label noise is a primary point of interest for safety concerns in previous works, as it considerably affects the robustness of a machine learning system. This paper studies the sensitivity of object detection loss functions to label noise in bounding box detection tasks. Although label noise has been widely studied in the classification context, less attention has been paid to its effect on object detection. We characterize different types of label noise and concentrate on the most common type of annotation error, which is missing labels. We simulate missing labels by deliberately removing bounding boxes at training time and study the effect on different deep learning object detection architectures and their loss functions. Our primary focus is on comparing two particular loss functions: cross-entropy loss and focal loss. We also experiment with different focal loss hyperparameter values under varying amounts of noise in the datasets and find that even up to 50% missing labels can be tolerated with an appropriate selection of hyperparameters. The results suggest that focal loss is more sensitive to label noise, but increasing the gamma value can boost its robustness.
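    Since the comparison hinges on the focal loss gamma hyperparameter, here is a compact, framework-agnostic sketch of binary focal loss in its standard formulation (Lin et al.), not the exact detector losses used in the paper; the example probabilities and labels are made up for illustration.

```python
import numpy as np

def binary_focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    gamma = 0 recovers (alpha-weighted) cross-entropy; larger gamma
    down-weights easy examples, which changes how missing labels are penalized."""
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -(alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)).mean()

p = np.array([0.9, 0.6, 0.2])   # predicted object probabilities
y = np.array([1, 1, 0])         # ground-truth labels (possibly noisy)
for gamma in [0.0, 2.0, 5.0]:
    print(f"gamma={gamma}: loss={binary_focal_loss(p, y, gamma=gamma):.4f}")
```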