112 research outputs found
CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise
In this paper, we study the problem of learning image classification models
with label noise. Existing approaches depending on human supervision are
generally not scalable as manually identifying correct or incorrect labels is
time-consuming, whereas approaches not relying on human supervision are
scalable but less effective. To reduce the amount of human supervision for
label noise cleaning, we introduce CleanNet, a joint neural embedding network,
which only requires a fraction of the classes being manually verified to
provide the knowledge of label noise that can be transferred to other classes.
We further integrate CleanNet and conventional convolutional neural network
classifier into one framework for image classification learning. We demonstrate
the effectiveness of the proposed algorithm on both the label noise
detection task and the task of image classification with noisy data, on several
large-scale datasets. Experimental results show that CleanNet can reduce the
label noise detection error rate on held-out classes, where no human supervision
is available, by 41.5% compared to current weakly supervised methods. It also
achieves 47% of the performance gain of verifying all images with only 3.2% of
images verified on an image classification task. Source code and dataset will
be available at kuanghuei.github.io/CleanNetProject.
Comment: Accepted to CVPR 2018
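The core of the approach is a joint embedding: an image whose embedding sits far from its class-level embedding is a candidate noisy label. A minimal sketch of that decision rule, with hypothetical names (`flag_noisy_labels`, a fixed `threshold`) standing in for CleanNet's learned networks and matching objective:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_noisy_labels(query_embs, class_emb, threshold=0.5):
    """Flag images whose embedding is too dissimilar from the class embedding.

    query_embs: list of per-image embedding vectors labeled with this class.
    class_emb:  a single class-level embedding (in CleanNet, produced from a
                small verified reference set; here just a given vector).
    Returns a boolean array: True means the label is suspected noisy.
    """
    sims = np.array([cosine_similarity(q, class_emb) for q in query_embs])
    return sims < threshold
```

In the paper the embeddings and the verification decision are learned jointly, and the signal can transfer to classes with no verified examples; this sketch only illustrates the similarity-thresholding step.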
LoANs: Weakly Supervised Object Detection with Localizer Assessor Networks
Recently, deep neural networks have achieved remarkable performance on the
task of object detection and recognition. The reason for this success is mainly
grounded in the availability of large scale, fully annotated datasets, but the
creation of such a dataset is a complicated and costly task. In this paper, we
propose a novel method for weakly supervised object detection that simplifies
the process of gathering data for training an object detector. We train an
ensemble of two models that work together in a student-teacher fashion. Our
student (localizer) is a model that learns to localize an object, the teacher
(assessor) assesses the quality of the localization and provides feedback to
the student. The student uses this feedback to learn how to localize objects
and is thus entirely supervised by the teacher, as we are using no labels for
training the localizer. In our experiments, we show that our model is very
robust to noise and reaches competitive performance compared to a
state-of-the-art fully supervised approach. We also show the simplicity of
creating a new dataset, based on a few videos (e.g. downloaded from YouTube)
and artificially generated data.
Comment: To appear in AMV18. Code, datasets and models available at
https://github.com/Bartzi/loan
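The student-teacher loop can be caricatured in a few lines. In LoANs the assessor is itself a learned network; in this hypothetical sketch a plain IoU score against artificially generated data stands in for it, and a toy parameter update stands in for backpropagation through the localizer:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def assessor_score(pred_box, synthetic_box):
    """Teacher: rates the student's localization. Here IoU against an
    artificially generated box stands in for the learned assessor network."""
    return iou(pred_box, synthetic_box)

def student_step(pred_box, synthetic_box, lr=0.5):
    """Student: nudges its prediction in the direction of higher assessor
    score. A real localizer would be a CNN updated by backprop instead."""
    pred = np.asarray(pred_box, dtype=float)
    target = np.asarray(synthetic_box, dtype=float)
    return pred + lr * (target - pred)
```

The point of the design is that no human-drawn boxes enter the loop: the only supervision the student ever sees is the teacher's quality score.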
CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison
Large, labeled datasets have driven deep learning methods to achieve
expert-level performance on a variety of medical imaging tasks. We present
CheXpert, a large dataset that contains 224,316 chest radiographs of 65,240
patients. We design a labeler to automatically detect the presence of 14
observations in radiology reports, capturing uncertainties inherent in
radiograph interpretation. We investigate different approaches to using the
uncertainty labels for training convolutional neural networks that output the
probability of these observations given the available frontal and lateral
radiographs. On a validation set of 200 chest radiographic studies which were
manually annotated by 3 board-certified radiologists, we find that different
uncertainty approaches are useful for different pathologies. We then evaluate
our best model on a test set composed of 500 chest radiographic studies
annotated by a consensus of 5 board-certified radiologists, and compare the
performance of our model to that of 3 additional radiologists in the detection
of 5 selected pathologies. On Cardiomegaly, Edema, and Pleural Effusion, the
model ROC and PR curves lie above all 3 radiologist operating points. We
release the dataset to the public as a standard benchmark to evaluate
performance of chest radiograph interpretation models.
The dataset is freely available at
https://stanfordmlgroup.github.io/competitions/chexpert
Comment: Published in AAAI 2019
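The "different approaches to using the uncertainty labels" amount to different training-time treatments of labels the automatic labeler marked uncertain. A minimal sketch of three such policies as a masked binary cross-entropy, assuming the common encoding of uncertain mentions as -1 (the function name and encoding are illustrative, not the paper's code):

```python
import numpy as np

def uncertainty_bce(probs, labels, policy="ignore"):
    """Binary cross-entropy over labels in {0, 1, -1}, where -1 marks an
    observation the report labeler judged uncertain.

    policy: "ignore" masks uncertain labels out of the loss,
            "ones"   treats them as positive,
            "zeros"  treats them as negative.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    if policy == "ones":
        labels = np.where(labels == -1, 1.0, labels)
    elif policy == "zeros":
        labels = np.where(labels == -1, 0.0, labels)
    mask = labels != -1          # under "ignore", uncertain entries drop out
    p = np.clip(probs[mask], 1e-7, 1 - 1e-7)
    y = labels[mask]
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))
```

The abstract's finding that different policies win on different pathologies corresponds to choosing this mapping per observation rather than globally.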