Mutual Exclusivity Loss for Semi-Supervised Deep Learning
In this paper we consider the problem of semi-supervised learning with deep
Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated
by the observation that unlabeled data is cheap and can be used to improve the
accuracy of classifiers. In this paper we propose an unsupervised
regularization term that explicitly forces the classifier's predictions for
multiple classes to be mutually exclusive and effectively guides the decision
boundary to lie in the low-density space between the manifolds corresponding to
different classes of data. Our proposed approach is general and can be used
with any backpropagation-based learning method. We show through different
experiments that our method can improve the object recognition performance of
ConvNets using unlabeled data.
Comment: 5 pages, 1 figure, ICIP 201
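The abstract above does not give the exact form of the regularizer, but one common way to express a mutual-exclusivity penalty on softmax outputs is sketched below. The function name and the specific formula are illustrative assumptions, not the paper's definitive loss: the term rewards predictions that are close to one-hot, which pushes the decision boundary into low-density regions between class manifolds.

```python
import numpy as np

def mutual_exclusivity_loss(probs):
    """Hypothetical sketch of a mutual-exclusivity penalty on unlabeled data.

    probs: (batch, classes) array of softmax class probabilities.
    The value is lowest when each row is one-hot, i.e. exactly one class
    is predicted with full confidence and the rest are suppressed.
    """
    n, c = probs.shape
    score = np.zeros(n)
    for j in range(c):
        # p_j * prod_{k != j} (1 - p_k): large only when class j dominates
        others = np.delete(probs, j, axis=1)          # (n, c-1)
        score += probs[:, j] * np.prod(1.0 - others, axis=1)
    # Negate so that confident (one-hot-like) predictions minimize the loss.
    return -score.mean()
```

Because the term needs no labels, it could be added to the usual supervised cross-entropy on unlabeled batches and minimized with any backpropagation-based optimizer, consistent with the generality claimed in the abstract.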
High-fidelity Pseudo-labels for Boosting Weakly-Supervised Segmentation
The task of image-level weakly-supervised semantic segmentation (WSSS) has
gained popularity in recent years, as it reduces the vast data annotation cost
for training segmentation models. The typical approach for WSSS involves
training an image classification network using global average pooling (GAP) on
convolutional feature maps. This enables the estimation of object locations
based on class activation maps (CAMs), which identify the importance of image
regions. The CAMs are then used to generate pseudo-labels, in the form of
segmentation masks, to supervise a segmentation model in the absence of
pixel-level ground truth. In the case of the SEAM baseline, a previous work
proposed improving CAM learning in two ways: (1) importance sampling, which is
a substitute for GAP, and (2) the feature similarity loss, which utilizes a
heuristic that object contours almost exclusively align with color edges in
images. In this work, we propose a different probabilistic interpretation of
CAMs for these techniques, rendering the likelihood more appropriate than the
multinomial posterior. As a result, we propose an add-on method that can boost
essentially any previous WSSS method, improving both the region similarity and
contour quality of all implemented state-of-the-art baselines. This is
demonstrated on a wide variety of baselines on the PASCAL VOC dataset.
Experiments on the MS COCO dataset show that performance gains can also be
achieved in a large-scale setting. Our code is available at
https://github.com/arvijj/hfpl
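The CAM-based pseudo-labeling pipeline described above can be sketched in a few lines. This is a minimal illustration of the standard CAM construction (weighting convolutional feature maps by the classifier weights that follow GAP), not the repository's implementation; the function name and the threshold value are assumptions for the example.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Sketch of a class activation map for one class.

    feature_maps: (C, H, W) conv features fed into global average pooling.
    fc_weights:  (num_classes, C) weights of the final linear classifier.
    Returns an (H, W) map of normalized per-location class evidence.
    """
    w = fc_weights[class_idx]                        # (C,) channel weights
    cam = np.tensordot(w, feature_maps, axes=1)      # weighted channel sum
    cam = np.maximum(cam, 0)                         # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                        # scale into [0, 1]
    return cam
```

A segmentation pseudo-label could then be derived by thresholding the map (e.g. marking locations where `cam > 0.3` as foreground for that class), which is the kind of mask used to supervise a segmentation model when pixel-level ground truth is unavailable.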