Weakly Supervised Object Localization with Multi-fold Multiple Instance Learning
Object category localization is a challenging problem in computer vision.
Standard supervised training requires bounding box annotations of object
instances. This time-consuming annotation process is sidestepped in weakly
supervised learning. In this case, the supervised information is restricted to
binary labels that indicate the absence/presence of object instances in the
image, without their locations. We follow a multiple-instance learning approach
that iteratively trains the detector and infers the object locations in the
positive training images. Our main contribution is a multi-fold multiple
instance learning procedure, which prevents training from prematurely locking
onto erroneous object locations. This procedure is particularly important when
using high-dimensional representations, such as Fisher vectors and
convolutional neural network features. We also propose a window refinement
method, which improves the localization accuracy by incorporating an objectness
prior. We present a detailed experimental evaluation using the PASCAL VOC 2007
dataset, which verifies the effectiveness of our approach.
Comment: To appear in IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI).
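The multi-fold re-localization step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scalar "features", the stand-in `train_detector`, and the fold schedule are all hypothetical simplifications of the actual Fisher-vector/CNN pipeline.

```python
# Sketch of multi-fold multiple-instance learning: each image is
# re-localized by a detector trained on the OTHER folds, so an image's
# own (possibly wrong) current selection cannot reinforce itself.

def train_detector(labeled_windows):
    # Stand-in "detector": remembers the mean feature of the positive
    # windows and scores a new window by negative distance to that mean.
    pos = [f for f, y in labeled_windows if y == 1]
    mean = sum(pos) / len(pos)
    return lambda f: -abs(f - mean)

def multifold_mil(images, k=3, iterations=5):
    """images: one list of (scalar) window features per positive image.
    Returns the selected window index for each image."""
    selections = [0] * len(images)  # start from the first (full-image) window
    for _ in range(iterations):
        folds = [list(range(i, len(images), k)) for i in range(k)]
        for f, fold in enumerate(folds):
            # Train on the selections of all folds except the current one...
            train_idx = [i for g, other in enumerate(folds) if g != f
                         for i in other]
            detector = train_detector(
                [(images[i][selections[i]], 1) for i in train_idx])
            # ...and re-localize only the held-out fold.
            for i in fold:
                scores = [detector(w) for w in images[i]]
                selections[i] = max(range(len(scores)),
                                    key=scores.__getitem__)
    return selections
```

In this toy run the first window of each image is distracting clutter with inconsistent features, while the second is the consistent object, so multi-fold training converges to the object windows.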
Mining Object Parts from CNNs via Active Question-Answering
Given a convolutional neural network (CNN) that is pre-trained for object
classification, this paper proposes to use active question-answering to
semanticize neural patterns in conv-layers of the CNN and mine part concepts.
For each part concept, we mine neural patterns in the pre-trained CNN, which
are related to the target part, and use these patterns to construct an And-Or
graph (AOG) to represent a four-layer semantic hierarchy of the part. As an
interpretable model, the AOG associates different CNN units with different
explicit object parts. We use an active human-computer communication to
incrementally grow such an AOG on the pre-trained CNN as follows. We allow the
computer to actively identify objects, whose neural patterns cannot be
explained by the current AOG. Then, the computer asks a human about the
unexplained objects, and uses the answers to automatically discover certain CNN
patterns corresponding to the missing knowledge. We incrementally grow the AOG
to encode new knowledge discovered during the active-learning process. In
experiments, our method exhibits high learning efficiency: it uses only about
1/6-1/3 of the part annotations for training, yet achieves similar or better
part-localization performance than fast-RCNN methods.
Comment: Published in CVPR 201
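The active question-answering loop described above can be sketched generically. Everything here is an illustrative placeholder: `explain_score`, `ask_human`, and `grow_aog` stand in for the paper's AOG inference, human annotation, and AOG growth steps.

```python
# Sketch of the active QA loop: repeatedly pick the object the current
# AOG explains worst, ask the human about it, and fold the answer back
# into the AOG.

def active_qa_loop(objects, aog, explain_score, ask_human, grow_aog,
                   budget=10, threshold=0.5):
    asked = set()
    for _ in range(budget):
        candidates = [o for o in objects if o not in asked]
        if not candidates:
            break
        # Actively identify the least-explained object.
        worst = min(candidates, key=lambda o: explain_score(aog, o))
        if explain_score(aog, worst) >= threshold:
            break  # everything is already explained well enough
        answer = ask_human(worst)       # e.g. a part annotation
        aog = grow_aog(aog, worst, answer)
        asked.add(worst)
    return aog
```

A toy instantiation (the AOG as a plain set of "explained" objects) shows the loop incrementally absorbing all unexplained objects within its query budget.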
Image Co-localization by Mimicking a Good Detector's Confidence Score Distribution
Given a set of images containing objects from the same category, the task of
image co-localization is to identify and localize each instance. This paper
shows that this problem can be solved by a simple but intriguing idea, that is,
a common object detector can be learnt by making its detection confidence
scores distributed like those of a strongly supervised detector. More
specifically, we observe that given a set of object proposals extracted from an
image that contains the object of interest, an accurate strongly supervised
object detector should give high scores to only a small minority of proposals,
and low scores to most of them. Thus, we devise an entropy-based objective
function to enforce the above property when learning the common object
detector. Once the detector is learnt, we resort to a segmentation approach to
refine the localization. We show that despite its simplicity, our approach
outperforms state-of-the-art methods.
Comment: Accepted to Proc. European Conf. Computer Vision 201
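The entropy-based objective can be illustrated directly: a strongly supervised detector concentrates its confidence on few proposals, which corresponds to low entropy of the (softmax-normalized) score distribution. The function below is a minimal sketch of that quantity, not the paper's full training objective.

```python
import math

def proposal_entropy(scores):
    """Shannon entropy of the softmax over proposal scores.
    Peaked score distributions (few confident detections) give low
    entropy; flat, indecisive distributions give high entropy."""
    m = max(scores)                        # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)
```

Minimizing this entropy over the common detector's scores pushes it toward the peaked, detector-like behavior the paper mimics: uniform scores yield the maximal entropy log(N), while a single dominant proposal drives it toward zero.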
Self Paced Deep Learning for Weakly Supervised Object Detection
In a weakly supervised scenario, object detectors must be trained using
image-level annotation alone. Since bounding-box-level ground truth is not
available, most of the solutions proposed so far are based on an iterative,
Multiple Instance Learning framework in which the current classifier is used to
select the highest-confidence boxes in each image, which are treated as
pseudo-ground truth in the next training iteration. However, the errors of an
immature classifier can make the process drift, usually introducing many
false positives into the training dataset. To alleviate this problem, we propose
in this paper a training protocol based on the self-paced learning paradigm.
The main idea is to iteratively select a subset of images and boxes that are
the most reliable, and use them for training. While in the past few years
similar strategies have been adopted for SVMs and other classifiers, we are the
first to show that a self-paced approach can be used with deep-network-based
classifiers in an end-to-end training pipeline. The method we propose is built
on the fully-supervised Fast-RCNN architecture and can be applied to similar
architectures which represent the input image as a bag of boxes. We show
state-of-the-art results on Pascal VOC 2007, Pascal VOC 2010 and ILSVRC 2013.
On ILSVRC 2013 our results based on a low-capacity AlexNet network outperform
even those weakly-supervised approaches which are based on much higher-capacity
networks.
Comment: To appear in IEEE Transactions on PAMI
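The easy-first selection at the heart of self-paced learning can be sketched as a simple schedule: at each iteration keep only the most confident fraction of samples, and grow that fraction until all samples are used. The linear schedule and function name below are illustrative assumptions, not the paper's exact protocol.

```python
def self_paced_subset(confidences, iteration, total_iterations,
                      start_fraction=0.3):
    """Return the (sorted) indices of training samples to keep at this
    iteration: only the most confident `fraction`, which grows linearly
    from start_fraction to 1.0 over the course of training."""
    t = min(iteration / max(total_iterations - 1, 1), 1.0)
    fraction = start_fraction + (1.0 - start_fraction) * t
    k = max(1, round(fraction * len(confidences)))
    order = sorted(range(len(confidences)),
                   key=lambda i: confidences[i], reverse=True)
    return sorted(order[:k])
```

Early iterations thus train only on boxes the current detector is most sure about, limiting the drift caused by pseudo-ground-truth errors; by the final iteration the whole dataset participates.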