Adversarial Complementary Learning for Weakly Supervised Object Localization
© 2018 IEEE. In this work, we propose Adversarial Complementary Learning (ACoL) to automatically localize integral objects of semantic interest with weak supervision. We first mathematically prove that class localization maps can be obtained by directly selecting the class-specific feature maps of the last convolutional layer, which paves a simple way to identify object regions. We then present a simple network architecture with two parallel classifiers for object localization. Specifically, we leverage one classification branch to dynamically localize some discriminative object regions during the forward pass. Although this classifier usually responds only to sparse parts of the target objects, it can drive the counterpart classifier to discover new, complementary object regions by erasing its discovered regions from the feature maps. Through this adversarial learning, the two parallel classifiers are forced to leverage complementary object regions for classification and can together generate integral object localization. The merits of ACoL are mainly two-fold: 1) it can be trained in an end-to-end manner; 2) dynamic erasing enables the counterpart classifier to discover complementary object regions more effectively. We demonstrate the superiority of our ACoL approach in a variety of experiments. In particular, the Top-1 localization error rate on the ILSVRC dataset is 45.14%, a new state of the art.
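The erase-and-rediscover idea above can be illustrated with a minimal numpy sketch: one branch's class localization map is thresholded to erase its most discriminative locations from the feature maps, a second branch localizes on the erased features, and the two maps are fused. Shapes, the 0.9 threshold, and the function names are illustrative assumptions, not the paper's implementation (which operates inside a trained CNN).

```python
import numpy as np

def class_localization_map(features, weights, class_idx):
    # features: (C, H, W) last-conv feature maps; weights: (num_classes, C)
    # Weighted sum over channels yields a (H, W) class localization map.
    return np.tensordot(weights[class_idx], features, axes=1)

def erase_discriminative(features, cam, thresh):
    # Zero out locations where the map exceeds thresh * max (the "discovered"
    # discriminative regions), keeping only the complementary locations.
    keep = cam < thresh * cam.max()
    return features * keep  # mask broadcasts over the channel axis

# Toy example with random features and classifier weights
rng = np.random.default_rng(0)
features = rng.random((4, 8, 8))
weights = rng.random((3, 4))

cam_a = class_localization_map(features, weights, class_idx=1)   # branch A
erased = erase_discriminative(features, cam_a, thresh=0.9)
cam_b = class_localization_map(erased, weights, class_idx=1)     # branch B
fused = np.maximum(cam_a, cam_b)  # integral localization map
```

Fusing with an elementwise maximum mirrors the intuition that the final map should cover the union of the regions either branch responds to.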
Zero-Annotation Object Detection with Web Knowledge Transfer
Object detection is one of the major problems in computer vision, and has
been extensively studied. Most of the existing detection works rely on
labor-intensive supervision, such as ground truth bounding boxes of objects or
at least image-level annotations. On the contrary, we propose an object
detection method that does not require any form of human annotation on target
tasks, by exploiting freely available web images. In order to facilitate
effective knowledge transfer from web images, we introduce a multi-instance
multi-label domain adaptation learning framework with two key innovations.
First, we propose an instance-level adversarial domain adaptation network with
attention on foreground objects to transfer the object appearances from web
domain to target domain. Second, to preserve the class-specific semantic
structure of transferred object features, we propose a simultaneous transfer
mechanism to transfer the supervision across domains through pseudo strong
label generation. With our end-to-end framework that simultaneously learns a
weakly supervised detector and transfers knowledge across domains, we achieved
significant improvements over baseline methods on the benchmark datasets.
Comment: Accepted in ECCV 2018
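The pseudo strong label generation step described above can be sketched in a few lines of numpy: for each class known to be present at the image level, pick the highest-scoring proposal and promote it to a box-level ("strong") label if its score clears a threshold. The function name, the threshold, and the input layout are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def pseudo_strong_labels(proposal_scores, image_labels, score_thresh=0.5):
    # proposal_scores: (num_proposals, num_classes) classifier scores
    # image_labels: iterable of class ids known to be present in the image
    # Returns {class_id: proposal_index} used as pseudo box-level supervision.
    labels = {}
    for c in image_labels:
        best = int(np.argmax(proposal_scores[:, c]))
        if proposal_scores[best, c] >= score_thresh:
            labels[c] = best
    return labels

# Toy example: 3 proposals scored against 2 classes
scores = np.array([[0.9, 0.1],
                   [0.2, 0.8],
                   [0.3, 0.4]])
pseudo = pseudo_strong_labels(scores, image_labels={0, 1})
```

Thresholding keeps low-confidence classes out of the pseudo supervision, which matters because a wrong strong label is more damaging to the detector than a missing one.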
FickleNet: Weakly and Semi-supervised Semantic Image Segmentation using Stochastic Inference
The main obstacle to weakly supervised semantic image segmentation is the
difficulty of obtaining pixel-level information from coarse image-level
annotations. Most methods based on image-level annotations use localization
maps obtained from the classifier, but these only focus on the small
discriminative parts of objects and do not capture precise boundaries.
FickleNet explores diverse combinations of locations on feature maps created by
generic deep neural networks. It selects hidden units randomly and then uses
them to obtain activation scores for image classification. FickleNet implicitly
learns the coherence of each location in the feature maps, resulting in a
localization map which identifies both discriminative and other parts of
objects. The ensemble effects are obtained from a single network by selecting
random hidden unit pairs, which means that a variety of localization maps are
generated from a single image. Our approach does not require any additional
training steps and only adds a simple layer to a standard convolutional neural
network; nevertheless it outperforms recent comparable techniques on the Pascal
VOC 2012 benchmark in both weakly and semi-supervised settings.
Comment: To appear in CVPR 2019
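The stochastic-selection idea can be loosely sketched in numpy: each forward pass keeps a random subset of hidden units before computing the class score map, and many such passes are aggregated into one localization map that covers more than the most discriminative parts. This is a toy approximation under assumed shapes and a uniform drop rate; the actual method applies its selection inside a CNN with a specific dropout pattern, which this sketch omits.

```python
import numpy as np

def stochastic_score_map(features, weights, class_idx, drop_rate, rng):
    # Randomly keep each hidden unit with probability 1 - drop_rate,
    # then form the class score map from the surviving units.
    keep = rng.random(features.shape) > drop_rate
    return np.tensordot(weights[class_idx], features * keep, axes=1)

rng = np.random.default_rng(0)
features = rng.random((4, 8, 8))   # (C, H, W) feature maps
weights = rng.random((3, 4))       # (num_classes, C) classifier weights

# Many stochastic maps from a single image, aggregated elementwise
maps = [stochastic_score_map(features, weights, 1, drop_rate=0.5, rng=rng)
        for _ in range(100)]
loc_map = np.maximum.reduce(maps)
```

Because different random selections activate different locations, the aggregated map highlights both the discriminative parts and the less discriminative object regions that any single pass would miss.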