Revisiting knowledge transfer for training object class detectors
We propose to revisit knowledge transfer for training object detectors on
target classes from weakly supervised training images, helped by a set of
source classes with bounding-box annotations. We present a unified knowledge
transfer framework based on training a single neural network multi-class object
detector over all source classes, organized in a semantic hierarchy. This
generates proposals with scores at multiple levels in the hierarchy, which we
use to explore knowledge transfer over a broad range of generality, ranging
from class-specific (bicycle to motorbike) to class-generic (objectness to any
class). Experiments on the 200 object classes in the ILSVRC 2013 detection
dataset show that our technique: (1) leads to much better performance on the
target classes (70.3% CorLoc, 36.9% mAP) than a weakly supervised baseline
that uses manually engineered objectness [11] (50.5% CorLoc, 25.4% mAP);
(2) delivers target object detectors reaching 80% of the mAP of their fully
supervised counterparts; and (3) outperforms the best reported transfer
learning results on this dataset (+41% CorLoc and +3% mAP over [18, 46],
+16.2% mAP over [32]). Moreover, we carry out several across-dataset
knowledge transfer experiments [27, 24, 35] and find that (4) our technique
outperforms the weakly supervised baseline on all dataset pairs by 1.5x-1.9x,
establishing its general applicability.
Comment: CVPR 1
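To make the hierarchy-based transfer concrete, here is a minimal sketch of the core idea: propagate source-class proposal scores up a semantic hierarchy, then score an unseen target class with the most class-specific ancestor score available. The tiny hierarchy, class names, and max-pooling roll-up below are hypothetical illustrations, not the paper's actual implementation.

```python
# Minimal sketch (not the authors' code): rolling proposal scores up a
# semantic hierarchy so a target class can borrow the most specific
# available source score. Hierarchy and class names are hypothetical.

from dataclasses import dataclass

# child -> parent; the root "entity" plays the role of generic objectness.
HIERARCHY = {
    "bicycle": "wheeled_vehicle",
    "motorbike": "wheeled_vehicle",
    "wheeled_vehicle": "entity",
    "dog": "animal",
    "animal": "entity",
}

@dataclass
class Proposal:
    box: tuple    # (x1, y1, x2, y2)
    scores: dict  # source-class name -> detector score

def ancestors(cls):
    """Yield cls and then its ancestors up to the root."""
    while cls is not None:
        yield cls
        cls = HIERARCHY.get(cls)

def rolled_up_scores(proposal):
    """Propagate each source-class score to every ancestor (max-pooling),
    so internal nodes hold increasingly class-generic scores."""
    rolled = {}
    for cls, s in proposal.scores.items():
        for node in ancestors(cls):
            rolled[node] = max(rolled.get(node, float("-inf")), s)
    return rolled

def transfer_score(proposal, target_cls):
    """Score a proposal for an unseen target class: walk from the target
    up the hierarchy and return the first rolled-up score found,
    i.e. the most class-specific knowledge available."""
    rolled = rolled_up_scores(proposal)
    for node in ancestors(target_cls):
        if node in rolled:
            return rolled[node], node
    return 0.0, None

# Example: "motorbike" is a target class; "bicycle" is a labeled source
# class, so transfer happens at their common parent "wheeled_vehicle".
p = Proposal(box=(10, 20, 200, 180), scores={"bicycle": 0.8, "dog": 0.1})
print(transfer_score(p, "motorbike"))  # (0.8, 'wheeled_vehicle')
```

Walking from the target class toward the root mirrors the class-specific-to-class-generic spectrum the abstract describes: a close sibling such as "bicycle" informs "motorbike", while the root score degenerates to plain objectness.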
Deep Self-Taught Learning for Weakly Supervised Object Localization
Most existing weakly supervised localization (WSL) approaches learn detectors
by finding positive bounding boxes based on features learned with image-level
supervision. However, those features do not contain spatial location related
information and usually provide poor-quality positive samples for training a
detector. To overcome this issue, we propose a deep self-taught learning
approach, which lets the detector learn object-level features that are reliable
for acquiring tight positive samples, and then re-train itself on them.
Consequently, the detector progressively improves its detection ability and
localizes more informative positive samples. To implement such self-taught
learning, we propose a seed sample acquisition method via image-to-object
transferring and dense subgraph discovery to find reliable positive samples for
initializing the detector. An online supportive sample harvesting scheme is
further proposed to dynamically select the most confident tight positive
samples and train the detector in a mutual boosting way. To prevent the
detector from being trapped in poor local optima due to overfitting, we propose
a new measure, the relative improvement of predicted CNN scores, to guide the
self-taught learning process. Extensive experiments on PASCAL VOC 2007 and 2012
show that our approach outperforms the state of the art, strongly validating
its effectiveness.
Comment: Accepted as a spotlight paper by CVPR 201
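As a rough illustration of the guidance signal, the sketch below scores candidate boxes by how much the detector's confidence improved between self-taught rounds and keeps only those that improved enough. The formula, threshold, and helper names are assumptions made for illustration; the paper's exact criterion may differ.

```python
# Minimal sketch (an assumption, not the paper's code): using the
# relative improvement of predicted scores between training rounds to
# decide whether a candidate box is kept as a positive sample.

def relative_improvement(prev_score, curr_score, eps=1e-8):
    """Relative change of the detector's score for the same box across
    two self-taught rounds; the paper's exact formula may differ."""
    return (curr_score - prev_score) / (prev_score + eps)

def harvest_positives(boxes, prev_scores, curr_scores, threshold=0.1):
    """Keep boxes whose confidence improved enough relative to the
    previous round, a proxy for 'the detector is learning this object
    rather than overfitting background'."""
    kept = []
    for box, p, c in zip(boxes, prev_scores, curr_scores):
        if relative_improvement(p, c) >= threshold:
            kept.append(box)
    return kept

boxes = [(0, 0, 50, 50), (10, 10, 90, 80), (5, 5, 40, 60)]
prev = [0.30, 0.55, 0.70]
curr = [0.45, 0.50, 0.72]
print(harvest_positives(boxes, prev, curr))
# [(0, 0, 50, 50)]: only the first box improved by >= 10% relative.
```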
Weakly-supervised Visual Grounding of Phrases with Linguistic Structures
We propose a weakly-supervised approach that takes image-sentence pairs as
input and learns to visually ground (i.e., localize) arbitrary linguistic
phrases, in the form of spatial attention masks. Specifically, the model is
trained with images and their associated image-level captions, without any
explicit region-to-phrase correspondence annotations. To this end, we introduce
an end-to-end model which learns visual groundings of phrases with two types of
carefully designed loss functions. In addition to the standard discriminative
loss, which enforces that attended image regions and phrases are consistently
encoded, we propose a novel structural loss which makes use of the parse tree
structures induced by the sentences. In particular, we enforce complementarity
among the attention masks of sibling noun phrases, and compositionality between
the attention masks of parent phrases and their children, as defined by the
sentence parse tree. We validate the effectiveness of our approach on the
Microsoft COCO and Visual Genome datasets.
Comment: CVPR 201
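A minimal sketch of what such a structural loss could look like over dense attention masks: one term penalizes overlap between sibling noun-phrase masks (complementarity), the other penalizes disagreement between a parent's mask and the union of its children's masks (compositionality). The max-union, squared-error form, and loss weights are assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumed loss shapes, not the paper's exact loss):
# a structural loss over spatial attention masks with (i) sibling
# complementarity and (ii) parent-child compositionality.

import numpy as np

def sibling_complementarity(masks):
    """Penalize pixelwise overlap between every pair of sibling masks."""
    loss, n = 0.0, len(masks)
    for i in range(n):
        for j in range(i + 1, n):
            loss += float(np.mean(masks[i] * masks[j]))
    return loss

def parent_child_compositionality(parent, children):
    """Penalize disagreement between the parent mask and the pixelwise
    union (here: max) of its children's masks."""
    union = np.maximum.reduce(children)
    return float(np.mean((parent - union) ** 2))

def structural_loss(parent, children, alpha=1.0, beta=1.0):
    return (alpha * sibling_complementarity(children)
            + beta * parent_child_compositionality(parent, children))

# Toy example: "a man riding a horse" -> siblings "a man" and "a horse".
h = w = 8
man = np.zeros((h, w)); man[:, :4] = 1.0      # attends to the left half
horse = np.zeros((h, w)); horse[:, 4:] = 1.0  # attends to the right half
whole = np.ones((h, w))                       # parent phrase covers both
print(structural_loss(whole, [man, horse]))   # ~0.0: structure respected
```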