Pairwise Similarity Knowledge Transfer for Weakly Supervised Object Localization
Weakly Supervised Object Localization (WSOL) methods only require image level
labels as opposed to expensive bounding box annotations required by fully
supervised algorithms. We study the problem of learning a localization model
on target classes with weakly supervised image labels, aided by a fully annotated
source dataset. Typically, a WSOL model is first trained to predict class
generic objectness scores on an off-the-shelf fully supervised source dataset
and is then progressively adapted to learn the objects in the weakly
supervised target dataset. In this work, we argue that learning only an
objectness function is a weak form of knowledge transfer, and we propose to
additionally learn a class-wise pairwise similarity function that directly
compares two input proposals. The combined localization model and the estimated object
annotations are jointly learned in an alternating optimization paradigm as is
typically done in standard WSOL methods. In contrast to the existing work that
learns pairwise similarities, our approach optimizes a unified objective with
convergence guarantee and it is computationally efficient for large-scale
applications. Experiments on the COCO and ILSVRC 2013 detection datasets show
that the performance of the localization model improves significantly with the
inclusion of the pairwise similarity function. For instance, on the ILSVRC dataset,
the Correct Localization (CorLoc) performance improves from 72.8% to 78.2%
which sets a new state of the art for the WSOL task in the context of
knowledge transfer.

Comment: ECCV 2020. Formerly titled "In Defense of Graph Inference Algorithms
for Weakly Supervised Object Localization".