LSDA: Large Scale Detection Through Adaptation
A major challenge in scaling object detection is the difficulty of obtaining
labeled images for large numbers of categories. Recently, deep convolutional
neural networks (CNNs) have emerged as clear winners on object classification
benchmarks, in part due to training with 1.2M+ labeled classification images.
Unfortunately, only a small fraction of those labels are available for the
detection task. It is much cheaper and easier to collect large quantities of
image-level labels from search engines than it is to collect detection data and
label it with precise bounding boxes. In this paper, we propose Large Scale
Detection through Adaptation (LSDA), an algorithm which learns the difference
between the two tasks and transfers this knowledge to classifiers for
categories without bounding box annotated data, turning them into detectors.
Our method has the potential to enable detection for the tens of thousands of
categories that lack bounding box annotations, yet have plenty of
classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge
demonstrates the efficacy of our approach. This algorithm enables us to produce
a detector for more than 7.6K categories by using available classification data
from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify
our architecture to produce a fast detector (running at 2 fps for the
7.6K-category detector). Models and software are available.
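The core idea of LSDA, as described above, is to learn the classifier-to-detector transformation on categories that have both kinds of labels, then apply it to categories with classification data only. A minimal sketch of that transfer, assuming linear per-category weight vectors and using a simple additive offset (all names, shapes, and data here are illustrative, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_paired = 16, 5

# Hypothetical weights for categories that have BOTH classification and
# detection data (shapes and values are made up for illustration).
W_cls_paired = rng.normal(size=(n_paired, d))
W_det_paired = W_cls_paired + rng.normal(scale=0.1, size=(n_paired, d))

# Learn the average classifier-to-detector difference on the paired categories.
delta = (W_det_paired - W_cls_paired).mean(axis=0)

# Adapt a classifier for a category that has NO bounding-box annotations,
# "turning it into" a detector by applying the learned difference.
w_cls_new = rng.normal(size=d)
w_det_new = w_cls_new + delta
```

This additive form is only one way to realize "learning the difference between the two tasks"; the point is that the transformation is estimated once and reused for every annotation-poor category.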
Active Transfer Learning with Zero-Shot Priors: Reusing Past Datasets for Future Tasks
How can we reuse existing knowledge, in the form of available datasets, when
solving a new and apparently unrelated target task from a set of unlabeled
data? In this work we make a first contribution to answer this question in the
context of image classification. We frame this quest as an active learning
problem and use zero-shot classifiers to guide the learning process by linking
the new task to the existing classifiers. By revisiting the dual formulation of
adaptive SVM, we reveal two basic conditions to choose greedily only the most
relevant samples to be annotated. On this basis we propose an effective active
learning algorithm which learns the best possible target classification model
with minimum human labeling effort. Extensive experiments on two challenging
datasets show the value of our approach compared to the state-of-the-art active
learning methodologies, as well as its potential to reuse past datasets with
minimal effort for future tasks.
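The abstract describes greedily selecting only the most relevant unlabeled samples for annotation, guided by zero-shot classifiers. A loose sketch of one standard greedy rule of this kind, margin-based selection under a linear prior (the prior vector, data, and budget below are invented for illustration; the paper's actual conditions come from the dual of the adaptive SVM):

```python
import numpy as np

rng = np.random.default_rng(1)
X_unlabeled = rng.normal(size=(100, 8))

# Hypothetical zero-shot prior: a linear scoring direction linking the new
# target task to existing classifiers.
w_prior = rng.normal(size=8)

# Greedy selection: annotate the samples the prior is least certain about,
# i.e. those with the smallest absolute margin.
margins = np.abs(X_unlabeled @ w_prior)
budget = 5
to_annotate = np.argsort(margins)[:budget]
```

Each round, the selected samples would be labeled and the target classifier updated before re-scoring the remaining pool.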
Indexing ensembles of exemplar-SVMs with rejecting taxonomies
Ensembles of Exemplar-SVMs have been used for a wide variety of tasks, such as object detection, segmentation, label transfer and mid-level feature learning. To make this technique effective, however, a large collection of classifiers is needed, which often makes the evaluation phase prohibitively expensive. To overcome this issue we exploit the joint distribution of exemplar classifier scores to build a taxonomy that indexes each Exemplar-SVM and enables a fast evaluation of the whole ensemble. We experiment on the Pascal 2007 benchmark with the task of object detection and with a simple segmentation task, in order to verify the robustness of our indexing data structure relative to the standard ensemble. We also introduce a rejection strategy that discards irrelevant image patches, for more efficient access to the data.
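The speedup described above comes from grouping correlated exemplar classifiers and rejecting whole groups cheaply before scoring their members. A toy sketch of that idea, assuming linear exemplar-SVMs and using a crude hand-rolled grouping in place of the paper's score-distribution taxonomy (all names, thresholds, and the grouping key are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_exemplars, d = 40, 10
W = rng.normal(size=(n_exemplars, d))  # one linear exemplar-SVM per row

# Build a flat two-level "taxonomy": group exemplars by a crude key derived
# from their weights (a stand-in for clustering correlated classifiers).
groups = {}
for i, w in enumerate(W):
    key = tuple(np.sign(w[:2]).astype(int))
    groups.setdefault(key, []).append(i)

def evaluate(x, reject_thresh=-1.0):
    """Score patch x, skipping groups whose average classifier rejects it."""
    scores = {}
    for key, idxs in groups.items():
        # Cheap gate: one dot product with the group's average classifier.
        if W[idxs].mean(axis=0) @ x < reject_thresh:
            continue  # reject the whole branch without scoring its members
        for i in idxs:
            scores[i] = float(W[i] @ x)
    return scores
```

When a patch is clearly background, most branches fail the gate and only a fraction of the ensemble is ever evaluated; on ambiguous patches the cost degrades gracefully toward that of the full ensemble.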
Alpha Net: Adaptation with Composition in Classifier Space
Deep learning classification models typically train poorly on classes with
small numbers of examples. Motivated by the human ability to solve this task,
models have been developed that transfer knowledge from classes with many
examples to learn classes with few examples. Critically, the majority of these
models transfer knowledge within model feature space. In this work, we
demonstrate that transferring knowledge within classifier space is more
effective and efficient. Specifically, by linearly combining strong nearest
neighbor classifiers along with a weak classifier, we are able to compose a
stronger classifier. Uniquely, our model can be implemented on top of any
existing classification model that includes a classifier layer. We showcase the
success of our approach in the task of long-tailed recognition, whereby the
classes with few examples, otherwise known as the "tail" classes, suffer the
most in performance and are the most challenging classes to learn. Using
classifier-level knowledge transfer, we are able to drastically improve the
state-of-the-art performance on the "tail" categories, by a margin as high as
12.6%.

Comment: Under review