An easy-to-hard learning paradigm for multiple classes and multiple labels
© 2017 Weiwei Liu, Ivor W. Tsang and Klaus-Robert Müller. Many applications, such as human action recognition and object detection, can be formulated as a multiclass classification problem. One-vs-rest (OVR) is one of the most widely used approaches for multiclass classification due to its simplicity and excellent performance. However, many confusing classes in such applications degrade its results. For example, hand clap and boxing are two confusing actions: hand clap is easily misclassified as boxing, and vice versa. Precisely classifying confusing classes therefore remains a challenging task. To obtain better performance on multiclass problems with confusing classes, we first develop a classifier chain model for multiclass classification (CCMC) to transfer class information between classifiers. Then, based on an analysis of our proposed model, we propose an easy-to-hard learning paradigm for multiclass classification that automatically identifies easy and hard classes and uses the predictions for easier classes to help solve harder ones. Similar to CCMC, the classifier chain (CC) model was proposed by Read et al. (2009) to capture label dependency in multi-label classification. However, CC does not consider the order of difficulty of the labels and performs poorly when there are many confusing labels; learning an appropriate label order for CC is therefore non-trivial. Motivated by our analysis of CCMC, we also propose an easy-to-hard learning paradigm for multi-label classification that automatically identifies easy and hard labels and then uses the predictions for simpler labels to help solve harder ones. We further demonstrate that our strategy can be applied to a wide range of tasks, such as ordinal classification and relationship prediction. Extensive empirical studies validate our analysis and the effectiveness of our proposed easy-to-hard learning strategies.
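The chaining idea in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names and the use of a toy nearest-centroid scorer (in place of a real OVR classifier) are hypothetical; only the easy-to-hard ordering and the passing of earlier predictions down the chain reflect the abstract's description.

```python
import numpy as np

def centroid_score(X, y_bin):
    """Toy one-vs-rest scorer: positive-minus-negative centroid distance."""
    pos = X[y_bin == 1].mean(axis=0)
    neg = X[y_bin == 0].mean(axis=0)
    def score(Z):
        # higher score = closer to the positive centroid than the negative one
        return np.linalg.norm(Z - neg, axis=1) - np.linalg.norm(Z - pos, axis=1)
    return score

def easy_to_hard_chain(X, y, n_classes):
    """Order classes easy -> hard by plain OVR training accuracy, then train a
    chain in that order, feeding each scorer's output to later scorers."""
    # 1. measure per-class "easiness" with independent OVR scorers
    accs = []
    for c in range(n_classes):
        s = centroid_score(X, (y == c).astype(int))
        accs.append(((s(X) > 0) == (y == c)).mean())
    order = np.argsort(accs)[::-1]            # easiest class first
    # 2. train chained scorers, augmenting features with earlier predictions
    scorers, Xa = [], X.copy()
    for c in order:
        s = centroid_score(Xa, (y == c).astype(int))
        scorers.append((c, s))
        Xa = np.hstack([Xa, s(Xa)[:, None]])  # pass prediction down the chain
    return order, scorers
```

On well-separated toy data the ordering step is near-arbitrary (all classes are easy); the point of the sketch is the structure, not the toy scorer.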
Multiple Instance Curriculum Learning for Weakly Supervised Object Detection
When supervising an object detector with weakly labeled data, most existing
approaches tend to get trapped in discriminative object parts, e.g.,
finding the face of a cat instead of the full body, because they lack
supervision on the full extent of objects. To address this challenge, we
incorporate object segmentation into the detector training, which guides the
model to correctly localize the full objects. We propose the multiple instance
curriculum learning (MICL) method, which injects curriculum learning (CL) into
the multiple instance learning (MIL) framework. The MICL method starts by
automatically picking the easy training examples, where the extent of the
segmentation masks agrees with the detection bounding boxes. The training set is
gradually expanded to include harder examples to train strong detectors that
handle complex images. The proposed MICL method with segmentation in the loop
outperforms the state-of-the-art weakly supervised object detectors by a
substantial margin on the PASCAL VOC datasets. Comment: Published in BMVC 201
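The easy-example selection described here (keep examples where detection boxes and segmentation-derived boxes agree, then gradually admit harder ones) can be sketched as follows. The helper names and the decreasing-IoU schedule are assumptions for illustration, not the paper's actual criterion or thresholds.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def curriculum_rounds(det_boxes, seg_boxes, thresholds=(0.8, 0.6, 0.4)):
    """Yield a growing training set: start with examples whose detection box
    agrees with the segmentation-derived box, lowering the bar each round."""
    for t in thresholds:
        keep = [i for i, (d, s) in enumerate(zip(det_boxes, seg_boxes))
                if iou(d, s) >= t]
        yield t, keep
```

Each round's `keep` indices would feed a retraining step of the detector; later rounds include harder, less-consistent examples.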
Collaborative Feature Learning from Social Media
Image feature representation plays an essential role in image recognition and
related tasks. The current state-of-the-art feature learning paradigm is
supervised learning from labeled data. However, this paradigm requires
large-scale category labels, which limits its applicability to domains where
labels are hard to obtain. In this paper, we propose a new data-driven feature
learning paradigm which does not rely on category labels. Instead, we learn
from user behavior data collected on social media. Concretely, we use the image
relationship discovered in the latent space from the user behavior data to
guide the image feature learning. We collect a large-scale image and user
behavior dataset from Behance.net. The dataset consists of 1.9 million images
and over 300 million view records from 1.9 million users. We validate our
feature learning paradigm on this dataset and find that the learned feature
significantly outperforms the state-of-the-art image features in learning
better image similarities. We also show that the learned feature performs
competitively on various recognition benchmarks.
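The core idea of using user-behavior co-occurrence to place related images close in a latent space can be sketched with a simple factorization. This is only a minimal analogy, not the paper's method: factorizing a user-by-image view matrix with SVD (a hypothetical stand-in for their latent-space discovery) gives image embeddings in which co-viewed images are similar.

```python
import numpy as np

def behavior_embeddings(view_matrix, dim=2):
    """Factorize a (users x images) view matrix; scaled right singular
    vectors give one embedding per image, so images viewed by the same
    users end up close together."""
    U, S, Vt = np.linalg.svd(view_matrix, full_matrices=False)
    return Vt[:dim].T * S[:dim]   # shape: (n_images, dim)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```

With 1.9M images and 300M view records as in the dataset described above, a scalable factorization or learned projection would replace the dense SVD used here.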
Self Paced Deep Learning for Weakly Supervised Object Detection
In a weakly-supervised scenario, object detectors must be trained using
image-level annotation alone. Since bounding-box-level ground truth is not
available, most of the solutions proposed so far are based on an iterative,
Multiple Instance Learning framework in which the current classifier is used to
select the highest-confidence boxes in each image, which are treated as
pseudo-ground truth in the next training iteration. However, the errors of an
immature classifier can make the process drift, typically introducing many
false positives into the training set. To alleviate this problem, we propose
in this paper a training protocol based on the self-paced learning paradigm.
The main idea is to iteratively select a subset of images and boxes that are
the most reliable, and use them for training. While in the past few years
similar strategies have been adopted for SVMs and other classifiers, we are the
first to show that a self-paced approach can be used with deep-network-based
classifiers in an end-to-end training pipeline. The method we propose is built
on the fully-supervised Fast-RCNN architecture and can be applied to similar
architectures which represent the input image as a bag of boxes. We show
state-of-the-art results on Pascal VOC 2007, Pascal VOC 2010 and ILSVRC 2013.
On ILSVRC 2013 our results based on a low-capacity AlexNet network outperform
even those weakly-supervised approaches which are based on much higher-capacity
networks. Comment: To appear in IEEE Transactions on PAM
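The self-paced schedule described in this abstract (iteratively keep the most reliable subset, then grow it) can be sketched as below. The function name, the confidence scores, and the linear growth schedule are illustrative assumptions; in the actual pipeline the scores would come from the detector being trained, and each yielded subset would drive one retraining round.

```python
import numpy as np

def self_paced_rounds(confidences, start_frac=0.3, step=0.2):
    """Self-paced schedule: each round keeps the most confident images,
    growing the kept fraction until the whole training set is used."""
    n = len(confidences)
    order = np.argsort(confidences)[::-1]     # most reliable images first
    frac = start_frac
    while True:
        k = min(n, max(1, int(round(frac * n))))
        yield order[:k].tolist()              # indices to train on this round
        if k == n:
            break
        frac += step
```

Starting from a small, clean subset limits the drift from early false positives noted above, since the immature detector never labels the hard images until later rounds.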