Pseudo Mask Augmented Object Detection
In this work, we present a novel and effective framework to facilitate object
detection with the instance-level segmentation information that is only
supervised by bounding box annotation. Starting from the joint object detection
and instance segmentation network, we propose to recursively estimate the
pseudo ground-truth object masks from the instance-level object segmentation
network training, and then enhance the detection network with top-down
segmentation feedback. The pseudo ground-truth masks and network parameters are
optimized alternately so that each benefits the other. To obtain reliable
pseudo masks in each iteration, we embed a graphical inference step that
incorporates the low-level image appearance consistency and the bounding box
annotations to refine the segmentation masks predicted by the segmentation
network. Our approach progressively improves the object detection performance
by incorporating the detailed pixel-wise information learned from the
weakly-supervised segmentation network. Extensive evaluation on the detection
task on PASCAL VOC 2007 and 2012 [12] verifies that the proposed approach is
effective.
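A minimal sketch of one refinement step in the alternating loop described above: the segmentation network's predicted foreground probabilities are constrained by the ground-truth bounding box and thresholded into a binary pseudo mask. This is a hypothetical numpy simplification; the paper's graphical inference also uses low-level appearance consistency, which is omitted here.

```python
import numpy as np

def refine_pseudo_mask(prob, box, thresh=0.5):
    """Clip a predicted foreground probability map to its bounding box
    and threshold it into a binary pseudo ground-truth mask.
    prob: (H, W) foreground probabilities; box: (y0, x0, y1, x1).
    Hypothetical simplification: only the box constraint is kept."""
    y0, x0, y1, x1 = box
    mask = np.zeros_like(prob, dtype=bool)
    # Pixels outside the annotated box can never be foreground.
    mask[y0:y1, x0:x1] = prob[y0:y1, x0:x1] >= thresh
    return mask
```

In the full alternation, the detection/segmentation network would be retrained on these refined masks before the next re-estimation round.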
segDeepM: Exploiting Segmentation and Context in Deep Neural Networks for Object Detection
In this paper, we propose an approach that exploits object segmentation in
order to improve the accuracy of object detection. We frame the problem as
inference in a Markov Random Field, in which each detection hypothesis scores
object appearance as well as contextual information using Convolutional Neural
Networks, and allows the hypothesis to choose and score a segment out of a
large pool of accurate object segmentation proposals. This enables the detector
to incorporate additional evidence when it is available and thus produce
more accurate detections. Our experiments show an improvement of 4.1% in mAP
over the R-CNN baseline on PASCAL VOC 2010, and 3.4% over the current
state-of-the-art, demonstrating the power of our approach.
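The scoring idea above can be sketched as a linear combination of potentials, where each detection hypothesis is free to pick the best segment from its proposal pool. This is a hypothetical stand-in: `score_hypothesis` and the fixed weights are illustrative, whereas the paper learns the MRF weights and computes the potentials with CNNs.

```python
def score_hypothesis(appearance, context, segment_scores, w=(1.0, 0.5, 0.5)):
    """Combine an appearance potential, a context potential, and the
    best-scoring segment from a proposal pool into one detection score.
    Hypothetical linear form with fixed weights; segment_scores may be
    empty, in which case the segment term contributes nothing."""
    wa, wc, ws = w
    best_seg = max(segment_scores) if segment_scores else 0.0
    return wa * appearance + wc * context + ws * best_seg
```

Because the segment term is optional, the detector falls back to appearance and context alone when no accurate segment proposal is available.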
Holistic, Instance-Level Human Parsing
Object parsing -- the task of decomposing an object into its semantic parts
-- has traditionally been formulated as a category-level segmentation problem.
Consequently, when there are multiple objects in an image, current methods
cannot count the number of objects in the scene, nor can they determine which
part belongs to which object. We address this problem by segmenting the parts
of objects at an instance-level, such that each pixel in the image is assigned
a part label, as well as the identity of the object it belongs to. Moreover, we
show how this approach benefits us in obtaining segmentations at coarser
granularities as well. Our proposed network is trained end-to-end given
detections, and begins with a category-level segmentation module. Thereafter, a
differentiable Conditional Random Field, defined over a variable number of
instances for every input image, reasons about the identity of each part by
associating it with a human detection. In contrast to other approaches, our
method can handle the varying number of people in each image and our holistic
network produces state-of-the-art results in instance-level part and human
segmentation, together with competitive results in category-level part
segmentation, all achieved by a single forward-pass through our neural network.
Comment: Poster at BMVC 201
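The association step described above, assigning each part-labelled pixel the identity of a human detection, can be sketched greedily: every part pixel takes the highest-scoring detection whose box covers it. This is a hypothetical simplification of the paper's differentiable CRF reasoning; `assign_instances` and its greedy rule are illustrative only.

```python
import numpy as np

def assign_instances(part_labels, boxes, scores):
    """Associate each part-labelled pixel with a detected person.
    part_labels: (H, W) ints, 0 = background.
    boxes: list of (y0, x0, y1, x1); scores: per-detection confidences.
    Returns an (H, W) instance map, 0 = no instance.
    Hypothetical greedy stand-in for the paper's CRF."""
    inst = np.zeros(part_labels.shape, dtype=int)
    # Paint low-scoring detections first so higher scores overwrite them.
    for i in np.argsort(scores):
        y0, x0, y1, x1 = boxes[i]
        region = inst[y0:y1, x0:x1]
        region[part_labels[y0:y1, x0:x1] > 0] = i + 1
    return inst
```

Handling a variable number of people then reduces to the length of the detection list, mirroring how the CRF is defined over a variable number of instances per image.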
Conditional Random Fields as Recurrent Neural Networks
Pixel-level labelling tasks, such as semantic segmentation, play a central
role in image understanding. Recent approaches have attempted to harness the
capabilities of deep learning techniques for image recognition to tackle
pixel-level labelling tasks. One central issue in this methodology is the
limited capacity of deep learning techniques to delineate visual objects. To
solve this problem, we introduce a new form of convolutional neural network
that combines the strengths of Convolutional Neural Networks (CNNs) with
Conditional Random Field (CRF)-based probabilistic graphical modelling. To
this end, we formulate mean-field approximate inference for Conditional
Random Fields with Gaussian pairwise potentials as a Recurrent Neural Network.
This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a
deep network that has desirable properties of both CNNs and CRFs. Importantly,
our system fully integrates CRF modelling with CNNs, making it possible to
train the whole deep network end-to-end with the usual back-propagation
algorithm, avoiding offline post-processing methods for object delineation. We
apply the proposed method to the problem of semantic image segmentation,
obtaining top results on the challenging Pascal VOC 2012 segmentation
benchmark.
Comment: This paper is published in IEEE ICCV 201
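The mean-field formulation above can be sketched as a fixed number of recurrent update steps over the per-pixel label distributions. This is a hypothetical 1-D numpy simplification: nearest-neighbour message passing stands in for the Gaussian bilateral filtering CRF-RNN actually uses, and the Potts weight is fixed rather than learned.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mean_field(unary, n_iters=5, w_pair=1.0):
    """Mean-field inference for a CRF over a 1-D chain of pixels,
    unrolled as recurrent steps. unary: (n_pixels, n_labels) scores.
    Hypothetical simplification of CRF-RNN's Gaussian filtering."""
    q = softmax(unary)
    for _ in range(n_iters):
        # Message passing: aggregate the neighbours' label distributions.
        msg = np.zeros_like(q)
        msg[1:] += q[:-1]
        msg[:-1] += q[1:]
        # Potts compatibility transform: neighbour mass on *other*
        # labels acts as a penalty for each label.
        penalty = w_pair * (msg.sum(axis=1, keepdims=True) - msg)
        # Local update: combine unaries with the penalty, renormalize.
        q = softmax(unary - penalty)
    return q
```

Because every step is differentiable, unrolling the loop lets the whole pipeline train end-to-end with back-propagation, which is the key property the abstract highlights.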
Object detection via a multi-region & semantic segmentation-aware CNN model
We propose an object detection system that relies on a multi-region deep
convolutional neural network (CNN) that also encodes semantic
segmentation-aware features. The resulting CNN-based representation aims at
capturing a diverse set of discriminative appearance factors and exhibits
localization sensitivity that is essential for accurate object localization. We
exploit the above properties of our recognition module by integrating it into an
iterative localization mechanism that alternates between scoring a box proposal
and refining its location with a deep CNN regression model. Thanks to the
efficient use of our modules, we detect objects with very high localization
accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we
achieve mAPs of 78.2% and 73.9% respectively, surpassing any other published
work by a significant margin.
Comment: Extended technical report -- short version to appear at ICCV 201
Hypercolumns for Object Segmentation and Fine-grained Localization
Recognition algorithms based on convolutional networks (CNNs) typically use
the output of the last layer as feature representation. However, the
information in this layer may be too coarse for precise localization.
Conversely, earlier layers may be precise in localization but do not
capture semantics. To get the best of both worlds, we define the hypercolumn at
a pixel as the vector of activations of all CNN units above that pixel. Using
hypercolumns as pixel descriptors, we show results on three fine-grained
localization tasks: simultaneous detection and segmentation [22], where we
improve the state-of-the-art from 49.7 [22] mean AP^r to 60.0; keypoint
localization, where we get a 3.3-point boost over [20]; and part labeling, where
we show a 6.6-point gain over a strong baseline.
Comment: CVPR Camera read
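The hypercolumn definition above, the vector of activations of all CNN units above a pixel, can be sketched by upsampling each layer's feature map to the input resolution and concatenating the per-pixel activation vectors. This is a minimal numpy sketch under assumptions: nearest-neighbour upsampling replaces the bilinear interpolation used in practice, and fully-connected-layer outputs are omitted.

```python
import numpy as np

def upsample_nearest(fmap, H, W):
    """Nearest-neighbour upsample a (C, h, w) feature map to (C, H, W)."""
    C, h, w = fmap.shape
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return fmap[:, rows][:, :, cols]

def hypercolumn(feature_maps, y, x, H, W):
    """Stack the activations of every layer 'above' pixel (y, x):
    each coarser map is brought to the input resolution, then the
    per-layer activation vectors at that pixel are concatenated."""
    ups = [upsample_nearest(f, H, W) for f in feature_maps]
    return np.concatenate([u[:, y, x] for u in ups])
```

The resulting per-pixel descriptor mixes coarse semantic features from late layers with spatially precise features from early layers, which is exactly the trade-off the abstract motivates.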