Recurrent Pixel Embedding for Instance Grouping
We introduce a differentiable, end-to-end trainable framework, consisting of
two novel components, for solving pixel-level grouping problems such as
instance segmentation. First, we regress pixels into a hyper-spherical embedding
space so that pixels from the same group have high cosine similarity while
those from different groups have similarity below a specified margin. We
analyze the choice of embedding dimension and margin, relating them to
theoretical results on the problem of distributing points uniformly on the
sphere. Second, to group instances, we utilize a variant of mean-shift
clustering, implemented as a recurrent neural network parameterized by kernel
bandwidth. This recurrent grouping module is differentiable, has convergent
dynamics, and admits a probabilistic interpretation. Backpropagating the
group-weighted loss through this module lets learning focus on correcting only
those embedding errors that will not be resolved by subsequent clustering. Our
framework, while conceptually simple and theoretically grounded, is also practically
effective and computationally efficient. We demonstrate substantial
improvements over state-of-the-art instance segmentation for object proposal
generation, and show the benefits of the grouping loss for classification
tasks such as boundary detection and semantic segmentation.
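As a rough illustration of the two components above, here is a minimal PyTorch sketch (not the authors' released code): a pairwise cosine-similarity margin loss on sphere-normalized embeddings, and one recurrent mean-shift iteration with a von Mises-Fisher-style kernel. The tensor shapes, margin, and bandwidth values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pairwise_margin_loss(embed, labels, margin=0.5):
    """Pull same-instance pixels together; push different-instance pairs
    until their cosine similarity falls below `margin`.

    embed:  (N, D) pixel embeddings, assumed L2-normalized onto the sphere.
    labels: (N,) integer instance ids.
    """
    sim = embed @ embed.t()                       # (N, N) cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = (1.0 - sim)[same].mean()                # attract same-group pairs
    neg = F.relu(sim - margin)[~same].mean()      # repel pairs above the margin
    return pos + neg

def mean_shift_step(embed, bandwidth=10.0):
    """One differentiable mean-shift iteration on the hypersphere."""
    k = torch.exp(bandwidth * (embed @ embed.t()))  # kernel affinities
    k = k / k.sum(dim=1, keepdim=True)              # row-normalize to weights
    return F.normalize(k @ embed, dim=1)            # shift, re-project to sphere

# Unroll a few grouping iterations and re-apply the loss to the shifted
# embeddings, so training can ignore errors that clustering itself resolves.
x = F.normalize(torch.randn(64, 16), dim=1)
y = torch.randint(0, 4, (64,))
loss = pairwise_margin_loss(x, y)
for _ in range(3):
    x = mean_shift_step(x)
    loss = loss + pairwise_margin_loss(x, y)
```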
Recurrent Scene Parsing with Perspective Understanding in the Loop
Objects may appear at arbitrary scales in perspective images of a scene,
posing a challenge for recognition systems that process images at a fixed
resolution. We propose a depth-aware gating module that adaptively selects the
pooling field size in a convolutional network architecture according to the
object scale (inversely proportional to the depth) so that small details are
preserved for distant objects while larger receptive fields are used for those
nearby. The depth gating signal is provided by stereo disparity or estimated
directly from monocular input. We integrate this depth-aware gating into a
recurrent convolutional neural network to perform semantic segmentation. Our
recurrent module iteratively refines the segmentation results, leveraging the
depth and semantic predictions from the previous iterations.
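The gating idea can be sketched as follows: a small network maps the depth signal to a per-pixel soft selection over pooling branches with different field sizes. This is a minimal PyTorch sketch under assumed branch sizes and gating architecture, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthGatedPooling(nn.Module):
    def __init__(self, pool_sizes=(1, 3, 5, 7)):
        super().__init__()
        self.pool_sizes = pool_sizes
        # Map depth to a soft, per-pixel selection over the pooling branches.
        self.gate = nn.Conv2d(1, len(pool_sizes), kernel_size=3, padding=1)

    def forward(self, feats, depth):
        # feats: (B, C, H, W) features; depth: (B, 1, H, W) disparity or depth.
        weights = F.softmax(self.gate(depth), dim=1)     # (B, K, H, W)
        out = 0.0
        for i, k in enumerate(self.pool_sizes):
            pooled = F.avg_pool2d(feats, k, stride=1, padding=k // 2)
            out = out + weights[:, i:i + 1] * pooled     # gate each branch
        return out

# Distant pixels should learn to favor the small pooling fields (preserving
# detail), while nearby pixels favor the larger receptive fields.
m = DepthGatedPooling()
y = m(torch.randn(2, 32, 64, 64), torch.rand(2, 1, 64, 64))
```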
Through extensive experiments on four popular large-scale RGB-D datasets, we
demonstrate that this approach achieves competitive semantic segmentation
performance with a substantially more compact model. We carry out
extensive analysis of this architecture including variants that operate on
monocular RGB but use depth as side-information during training, unsupervised
gating as a generic attentional mechanism, and multi-resolution gating. We find
that gated pooling for joint semantic segmentation and depth prediction yields
state-of-the-art results for quantitative monocular depth estimation.
Spatially Aware Dictionary Learning and Coding for Fossil Pollen Identification
We propose a robust approach for performing automatic species-level
recognition of fossil pollen grains in microscopy images that exploits both
global shape and local texture characteristics in a patch-based matching
methodology. We introduce a novel criterion for selecting meaningful and
discriminative exemplar patches. We optimize this criterion during training
using a greedy submodular function optimization framework that gives a
near-optimal solution with bounded approximation error. We use these selected
exemplars as a dictionary basis and propose a spatially-aware sparse coding
method to match test images for identification while maintaining global
shape correspondence. To accelerate the coding process for fast matching, we
introduce a relaxed form that uses spatially-aware soft-thresholding during
coding. Finally, we carry out an experimental study that demonstrates the
effectiveness and efficiency of our exemplar selection and classification
mechanisms on a difficult fine-grained species classification task
distinguishing three types of fossil spruce pollen.
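Two of the pieces above lend themselves to a short sketch: greedy submodular exemplar selection, and a one-shot spatially-aware soft-thresholding code. This is a minimal numpy illustration; the function names, the Gaussian form of the spatial penalty, and the lam/sigma values are assumptions, not the paper's exact formulation.

```python
import numpy as np

def greedy_select(candidates, gain, budget):
    """Greedily pick `budget` exemplars by marginal gain. For a monotone
    submodular `gain`, this gives the classic (1 - 1/e) approximation,
    matching the bounded-error guarantee mentioned above."""
    chosen = []
    for _ in range(budget):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(chosen + [c]) - gain(chosen))
        chosen.append(best)
    return chosen

def spatial_soft_threshold_code(patch, dictionary, positions, query_pos,
                                lam=0.1, sigma=0.2):
    """Correlate a query patch against the exemplar dictionary, then apply a
    soft threshold that grows with the spatial distance between exemplar and
    query locations, keeping matches consistent with global shape.

    patch:      (d,) query patch descriptor.
    dictionary: (K, d) exemplar patches, rows assumed unit-norm.
    positions:  (K, 2) normalized exemplar locations on their grains.
    query_pos:  (2,) normalized location of the query patch."""
    corr = dictionary @ patch                         # similarity to each atom
    dist2 = np.sum((positions - query_pos) ** 2, axis=1)
    thresh = lam * np.exp(dist2 / (2 * sigma ** 2))   # spatially varying threshold
    return np.sign(corr) * np.maximum(np.abs(corr) - thresh, 0.0)

# Toy usage: atoms located far from the query need much stronger correlation
# to receive a nonzero coefficient.
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)
code = spatial_soft_threshold_code(rng.normal(size=64), D,
                                   rng.uniform(size=(50, 2)),
                                   np.array([0.5, 0.5]))
```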