Saliency-guided Adaptive Seeding for Supervoxel Segmentation
We propose a new saliency-guided method for generating supervoxels in 3D
space. Rather than using an evenly distributed spatial seeding procedure, our
method uses visual saliency to guide the process of supervoxel generation. This
results in densely distributed, small, and precise supervoxels in salient
regions which often contain objects, and larger supervoxels in less salient
regions that often correspond to background. Our approach substantially improves
the quality of the resulting supervoxel segmentation in terms of boundary recall
and under-segmentation error on publicly available benchmarks.
Comment: 6 pages, accepted to IROS201
GASP: Geometric Association with Surface Patches
A fundamental challenge to sensory processing tasks in perception and
robotics is the problem of obtaining data associations across views. We present
a robust solution for ascertaining potentially dense surface patch (superpixel)
associations, requiring just range information. Our approach involves
decomposition of a view into regularized surface patches. We represent them as
sequences expressing geometry invariantly over their superpixel neighborhoods,
as uniquely consistent partial orderings. We match these representations
through an optimal sequence comparison metric based on the Damerau-Levenshtein
distance, enabling robust association with quadratic complexity (in contrast
to hitherto employed joint matching formulations which are NP-complete). The
approach is able to perform under wide baselines, heavy rotations, partial
overlaps, significant occlusions and sensor noise.
The technique does not require any priors -- motion or otherwise, and does
not make restrictive assumptions on scene structure and sensor movement. It
does not require appearance -- is hence more widely applicable than appearance
reliant methods, and invulnerable to related ambiguities such as textureless or
aliased content. We present promising qualitative and quantitative results
under diverse settings, along with comparatives with popular approaches based
on range as well as RGB-D data.
Comment: International Conference on 3D Vision, 201
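The sequence-comparison metric named above, the Damerau-Levenshtein distance, can be sketched in its common optimal-string-alignment form, which runs in quadratic time via dynamic programming. This is a generic illustration of the metric itself, not the paper's patch-association pipeline, and the paper may use a different variant:

```python
def damerau_levenshtein(a, b):
    """Optimal-string-alignment variant of Damerau-Levenshtein distance:
    the minimum number of insertions, deletions, substitutions, and
    adjacent transpositions needed to turn sequence a into sequence b.
    Runs in O(len(a) * len(b)) time.
    """
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

# One adjacent transposition separates these sequences.
dist = damerau_levenshtein("abcd", "acbd")
```

Because it works on any sequences of comparable elements, the same dynamic program applies when the "characters" are geometric descriptors of superpixel neighborhoods rather than letters.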