Deformable Shape Completion with Graph Convolutional Autoencoders
The availability of affordable and portable depth sensors has made scanning
objects and people simpler than ever. However, dealing with occlusions and
missing parts is still a significant challenge. The problem of reconstructing a
(possibly non-rigidly moving) 3D object from a single or multiple partial scans
has received increasing attention in recent years. In this work, we propose a
novel learning-based method for the completion of partial shapes. Unlike the
majority of existing approaches, our method focuses on objects that can undergo
non-rigid deformations. The core of our method is a variational autoencoder
with graph convolutional operations that learns a latent space for complete
realistic shapes. At inference, we optimize to find the representation in this
latent space that best fits the generated shape to the known partial input. The
completed shape exhibits a realistic appearance on the unknown part. We show
promising results towards the completion of synthetic and real scans of human
body and face meshes exhibiting different styles of articulation and
partiality.
Comment: CVPR 201
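The inference step described above, optimising over the latent space so the decoded shape fits the known partial input, can be sketched with a toy linear stand-in for the trained decoder. The paper's decoder is a graph-convolutional VAE; the linear map, dimensions, and step size below are illustrative assumptions only:

```python
import numpy as np

# Hypothetical stand-in for a trained decoder: a fixed linear map from an
# 8-D latent code to 50 vertex positions (the paper uses a graph-conv VAE).
rng = np.random.default_rng(0)
LATENT_DIM, NUM_VERTS = 8, 50
W = rng.normal(size=(NUM_VERTS * 3, LATENT_DIM))

def decode(z):
    return (W @ z).reshape(NUM_VERTS, 3)

# Simulate a partial scan: only every other vertex is observed.
z_true = rng.normal(size=LATENT_DIM)
full_shape = decode(z_true)
observed = np.arange(0, NUM_VERTS, 2)
partial_scan = full_shape[observed]

# Inference-time optimisation: gradient descent on the latent code z so
# that the decoded shape fits the observed vertices; the completed shape,
# including the unknown part, is then read off decode(z).
A = W.reshape(NUM_VERTS, 3, LATENT_DIM)[observed].reshape(-1, LATENT_DIM)
b = partial_scan.reshape(-1)
z = np.zeros(LATENT_DIM)
for _ in range(1000):
    z -= 0.005 * (A.T @ (A @ z - b))   # gradient of 0.5*||A z - b||^2

completed = decode(z)   # full shape, including unobserved vertices
```

With a linear decoder the fit is a least-squares problem and the latent code is recovered exactly; with a real nonlinear decoder the same loop would use automatic differentiation and a regulariser keeping z near the prior.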
GOGMA: Globally-Optimal Gaussian Mixture Alignment
Gaussian mixture alignment is a family of approaches that are frequently used
for robustly solving the point-set registration problem. However, since they
use local optimisation, they are susceptible to local minima and can only
guarantee local optimality. Consequently, their accuracy is strongly dependent
on the quality of the initialisation. This paper presents the first
globally-optimal solution to the 3D rigid Gaussian mixture alignment problem
under the L2 distance between mixtures. The algorithm, named GOGMA, employs a
branch-and-bound approach to search the space of 3D rigid motions SE(3),
guaranteeing global optimality regardless of the initialisation. The geometry
of SE(3) was used to find novel upper and lower bounds for the objective
function and local optimisation was integrated into the scheme to accelerate
convergence without voiding the optimality guarantee. The evaluation
empirically supported the optimality proof and showed that the method performed
much more robustly on two challenging datasets than an existing
globally-optimal registration solution.
Comment: Manuscript in press, 2016 IEEE Conference on Computer Vision and Pattern Recognition
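The branch-and-bound principle behind GOGMA can be illustrated on a much smaller problem. The sketch below searches a 1D rotation angle with a Lipschitz-based lower bound; GOGMA's actual bounds are derived from the geometry of SE(3) and its cost is the L2 distance between mixtures, so the cost function and constant here are toy assumptions:

```python
import heapq
import math

# Illustrative branch-and-bound over a 1D rotation angle. The cost and its
# Lipschitz constant L are hypothetical stand-ins for GOGMA's SE(3) search
# with geometry-derived bounds.
def cost(theta):
    return 1.0 - math.cos(theta - 1.3)   # toy alignment cost, minimum at 1.3

L = 1.0   # Lipschitz constant of cost (its derivative is bounded by 1)

def bnb_minimise(lo, hi, tol=1e-4):
    mid = (lo + hi) / 2
    best_theta, best_val = mid, cost(mid)
    # priority queue of intervals keyed by their lower bound
    heap = [(best_val - L * (hi - lo) / 2, lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb >= best_val - tol:
            continue   # interval cannot beat the incumbent by more than tol
        m = (a + b) / 2
        for a2, b2 in ((a, m), (m, b)):   # branch: split the interval
            m2 = (a2 + b2) / 2
            v2 = cost(m2)
            if v2 < best_val:             # update the incumbent solution
                best_val, best_theta = v2, m2
            child_lb = v2 - L * (b2 - a2) / 2   # bound: Lipschitz lower bound
            if child_lb < best_val - tol:
                heapq.heappush(heap, (child_lb, a2, b2))
    return best_theta, best_val
```

Because intervals are only discarded when their lower bound cannot improve on the incumbent, the returned value is globally optimal to within `tol` regardless of where the search starts, which is the guarantee GOGMA provides over SE(3).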
3D Face Recognition using Significant Point based SULD Descriptor
In this work, we present a new 3D face recognition method based on Speeded-Up
Local Descriptor (SULD) of significant points extracted from the range images
of faces. The proposed model consists of a method for extracting distinctive
invariant features from range images of faces that can be used to perform
reliable matching between different poses of range images of faces. For a given
3D face scan, range images are computed and the potential interest points are
identified by searching at all scales. Based on the stability of the interest
point, significant points are extracted. For each significant point we compute
the SULD descriptor, a vector of values from the convolved
Haar wavelet responses located on concentric circles centred on the significant
point, and where the amount of Gaussian smoothing is proportional to the radii
of the circles. Experimental results show that the proposed method achieves a
higher recognition rate than existing contemporary models developed for 3D
face recognition.
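The descriptor construction described above can be sketched as follows. The sampling pattern, radii, number of samples per circle, and the proportionality constant between radius and smoothing are illustrative assumptions, and simple box-sum differences stand in for the convolved Haar wavelet responses:

```python
import numpy as np

# Sketch of a SULD-style descriptor: Haar-like responses sampled on
# concentric circles around a significant point, with the smoothing window
# growing in proportion to the circle radius. All parameter values here
# are assumptions for illustration, not the paper's exact choices.
def haar_responses(img, y, x, s):
    # Crude Haar responses: differences of box sums over half-windows.
    patch = img[y - s:y + s, x - s:x + s]
    dx = patch[:, s:].sum() - patch[:, :s].sum()   # horizontal response
    dy = patch[s:, :].sum() - patch[:s, :].sum()   # vertical response
    return dx, dy

def suld_descriptor(range_img, yc, xc, radii=(4, 8, 12), samples=8):
    desc = []
    for r in radii:
        sigma = 0.4 * r                      # smoothing ~ radius (assumption)
        s = max(2, int(sigma))               # half-size of the Haar window
        for k in range(samples):
            ang = 2 * np.pi * k / samples
            y = int(round(yc + r * np.sin(ang)))
            x = int(round(xc + r * np.cos(ang)))
            dx, dy = haar_responses(range_img, y, x, s)
            desc.extend([dx, dy, abs(dx), abs(dy)])
    v = np.array(desc, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v             # L2-normalised descriptor
```

Matching between poses would then reduce to comparing these normalised vectors, e.g. by Euclidean distance, for significant points found stable across scales.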
GASP : Geometric Association with Surface Patches
A fundamental challenge to sensory processing tasks in perception and
robotics is the problem of obtaining data associations across views. We present
a robust solution for ascertaining potentially dense surface patch (superpixel)
associations, requiring just range information. Our approach involves
decomposition of a view into regularized surface patches. We represent them as
sequences expressing geometry invariantly over their superpixel neighborhoods,
as uniquely consistent partial orderings. We match these representations
through an optimal sequence comparison metric based on the Damerau-Levenshtein
distance - enabling robust association with quadratic complexity (in contrast
to hitherto employed joint matching formulations which are NP-complete). The
approach is able to perform under wide baselines, heavy rotations, partial
overlaps, significant occlusions and sensor noise.
The technique does not require any priors -- motion or otherwise, and does
not make restrictive assumptions on scene structure and sensor movement. It
does not require appearance, and is hence more widely applicable than
appearance-reliant methods and invulnerable to related ambiguities such as textureless or
aliased content. We present promising qualitative and quantitative results
under diverse settings, along with comparatives with popular approaches based
on range as well as RGB-D data.
Comment: International Conference on 3D Vision, 201
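The quadratic-complexity comparison metric the abstract refers to is based on the Damerau-Levenshtein distance; its restricted "optimal string alignment" variant is sketched below over symbolic sequences. The symbols here are a generic stand-in, since GASP's actual encoding of patch-neighbourhood orderings is not reproduced:

```python
# Optimal string alignment (restricted Damerau-Levenshtein) distance,
# computed by dynamic programming in O(len(a) * len(b)) time, i.e. the
# quadratic complexity the abstract contrasts with NP-complete joint
# matching formulations. The sequence symbols are illustrative stand-ins
# for patch-neighbourhood orderings.
def osa_distance(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                          # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]
```

Allowing transpositions of adjacent symbols makes the metric tolerant to small local reorderings in the sequences, which is useful when neighbourhood orderings are perturbed by noise or partial overlap.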