Multi-View Face Recognition From Single RGBD Models of the Faces
This work takes important steps towards solving the following problem of current interest: Assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
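The weighted voting step in (3) can be sketched minimally. The function name and the per-view confidence weights below are illustrative assumptions; the abstract does not specify how the weights are derived:

```python
from collections import defaultdict

def weighted_vote(view_predictions):
    """Aggregate per-viewpoint (identity, confidence) predictions
    into a single identity decision.

    view_predictions: list of (identity_label, confidence_weight)
    pairs, one per captured 2D view of the probe face.
    """
    scores = defaultdict(float)
    for label, weight in view_predictions:
        scores[label] += weight
    # The identity with the highest accumulated weighted support wins.
    return max(scores, key=scores.get)

# Example: three views support person "A", one supports "B".
result = weighted_vote([("A", 0.9), ("A", 0.7), ("B", 0.8), ("A", 0.5)])
print(result)  # "A"
```

In practice each weight would come from the match score of the corresponding view-partitioned subspace classifier, so views matched with high confidence dominate the decision.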
Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution
We propose two strategies to improve the quality of tractography results
computed from diffusion weighted magnetic resonance imaging (DW-MRI) data. Both
methods are based on the same PDE framework, defined in the coupled space of
positions and orientations, associated with a stochastic process describing the
enhancement of elongated structures while preserving crossing structures. In
the first method we use the enhancement PDE for contextual regularization of a
fiber orientation distribution (FOD) that is obtained on individual voxels from
high angular resolution diffusion imaging (HARDI) data via constrained
spherical deconvolution (CSD), thereby improving the FOD as input for
subsequent tractography. In the second method, we introduce the
fiber-to-bundle coherence (FBC), a measure for quantifying fiber alignment.
The FBC is computed from a tractography result using the same PDE framework
and provides a criterion for removing spurious fibers. We validate the proposed
combination of CSD and enhancement on phantom data and on human data, acquired
with different scanning protocols. On the phantom data we find that the PDE
enhancements improve both local and global metrics of tractography
results, compared to CSD without enhancements. On the human data we show that
the enhancements allow for a better reconstruction of crossing fiber bundles
and they reduce the variability of the tractography output with respect to the
acquisition parameters. Finally, we show that both the enhancement of the FODs
and the use of the FBC measure on the tractography improve the stability with
respect to different stochastic realizations of probabilistic tractography.
This is shown in a clinical application: the reconstruction of the optic
radiation for epilepsy surgery planning.
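The FBC idea of scoring each fiber by how much support it receives from neighboring fibers, then discarding low-scoring ones, can be illustrated with a deliberately simplified proxy. The real FBC uses the PDE kernel in the coupled position-orientation space; the sketch below substitutes a plain Euclidean Gaussian kernel, and all names and the threshold scheme are assumptions:

```python
import numpy as np

def coherence_scores(fibers, sigma=2.0):
    """Crude coherence proxy: for each fiber, the average Gaussian-kernel
    support its points receive from the points of all *other* fibers.

    fibers: list of (N_i, 3) arrays of streamline point coordinates.
    NOTE: a stand-in for the paper's FBC, which instead uses the
    enhancement kernel on positions *and* orientations.
    """
    all_pts = [np.asarray(f, dtype=float) for f in fibers]
    scores = []
    for i, f in enumerate(all_pts):
        others = np.vstack([g for j, g in enumerate(all_pts) if j != i])
        # Pairwise squared distances from this fiber's points to the rest.
        d2 = ((f[:, None, :] - others[None, :, :]) ** 2).sum(axis=-1)
        scores.append(np.exp(-d2 / (2.0 * sigma ** 2)).mean())
    return np.array(scores)

def remove_spurious(fibers, threshold, sigma=2.0):
    """Keep only fibers whose coherence score reaches the threshold."""
    scores = coherence_scores(fibers, sigma=sigma)
    return [f for f, s in zip(fibers, scores) if s >= threshold]
```

An isolated streamline far from a coherent bundle receives near-zero kernel support and is filtered out, which is the qualitative behavior the FBC criterion exploits.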
Person re-identification via efficient inference in fully connected CRF
In this paper, we address the person re-identification problem, i.e.,
retrieving instances from a gallery that were generated by the same person
as the given probe image. This is very challenging because the person's
appearance usually undergoes significant variations due to changes in
illumination, camera angle and view, background clutter, and occlusion over the
camera network. We assume that the matched gallery images should not only be
similar to the probe, but also to each other, under a suitable metric. We
express this assumption with a fully connected CRF model in which each node
corresponds to a gallery image and every pair of nodes is connected by an
edge. A label variable is associated with each node to indicate whether
the corresponding image is from the target person. We define a unary
potential for each node using existing feature extraction and matching
techniques, reflecting the similarity between the probe and each gallery
image, and a pairwise potential for each edge as a weighted combination of
Gaussian kernels, encoding the appearance similarity between pairs of
gallery images. The specific
form of the pairwise potential allows us to exploit an efficient inference
algorithm to calculate the marginal distribution of each label variable in
this densely connected CRF. We show the superiority of our method by applying
it to public datasets and comparing with the state of the art.
Comment: 7 pages, 4 figures
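The mean-field style of inference typically used for fully connected CRFs with Gaussian pairwise kernels can be sketched as follows. This is a naive O(n²) illustration with a single Gaussian kernel and a Potts compatibility, not the paper's actual (presumably filtering-accelerated) algorithm, and the function and parameter names are assumptions:

```python
import numpy as np

def mean_field_binary_crf(unary, features, w=1.0, sigma=1.0, iters=10):
    """Naive mean-field inference for a fully connected binary CRF.

    unary:    (n, 2) unary potentials (costs) for labels {0, 1},
              here: "not target person" / "target person".
    features: (n, d) appearance features of the n gallery images.
    The paper's weighted combination of Gaussian kernels is simplified
    to a single appearance kernel with weight w and bandwidth sigma.
    Returns (n, 2) approximate marginal distributions over labels.
    """
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(axis=-1)
    k = w * np.exp(-d2 / (2.0 * sigma ** 2))  # dense Gaussian kernel matrix
    np.fill_diagonal(k, 0.0)                  # no self-connections
    q = np.exp(-unary)
    q /= q.sum(axis=1, keepdims=True)         # initialize from unaries
    for _ in range(iters):
        msg = k @ q                # kernel-weighted label beliefs, (n, 2)
        pairwise = msg[:, ::-1]    # Potts: penalize disagreeing labels
        q = np.exp(-unary - pairwise)
        q /= q.sum(axis=1, keepdims=True)
    return q
```

The key point the abstract relies on is that, because the pairwise term is a combination of Gaussian kernels, the `k @ q` message-passing step can be computed efficiently (e.g. by high-dimensional filtering) instead of the quadratic loop shown here, so inference scales to large galleries.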