Human Perambulation as a Self Calibrating Biometric
This paper introduces a novel method of single-camera gait reconstruction that is independent of the walking direction and of the camera parameters. Recognizing people by gait has unique advantages over other biometric techniques: identification of the walking subject is completely unobtrusive and can be achieved at a distance. Recently, much research has been conducted into the recognition of fronto-parallel gait. The proposed method relies on the very nature of walking to achieve independence from the walking direction. Three major assumptions are made: human gait is cyclic; the distances between the bone joints are invariant during the execution of the movement; and the articulated leg motion is approximately planar, since almost all of the perceived motion is contained within a single limb-swing plane. The method has been tested on several subjects walking freely along six different directions in a small enclosed area. The results show that recognition can be achieved without calibration and without dependence on view direction. These results are particularly encouraging for future system development and for application in real surveillance scenarios.
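The first of the three assumptions above, that human gait is cyclic, is the basis for any period-based gait analysis. As an illustrative sketch only (not the paper's algorithm), the gait period can be recovered from a joint-angle trace via autocorrelation; the signal, frame rate, and synthetic knee-angle trace below are all assumed for the example.

```python
import numpy as np

def estimate_gait_period(signal, fps):
    """Return the dominant period (in seconds) of a cyclic 1-D signal.

    Uses the first local maximum of the one-sided autocorrelation
    after lag 0, which for a periodic signal sits at one full cycle.
    """
    s = signal - signal.mean()
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]  # lags 0..n-1
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return lag / fps
    return None

# Synthetic knee-angle trace: a 1.2 s stride sampled at 30 fps.
fps, period = 30, 1.2
t = np.arange(0, 6, 1 / fps)
knee = np.sin(2 * np.pi * t / period)
print(estimate_gait_period(knee, fps))  # ≈ 1.2
```

In a real system the joint-angle trace would come from tracked limb positions and be noisy, so the autocorrelation peak would typically be searched within a plausible stride-duration window rather than taken at the first local maximum.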
Matterport3D: Learning from RGB-D Data in Indoor Environments
Access to large, diverse RGB-D datasets is critical for training RGB-D scene
understanding algorithms. However, existing datasets still cover only a limited
number of views or a restricted scale of spaces. In this paper, we introduce
Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views
from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided
with surface reconstructions, camera poses, and 2D and 3D semantic
segmentations. The precise global alignment and comprehensive, diverse
panoramic set of views over entire buildings enable a variety of supervised and
self-supervised computer vision tasks, including keypoint matching, view
overlap prediction, normal prediction from color, semantic segmentation, and
region classification.
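The scene-understanding tasks listed above all start from the same primitive: back-projecting an RGB-D image into 3-D using the camera pose and intrinsics. A minimal sketch, assuming a standard pinhole model (this is not Matterport3D's actual loader, and the toy depth image and intrinsics are invented for the example):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) metric depth image into (H*W, 3) camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 2x2 depth image with every pixel 1 m away.
pts = depth_to_points(np.ones((2, 2)), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape)  # (4, 3)
```

Given per-image camera poses such as those annotated in the dataset, the resulting camera-frame points can be transformed into a common world frame, which is what enables globally aligned reconstructions across entire buildings.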
Part-to-whole Registration of Histology and MRI using Shape Elements
Image registration between histology and magnetic resonance imaging (MRI) is
a challenging task due to differences in structural content and contrast.
Specimens that are too thick or too wide cannot be processed all at once and
must be cut into smaller pieces. This dramatically increases the complexity of
the problem, since each piece must be individually and manually pre-aligned. To
the best of our knowledge, no automatic method can reliably locate such a piece
of tissue within its respective whole in the MRI slice and align it without any
prior information.
information. We propose here a novel automatic approach to the joint problem of
multimodal registration between histology and MRI, when only a fraction of
tissue is available from histology. The approach relies on the representation
of images using their level lines so as to reach contrast invariance. Shape
elements obtained via the extraction of bitangents are encoded in a
projective-invariant manner, which permits the identification of common pieces
of curves between two images. We evaluated the approach on human brain
histology and compared resulting alignments against manually annotated ground
truths. Considering the complexity of the brain folding patterns, preliminary
results are promising and suggest the use of characteristic and meaningful
shape elements for improved robustness and efficiency.
Comment: Paper accepted at ICCV Workshop (Bio-Image Computing
- …
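The projective-invariant encoding mentioned in the abstract can be illustrated with the simplest such invariant: the cross-ratio of four collinear points, which is preserved by any projective map. The sketch below (a generic textbook example, not the paper's bitangent-based encoding; the sample points and homography coefficients are invented) verifies the invariance under a 1-D homography:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by 1-D coordinates."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def homography_1d(x, h11, h12, h21, h22):
    """Apply a 1-D projective map x -> (h11*x + h12) / (h21*x + h22)."""
    return (h11 * x + h12) / (h21 * x + h22)

pts = [0.0, 1.0, 2.0, 4.0]
mapped = [homography_1d(x, 2.0, 1.0, 0.5, 3.0) for x in pts]
print(cross_ratio(*pts), cross_ratio(*mapped))  # equal up to rounding: 1.5
```

Because such invariants are unchanged by the perspective distortion between modalities, matching them allows common pieces of curve to be identified between the histology fragment and the MRI slice without prior alignment.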