
    Fast 3-D fingertip reconstruction using a single two-view structured light acquisition

    Current contactless fingertip recognition systems based on three-dimensional finger models mostly use multiple views (N > 2) or structured light illumination with multiple patterns projected over a period of time. In this paper, we present a novel methodology able to obtain a fast and accurate three-dimensional reconstruction of the fingertip by using a single two-view acquisition and a static projected pattern. The acquisition setup is less constrained than the ones proposed in the literature and requires only that the finger is placed within the depth of focus of the cameras and in the overlapping fields of view. The obtained pairs of images are processed in order to extract the information related to the fingertip and the projected pattern. The projected pattern permits the extraction of a set of reference points in the two images, which are then matched by using a correlation approach. The information related to a previous calibration of the cameras is then used to estimate the finger model, and one input image is wrapped onto the resulting three-dimensional model, obtaining a three-dimensional pattern with limited distortion of the ridges. In order to obtain data that can be treated by traditional algorithms, the obtained three-dimensional models are then unwrapped into bidimensional images. The quality of the unwrapped images is evaluated by using software designed for contact-based fingerprint images. The obtained results show that the methodology is feasible and that a realistic three-dimensional reconstruction can be achieved with few constraints. These results also show that the fingertip models computed by using our approach can be processed by both specific three-dimensional matching algorithms and traditional matching approaches. We also compared the results with the ones obtained without using structured light techniques, showing that the use of a projector achieves a faster and more accurate fingertip reconstruction.
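    The correlation-based matching of reference points between the two views can be sketched as follows. This is a minimal illustration, assuming rectified stereo images so the search runs along the same row; the function names, patch size and search range are illustrative, not the authors' implementation.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_point(left, right, pt, patch=5, search=10):
    """Find the best match for a reference point from the left image
    along the same row of the right image (rectified stereo assumption)."""
    r, c = pt
    tpl = left[r - patch:r + patch + 1, c - patch:c + patch + 1]
    best_c, best_score = c, -1.0
    for cc in range(max(patch, c - search),
                    min(right.shape[1] - patch, c + search + 1)):
        cand = right[r - patch:r + patch + 1, cc - patch:cc + patch + 1]
        score = ncc(tpl, cand)
        if score > best_score:
            best_score, best_c = score, cc
    return (r, best_c), best_score
```

    Once such correspondences are found, each matched pair can be triangulated with the calibration data to obtain a 3-D point on the finger surface.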

    Capturing Hands in Action using Discriminative Salient Points and Physics Simulation

    Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error, and with collision detection and physics simulation to achieve physically plausible estimates even in the case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom. Comment: Accepted for publication by the International Journal of Computer Vision (IJCV) on 16.02.2016 (submitted on 17.10.14). A combination into a single framework of an ECCV'12 multicamera-RGB and a monocular-RGBD GCPR'14 hand tracking paper, with several extensions, additional experiments and details.
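    The idea of a single, almost-everywhere-differentiable objective combining a data term with a physical-plausibility penalty can be sketched on a toy problem. Everything here is illustrative: two 2-D points are pulled toward observed "salient points" while a soft collision penalty keeps two spheres of radius 0.6 from interpenetrating; this is not the authors' actual energy.

```python
import numpy as np
from scipy.optimize import minimize

targets = np.array([[0.0, 0.0], [1.0, 0.0]])  # observed 2-D salient points
radius = 0.6                                   # sphere radius (overlapping targets)

def objective(x):
    pts = x.reshape(2, 2)
    # Data term: pull model points toward the observed salient points.
    data = ((pts - targets) ** 2).sum()
    # Collision term: penalize only when the two spheres overlap.
    gap = 2 * radius - np.linalg.norm(pts[0] - pts[1])
    collision = max(gap, 0.0) ** 2
    return data + 10.0 * collision

# Start at the targets (where the spheres overlap) and let a standard
# optimizer trade off the two terms.
x0 = np.array([0.0, 0.0, 1.0, 0.0])
res = minimize(objective, x0, method="L-BFGS-B")
pts = res.x.reshape(2, 2)
```

    Because the penalty is active only in the overlap region, the objective stays almost everywhere differentiable, which is what lets a standard quasi-Newton method handle it.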

    Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects

    Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even the most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object's shape and skeleton. In the case of unknown object shape, there are existing 3D reconstruction methods that capitalize on distinctive geometric or texture features. These methods, though, fail for textureless and highly symmetric objects such as household articles, mechanical parts or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects, and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow.
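    The spectral-clustering step used for motion segmentation can be illustrated in its simplest two-way form: given a pairwise similarity matrix between tracked trajectories, split them by the sign of the Fiedler vector (the eigenvector of the graph Laplacian with the second-smallest eigenvalue). This is a generic sketch of spectral bipartitioning, not the paper's full pipeline, which uses k-way clustering.

```python
import numpy as np

def fiedler_bipartition(similarity):
    """Two-way spectral segmentation: build the unnormalized graph
    Laplacian L = D - W from a symmetric similarity matrix W and split
    items by the sign of the second-smallest eigenvector."""
    d = similarity.sum(axis=1)
    laplacian = np.diag(d) - similarity
    _, vecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler > 0
```

    For motion segmentation, the similarity would encode how consistently two mesh vertices move together over the sequence, so each side of the split corresponds to a rigidly co-moving part.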

    Accurate 3D fingerprint virtual environment for biometric technology evaluations and experiment design

    Three-dimensional models of fingerprints obtained from contactless acquisitions have the advantages of reducing the distortion present in traditional contact-based samples and the effects of dirt on the finger and the sensor surface. Moreover, they permit the use of a larger finger area for biometric recognition.

    Data-guided statistical sparse measurements modeling for compressive sensing

    Digital image acquisition can be a time-consuming process in situations where high spatial resolution is required. As such, optimizing the acquisition mechanism is of high importance for many measurement applications. Acquiring such data through a dynamically chosen small subset of measurement locations can address this problem. In such a case, the measured information can be regarded as incomplete, which necessitates the application of special reconstruction tools to recover the original data set. The reconstruction can be performed based on the concept of sparse signal representation. Recovering signals and images from their sub-Nyquist measurements forms the core idea of compressive sensing (CS). In this work, a CS-based data-guided statistical sparse measurements method is presented, implemented and evaluated. This method significantly improves image reconstruction from sparse measurements. In the data-guided statistical sparse measurements approach, the signal sampling distribution is optimized to improve image reconstruction performance. The sampling distribution is based on the underlying data rather than the commonly used uniform random distribution. The optimal sampling probability pattern is obtained through a learning process using two methods: direct and indirect. The direct method learns a nonparametric probability density function directly from the dataset. The indirect learning method is implemented for cases where a mapping between extracted features and the probability density function is required. The unified model is implemented for different representation domains, including the frequency domain and the spatial domain. Experiments were performed for multiple applications such as optical coherence tomography, bridge structure vibration, robotic vision, 3D laser range measurements and fluorescence microscopy.
    Results show that the data-guided statistical sparse measurements method significantly outperforms conventional CS reconstruction. The data-guided statistical sparse measurements method achieves a much higher reconstruction signal-to-noise ratio than conventional CS for the same compression rate. Alternatively, the data-guided statistical sparse measurements method achieves a reconstruction signal-to-noise ratio similar to that of conventional CS with significantly fewer samples.
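    The core contrast between data-guided and uniform random sampling can be sketched in a few lines: measurement locations are drawn from a probability distribution derived from the data instead of uniformly. The smoothed-magnitude weighting below is a stand-in for the paper's learned density, and captured signal energy is used as a crude proxy for reconstruction quality; none of this is the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_pattern(weights, n_samples, rng):
    """Draw distinct measurement locations with probability proportional
    to the given per-location weights (data-guided sampling)."""
    p = weights / weights.sum()
    return rng.choice(weights.size, size=n_samples, replace=False, p=p)

# Toy signal: all energy concentrated in the first quarter of the domain.
signal = np.zeros(256)
signal[:64] = rng.normal(0, 1, 64)

# Data-guided weights: a smoothed magnitude estimate plus a small floor
# so every location keeps nonzero probability.
weights = np.convolve(np.abs(signal), np.ones(9) / 9, mode="same") + 1e-3

idx_guided = sampling_pattern(weights, 32, rng)
idx_uniform = rng.choice(signal.size, size=32, replace=False)

energy_guided = (signal[idx_guided] ** 2).sum()
energy_uniform = (signal[idx_uniform] ** 2).sum()
```

    With the measurement budget spent where the data actually lives, the guided pattern captures far more of the signal energy, which is the intuition behind the improved reconstruction signal-to-noise ratio.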

    Structured manifolds for motion production and segmentation : a structured Kernel Regression approach

    Steffen JF. Structured manifolds for motion production and segmentation : a structured Kernel Regression approach. Bielefeld (Germany): Bielefeld University; 2010

    Quality measurement of unwrapped three-dimensional fingerprints : a neural networks approach

    Traditional biometric systems based on fingerprint characteristics acquire the biometric samples using touch-based sensors. Recent research has focused on the design of touchless fingerprint recognition systems based on CCD cameras. Most of these systems compute three-dimensional fingertip models and then apply unwrapping techniques in order to obtain images compatible with biometric methods designed for images captured by touch-based sensors. Unwrapped images can present problems not found in traditional fingerprint images, the most important being deformations of the ridge pattern caused by spikes or badly reconstructed regions in the corresponding three-dimensional models. In this paper, we present a neural-based approach for the quality estimation of images obtained from the unwrapping of three-dimensional fingertip models. The paper also presents different sets of features that can be used to evaluate the quality of fingerprint images. Experimental results show that the proposed quality estimation method has adequate accuracy for quality classification. The performance of the proposed method is also evaluated in a complete biometric system and compared with that of a well-known algorithm from the literature, with satisfactory results.
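    One classic family of features for fingerprint quality estimation measures how strongly oriented the local ridge pattern is via gradient coherence: near 1 for a clean ridge flow, near 0 for isotropic noise such as a badly reconstructed region. This is a generic example of such a feature, assumed here for illustration; the paper's actual feature sets may differ.

```python
import numpy as np

def orientation_coherence(img):
    """Gradient-coherence of an image block: close to 1 for a strongly
    oriented ridge pattern, near 0 for isotropic noise. Computed from
    the second moments of the image gradients."""
    gy, gx = np.gradient(img.astype(float))
    gxx = (gx * gx).sum()
    gyy = (gy * gy).sum()
    gxy = (gx * gy).sum()
    denom = gxx + gyy
    if denom == 0:
        return 0.0  # flat block: no orientation information
    return float(np.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2) / denom)
```

    Features like this, computed blockwise over the unwrapped image, could then be fed to a classifier that labels each region as good or degraded.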

    Toward unconstrained fingerprint recognition : a fully touchless 3-D system based on two views on the move

    Touchless fingerprint recognition systems do not require contact of the finger with any acquisition surface and thus provide an increased level of hygiene, usability, and user acceptability of fingerprint-based biometric technologies. The most accurate touchless approaches compute 3-D models of the fingertip. However, a relevant drawback of these systems is that they usually require constrained and highly cooperative acquisition methods. We present a novel, fully touchless fingerprint recognition system based on the computation of 3-D models. It adopts an innovative and less-constrained acquisition setup compared with other previously reported 3-D systems, does not require contact with any surface or a finger placement guide, and simultaneously captures multiple images while the finger is moving. To compensate for possible differences in finger placement, we propose novel algorithms for computing 3-D models of the shape of a finger. Moreover, we present a new matching strategy based on the computation of multiple touch-compatible images. We evaluated different aspects of the biometric system: acceptability, usability, recognition performance, robustness to environmental conditions and finger misplacements, and compatibility and interoperability with touch-based technologies. The proposed system proved to be more acceptable and usable than touch-based techniques. Moreover, the system displayed satisfactory accuracy, achieving an equal error rate of 0.06% on a dataset of 2368 samples acquired in a single session and 0.22% on a dataset of 2368 samples acquired over the course of one year. The system was also robust to environmental conditions and to a wide range of finger rotations. The compatibility and interoperability with touch-based technologies was greater than or comparable to that reported in public tests using commercial touchless devices.
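    The equal error rate (EER) reported above is the standard operating point where the false acceptance rate equals the false rejection rate. A minimal sketch of how it is estimated from genuine and impostor match scores (the threshold sweep below is a common textbook formulation, not the system's evaluation code):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER by sweeping thresholds over all observed scores
    and returning the average of FAR and FRR where they are closest.
    FAR = fraction of impostor scores accepted (>= threshold);
    FRR = fraction of genuine scores rejected (< threshold)."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best_gap, eer = np.inf, 0.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = float(np.mean(impostor >= t))
        frr = float(np.mean(genuine < t))
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

    Perfectly separated score distributions yield an EER of 0; overlapping distributions yield a value between 0 and 0.5.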

    3D data fusion from multiple sensors and its applications

    The introduction of depth cameras in the mass market helped make computer vision applicable to many real-world applications, such as human interaction in virtual environments, autonomous driving, robotics and 3D reconstruction. All these problems were originally tackled by means of standard cameras, but the intrinsic ambiguity of bidimensional images led to the development of depth camera technologies. Stereo vision was first introduced to provide an estimate of the 3D geometry of the scene. Structured light depth cameras were developed to use the same concepts as stereo vision while overcoming some of the problems of passive technologies. Finally, Time-of-Flight (ToF) depth cameras solve the same depth estimation problem by using a different technology. This thesis focuses on the acquisition of depth data from multiple sensors and presents techniques to efficiently combine the information of different acquisition systems. The three main technologies developed to provide depth estimation are first reviewed, presenting the operating principles and practical issues of each family of sensors. The use of multiple sensors is then investigated, providing practical solutions to the problems of 3D reconstruction and gesture recognition. Data from stereo vision systems and ToF depth cameras are combined to provide a higher quality depth map. A confidence measure of the depth data from the two systems is used to guide the depth data fusion. The lack of datasets with data from multiple sensors is addressed by proposing a system for the collection of data and ground truth depth, and a tool to generate synthetic data from standard cameras and ToF depth cameras. For gesture recognition, a depth camera is paired with a Leap Motion device to boost the performance of the recognition task. A set of features from the two devices is used in a classification framework based on Support Vector Machines and Random Forests.
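    The confidence-guided fusion of stereo and ToF depth maps can be sketched as a per-pixel convex combination weighted by each sensor's confidence. This is a minimal illustration of the general idea, assuming per-pixel confidence maps are already available; the thesis's actual fusion is more elaborate.

```python
import numpy as np

def fuse_depth(stereo, tof, conf_stereo, conf_tof):
    """Confidence-weighted fusion of two depth maps: each pixel takes a
    convex combination of the two estimates, weighted by the per-pixel
    confidences. Pixels where both confidences are zero stay invalid (0)."""
    total = conf_stereo + conf_tof
    fused = np.zeros_like(stereo, dtype=float)
    valid = total > 0
    fused[valid] = (conf_stereo[valid] * stereo[valid] +
                    conf_tof[valid] * tof[valid]) / total[valid]
    return fused
```

    When one sensor's confidence drops to zero (e.g., ToF saturation or a stereo matching failure), the fused value falls back entirely on the other sensor.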