Multispectral Palmprint Encoding and Recognition
Palmprints are emerging as a new entity in multi-modal biometrics for human
identification and verification. Multispectral palmprint images captured in the
visible and infrared spectra contain not only the wrinkles and ridge structure
of a palm but also the underlying pattern of veins, making them a highly
discriminating biometric identifier. In this paper, we propose a feature
encoding scheme for robust and highly accurate representation and matching of
multispectral palmprints. To facilitate compact storage of the feature, we
design a binary hash table structure that allows for efficient matching in
large databases. Comprehensive experiments for both identification and
verification scenarios are performed on two public datasets -- one captured
with a contact-based sensor (PolyU dataset), and the other with a contact-free
sensor (CASIA dataset). Recognition results in various experimental setups show
that the proposed method consistently outperforms existing state-of-the-art
methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA)
are the lowest reported in the literature on both datasets and clearly indicate
the viability of the palmprint as a reliable and promising biometric. All source
code is publicly available.
Comment: A preliminary version of this manuscript was published in ICCV 2011: Z.
Khan, A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral
Palmprint Encoding for Human Recognition", International Conference on
Computer Vision, 2011. MATLAB code available:
https://sites.google.com/site/zohaibnet/Home/code
Latent-to-full palmprint comparison based on radial triangulation under forensic conditions
R. Wang, D. Ramos, J. Fiérrez, "Latent-to-full palmprint comparison based on radial triangulation under forensic conditions", International Joint Conference on Biometrics (IJCB), Washington, D.C. (USA), 2011, pp. 1-6.
In forensic applications the evidential value of palmprints is clear: surveys of law enforcement agencies indicate that 30 percent of the latents recovered from crime scenes come from palms. Consequently, developing automatic forensic palmprint identification technology is an urgent and challenging task, one that deals with latent (i.e., partial) and full palmprints captured or recovered at 500 ppi or above (the current standard in forensic applications) for minutiae-based offline recognition. Moreover, a rigorous quantification of the evidential value of biometrics, such as fingerprints and palmprints, is essential in modern forensic science. Recently, radial triangulation has been proposed as a step towards this objective in fingerprints, using minutiae manually extracted by experts. In this work we help to automate this comparison strategy and generalize it to palmprints. First, palmprint segmentation and enhancement are applied so that features of full prints can be extracted automatically by a commercial biometric SDK, while features of latent prints are manually extracted by forensic experts. Then a latent-to-full palmprint comparison algorithm based on radial triangulation is proposed, in which radial triangulation is used for minutiae modeling.
Finally, 22 latent palmprints from real forensic cases and 8680 full palmprints from the criminal investigation field are used for performance evaluation. Experimental results prove the usability and efficiency of the proposed system: a rank-1 identification rate of 62% is achieved despite the inherent difficulty of latent-to-full palmprint comparison. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement number 23880
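The radial triangulation structure used for minutiae modeling can be sketched in a simplified form: each minutia is connected to the centroid of the set and to its angular neighbor, producing a fan of triangles whose geometry can then be compared. This is an illustrative simplification under assumed conventions, not the authors' exact algorithm:

```python
import math

def radial_triangulation(minutiae):
    """Build a fan of triangles around the minutiae centroid: sort minutiae by
    angle about the centroid, then pair each with its angular neighbor.
    Simplified illustration; minutiae are (x, y) tuples here."""
    cx = sum(x for x, y in minutiae) / len(minutiae)
    cy = sum(y for x, y in minutiae) / len(minutiae)
    centroid = (cx, cy)
    # Order minutiae by polar angle around the centroid.
    ordered = sorted(minutiae, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    # Each consecutive pair of minutiae plus the centroid forms one triangle.
    return [(centroid, ordered[i], ordered[(i + 1) % len(ordered)])
            for i in range(len(ordered))]

tris = radial_triangulation([(0, 2), (2, 0), (0, -2), (-2, 0)])
print(len(tris))  # 4 triangles for 4 minutiae
```

Because the triangles are anchored to the centroid, the resulting model is tolerant to the global translation and rotation typical of latent prints.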
UBSegNet: Unified Biometric Region of Interest Segmentation Network
Digital human identity management can now be seen as a social necessity, as
it is essentially required in almost every public sector, such as financial
inclusion, security, banking, and social networking. Hence, in today's
rapidly evolving world with so many adversarial entities, relying on a single
biometric trait is overly optimistic. In this paper, we propose a
novel end-to-end Unified Biometric ROI Segmentation Network (UBSegNet) for
extracting the region of interest from five different biometric traits, viz.
face, iris, palm, knuckle and 4-slap fingerprint. The architecture of the
proposed UBSegNet consists of two stages: (i) trait classification and (ii)
trait localization. For these stages, we use a state-of-the-art region-based
convolutional neural network (RCNN), comprising three major parts:
convolutional layers, a region proposal network (RPN), and classification
and regression heads. The model has been evaluated over several large publicly
available biometric databases. To the best of our knowledge, this is the first
unified architecture proposed for segmenting multiple biometric traits. It has
been tested on around 5000 * 5 = 25,000 images (5000 images per trait) and
produces very good results. Our work on unified biometric segmentation opens
up vast opportunities in the field of authentication systems based on multiple
biometric traits.
Comment: 4th Asian Conference on Pattern Recognition (ACPR 2017)
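One core step inside any such region-proposal pipeline is non-maximum suppression, which keeps only the best-scoring, non-overlapping candidate boxes before they reach the classification and regression heads. A minimal greedy sketch (pure Python, illustrative; not the UBSegNet code, and the boxes and scores below are made up):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: visit boxes in descending score order,
    keep a box only if it overlaps no already-kept box above the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # [0, 2]: the near-duplicate box 1 is suppressed
```

In a two-stage detector like the one described, the RPN emits many overlapping proposals per trait; NMS collapses them so that one clean ROI per trait survives to the localization head.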
Dual-tree Complex Wavelet Transform based Local Binary Pattern Weighted Histogram Method for Palmprint Recognition
In this paper, we improve the Local Binary Pattern Histogram (LBPH) approach and combine it with the Dual-Tree Complex Wavelet Transform (DT-CWT) to propose a DT-CWT based Local Binary Pattern Weighted Histogram (LBPWH) method for palmprint representation and recognition. The approximate shift invariance of the DT-CWT and its good directional selectivity in 2D make it a very appealing choice for palmprint representation. LBPH is a powerful texture description method, which considers both shape and texture information to represent an image. To enhance the representation capability of LBPH, a weight set is computed and assigned to the final feature histogram. Unlike methods based on subspace discriminant analysis or statistical learning, our approach does not need to construct a palmprint model from a training sample set. In the approach, a palmprint image is first decomposed into multiple subbands using the DT-CWT. After that, each subband in the complex wavelet domain is divided into non-overlapping sub-regions. Then LBPHs are extracted from each sub-region in each subband, and finally all LBPHs are weighted and concatenated into a single feature histogram to effectively represent the palmprint image. The chi-square distance is used to measure the similarity of different feature histograms, and the final recognition is performed by a nearest-neighbor classifier. A group of optimal parameters is chosen through 20 verification tests on our palmprint database. In addition, recognition results on our palmprint database and the database from the Hong Kong Polytechnic University show that the proposed method outperforms other methods.
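The two building blocks of the pipeline, an LBP histogram per sub-region and the chi-square distance between histograms, can be sketched as follows. This is a basic 8-neighbor LBP on a raw grayscale patch, without the DT-CWT decomposition or the weighting scheme the paper adds:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor local binary pattern for interior pixels: each neighbor
    >= center contributes one bit to an 8-bit code."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        out |= (neighbor >= center).astype(np.uint8) << bit
    return out

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes over one sub-region."""
    h = np.bincount(lbp_image(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (32, 32), dtype=np.uint8)
print(chi_square(lbp_histogram(patch), lbp_histogram(patch)))  # 0.0 for identical regions
```

In the full method, one such histogram would be computed per sub-region of each DT-CWT subband, weighted, and concatenated before the chi-square comparison.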
3D Vascular Pattern Extraction from Grayscale Volumetric Ultrasound Images for Biometric Recognition Purposes
Recognition systems based on palm veins are gaining increasing attention, as they are highly distinctive and very hard to counterfeit. The most popular systems are based on infrared radiation; they have the merit of being contactless but can provide only 2D patterns. Conversely, 3D patterns can be obtained with Doppler or photoacoustic methods, but these approaches require excessively long acquisition times. In this work, a method for extracting 3D vascular patterns from conventional grayscale volumetric images of the human hand, which can be collected in a short time, is proposed for the first time. It is based on the detection of low-brightness areas in B-mode images. Centroids of these areas in successive B-mode images are then linked through a minimum-distance criterion. Preliminary verification and identification results, carried out on a database previously established for extracting 3D palmprint features, demonstrate good recognition performance: EER = 2%, ROC AUC = 99.92%, and an identification rate of 100%. As a further merit, 3D vein pattern features can be fused with 3D palmprint features to implement a multimodal recognition system at no extra cost.
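The centroid-linking step, connecting low-brightness-area centroids across successive B-mode slices by a minimum-distance criterion to grow 3D vessel tracks, can be sketched as follows. The greedy rule and the `max_jump` threshold are illustrative assumptions, not the paper's exact procedure:

```python
import math

def link_centroids(slices, max_jump=5.0):
    """Link (x, y) centroids across successive B-mode slices: each centroid in
    slice z attaches to the nearest track that ends in slice z-1, provided the
    jump is within max_jump; otherwise it starts a new track."""
    tracks = [[c] for c in slices[0]]
    for z, centroids in enumerate(slices[1:], start=1):
        for c in centroids:
            # Nearest track that currently ends in the previous slice.
            best = min(tracks, key=lambda t: math.dist(t[-1], c)
                       if len(t) == z else float("inf"))
            if len(best) == z and math.dist(best[-1], c) <= max_jump:
                best.append(c)
            else:
                tracks.append([c])
    return tracks

# Three slices, one slowly drifting centroid: a single continuous vessel track.
slices = [[(10, 10)], [(11, 10)], [(12, 11)]]
print(len(link_centroids(slices)))  # 1
```

Stacking the linked centroids with their slice indices yields the 3D vein polylines that the recognition stage then compares.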