Multispectral Palmprint Encoding and Recognition
Palmprints are emerging as a new entity in multi-modal biometrics for human
identification and verification. Multispectral palmprint images captured in the
visible and infrared spectrum not only contain the wrinkles and ridge structure
of a palm, but also the underlying pattern of veins, making them a highly
discriminating biometric identifier. In this paper, we propose a feature
encoding scheme for robust and highly accurate representation and matching of
multispectral palmprints. To facilitate compact storage of the feature, we
design a binary hash table structure that allows for efficient matching in
large databases. Comprehensive experiments for both identification and
verification scenarios are performed on two public datasets -- one captured
with a contact-based sensor (PolyU dataset), and the other with a contact-free
sensor (CASIA dataset). Recognition results in various experimental setups show
that the proposed method consistently outperforms existing state-of-the-art
methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA)
are the lowest reported in literature on both dataset and clearly indicate the
viability of palmprint as a reliable and promising biometric. All source codes
are publicly available.Comment: Preliminary version of this manuscript was published in ICCV 2011. Z.
Khan A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral
Palmprint Encoding for Human Recognition", International Conference on
Computer Vision, 2011. MATLAB Code available:
https://sites.google.com/site/zohaibnet/Home/code
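The abstract's compact binary feature storage with hash-table matching can be sketched as follows. This is a generic illustration of prefix-bucketed binary codes compared by Hamming distance, under assumed names (`BinaryHashTable`, `prefix_bits`), not the paper's actual Contour Code encoding:

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two binary code vectors."""
    return int(np.count_nonzero(a != b))

class BinaryHashTable:
    """Bucket binary codes by a short prefix so a query is compared
    only against codes sharing that prefix (illustrative only)."""

    def __init__(self, prefix_bits: int = 4):
        self.prefix_bits = prefix_bits
        self.buckets: dict = {}

    def _key(self, code: np.ndarray) -> tuple:
        # The first few bits serve as the bucket key.
        return tuple(code[: self.prefix_bits])

    def insert(self, label: str, code: np.ndarray) -> None:
        self.buckets.setdefault(self._key(code), []).append((label, code))

    def query(self, code: np.ndarray):
        """Return the closest (label, code) pair in the query's bucket."""
        candidates = self.buckets.get(self._key(code), [])
        if not candidates:
            return None
        return min(candidates, key=lambda item: hamming_distance(item[1], code))
```

With short prefixes, a query touches only one bucket instead of the whole database, which is the source of the efficiency gain in large-scale identification.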
Deep learning approach for Touchless Palmprint Recognition based on Alexnet and Fuzzy Support Vector Machine
Due to their stable and discriminative features, palmprint-based biometrics have been gaining popularity in recent years. Most traditional palmprint recognition systems are designed around a group of hand-crafted features and therefore ignore additional discriminative information. To tackle this problem, we propose a Convolutional Neural Network (CNN) model inspired by AlexNet that learns features from the ROI images and classifies them using a fuzzy support vector machine (SVM). The output of the CNN is fed as input to the fuzzy SVM. The CNN's receptive field aids in extracting the most discriminative features from the palmprint images, and the fuzzy SVM yields a robust classification. Experiments are conducted on popular contactless datasets, namely the IITD, PolyU-II, Tongji, and CASIA databases. The results demonstrate that our approach outperforms several state-of-the-art techniques for palmprint recognition. Using this approach, we obtain 99.98% testing accuracy on the Tongji dataset and 99.76% on the PolyU-II dataset.
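A common way to realize the fuzzy weighting in a fuzzy SVM is to give each training sample a membership based on its distance to its class centroid, so that likely outliers contribute less to the decision boundary. The sketch below illustrates only that membership step (the CNN feature extractor and the SVM itself are omitted) and is an assumed formulation, not necessarily the paper's exact one:

```python
import numpy as np

def fuzzy_memberships(X: np.ndarray, y: np.ndarray, delta: float = 1e-6) -> np.ndarray:
    """Distance-to-centroid memberships for a fuzzy SVM: samples far
    from their class centre (likely noise or outliers) get smaller
    weights, samples near the centre get weights close to 1."""
    m = np.empty(len(X))
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        centroid = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centroid, axis=1)
        r = d.max() + delta          # class radius (delta keeps m > 0)
        m[idx] = 1.0 - d / r         # membership in (0, 1]
    return m
```

These memberships would then be passed as per-sample weights when training the SVM, so a mislabelled or noisy palmprint feature vector cannot dominate the margin.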
Enhanced CNN for image denoising
Owing to flexible architectures of deep convolutional neural networks (CNNs),
CNNs are successfully used for image denoising. However, they suffer from the
following drawbacks: (i) a deep network architecture is very difficult to
train; (ii) deeper networks face the challenge of performance saturation. In this
study, the authors propose a novel method called enhanced convolutional neural
denoising network (ECNDNet). Specifically, they use residual learning and batch
normalisation techniques to address the problem of training difficulties and
accelerate the convergence of the network. In addition, dilated convolutions
are used in the proposed network to enlarge the context information and reduce
the computational cost. Extensive experiments demonstrate that the ECNDNet
outperforms the state-of-the-art methods for image denoising.
Comment: CAAI Transactions on Intelligence Technology[J], 201
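The context-enlarging effect of dilated convolutions can be made concrete with a receptive-field calculation: each stride-1 layer adds dilation × (kernel − 1) to the receptive field. The dilation pattern below is an assumed example for illustration, not necessarily ECNDNet's actual configuration:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions.
    Each layer grows the field by dilation * (kernel_size - 1)."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += d * (k - 1)
    return rf

# Seven 3x3 layers: plain convolutions vs. a dilated pyramid.
plain = receptive_field([3] * 7, [1] * 7)              # 15
dilated = receptive_field([3] * 7, [1, 2, 3, 4, 3, 2, 1])  # 33
```

The dilated stack more than doubles the context seen by each output pixel at the same parameter count, which is exactly the trade-off the abstract describes.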
Invariant Scattering Transform for Medical Imaging
The invariant scattering transform introduces a new area of research that
merges signal processing with deep learning for computer vision. Nowadays, deep
learning algorithms are able to solve a variety of problems in the medical
sector. Medical images are used to detect diseases such as brain cancer or
tumours, Alzheimer's disease, breast cancer, Parkinson's disease and many
others. During the pandemic back in 2020, machine learning and deep learning
played a critical role in detecting COVID-19, including mutation analysis,
prediction, diagnosis and decision making. Medical images such as X-rays, MRI
(magnetic resonance imaging) and CT scans are used for detecting diseases. The
scattering transform is another deep learning method for medical imaging. It
builds useful signal representations for image classification. It is a wavelet
technique that is impactful for medical image classification problems. This
research article discusses the scattering transform as an efficient system for
medical image analysis, in which the signal information is scattered and
implemented in a deep convolutional network. A step-by-step case study is
presented in this work.
Comment: 11 pages, 8 figures and 1 table
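The core of a scattering representation is a wavelet convolution followed by a modulus and an averaging step, which yields coefficients that are stable to small translations. The 1-D sketch below, with assumed Gabor-wavelet parameters, illustrates first-order scattering coefficients only, not the full cascaded transform:

```python
import numpy as np

def gabor_kernel(length=16, freq=0.25, sigma=3.0):
    """Complex Gabor wavelet: a Gaussian-windowed complex exponential."""
    t = np.arange(length) - length // 2
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * t)

def scatter_first_order(x, freqs=(0.1, 0.2, 0.3)):
    """First-order scattering: |x * psi_f| averaged over time.
    The modulus removes phase, the average removes position, so the
    coefficients change little when x is slightly shifted."""
    return np.array([
        np.abs(np.convolve(x, gabor_kernel(freq=f), mode="same")).mean()
        for f in freqs
    ])
```

The largest coefficient sits at the wavelet whose frequency matches the signal's, and shifting the input barely changes the values — the translation invariance that makes these features useful for classification.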
An Extensive Review on Spectral Imaging in Biometric Systems: Challenges and Advancements
Spectral imaging has recently gained traction for face recognition in
biometric systems. We investigate the merits of spectral imaging for face
recognition and the current challenges that hamper the widespread deployment of
spectral sensors for face recognition. The reliability of conventional face
recognition systems operating in the visible range is compromised by
illumination changes, pose variations and spoof attacks. Recent works have
reaped the benefits of spectral imaging to counter these limitations in
surveillance activities (defence, airport security checks, etc.). However, the
implementation of this technology for biometrics is still in its infancy due
to multiple reasons. We present an overview of the existing work in the domain
of spectral imaging for face recognition, different types of modalities and
their assessment, the availability of public databases for the sake of
reproducible research and evaluation of algorithms, and recent advancements in
the field, such as the use of deep learning-based methods for recognizing faces
from spectral images.
Feature extraction and information fusion in face and palmprint multimodal biometrics
Thesis
Multimodal biometric systems that integrate the biometric traits from several
modalities are able to overcome the limitations of single modal biometrics. Fusing
the information at an earlier level by consolidating the features given by different
traits can give a better result due to the richness of information at this stage. In this
thesis, three novel methods are derived and implemented on face and palmprint
modalities, taking advantage of the multimodal biometric fusion at feature level.
The benefits of the proposed methods are an enhanced capability to discriminate
information in the fused features and to capture all of the information
required to improve classification performance. The multimodal biometric
system proposed here consists of several stages: feature extraction, fusion,
recognition and classification.
Feature extraction gathers all important information from the raw images. A
new local feature extraction method has been designed to extract information from
the face and palmprint images in the form of sub block windows. Multiresolution
analysis using Gabor transform and DCT is computed for each sub block window to
produce compact local features for the face and palmprint images. Multiresolution
Gabor analysis captures important information in the texture of the images while
DCT represents the information in different frequency components. Important
features with high discriminating power are then preserved by selecting several
low-frequency coefficients in order to estimate the model parameters.
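The sub-block DCT step can be sketched as follows; the Gabor analysis is applied per block in an analogous way and is omitted here. The block size and the number of retained low-frequency coefficients are illustrative choices, not the thesis's actual settings:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def block_dct_features(image, block=8, keep=6):
    """Split the image into non-overlapping sub-blocks, apply a 2-D DCT
    to each, and keep a few coefficients per block as a compact local
    feature vector."""
    C = dct_matrix(block)
    h, w = image.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = C @ image[r:r + block, c:c + block] @ C.T
            # keep the first `keep` coefficients of the top row
            # (low vertical frequency) as an illustrative selection
            feats.extend(coeffs.flatten()[:keep])
    return np.array(feats)
```

Retaining only a handful of low-frequency coefficients per block is what keeps the local features compact while preserving the most energetic, discriminative structure.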
The extracted local features are fused using a new matrix-interleaving method.
The new fused feature vector has higher dimensionality than the original
feature vectors from either modality, so it carries high discriminating power
and contains rich statistical information. The fused feature vector also
provides more data points in the feature space, which is advantageous for the
training process using statistical methods. The underlying statistical
information in the fused feature vectors is captured using a GMM, where a
number of model parameters are estimated from the distribution of the fused
feature vector.
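Feature-level fusion by interleaving can be illustrated with a simple element-wise alternation of the two modality vectors; the thesis's matrix-interleaved scheme may arrange the features differently, so this is only a sketch of the idea before the GMM is fitted on the fused vectors:

```python
import numpy as np

def interleave_features(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """Alternate the entries of two modality feature vectors so that
    face and palmprint information is mixed throughout the fused vector
    rather than concatenated end to end."""
    n = min(len(f1), len(f2))
    fused = np.empty(2 * n, dtype=float)
    fused[0::2] = f1[:n]   # even positions: first modality
    fused[1::2] = f2[:n]   # odd positions: second modality
    return fused
```

Interleaving (rather than concatenating) spreads both modalities across the whole vector, so any statistical model fitted afterwards sees both sources of information in every local region of the feature space.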
The maximum likelihood score is used to measure a degree of certainty to
perform recognition, while maximum likelihood score normalization is used for
the classification process. The use of likelihood score normalization is found
to suppress an impostor's likelihood score when the background model parameters
are estimated from a pool of users whose statistical information includes that
of impostors. The present method achieved the highest recognition accuracies of
97% and 99.7% when tested on the FERET-PolyU and ORL-PolyU datasets
respectively.
Universiti Malaysia Perlis and Ministry of Higher Education Malaysia
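The likelihood-score normalization described in the abstract can be sketched as a log-likelihood ratio of a client model against a background model; the diagonal-Gaussian models and the parameters below are illustrative assumptions, not the thesis's estimated GMMs:

```python
import numpy as np

def gaussian_loglik(x, mean, var):
    """Log-likelihood of x under a diagonal Gaussian."""
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

def normalized_score(x, client_mean, client_var, bg_mean, bg_var):
    """Client log-likelihood minus background log-likelihood.
    An impostor sample that is well explained by the background model
    (estimated from a pool of users) receives a low, negative score,
    which is the suppression effect the abstract describes."""
    return (gaussian_loglik(x, client_mean, client_var)
            - gaussian_loglik(x, bg_mean, bg_var))
```

Subtracting the background likelihood means a raw high score is only accepted when the client model explains the sample better than the population at large does.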