47 research outputs found

    Palmprint Gender Classification Using Deep Learning Methods

    Gender identification is an important technique that can improve the performance of authentication systems by reducing the search space and speeding up the matching process. Several biometric traits have been used to ascertain human gender. Among them, the human palmprint possesses several discriminating features, such as principal lines, wrinkles, ridges, and minutiae, that offer cues for gender identification. The goal of this work is to develop novel deep-learning techniques to determine gender from palmprint images. The PolyU and CASIA palmprint databases, with 90,000 and 5,502 images respectively, were used for training and testing in this research. After ROI extraction and data augmentation were performed, various convolutional and deep learning-based classification approaches were empirically designed, optimized, and tested. Gender classification accuracy as high as 94.87% was achieved on the PolyU palmprint database and 90.70% on the CASIA palmprint database. Optimal performance was achieved by combining two different pre-trained and fine-tuned deep CNNs (VGGNet and DenseNet) through score-level average fusion. In addition, Gradient-weighted Class Activation Mapping (Grad-CAM) was implemented to ascertain which specific regions of the palmprint are most discriminative for gender classification.
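The score-level average fusion used for the best result above can be sketched as follows. This is a minimal illustration with toy softmax outputs standing in for the VGGNet and DenseNet gender scores; the function name, weights, and values are invented, not the paper's:

```python
import numpy as np

def fuse_scores(scores_a, scores_b, weights=(0.5, 0.5)):
    """Score-level average fusion of two classifiers' class probabilities.

    scores_a, scores_b: (n_samples, n_classes) arrays of softmax scores,
    e.g. from a VGG-style and a DenseNet-style gender classifier.
    """
    wa, wb = weights
    fused = wa * np.asarray(scores_a) + wb * np.asarray(scores_b)
    return fused, fused.argmax(axis=1)

# Toy scores for two samples (columns: [female, male])
vgg = np.array([[0.80, 0.20], [0.40, 0.60]])
dense = np.array([[0.60, 0.40], [0.30, 0.70]])
fused, labels = fuse_scores(vgg, dense)
```

Equal weights are the simplest choice; in practice the weights could be tuned on a validation set.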

    Deep learning approach for Touchless Palmprint Recognition based on Alexnet and Fuzzy Support Vector Machine

    Due to their stable and discriminative features, palmprint-based biometrics have been gaining popularity in recent years. Most traditional palmprint recognition systems are designed around a group of hand-crafted features that ignores some additional features. To tackle this problem, a Convolutional Neural Network (CNN) model inspired by AlexNet is proposed that learns features from the ROI images and classifies them using a fuzzy Support Vector Machine (SVM). The output of the CNN is fed as input to the fuzzy SVM. The CNN's receptive field aids in extracting the most discriminative features from the palmprint images, and the fuzzy SVM results in a robust classification. Experiments are conducted on popular contactless datasets such as the IITD, PolyU-II, Tongji, and CASIA databases. Results demonstrate that our approach outperforms several state-of-the-art techniques for palmprint recognition. Using this approach, we obtain 99.98% testing accuracy on the Tongji dataset and 99.76% on the PolyU-II dataset.
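A fuzzy SVM differs from a standard SVM by giving each training sample a membership weight. The abstract does not spell out its membership function, so the distance-to-class-centre scheme below is one standard choice, not necessarily the authors':

```python
import numpy as np

def fuzzy_memberships(X, y, delta=1e-6):
    """Per-sample fuzzy membership based on distance to the class centre.

    Samples far from their class centre (likely outliers) receive a lower
    membership, down-weighting their influence on the SVM margin.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    m = np.empty(len(X))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        r = d.max() + delta          # class radius (delta avoids division by 0)
        m[idx] = 1.0 - d / r         # membership in (0, 1]
    return m

X = [[0.0, 0.0], [0.2, 0.0], [5.0, 0.0],   # class 0 (last point is an outlier)
     [10.0, 0.0], [10.2, 0.0]]             # class 1
y = [0, 0, 0, 1, 1]
w = fuzzy_memberships(X, y)
```

The resulting weights could then be passed to an SVM trainer that accepts per-sample weights, for example scikit-learn's `SVC.fit(X, y, sample_weight=w)`.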

    An Extensive Review on Spectral Imaging in Biometric Systems: Challenges and Advancements

    Spectral imaging has recently gained traction for face recognition in biometric systems. We investigate the merits of spectral imaging for face recognition and the current challenges that hamper the widespread deployment of spectral sensors for face recognition. The reliability of conventional face recognition systems operating in the visible range is compromised by illumination changes, pose variations and spoof attacks. Recent works have reaped the benefits of spectral imaging to counter these limitations in surveillance activities (defence, airport security checks, etc.). However, the implementation of this technology for biometrics is still in its infancy due to multiple reasons. We present an overview of the existing work in the domain of spectral imaging for face recognition, the different types of modalities and their assessment, the availability of public databases for the sake of reproducible research and the evaluation of algorithms, and recent advancements in the field, such as the use of deep learning-based methods for recognizing faces from spectral images.

    Signal processing and machine learning techniques for human verification based on finger textures

    PhD Thesis. In recent years, Finger Textures (FTs) have attracted considerable attention as potential biometric characteristics. They can provide robust recognition performance as they have various human-specific features, such as wrinkles and apparent lines distributed along the inner surface of all fingers. The main topic of this thesis is verifying people according to their unique FT patterns by exploiting signal processing and machine learning techniques. A Robust Finger Segmentation (RFS) method is first proposed to isolate finger images from a hand area. It is able to detect the fingers as objects from a hand image. An efficient adaptive finger segmentation method, called the Adaptive and Robust Finger Segmentation (ARFS) method, is also suggested to address the problem of alignment variations in the hand image. A new Multi-scale Sobel Angles Local Binary Pattern (MSALBP) feature extraction method is proposed, which combines the Sobel direction angles with the Multi-Scale Local Binary Pattern (MSLBP). Moreover, an enhanced method called the Enhanced Local Line Binary Pattern (ELLBP) is designed to efficiently analyse the FT patterns. As a result, a powerful human verification scheme based on finger Feature Level Fusion with a Probabilistic Neural Network (FLFPNN) is proposed. A multi-object fusion method, termed the Finger Contribution Fusion Neural Network (FCFNN), combines the contribution scores of the finger objects. The verification performances are examined in the case of missing FT areas. Consequently, to overcome finger regions which are poorly imaged, a method is suggested to salvage missing FT elements by exploiting the information embedded within the trained Probabilistic Neural Network (PNN). Finally, a novel method to produce a Receiver Operating Characteristic (ROC) curve from a PNN is suggested. Furthermore, additional development of this method is applied to generate the ROC graph from the FCFNN.
Three databases are employed for evaluation: the Hong Kong Polytechnic University Contact-free 3D/2D (PolyU3D2D), Indian Institute of Technology (IIT) Delhi, and Spectral 460nm (S460) from the CASIA Multi-Spectral (CASIAMS) databases. Comparative simulation studies confirm the efficiency of the proposed methods for human verification. The main advantage of both segmentation approaches, the RFS and ARFS, is that they can collect all the FT features. The best results were benchmarked for the ELLBP feature extraction with the FCFNN, where the best Equal Error Rate (EER) values achieved for the three databases PolyU3D2D, IIT Delhi and CASIAMS (S460) were 0.11%, 1.35% and 0%, respectively. The proposed salvage approach for the missing feature elements has the capability to enhance the verification performance of the FLFPNN. Moreover, ROC graphs have been successfully established from the PNN and FCFNN. This work was supported by the Ministry of Higher Education and Scientific Research in Iraq (MOHESR); the Technical College of Mosul; the Iraqi Cultural Attaché; and the active people in the MOHESR, who strongly supported Iraqi students.
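The Equal Error Rate reported for each database can be estimated from genuine and impostor match-score distributions by sweeping a decision threshold; a minimal sketch (the toy scores below are illustrative only, not thesis data):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the Equal Error Rate (EER) from match scores.

    Sweeps a threshold over all observed scores and returns the operating
    point where the false-reject rate (FRR) and false-accept rate (FAR)
    are closest; higher scores are assumed to mean a better match.
    """
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best = (1.0, 0.0)
    for t in thresholds:
        frr = np.mean(genuine < t)     # genuine pairs rejected
        far = np.mean(impostor >= t)   # impostor pairs accepted
        if abs(frr - far) < abs(best[0] - best[1]):
            best = (frr, far)
    return (best[0] + best[1]) / 2

genuine = [0.9, 0.8, 0.85, 0.7, 0.95]   # same-person comparison scores
impostor = [0.1, 0.2, 0.3, 0.15, 0.4]   # different-person comparison scores
```

Plotting FAR against 1 - FRR over the same threshold sweep yields the ROC curve discussed in the thesis.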

    Personal verification based on multi-spectral finger texture lighting images

    Finger Texture (FT) images acquired from different spectral lighting sensors reveal various features. This inspires the idea of establishing a recognition model between FT features collected using two different spectral lighting forms to provide high recognition performance. This can be implemented by establishing an efficient feature extraction method and an effective classifier, which can be applied to different FT patterns. Therefore, an effective feature extraction method called the Surrounded Patterns Code (SPC) is adopted. This method can collect the surrounded patterns around the main FT features. It is believed that these patterns are robust and valuable. The SPC approach uses a single texture descriptor for FT images captured under multispectral illuminations, which reduces the cost of employing different feature extraction methods for different spectral FT images. Furthermore, a novel classifier termed the Re-enforced Probabilistic Neural Network (RPNN) is proposed. It enhances the capability of the standard Probabilistic Neural Network (PNN) and provides better recognition performance. Two types of FT images from the Multi-Spectral CASIA (MSCASIA) database were employed, as two types of spectral sensors were used in the acquisition device: the White (WHT) light and the spectral 460 nm Blue (BLU) light. Supporting comparisons were performed, analysed and discussed. The best results were recorded for the SPC, which improved the Equal Error Rates (EERs) to 4% for spectral BLU and 2% for spectral WHT. These percentages were reduced to 0% after utilizing the RPNN.
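The standard PNN that the proposed RPNN builds on is essentially a Parzen-window classifier: each training sample contributes a Gaussian kernel, and the class with the largest summed response wins. The RPNN's re-enforcement step is not described in enough detail here to reproduce, so only the baseline PNN decision rule is sketched:

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=1.0):
    """Standard Probabilistic Neural Network (Parzen-window) decision.

    Returns the class whose averaged Gaussian-kernel response at x
    is largest; sigma is the kernel smoothing parameter.
    """
    X_train, y_train = np.asarray(X_train, float), np.asarray(y_train)
    x = np.asarray(x, float)
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        d2 = np.sum((X_train[y_train == c] - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return classes[int(np.argmax(scores))]

# Toy 2-D feature vectors for two classes
X = [[0, 0], [0, 1], [5, 5], [5, 6]]
y = [0, 0, 1, 1]
```

In a verification setting the per-class kernel responses, rather than the hard decision, would be thresholded to accept or reject a claimed identity.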

    Multimodal biometric system for identity verification based on hand geometry, palmprint and palm veins

    Advisor: Profª Drª Giselle Lopes Ferrari Ronque. Co-advisor: Prof. Dr. Alessandro Zimmer. Master's dissertation - Universidade Federal do Paraná, Setor de Tecnologia, Graduate Program in Electrical Engineering (Programa de Pós-Graduação em Engenharia Elétrica). Defended: Curitiba, 29/04/2015. Includes references.
Abstract: Biometrics has been widely used for personal identification because it is a safe identification method, using characteristics that are unique, non-transferable, and capable of discriminating between individuals. This work presents a multimodal biometric method joining characteristics extracted from hand geometry, the palmprint, and palm veins, which had not previously been done for the database used. For hand geometry, contour measurements were extracted using the DOS+ method, which identifies the degree of curvature of the contour. Local primitives (preferential direction, and pixel quantity and proportion) and global primitives (texture and centre-of-mass location) were extracted from the palmprint. Finally, texture characteristics were extracted from the palm veins using the Local Binary Patterns descriptor. Biometric fusion was performed at the feature level, and classification was carried out with Support Vector Machines. The CASIA-MS-Palmprint V1.0 database was used to develop and test the system. A second database was also used to test and validate the methodology. On the CASIA database, the Equal Error Rate was 2.4% for the combination of hand geometry and palmprint, 2% for the combination of palmprint and palm veins, and 1.4% for the combination of hand geometry, palmprint, and palm veins. Keywords: biometrics, hand geometry, palmprint, palm vein, multimodal biometric system, personal identification.
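The Local Binary Patterns descriptor used for the palm-vein texture can be illustrated with the basic 3x3 formulation; this is a generic LBP sketch, not necessarily the dissertation's exact configuration:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 Local Binary Patterns.

    Each interior pixel is encoded as an 8-bit code: one bit per
    neighbour, set when the neighbour is >= the centre pixel.
    """
    img = np.asarray(img, float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets, clockwise starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= centre:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out
```

The histogram of these codes over an image (or over image blocks) is what would typically be concatenated with the other modalities' features for the feature-level fusion described above.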

    A Hybrid Feature Extraction Method With Regularized Extreme Learning Machine for Brain Tumor Classification

    Brain cancer classification is an important step that depends on the physician's knowledge and experience. An automated tumor classification system is essential to support radiologists and physicians in identifying brain tumors. However, the accuracy of current systems needs to be improved to enable suitable treatment. In this paper, we propose a hybrid feature extraction method with a regularized extreme learning machine (RELM) for developing an accurate brain tumor classification approach. The approach starts by preprocessing the brain images using a min-max normalization rule to enhance the contrast of brain edges and regions. Then, the brain tumor features are extracted with a hybrid feature extraction method. Finally, an RELM is used to classify the type of brain tumor. To evaluate and compare the proposed approach, a set of experiments was conducted on a new public dataset of brain images. The experimental results show that the approach is more effective than existing state-of-the-art approaches, with classification accuracy improving from 91.51% to 94.233% in the random-holdout experiment.
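The min-max normalization step and the RELM classifier can both be sketched compactly. The RELM below follows the standard recipe (a random hidden layer with ridge-regression output weights) on invented toy data; the paper's actual architecture, hidden size, and regularization constant are not given here:

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax(x, lo=0.0, hi=1.0):
    """Min-max normalization, e.g. to stretch image contrast into [lo, hi]."""
    x = np.asarray(x, float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

def relm_fit(X, T, n_hidden=50, lam=1e-2):
    """Regularized ELM: random hidden weights, closed-form ridge solution.

    Solves beta = (H^T H + lam*I)^-1 H^T T for the output layer only.
    """
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return (W, b, beta)

def relm_predict(model, X):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta).argmax(axis=1)

# Toy two-class problem with one-hot targets
X = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(2, 0.3, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
T = np.eye(2)[y]
model = relm_fit(minmax(X), T)
```

Because only the output weights are learned, training reduces to a single linear solve, which is what makes ELM-style classifiers fast to fit.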

    Image Compression Techniques: A Survey in Lossless and Lossy algorithms

    The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth. This explains the many efforts currently being invested in the area of data compression. The primary goal of these works is to develop techniques for coding information sources such as speech, image and video so as to reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression. The goal is to represent an image in the fewest number of bits without losing the essential information content within it. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part of the information, which can be reproduced without loss from other information contained in the image. Irrelevant information is the part of the information whose details lie beyond the limit of perceptual significance (i.e., psychovisual redundancy). Useful information, on the other hand, is the part that is neither redundant nor irrelevant. Decompressed images are usually viewed by humans, so their fidelity is subject to the capabilities and limitations of the human visual system. This paper provides a survey of various image compression techniques, their limitations and compression rates, and highlights current research in medical image compression.
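Run-length encoding is perhaps the simplest lossless technique exploiting the redundant (deterministic) information described above: long runs of identical pixels, as in flat image regions, collapse to (symbol, count) pairs. A minimal sketch:

```python
def rle_encode(data):
    """Run-length encode a sequence into (symbol, count) pairs."""
    if not data:
        return []
    runs, current, count = [], data[0], 1
    for sym in data[1:]:
        if sym == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = sym, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Invert rle_encode exactly -- the scheme is lossless."""
    return [sym for sym, count in runs for _ in range(count)]
```

Lossy schemes, by contrast, additionally discard the perceptually irrelevant part of the information and therefore cannot be inverted exactly.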

    Feature extraction using two dimensional (2D) Legendre wavelet filter for partial iris recognition

    The need for biometric recognition systems has grown substantially to address the issues of recognition and identification, especially in highly dense areas such as airports and train stations, and in financial transactions. Evidence of this can be seen in some airports and in the implementation of these technologies in our mobile phones. The most popular biometric technologies include face, fingerprint, and iris recognition. Iris recognition is considered by many researchers to be the most accurate and reliable form of biometric recognition, because the iris can neither be surgically altered without a risk of losing sight nor does it change with aging. However, most currently available iris recognition systems can only recognize frontal-looking, high-quality iris images. Angular and partially captured images cannot be authenticated with existing iris recognition methods. This research investigates the possibility of developing a technique for recognizing partially captured iris images. The technique is designed to process the iris image at 50%, 25%, 16.5%, and 12.5%, and to find a threshold for the minimum amount of iris region required to authenticate an individual. The research also developed and implemented a two-dimensional (2D) Legendre wavelet filter for iris feature extraction, to enhance the feature extraction technique. Selected iris images from the CASIA, UBIRIS, and MMU databases were used to test the accuracy of the introduced technique. The technique produced recognition accuracies between 74.95% and 94.45%: 92.25% for CASIA-Interval, 86.25% for CASIA-Distance, 74.95% for UBIRIS, and 94.45% for MMU.

    Learnable Reconstruction Methods from RGB Images to Hyperspectral Imaging: A Survey

    Hyperspectral imaging enables versatile applications due to its ability to capture abundant spatial and spectral information, which is crucial for identifying substances. However, the devices for acquiring hyperspectral images are expensive and complicated. Therefore, many alternative spectral imaging methods have been proposed that directly reconstruct the hyperspectral information from lower-cost, more widely available RGB images. We present a thorough investigation of these state-of-the-art spectral reconstruction methods from widespread RGB images. A systematic study and comparison of more than 25 methods reveals that most data-driven deep learning methods are superior to prior-based methods in terms of reconstruction accuracy and quality, despite lower speeds. This comprehensive review can serve as a fruitful reference source for peer researchers, thus further inspiring future development directions in related domains.
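The simplest prior-based baseline in this family is a linear least-squares map from RGB triplets to spectral vectors; a sketch on synthetic data (the 31-band assumption and the data itself are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear_sr(rgb, spectra):
    """Least-squares affine map from RGB to spectral vectors.

    Appends a bias term and solves rgb1 @ M ~= spectra, the simplest
    prior-based spectral-reconstruction baseline.
    """
    rgb1 = np.hstack([rgb, np.ones((len(rgb), 1))])
    M, *_ = np.linalg.lstsq(rgb1, spectra, rcond=None)
    return M

def apply_linear_sr(M, rgb):
    rgb1 = np.hstack([rgb, np.ones((len(rgb), 1))])
    return rgb1 @ M

# Synthetic training pairs: 31-band spectra that are exactly affine in RGB
A = rng.normal(size=(3, 31))
rgb = rng.uniform(size=(100, 3))
spectra = rgb @ A + 0.1
M = fit_linear_sr(rgb, spectra)
```

Deep learning methods replace this single global linear map with a learned nonlinear mapping, which is what buys their higher reconstruction accuracy at the cost of speed.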