4,239 research outputs found

    A review of finger vein recognition system

    Recently, security systems that use the finger vein as a biometric trait have attracted growing attention from researchers worldwide, and encouraging progress has been made. Many methods have been proposed to improve the performance and accuracy of personal identification and verification. This paper reviews previous finger vein recognition methods across the system's three main stages: preprocessing, feature extraction, and classification. The advantages and limitations of these methods are discussed, and the main open problems of finger vein recognition are highlighted as directions for future work in this field.
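    To make the three-stage pipeline concrete, the sketch below shows one minimal way such a system could be wired together in Python. The specific choices (CLAHE preprocessing, a gradient-histogram descriptor, and a nearest-neighbour matcher) are illustrative assumptions, not methods surveyed in the review.

```python
# Minimal sketch of the three-stage finger vein pipeline:
# preprocessing -> feature extraction -> classification.
# All concrete choices here are assumptions for illustration only.
import numpy as np
import cv2
from sklearn.neighbors import KNeighborsClassifier

def preprocess(image: np.ndarray) -> np.ndarray:
    """Enhance vein contrast and suppress sensor noise (8-bit grayscale input)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(image)                 # contrast-limited equalisation
    return cv2.GaussianBlur(enhanced, (5, 5), 0)  # light denoising

def extract_features(image: np.ndarray) -> np.ndarray:
    """Simple texture descriptor: normalised gradient-magnitude histogram."""
    gx = cv2.Sobel(image, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(image, cv2.CV_32F, 0, 1)
    magnitude = cv2.magnitude(gx, gy)
    hist, _ = np.histogram(magnitude, bins=64,
                           range=(0.0, float(magnitude.max()) + 1e-6))
    return hist / (hist.sum() + 1e-6)

def train_matcher(images, labels):
    """Classification stage: any standard classifier could sit here."""
    X = np.stack([extract_features(preprocess(img)) for img in images])
    return KNeighborsClassifier(n_neighbors=1).fit(X, labels)
```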

    Multimodal Biometric Systems for Personal Identification and Authentication using Machine and Deep Learning Classifiers

    Multimodal biometrics, using machine and deep learning, has recently gained interest over single biometric modalities. This interest stems from the fact that the technique improves recognition and thus provides more security: by combining the strengths of single biometrics, the fusion of two or more modalities creates a robust recognition system that is resistant to the weaknesses of the individual modalities. However, the recognition performance of multimodal systems depends on multiple factors, such as the fusion scheme, the fusion technique, the feature extraction techniques, and the classification method. In machine learning, existing works generally use different algorithms to extract features from each modality, which makes the system more complex. Deep learning, on the other hand, with its ability to extract features automatically, has made recognition more efficient and accurate. Studies deploying deep learning algorithms in multimodal biometric systems have tried to find a good compromise between the false acceptance rate (FAR) and the false rejection rate (FRR) when choosing the threshold in the matching step. This manual choice is not optimal and depends on the expertise of the solution designer, hence the need to automate this step; to that end, the second part of this thesis details an end-to-end CNN algorithm with an automatic matching mechanism.

    This thesis conducts two studies on face and iris multimodal biometric recognition. The first study proposes a new feature extraction technique for biometric systems based on machine learning: iris and facial features are extracted using the Discrete Wavelet Transform (DWT) combined with Singular Value Decomposition (SVD), and the relevant characteristics of the two modalities are merged to create a pattern for each individual in the dataset. The experimental results show the robustness of the proposed technique and the efficiency of using the same feature extraction method for both modalities; the proposed method outperformed the state of the art with an accuracy of 98.90%. The second study proposes a deep learning approach using DenseNet121 and FaceNet for iris and face multimodal recognition with feature-level fusion and a new automatic matching technique. The proposed automatic matching approach does not rely on a threshold to balance performance against FAR and FRR errors; instead, it uses a trained multilayer perceptron (MLP) that automatically classifies people into two classes, recognized and unrecognized, ensuring an accurate and fully automatic multimodal recognition process. The results obtained by the DenseNet121-FaceNet model with feature-level fusion and automatic matching are very satisfactory: the proposed deep learning models achieve 99.78% accuracy and 99.56% precision, with an FRR of 0.22% and no FAR errors.

    The platform solutions proposed and developed in this thesis were tested and validated in two case studies: the central pharmacy of Al-Asria Eye Clinic in Dubai and the Abu Dhabi Police General Headquarters (Police GHQ). The solution allows fast identification of the persons authorized to access the different rooms, protecting the pharmacy against medication abuse and the red zone of the military area against unauthorized use of weapons.
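    As a rough illustration of the first study's pipeline, the sketch below combines a 2-D DWT with SVD for per-modality features, fuses them by concatenation, and trains an MLP as the automatic recognised/unrecognised matcher. The wavelet family, decomposition depth, number of singular values kept, and MLP layout are assumptions for illustration, not the thesis's exact settings.

```python
# Sketch: DWT + SVD features per modality, feature-level fusion by
# concatenation, and an MLP that replaces a hand-tuned FAR/FRR threshold.
# Wavelet choice, k, and network size are illustrative assumptions.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_svd_features(image: np.ndarray, wavelet: str = "haar", k: int = 32) -> np.ndarray:
    """Single-level 2-D DWT, then keep the leading singular values of the
    approximation sub-band as a compact descriptor."""
    approx, _ = pywt.dwt2(image.astype(float), wavelet)   # returns (cA, (cH, cV, cD))
    singular_values = np.linalg.svd(approx, compute_uv=False)
    return singular_values[:k]

def fuse(face_img: np.ndarray, iris_img: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate the per-modality descriptors."""
    return np.concatenate([dwt_svd_features(face_img), dwt_svd_features(iris_img)])

def train_matcher(fused_vectors: np.ndarray, labels: np.ndarray) -> MLPClassifier:
    """Automatic matching: MLP trained on fused vectors labelled
    1 = recognized (enrolled identity) / 0 = unrecognized (impostor)."""
    mlp = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
    return mlp.fit(fused_vectors, labels)
```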

    A robust framework for driver fatigue detection from EEG signals using enhancement of modified Z-score and multiple machine learning architectures

    Physiological signals, such as the electroencephalogram (EEG), are used to observe a driver’s brain activity. A portable EEG system provides several advantages, including ease of operation, cost-effectiveness, portability, and few physical restrictions. However, EEG signals can be challenging to analyse because they often contain artefacts such as muscle activity, eye blinks, and other unwanted noise. This study used an independent component analysis (ICA) approach to eliminate such unwanted signals from the raw EEG data of 12 young, physically fit male participants aged 19 to 24 who took part in a driving simulation. Driver fatigue detection was then carried out using multichannel EEG signals obtained from electrodes O1, O2, Fp1, Fp2, P3, P4, F3, and F4. An enhanced modified z-score was applied to features extracted with a time-frequency continuous wavelet transform (CWT) to improve the reliability of driver fatigue classification. The proposed methodology offers several advantages. First, multichannel EEG analysis improves the accuracy of sleep stage detection, which is vital for accurate driver fatigue detection. Second, the enhanced modified z-score used in feature extraction is more robust than conventional z-score techniques, making it more effective at removing outlier values and improving classification accuracy. Third, the proposed approach employs multiple classifiers, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Artificial Neural Networks (ANNs) using Long Short-Term Memory (LSTM), and Support Vector Machines (SVMs). The five classifiers were evaluated through 5-fold cross-validation. The results indicate that the proposed framework detects driver fatigue with an average accuracy of 96.07%. Among the classifiers, the ANN achieved the best result at 99.65%, and the SVM ranked second at 97.89%. Receiver operating characteristic (ROC) analysis showed strong performance across all classifiers, with an average area under the curve (AUC) of 0.95. This study’s contribution lies in presenting a comprehensive and effective framework that accurately detects driver fatigue from EEG signals.
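    The sketch below outlines, under simplifying assumptions, the kind of processing chain the abstract describes: ICA-based artefact removal, CWT features standardised with a modified z-score, and an SVM evaluated with 5-fold cross-validation. The wavelet, scale range, channel handling, and classifier settings are illustrative, not the study's exact configuration.

```python
# Sketch of the processing chain: ICA cleaning -> CWT features ->
# modified z-score standardisation -> SVM with 5-fold cross-validation.
# Wavelet, scales, and classifier settings are illustrative assumptions.
import numpy as np
import pywt
from sklearn.decomposition import FastICA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def remove_artifacts(eeg: np.ndarray) -> np.ndarray:
    """eeg: (samples, channels). In practice the artefact components
    (eye blinks, muscle activity) would be identified and zeroed before
    inverse-transforming; only the ICA round-trip is shown here."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)       # independent components
    # ... zero out artefact components here ...
    return ica.inverse_transform(sources)

def modified_zscore(x: np.ndarray) -> np.ndarray:
    """Robust z-score based on the median absolute deviation (MAD)."""
    median = np.median(x)
    mad = np.median(np.abs(x - median)) + 1e-12
    return 0.6745 * (x - median) / mad

def cwt_features(channel: np.ndarray, fs: float = 256.0) -> np.ndarray:
    """Mean CWT power per scale, standardised with the modified z-score."""
    scales = np.arange(1, 33)
    coeffs, _ = pywt.cwt(channel, scales, "morl", sampling_period=1.0 / fs)
    power = np.mean(np.abs(coeffs) ** 2, axis=1)   # one value per scale
    return modified_zscore(power)

def evaluate(features: np.ndarray, labels: np.ndarray) -> float:
    """5-fold cross-validated accuracy of an SVM fatigue classifier."""
    return cross_val_score(SVC(kernel="rbf"), features, labels, cv=5).mean()
```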