
    Integration of biometrics and steganography: A comprehensive review

    The use of an individual’s biometric characteristics to advance authentication and verification technology beyond the current dependence on passwords has been the subject of extensive research for some time. Since such physical characteristics cannot be hidden from the public eye, the security of digitised biometric data becomes paramount to avoid the risk of substitution or replay attacks. Biometric systems have readily embraced cryptography to encrypt the data extracted from the scanning of anatomical features. Significant research effort has also gone into the integration of biometrics with steganography to add a layer to the defence-in-depth security model, which has the potential to augment both access control parameters and the secure transmission of sensitive biometric data. However, despite these efforts, the amalgamation of biometric and steganographic methods has failed to transition from the research lab into real-world applications. In light of this review of both academic and industry literature, we suggest that future research should focus on identifying an acceptable level of steganographic embedding for biometric applications, securing the exchange of steganography keys, identifying and addressing legal implications, and developing industry standards.
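To make the encrypt-then-embed idea concrete, here is a minimal illustrative sketch, not a method drawn from the reviewed literature: a biometric template is XOR-masked with a keystream as a crude stand-in for proper encryption and then hidden in the least-significant bits of a grayscale cover image. All names and parameters are hypothetical.

```python
# Illustrative sketch only: combine biometrics, cryptography and steganography by
# masking a biometric template and hiding the result in a cover image's LSBs.
# The XOR keystream is a placeholder for a real cipher.
import numpy as np

def embed_template(cover: np.ndarray, template_bytes: bytes, key: bytes) -> np.ndarray:
    """Mask the template with a key-derived stream, then LSB-embed it into the cover."""
    keystream = np.frombuffer((key * (len(template_bytes) // len(key) + 1))[:len(template_bytes)],
                              dtype=np.uint8)
    cipher = np.frombuffer(template_bytes, dtype=np.uint8) ^ keystream
    bits = np.unpackbits(cipher)                            # ciphertext as a bit stream
    flat = cover.flatten().astype(np.uint8)
    if bits.size > flat.size:
        raise ValueError("cover image too small for this payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits     # overwrite least-significant bits
    return flat.reshape(cover.shape)

cover = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in cover image
stego = embed_template(cover, b"fingerprint-minutiae-template", b"secret-key")
```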

    Application of Stochastic Diffusion for Hiding High Fidelity Encrypted Images

    Cryptography coupled with information hiding has received increased attention in recent years and has become a major research theme because of the importance of protecting encrypted information in any Electronic Data Interchange system in a way that is both discreet and covert. One of the essential limitations of any cryptographic system is that encrypted data gives an indication of its own importance, which arouses suspicion and makes it vulnerable to attack. Information hiding, or Steganography, provides a potential solution to this issue by making the data imperceptible, the security of the hidden information being threatened only if its existence is detected through Steganalysis. This paper focuses on a study of methods for hiding encrypted information, specifically methods that encrypt data before embedding it in host data, where the ‘data’ is a full colour digital image. Such methods provide a greater level of data security, especially when the information is to be transmitted over the Internet, since a potential attacker must first detect, then extract and then decrypt the embedded data in order to recover the original information. After providing an extensive survey of the current methods available, we present a new method of encrypting and then hiding full colour images in three full colour host images without loss of fidelity following data extraction and decryption. The applications of this method, which is based on a technique called ‘Stochastic Diffusion’, are wide ranging and include covert image information interchange, digital image authentication, video authentication, copyright protection and digital rights management of image data in general.
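The sketch below illustrates the general encrypt-then-hide workflow under simple assumptions: a key-seeded XOR noise field stands in for Stochastic Diffusion (whose actual construction follows the paper), and the three-host bit-plane split is only one plausible arrangement.

```python
# Minimal sketch, not the paper's algorithm: encrypt a colour image with a key-seeded
# noise field (a crude stand-in for Stochastic Diffusion), then spread the ciphertext
# bit-planes across the low bits of three colour host images.
import numpy as np

def encrypt_then_hide(secret: np.ndarray, hosts: list, seed: int) -> list:
    rng = np.random.default_rng(seed)
    noise = rng.integers(0, 256, secret.shape, dtype=np.uint8)
    cipher = secret ^ noise                          # diffuse the plaintext
    stegos = []
    for i, host in enumerate(hosts):                 # 8 bits split across 3 hosts
        planes = (cipher >> (i * 3)) & 0x07          # 3 bits per host (last host carries 2)
        stego = (host & 0xF8) | planes               # hide them in the host's low bits
        stegos.append(stego.astype(np.uint8))
    return stegos

secret = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
hosts = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(3)]
stego_images = encrypt_then_hide(secret, hosts, seed=42)
```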

    A New Palm Print Recognition Approach by Using PCA & Gabor Filter

    The key problem in palm print identification is finding the best match between a test sample taken from the input and the templates available in the palm print database. Two basic issues must be resolved: feature selection and similarity measurement. A feature with high discriminating ability should show large variation between samples taken from different persons and small variation between samples taken from the palm of the same person. Principal lines together with information points are considered very useful palm print features and have been used successfully for verification. Beyond these, a palm print contains many other features, such as wrinkle features, geometry features, minutiae features and delta point features, all of which relate to local attributes based on points or line segments. Two key requirements in palm print identification are, first, an efficient algorithm that extracts useful features and, second, a correct measure of the similarity of two feature sets. In contrast to existing techniques, we propose a combined selection technique for identification using palm print feature-based pattern matching, combining local and global palm print features in a stratified fashion. In this work, PCA, a Gabor filter and KNN are used for classification and matching. The palm print authentication system operates in two modes: enrolment and verification. In enrolment, a user provides palm print samples to the system several times; the samples are captured with an image capturing device, pre-processed, and features are extracted to produce the templates stored in the template database. In verification, the user is asked to provide his/her user ID and a palm print sample; the sample is pre-processed, features are extracted, and the result is compared with the templates stored in the database that belong to the same user ID.
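A minimal sketch of a Gabor filter + PCA + KNN enrolment/verification pipeline, assuming cropped grayscale palm images; the filter-bank parameters, PCA dimensionality and neighbour count are placeholders rather than the paper's settings.

```python
# Sketch of Gabor + PCA + KNN palm print matching (not the paper's exact pipeline).
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(img: np.ndarray,
                   frequencies=(0.1, 0.2),
                   thetas=(0, np.pi / 4, np.pi / 2)) -> np.ndarray:
    """Concatenate Gabor magnitude responses over a small filter bank."""
    responses = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(img, frequency=f, theta=t)
            responses.append(np.hypot(real, imag).ravel())
    return np.concatenate(responses)

# Enrolment: several samples per user -> Gabor features -> PCA -> KNN template store
train_imgs = [np.random.rand(64, 64) for _ in range(20)]   # placeholder palm images
train_ids = np.repeat(np.arange(5), 4)                     # 5 users, 4 samples each
X = np.vstack([gabor_features(im) for im in train_imgs])
pca = PCA(n_components=10).fit(X)
knn = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X), train_ids)

# Verification: the claimed ID is accepted only if the nearest-neighbour class agrees
probe = gabor_features(np.random.rand(64, 64)).reshape(1, -1)
claimed_id, predicted_id = 2, knn.predict(pca.transform(probe))[0]
print("accepted" if predicted_id == claimed_id else "rejected")
```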

    Online signature verification using hybrid wavelet transform

    Online signature verification is a prominent behavioral biometric trait. It offers many dynamic features along with the static two-dimensional signature image. In this paper, the Hybrid Wavelet Transform (HWT) was generated using the Kronecker product of two orthogonal transforms such as DCT, DHT, Haar, Hadamard and Kekre. Like the wavelet transform, the HWT can analyze the signal at both the global and the local level. HWT-1 and HWT-2 were applied to the first 128 samples of the pressure parameter, and the first 16 samples of the output were used as the feature vector for signature verification. This feature vector is given to a left-to-right HMM classifier to identify genuine and forged signatures. For HWT-1, DCT-Haar offers the best FAR and FRR; for HWT-2, Kekre 128 offers the best FAR and FRR. HWT-1 offers better performance than HWT-2 in terms of FAR and FRR. As the number of states increases, the performance of the system improves. For HWT-1, Kekre 128 offers the best performance at 275 symbols, whereas for HWT-2 the best performance is at 475 symbols with Kekre 128.
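As a rough illustration of how a Kronecker-product hybrid transform can be formed and applied to the pressure signal, the sketch below pairs a 16-point DCT with an 8-point Haar matrix; the actual component sizes and pairings used in the paper may differ.

```python
# Rough sketch under stated assumptions: build a 128x128 hybrid wavelet transform as the
# Kronecker product of a 16-point orthonormal DCT and an 8-point orthonormal Haar matrix,
# apply it to 128 pressure samples, and keep the first 16 coefficients as the feature vector.
import numpy as np
from scipy.fft import dct

def haar_matrix(n: int) -> np.ndarray:
    """Orthonormal Haar matrix of size n (n must be a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bottom]) / np.sqrt(2)

dct_16 = dct(np.eye(16), axis=0, norm="ortho")    # 16-point orthonormal DCT-II matrix
haar_8 = haar_matrix(8)
hwt = np.kron(dct_16, haar_8)                     # 128x128 hybrid wavelet transform

pressure = np.random.rand(128)                    # first 128 pressure samples of a signature
features = (hwt @ pressure)[:16]                  # 16-coefficient feature vector for the HMM
```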

    Translation Based Face Recognition Using Fusion of LL and SV Coefficients

    The face is a physiological trait used to identify a person effectively in various biometric applications. In this paper, we propose Translation-based Face Recognition using the Fusion of LL and SV coefficients. The novel concept of translating many sample images of a single person into one sample per person is introduced. The face database images are preprocessed using a Gaussian filter and the DWT to generate LL coefficients. The support vectors (SVs) are obtained from a support vector machine (SVM) trained on the LL coefficients. The LL coefficients and SVs are fused using arithmetic addition to generate the final features. The face database and test face image features are compared using the Euclidean Distance (ED) to compute the performance parameters.
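One plausible reading of the LL/SV fusion, sketched below under stated assumptions: the SVM is trained on flattened LL vectors, so its support vectors share that dimensionality, and the mean support vector is added element-wise to each LL vector before Euclidean-distance matching.

```python
# Hedged sketch of the LL + support-vector fusion idea, not the paper's exact method:
# Gaussian-smooth each face, take the DWT LL sub-band as the feature vector, train an SVM
# on those vectors, and fuse each LL vector with the mean support vector by addition.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC

def ll_features(img: np.ndarray) -> np.ndarray:
    smoothed = gaussian_filter(img, sigma=1.0)
    ll, _ = pywt.dwt2(smoothed, "haar")            # keep only the LL approximation band
    return ll.ravel()

faces = [np.random.rand(32, 32) for _ in range(10)]    # placeholder enrolled faces
labels = np.repeat(np.arange(5), 2)                    # 5 subjects, 2 samples each
X = np.vstack([ll_features(f) for f in faces])

svm = SVC(kernel="linear").fit(X, labels)
sv_mean = svm.support_vectors_.mean(axis=0)            # support vectors live in LL space
gallery = X + sv_mean                                  # fused gallery features

probe = ll_features(np.random.rand(32, 32)) + sv_mean  # fused test features
distances = np.linalg.norm(gallery - probe, axis=1)    # Euclidean distance matching
best_match = labels[np.argmin(distances)]
```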

    Multimodal Biometrics Enhancement Recognition System based on Fusion of Fingerprint and PalmPrint: A Review

    This article is an overview of current multimodal biometrics research based on fingerprint and palm-print. It reviews previous studies of each modality separately and of its fusion with other biometric modalities. The basic biometric system consists of four stages: firstly, the sensor, which is used for enrolment…

    Multimodal Biometric Systems for Personal Identification and Authentication using Machine and Deep Learning Classifiers

    Multimodal biometrics, using machine and deep learning, has recently gained interest over single biometric modalities. This interest stems from the fact that the technique improves recognition and thus provides more security. By combining the abilities of single biometrics, the fusion of two or more biometric modalities creates a robust recognition system that is resistant to the flaws of the individual modalities. However, the excellent recognition of multimodal systems depends on multiple factors, such as the fusion scheme, fusion technique, feature extraction techniques, and classification method. In machine learning, existing works generally use different algorithms for feature extraction for each modality, which makes the system more complex. Deep learning, on the other hand, with its ability to extract features automatically, has made recognition more efficient and accurate. Studies deploying deep learning algorithms in multimodal biometric systems have tried to find a good compromise between the false acceptance and false rejection rates (FAR and FRR) to choose the threshold in the matching step. This manual choice is not optimal and depends on the expertise of the solution designer, hence the need to automate this step. From this perspective, the second part of this thesis details an end-to-end CNN algorithm with an automatic matching mechanism. This thesis presents two studies on face and iris multimodal biometric recognition. The first study proposes a new feature extraction technique for biometric systems based on machine learning. The iris and facial features are extracted using the Discrete Wavelet Transform (DWT) combined with Singular Value Decomposition (SVD), and the relevant characteristics of the two modalities are merged to create a pattern for each individual in the dataset. The experimental results show the robustness of the proposed technique and its efficiency when the same feature extraction technique is used for both modalities; the proposed method outperformed the state of the art with an accuracy of 98.90%. The second study proposes a deep learning approach using DenseNet121 and FaceNet for iris and face multimodal recognition, with feature-level fusion and a new automatic matching technique. The proposed automatic matching approach does not use a threshold to strike a compromise between performance and FAR/FRR errors; instead, it uses a trained multilayer perceptron (MLP) model that automatically classifies people into two classes, recognized and unrecognized. This platform ensures an accurate and fully automatic multimodal recognition process. The results obtained by the DenseNet121-FaceNet model with feature-level fusion and automatic matching are very satisfactory: the proposed deep learning models give 99.78% accuracy and 99.56% precision, with an FRR of 0.22% and no FAR errors. The platform solutions proposed and developed in this thesis were tested and validated in two different case studies, the central pharmacy of Al-Asria Eye Clinic in Dubai and the Abu Dhabi Police General Headquarters (Police GHQ). The solution allows fast identification of the persons authorized to access the different rooms, thus protecting the pharmacy against medication abuse and the red zone in the military area against unauthorized use of weapons.
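A simplified sketch of the first study's hand-crafted feature pipeline, assuming the DWT LL sub-band of each modality is decomposed by SVD and the leading singular values are concatenated for feature-level fusion; the second study replaces these descriptors with DenseNet121/FaceNet embeddings and an MLP matcher.

```python
# Simplified sketch under assumptions about the exact DWT + SVD combination: take the
# DWT LL sub-band of each modality, keep the leading singular values, and concatenate
# the two descriptors (feature-level fusion).
import numpy as np
import pywt

def dwt_svd_features(img: np.ndarray, k: int = 20) -> np.ndarray:
    ll, _ = pywt.dwt2(img, "haar")                 # LL approximation sub-band
    singular_values = np.linalg.svd(ll, compute_uv=False)
    return singular_values[:k]                     # compact, energy-ordered descriptor

face = np.random.rand(112, 112)                    # placeholder face image
iris = np.random.rand(64, 256)                     # placeholder normalised iris strip
fused = np.concatenate([dwt_svd_features(face), dwt_svd_features(iris)])
# 'fused' would then be stored as the person's template or fed to a classifier; in the
# second study the hand-crafted features are replaced by DenseNet121/FaceNet embeddings
# and an MLP decides recognized vs. unrecognized instead of a fixed distance threshold.
```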

    An Adaptive Threshold based FPGA Implementation for Object and Face detection

    Moving object and face detection are vital requirements for real-time security applications. In this paper, we propose an Adaptive Threshold based FPGA Implementation for Object and Face detection. The input images and reference images are preprocessed using a Gaussian filter to smoothen the high-frequency components. The 2D-DWT is applied to the Gaussian filter outputs, and only the LL bands are considered for further processing. A modified background model with an adaptive threshold is used to detect the object in the LL band of the reference image. The detected object is passed through a Gaussian filter to enhance its quality. The matching unit is designed to recognize faces from standard face database images. It is observed that the performance parameters, such as the percentage TSR and hardware utilization, are better than those of existing techniques.
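A software-level sketch of the detection pipeline described above; the adaptive threshold rule (mean plus a multiple of the standard deviation of the difference image) is an assumption, and the FPGA-specific implementation details are omitted.

```python
# Software sketch of the detection stage (the paper targets an FPGA; the adaptive
# threshold rule mean + k*std of the difference image is assumed here for illustration).
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def detect_object(frame: np.ndarray, reference: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Return a binary mask of LL-band pixels that differ significantly from the reference."""
    ll_frame, _ = pywt.dwt2(gaussian_filter(frame, sigma=1.0), "haar")      # smooth + LL band
    ll_ref, _ = pywt.dwt2(gaussian_filter(reference, sigma=1.0), "haar")
    diff = np.abs(ll_frame - ll_ref)
    threshold = diff.mean() + k * diff.std()        # adaptive threshold per frame (assumed rule)
    return diff > threshold

reference = np.random.rand(128, 128)                # background / reference frame
frame = reference.copy()
frame[40:60, 40:60] += 0.8                          # synthetic moving object
mask = detect_object(frame, reference)
```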