
    Face verification system architecture using smart cards

    A smart card based face verification system is proposed in which the feature extraction and decision making are performed on the card. Such an architecture has many privacy and security benefits. As smart cards are limited computational platforms, the face verification algorithms have to be adapted to limit the facial image representations. This minimises the information that needs to be sent to the card and lessens the computational load of the template matching. Studies performed on the BANCA and XM2VTS databases demonstrate that limiting these representations does not degrade the verification performance of the system and that the proposed architecture is a viable one.

    Frame Removal For Mushaf Al-Quran Using Irregular Binary Region

    Segmentation here is the process of removing the decorative frame that appears on each page of some releases of mushaf Al-Quran. A fault in the segmentation process affects the holiness of the Al-Quran. The difficulty of identifying the frame around the text areas, as well as noisy black stripes, has caused the segmentation process to be improperly carried out. In this paper, an algorithm for detecting the frame on an Al-Quran page without affecting its content is proposed. First, preprocessing was carried out using a binarisation method. This was followed by the process of detecting the frame on each page, in which the proposed algorithm calculates the percentage of black binary pixels vertically (by column) and horizontally (by row). The results, based on experiments on several Al-Quran pages from different Al-Quran styles, demonstrate the effectiveness of the proposed technique.
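    The column/row black-pixel-percentage test described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names and the 0.9 ratio threshold are assumptions.

```python
import numpy as np

def detect_frame_bands(binary, ratio_threshold=0.9):
    """Locate candidate frame rows/columns in a binarised page.

    `binary` is a 2-D array where 1 marks a black (ink) pixel.
    A row or column whose black-pixel fraction exceeds the threshold
    is assumed to belong to the frame, not the text.
    """
    col_ratio = binary.mean(axis=0)   # fraction of black pixels per column
    row_ratio = binary.mean(axis=1)   # fraction of black pixels per row
    frame_rows = np.where(row_ratio >= ratio_threshold)[0]
    frame_cols = np.where(col_ratio >= ratio_threshold)[0]
    return frame_rows, frame_cols

def remove_frame(binary, ratio_threshold=0.9):
    """Blank out the detected frame bands without touching the text body."""
    rows, cols = detect_frame_bands(binary, ratio_threshold)
    cleaned = binary.copy()
    cleaned[rows, :] = 0
    cleaned[:, cols] = 0
    return cleaned
```

    Text regions are safe because a column passing through ordinary script is mostly white, so its black-pixel fraction stays far below the threshold.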

    Development and Performance Evaluation of Hausdorff Distance Algorithm Based Facial Recognition System

    Securing access to information is of primary concern in many contexts, including personal, commercial, governmental and military applications. A computer-verifiable biometric such as the face provides an attractive means of securing access to information. Earlier algorithms for facial recognition systems, including Linear Discriminant Analysis (LDA), Principal Component Analysis (PCA) and Independent Component Analysis (ICA), have yielded unsatisfactory results, especially when confronted with unconstrained scenarios such as varying illumination, varying poses, expression and aging. This work presents a facial recognition authentication system using the Hausdorff distance algorithm to combat the highlighted problems. A system camera was employed for capturing images, information was stored in a MySQL database, and biometric templates were stored as binary large objects (BLOBs). The developed system's performance was evaluated using the False Reject Rate (FRR), the False Accept Rate (FAR), and the Receiver Operating Characteristic curve (ROC graph) as performance metrics. Tests were conducted at various threshold values. The FRR errors obtained are 20%, 7%, and 2% at a threshold value of 500 for the one-try, two-try and three-try configurations respectively. The system also presented a FAR error of 0% at a threshold value of 500 for all configurations. As the threshold value increases, FAR reduces while FRR increases.
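    As a hedged sketch of the matching step, the symmetric Hausdorff distance between two sets of facial feature points can be computed as below and compared against a threshold. The function names, point sets, and threshold interpretation are illustrative assumptions; the paper's exact distance variant and feature representation are not specified in the abstract.

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B) = max over a in A of the distance from a to its nearest b in B."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return dists.min(axis=1).max()

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

def verify(probe_pts, template_pts, threshold):
    """Accept the claimed identity when the Hausdorff distance is small enough."""
    return bool(hausdorff_distance(probe_pts, template_pts) <= threshold)
```

    Raising the threshold admits more genuine users (lower FRR) but also more impostors (higher FAR), which matches the trade-off the abstract reports.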

    Face Recognition with Multi-stage Matching Algorithms

    For every face recognition method, the primary goal is to achieve higher recognition accuracy at lower computational cost. However, as the gallery size increases, especially when one probe image corresponds to only one training image, face recognition becomes more and more challenging. First, a larger gallery requires more computation and memory. Meanwhile, the degradation of recognition accuracy caused by large gallery sizes becomes an even more significant problem to be solved. A coarse-grained parallel algorithm that evenly divides training images and probe images among multiple processors is proposed to deal with the large computational costs and huge memory usage of the feature-based Non-Graph Matching (NGM) method. First, each processor finishes its own training workload and stores the extracted feature information. Then, each processor simultaneously carries out the matching process for its own probe images by exchanging its stored feature information with the others. Finally, one processor collects the recognition results from the other processors. Due to the well-balanced workload, the speedup increases with the number of processors and efficiency is well maintained. Moreover, the memory usage on each processor also clearly decreases as the number of processors increases. In sum, the parallel algorithm simultaneously reduces both the running time and the memory usage per processor. To solve the recognition degradation problem, a set of multi-stage matching algorithms that determine the recognition result step by step is proposed. Each step picks a small proportion of the most similar candidates for the next step and removes the others. This picking and removing repeats until the number of remaining candidates is small enough to produce the final recognition result.
Three multi-stage matching algorithms, namely n-ary elimination, divide and conquer, and two-stage hybrid, are introduced into the matching process of traditional face recognition methods, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-Graph Matching (NGM). N-ary elimination accomplishes the multi-stage matching from a global perspective by ranking the similarities and picking the best candidates. Divide and conquer implements the multi-stage matching from a local perspective by dividing the candidates into groups and selecting the best one from each group. The two-stage hybrid uses a holistic method to choose a small number of candidates and then uses a feature-based method to find the final recognition result among them. From the experimental results, three conclusions can be drawn. First, with the multi-stage matching algorithms, higher recognition accuracy can be achieved. Second, the larger the gallery size, the greater the accuracy improvement brought by the multi-stage matching algorithms. Finally, the multi-stage matching algorithms incur little extra computational cost.
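    The n-ary elimination idea can be sketched as follows. The keep fraction and the reuse of a single fixed similarity table (rather than re-scoring survivors with a finer matcher at each stage, as the full method would) are simplifying assumptions for illustration.

```python
def multistage_match(similarities, keep_fraction=0.25, final_size=1):
    """Multi-stage matching by n-ary elimination (sketch).

    `similarities` maps each gallery candidate id to a similarity score
    (higher is better).  At each stage the remaining candidates are
    ranked and only the best `keep_fraction` survive, until at most
    `final_size` candidates remain.
    """
    candidates = sorted(similarities, key=similarities.get, reverse=True)
    while len(candidates) > final_size:
        # In the full method a finer (more expensive) matcher would
        # re-score the survivors here; this sketch reuses the same scores.
        keep = max(final_size, int(len(candidates) * keep_fraction))
        candidates = candidates[:keep]
    return candidates
```

    Because most candidates are eliminated by the cheap early stages, the expensive fine-grained comparisons only run on a short list, which is why the extra cost stays small even for large galleries.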

    Multimodal biometrics scheme based on discretized eigen feature fusion for identical twins identification

    The subject of twins multimodal biometrics identification (TMBI) has consistently been an interesting and valuable area of study, and, given its high dependability and acceptance, it contributes greatly to the domain of identifying twins from biometric traits. The variation of features resulting from multimodal biometrics feature extraction determines the distinctive characteristics possessed by a twin. However, many of these features are inessential: they enlarge the search space and make generalization difficult. The key challenge is therefore to single out the most salient features, those able to accurately recognize twins using multimodal biometrics. In twins identification, effective design of the methodology and the fusion process is important to success, as these processes manage and integrate vital information, including the highly selective biometric characteristics possessed by either twin. In the multimodal biometrics twins identification domain, extracting the best features from multiple traits of twins and the biometrics fusion process remain to be completely resolved. This research designs a new and more effective multimodal biometrics twins identification scheme by introducing Dis-Eigen feature-based fusion, which can generate a unified representation and distinctive features from numerous modalities of twins. First, the Aspect United Moment Invariant (AUMI) was used as a global feature in extracting the shape and style features of the twins' handwriting and fingerprints. Then, the feature-based fusion was examined in terms of its generalization. Next, to achieve better classification accuracy, the Dis-Eigen feature-based fusion algorithm was used. A total of eight distinctive classifiers were used in executing four different training and testing environment settings.
Accordingly, the most salient features from the Dis-Eigen feature-based fusion were trained and tested to determine the classification accuracy. The results show that twins identification improved as the intra-class similarity error decreased while, at the same time, the inter-class similarity error increased. Hence, with the application of diverse classifiers, the identification rate improved to more than 93%. It can be concluded from the experimental outcomes that the proposed method, evaluated using Receiver Operating Characteristic (ROC) analysis, considerably improves the twins handwriting-fingerprint identification process, with a 90.25% identification rate at a False Acceptance Rate (FAR) of 0.01%, 93.15% at a FAR of 0.5%, and 98.69% at a FAR of 1.00%. The proposed solution offers a promising alternative for twins identification applications.

    A study of eigenvector based face verification in static images

    As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past few years. There are at least two reasons for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. The problem of machine recognition of human faces continues to attract researchers from disciplines such as image processing, pattern recognition, neural networks, computer vision, computer graphics, and psychology. The strong need for user-friendly systems that can secure our assets and protect our privacy without losing our identity in a sea of numbers is obvious. Although very reliable methods of biometric personal identification exist, for example fingerprint analysis and retinal or iris scans, these methods depend on the cooperation of the participants, whereas a personal identification system based on analysis of frontal or profile images of the face is often effective without the participant’s cooperation or knowledge. The three categories of face recognition are face detection, face identification and face verification. Face detection means extracting the face from the total image of the person. In face identification, the input to the system is an unknown face, and the system reports back the determined identity from a database of known individuals. In face verification, the system needs to confirm or reject the claimed identity of the input face. My thesis is on face verification in static images; here a static image means an image that is not in motion. The eigenvector-based face verification algorithm gives results on face verification in static images based on eigenvectors and the neural network backpropagation algorithm. Eigenvectors are used to give geometrical information about the faces.
First, we take 10 images of each person at the same angle with different expressions and apply principal component analysis. Here we consider an image dimension of 48×48, from which we get 48 eigenvalues. Out of these 48 eigenvalues we consider only the eigenvectors corresponding to the 10 highest eigenvalues. These eigenvectors are given as input to the neural network for training, using the backpropagation algorithm. After training is complete, we present an image taken at a different angle for testing. We check the verification rate (the rate at which legitimate users are granted access) and the false acceptance rate (the rate at which impostors are granted access). The neural network takes considerable time to train. The proposed algorithm gives results on face verification in static images based on eigenvectors and a modified backpropagation algorithm, in which a momentum term is added to decrease the training time. With the modified backpropagation algorithm, the verification rate slightly increased and the false acceptance rate slightly decreased.
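    The eigenvector extraction step described above can be sketched with the standard "snapshot" PCA trick; this is an illustrative reconstruction (the network training is omitted), and k must stay below the number of training images for the smallest retained components to be meaningful.

```python
import numpy as np

def eigenface_features(images, k=10):
    """Flatten each face image, subtract the mean face, and keep the
    eigenvectors belonging to the k largest eigenvalues.  The projected
    coefficients are what would be fed to the backpropagation network.
    """
    X = np.stack([np.asarray(im, dtype=float).ravel() for im in images])  # (n, d)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Snapshot trick: the small n x n matrix Xc Xc^T shares its nonzero
    # eigenvalues with the huge d x d covariance matrix.
    vals, vecs = np.linalg.eigh(Xc @ Xc.T)
    order = np.argsort(vals)[::-1][:k]          # indices of the top-k eigenvalues
    eigenfaces = Xc.T @ vecs[:, order]          # map back to pixel space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    features = Xc @ eigenfaces                  # (n, k) coefficients per image
    return mean, eigenfaces, features
```

    A probe image would be centred with the same mean face and projected onto the same eigenfaces before being passed to the trained network.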

    Feature extraction and fusion techniques for patch-based face recognition

    Face recognition is one of the most addressed pattern recognition problems in recent studies due to its importance in security applications and human-computer interfaces. After decades of research on the face recognition problem, feasible technologies are becoming available. However, there is still room for improvement in challenging cases. As such, the face recognition problem still attracts researchers from the image processing, pattern recognition and computer vision disciplines. Although there exist other types of personal identification, such as fingerprint recognition and retinal/iris scans, all these methods require the collaboration of the subject. Face recognition differs from these systems in that facial information can be acquired without the collaboration or knowledge of the subject of interest. Feature extraction is a crucial issue in the face recognition problem, and the performance of face recognition systems depends on the reliability of the features extracted. Previously, several dimensionality reduction methods were proposed for feature extraction in the face recognition problem. In this thesis, in addition to the dimensionality reduction methods used previously for the face recognition problem, we have implemented recently proposed dimensionality reduction methods in a patch-based face recognition system. Patch-based face recognition is a recent method which analyzes face images locally, instead of using a global representation, in order to reduce the effects of illumination changes and partial occlusions. Feature fusion and decision fusion are two distinct ways to make use of the extracted local features. Apart from the well-known decision fusion methods, a novel approach for calculating the weights of the weighted sum rule is proposed in this thesis.
On two separate databases, we have conducted both feature fusion and decision fusion experiments and present recognition accuracies for different dimensionality reduction and normalization methods. Improvements in recognition accuracy are shown, and the superiority of decision fusion over feature fusion is advocated. Especially on the more challenging AR database, we obtain significantly better results using decision fusion compared to conventional methods and feature fusion methods.
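    A minimal sketch of decision fusion by the weighted sum rule follows. The thesis proposes a specific weighting scheme that the abstract does not detail, so the weights below are generic inputs rather than the proposed method.

```python
import numpy as np

def weighted_sum_fusion(patch_scores, weights):
    """Decision fusion by the weighted sum rule (sketch).

    Each local patch classifier outputs a score vector over the gallery
    identities; the fused decision is the argmax of the weighted sum of
    those score vectors.
    """
    patch_scores = np.asarray(patch_scores, dtype=float)  # (n_patches, n_ids)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                     # normalise to sum to 1
    fused = weights @ patch_scores                        # (n_ids,) fused scores
    return int(fused.argmax()), fused
```

    Feature fusion, by contrast, would concatenate the per-patch feature vectors before a single classifier, which is the alternative the experiments compare against.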