6 research outputs found
Iris recognition based on 2D Gabor filter
Iris recognition is a biometric technology based on physiological features of the human body. The objective of this research is to recognize and identify an iris among the many irises stored in a visual database. The study employed a left- and right-iris biometric framework for inclusion-decision processing by combining image processing with an artificial bee colony algorithm. The proposed approach was evaluated on a visual database of 280 colored iris pictures, which was divided into 28 clusters. Images were preprocessed, and texture features were extracted using Gabor filters to capture both local and global details within an iris. The technique compares the attributes of an online-acquired iris image with those in the visual database and generates either an accept or a reject message. The results reflect the accuracy and integrity of the output: careful attribute selection, together with the artificial bee colony and data clustering, reduced complexity and ultimately increased the identification rate to 100%. We demonstrate that the proposed method achieves state-of-the-art performance and outperforms existing iris recognition systems.
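The Gabor-filter feature extraction described above can be sketched as follows. This is a minimal numpy-only illustration, not the paper's implementation: the kernel parameters, the four orientations, and the mean/standard-deviation feature summary are all illustrative assumptions.

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lambd=8.0, gamma=0.5, psi=0.0):
    # 2D Gabor kernel: a sinusoidal carrier modulated by a Gaussian envelope.
    # theta sets the orientation, lambd the wavelength (illustrative defaults).
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lambd + psi)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Filter the iris image at several orientations and summarize each
    # response by its mean and standard deviation (a common texture feature).
    feats = []
    for t in thetas:
        k = gabor_kernel(theta=t)
        # frequency-domain (circular) convolution, same size as the image
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, img.shape)))
        feats += [resp.mean(), resp.std()]
    return np.array(feats)
```

Feature vectors like these could then be compared against the clustered database entries to produce the accept/reject decision; the artificial-bee-colony search over clusters is not shown here.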
The study of the applications of biometrics systems: a literature review
Biometric systems use unique personal identifiers to verify specific characteristics of an individual before granting access to a system. This unique identification makes duplication or alteration of the information almost impossible, which has encouraged the acceptance of biometric technology and enabled it to evolve rapidly. Alongside the security benefits they promote, biometric systems also have limitations that can cause problems. This paper reports on a review of articles conducted to identify the different types of biometric systems, their application domains, and the constraints and limitations of existing biometric systems.
Black hole algorithm along edge detector and circular hough transform based iris projection with biometric identification systems
Current iris identification techniques locate the circular boundaries between the pupil and the iris, but their accuracy remains an issue during image processing. Many approaches in the literature extract the iris region from an eye image using circular parameters, yet some parts of the problem remain unresolved under even slightly unconstrained conditions. In this study, we present a Black Hole Algorithm (BHA) combined with the Canny edge detector and the circular Hough transform as an optimization technique for identifying the circular parameters of iris segmentation. The iris boundary is discovered using the suggested segmentation approach together with a computational model of the pixel values; the BHA searches for the center and radius of the iris and pupil. The system was tested in MATLAB on the CASIA-V3 database, and the segmented images exhibit 98.71% accuracy. The segmentation-based BHA is effective at identifying the iris for future access-control applications. The integration of the BHA with the Hough transform and Canny edge detector is the main mechanism by which iris segmentation is accomplished; this technique improves the accuracy and effectiveness of iris segmentation, with potential uses in image analysis and biometric identification.
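The circular Hough transform at the core of this segmentation step votes in a (center, radius) accumulator for every edge point produced by a detector such as Canny. A minimal numpy sketch, assuming edge points are already extracted (the BHA metaheuristic that accelerates the parameter search is not shown):

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    # Vote in a (cy, cx, radius) accumulator: each edge point could lie on a
    # circle of any candidate radius, so it votes for all centers at that
    # distance. The accumulator peak gives the pupil/iris circle parameters.
    H, W = shape
    acc = np.zeros((H, W, len(radii)), dtype=np.int32)
    angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for y, x in edge_points:
        for ri, r in enumerate(radii):
            cy = np.rint(y - r * np.sin(angles)).astype(int)
            cx = np.rint(x - r * np.cos(angles)).astype(int)
            ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
            np.add.at(acc, (cy[ok], cx[ok], ri), 1)
    iy, ix, ir = np.unravel_index(acc.argmax(), acc.shape)
    return iy, ix, radii[ir]
```

In a full system this would be run twice, once over a small radius range for the pupil and once over a larger range for the iris boundary.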
Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks
Using biomedical signals to estimate human affective states is a central issue in affective computing (AC). With in-depth research on affective signals, the combination of multi-modal cognition and physiological indicators, the establishment of dynamic and complete databases, and the addition of high-tech innovative products have become recent trends in AC. This research develops a deep gradient convolutional neural network (DGCNN) for classifying affect from eye-tracking signals. General signal-processing tools and pre-processing methods were applied first, such as the Kalman filter, Hamming windowing, the short-time Fourier transform (STFT), and the fast Fourier transform (FFT). Next, the eye-movement and tracking signals were converted into images, and a convolutional neural network training structure was applied; the experimental dataset was acquired with an eye-tracking device by presenting four affective stimuli (nervous, calm, happy, and sad) to 16 participants. Finally, the performance of the DGCNN was compared with a decision tree (DT), a Bayesian Gaussian model (BGM), and k-nearest neighbors (KNN) using the true positive rate (TPR) and false positive rate (FPR) as indices. Customized mini-batch size, loss, learning rate, and gradient definitions for the training structure of the deep neural network were also deployed. The predictive classification matrix showed the effectiveness of the proposed method for eye-movement and tracking signals, which achieves more than 87.2% accuracy. This research provides a feasible way to achieve more natural human-computer interaction through eye-movement and tracking signals and has potential application in the affective product design process.
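The Hamming-windowed STFT step that turns a 1D eye-tracking signal into an image for the CNN can be sketched as below. This is an illustrative numpy-only reconstruction; the window length, hop size, and log-normalization are assumptions, not the paper's exact settings.

```python
import numpy as np

def stft_image(signal, win=64, hop=16):
    # Frame the signal, apply a Hamming window to each frame, and take FFT
    # magnitudes -- a spectrogram the CNN can consume as a 2D image.
    w = np.hamming(win)
    frames = [signal[i:i + win] * w
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    # Log-scale and normalise to [0, 1] so the result behaves like an image.
    img = np.log1p(spec)
    return img / img.max()
```

Each row of the returned array is one time frame and each column one frequency bin; stacking such images per trial would produce the CNN training inputs.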
Biometric iris recognition using radial basis function neural network
Iris recognition is a consistent and efficient method of biometric identification because the iris is rich in texture information. Many features used in past work are handcrafted. The proposed method is based on a feed-forward architecture and uses the k-means clustering algorithm for iris-pattern classification. In this paper, iris segmentation is performed using the circular Hough transform, which finds the iris boundaries in the eye and isolates the iris region free of eyelashes and other obstructions. Daugman's rubber-sheet model is then used to transform the resulting iris portion into polar coordinates during normalization, and a unique iris code is generated by a log-Gabor filter to extract the features. Classification is achieved using two neural network structures: a feed-forward neural network and a radial basis function neural network. The experiments were conducted on the Chinese Academy of Sciences Institute of Automation (CASIA) iris database. The proposed system decreases computation time and database size and increases recognition accuracy compared with existing algorithms.
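The normalization step named above, Daugman's rubber-sheet model, remaps the annulus between the detected pupil and iris circles onto a fixed-size polar rectangle so that irises of different sizes and pupil dilations become comparable. A minimal sketch, assuming both boundary circles are already known (nearest-neighbor sampling and the output resolution are illustrative choices):

```python
import numpy as np

def rubber_sheet(img, pupil, iris, radial=32, angular=128):
    # Daugman's rubber-sheet model: for each angle theta, sample the image
    # along the segment from the pupil boundary to the iris boundary, giving
    # a size-invariant (radial x angular) polar representation.
    (py, px, pr), (iy, ix, ir) = pupil, iris
    out = np.zeros((radial, angular))
    for j, theta in enumerate(np.linspace(0, 2 * np.pi, angular, endpoint=False)):
        # boundary points at this angle on the pupil and iris circles
        y0, x0 = py + pr * np.sin(theta), px + pr * np.cos(theta)
        y1, x1 = iy + ir * np.sin(theta), ix + ir * np.cos(theta)
        for i, r in enumerate(np.linspace(0.0, 1.0, radial)):
            y = int(round((1 - r) * y0 + r * y1))
            x = int(round((1 - r) * x0 + r * x1))
            out[i, j] = img[np.clip(y, 0, img.shape[0] - 1),
                            np.clip(x, 0, img.shape[1] - 1)]
    return out
```

The log-Gabor filtering and binarization that turn this polar strip into the iris code would operate on the returned rectangle.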
Recommended from our members
An Investigation into the Performance of Ethnicity Verification Between Humans and Machine Learning Algorithms
There has been a significant increase in interest in the task of classifying demographic profiles such as race and ethnicity. Ethnicity is a significant human characteristic, and applying facial image data to the discrimination of ethnicity is integral to face-related biometric systems. Given the diversity of applications that use ethnicity-specific information, such as face recognition and iris recognition, and the availability of image datasets for the more commonly studied populations (Caucasian, African-American, Asian, and South-Asian Indian), a gap has been identified for a system that analyses the full face and its individual feature components (eyes, nose, and mouth) for the Pakistani ethnic group. An efficient system is proposed for the verification of Pakistani ethnicity that incorporates a two-tier (computer versus human) approach. First, hand-crafted features were used to ascertain how descriptive frontal and profile images are of Pakistani ethnicity: a total of 26 facial landmarks were selected (16 frontal and 10 profile), two models were incorporated for redundant-information removal, and a linear classifier was applied to the binary task. The experimental results concluded that the facial profile image of a Pakistani face is distinct from other ethnicities. However, this methodology had limitations, for example low accuracy, the laborious nature of manual facial-landmark annotation, and the small facial image dataset. To make the system more accurate and robust, deep learning models were employed for ethnicity classification. Various state-of-the-art deep models were trained on a range of facial image conditions, i.e. full-face and partial-face images, plus standalone feature components such as the nose and mouth. Since ethnicity is pertinent to the research, a novel facial image database entitled the Pakistani Face Database (PFDB) was created using a criterion-specific selection process to ensure confidence in each assigned class membership, i.e. Pakistani and Non-Pakistani. A comparative analysis of six deep learning models was carried out on augmented image datasets, and it demonstrates that deep learning yields better accuracy than low-level features. The human phase of the ethnicity-classification framework tested the discrimination ability of novice Pakistani and Non-Pakistani participants using a computerised ethnicity task. The results suggest that humans are better at discriminating between Pakistani and Non-Pakistani full-face images than between individual face-feature components (eyes, nose, mouth), struggling most with the nose when making judgements of ethnicity. To understand the effects of display conditions on ethnicity-discrimination accuracy, two conditions were tested: (i) a two-alternative forced choice (2-AFC) procedure and (ii) a single-image procedure. The results concluded that participants perform significantly better in trials where the target (Pakistani) image is shown alongside a distractor (Non-Pakistani) image. To conclude the proposed framework, directions for future study are suggested to advance the current understanding of image-based ethnicity verification.