
    Accurate Pupil Features Extraction Based on New Projection Function

    Accurate pupil feature extraction is a key step for iris recognition. In this paper, we propose a new algorithm to extract pupil features precisely from gray-level iris images. The angular integral projection function (AIPF) is developed as a general function to perform integral projection along angular directions; both the well-known vertical and horizontal integral projection functions can be viewed as special cases of AIPF. An alternative implementation of AIPF based on the localized Radon transform is also presented. First, the approximate position of the pupil center is detected. Then, a set of radial boundary points of the pupil is detected using AIPF. Finally, a circle is fitted to the detected boundary points. Experimental results on 2655 iris images from CASIA V3.0 show high accuracy with rapid execution times.
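    To illustrate the idea (this is a minimal sketch, not the authors' code), an angular integral projection can be computed by averaging gray levels along a narrow strip cast from the pupil-center estimate at a given angle; the largest jump in the resulting profile then marks a radial boundary point. Strip width, sampling step and the gradient criterion below are illustrative assumptions.

```python
# Hypothetical sketch of an angular integral projection (AIPF-style), not the paper's code.
import numpy as np

def angular_integral_projection(img, cx, cy, theta, length, width=5):
    """Average gray levels over a strip `width` pixels wide, centred on a ray
    that starts at (cx, cy) and points in direction `theta` (radians).
    theta = 0 and theta = pi/2 reduce to horizontal/vertical projections."""
    c, s = np.cos(theta), np.sin(theta)
    profile = np.empty(length)
    for r in range(length):
        # Sample `width` points perpendicular to the ray at radius r.
        offsets = np.arange(width) - width // 2
        xs = np.clip((cx + r * c - offsets * s).astype(int), 0, img.shape[1] - 1)
        ys = np.clip((cy + r * s + offsets * c).astype(int), 0, img.shape[0] - 1)
        profile[r] = img[ys, xs].mean()
    return profile

def radial_boundary_point(profile):
    # The dark pupil / brighter iris transition shows up as the largest jump.
    return int(np.argmax(np.abs(np.diff(profile))))
```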

    Gender and gaze gesture recognition for human-computer interaction

    © 2016 Elsevier Inc. The identification of visual cues in facial images has been widely explored in the broad area of computer vision. However, theoretical analyses are often not transformed into widespread assistive Human-Computer Interaction (HCI) systems, due to factors such as inconsistent robustness, low efficiency, large computational expense or strong dependence on complex hardware. We present a novel gender recognition algorithm, a modular eye centre localisation approach and a gaze gesture recognition method, aiming to increase the intelligence, adaptability and interactivity of HCI systems by combining demographic data (gender) and behavioural data (gaze) to enable the development of a range of real-world assistive-technology applications. The gender recognition algorithm utilises Fisher Vectors as facial features, encoded from low-level local features in facial images. We experimented with four types of low-level features: greyscale values, Local Binary Patterns (LBP), LBP histograms and the Scale Invariant Feature Transform (SIFT). The corresponding Fisher Vectors were classified using a linear Support Vector Machine. The algorithm has been tested on the FERET, LFW and FRGCv2 databases, yielding 97.7%, 92.5% and 96.7% accuracy respectively. The eye centre localisation algorithm follows a modular, coarse-to-fine, global-to-regional scheme utilising isophote and gradient features. A Selective Oriented Gradient filter has been specifically designed to detect and remove strong gradients from eyebrows, eye corners and self-shadows (which sabotage most eye centre localisation methods). The trajectories of the eye centres are then defined as gaze gestures for active HCI. The eye centre localisation algorithm has been compared with 10 other state-of-the-art algorithms with similar functionality and outperformed them in accuracy while maintaining excellent real-time performance. The above methods have been employed in a data recovery system that supports the implementation of advanced assistive-technology tools. The high accuracy, reliability and real-time performance achieved for attention monitoring, gaze gesture control and recovery of demographic data can enable the advanced human-robot interaction needed for systems that provide assistance with everyday actions, thereby improving the quality of life of the elderly and/or disabled.
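    As a rough illustration of the Fisher Vector pipeline described above (not the paper's implementation), the sketch below encodes a set of local descriptors against a diagonal-covariance Gaussian mixture and feeds the normalised vectors to a linear SVM; the component count and the power/L2 normalisation choices are assumptions.

```python
# Minimal Fisher Vector encoder, sketched with scikit-learn; details are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def fisher_vector(descriptors, gmm):
    """Encode local descriptors (N x D) as a Fisher Vector of length 2*K*D."""
    X = np.atleast_2d(descriptors)
    N = X.shape[0]
    gamma = gmm.predict_proba(X)                      # (N, K) soft assignments
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    diff = (X[:, None, :] - mu) / np.sqrt(var)        # (N, K, D), diagonal covariances
    d_mu = (gamma[..., None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
    d_var = (gamma[..., None] * (diff**2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
    fv = np.hstack([d_mu.ravel(), d_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalisation

# Usage sketch: fit the GMM on pooled low-level features (e.g. SIFT or LBP),
# encode each face image, then train a linear SVM on the Fisher Vectors.
# gmm = GaussianMixture(n_components=64, covariance_type='diag').fit(all_descriptors)
# clf = LinearSVC().fit([fisher_vector(d, gmm) for d in per_image_descriptors], labels)
```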

    Pupil Center Detection Approaches: A comparative analysis

    In the last decade, the development of technologies and tools for eye tracking has been a constantly growing area. Detecting the center of the pupil using image processing techniques is an essential step in this process. A large number of techniques have been proposed for pupil center detection, using both traditional image processing and machine learning-based methods. Despite the large number of methods proposed, no comparative work on their performance was found that uses the same images and performance metrics. In this work, we compare four of the most frequently cited traditional methods for pupil center detection in terms of accuracy, robustness, and computational cost. These methods are based on the circular Hough transform, ellipse fitting, Daugman's integro-differential operator and the radial symmetry transform. The comparative analysis was performed with 800 infrared images from the CASIA-IrisV3 and CASIA-IrisV4 databases containing various types of disturbances. The best performance was obtained by the method based on the radial symmetry transform, with an accuracy and average robustness higher than 94%. The shortest processing time, 0.06 s, was obtained with the ellipse fitting method. Comment: 15 pages, 9 figures, submitted to the journal "Computación y Sistemas".
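    For reference, the circular-Hough approach, one of the four methods compared, can be sketched in a few lines with OpenCV; the parameter values below are illustrative assumptions, not those used in the paper.

```python
# Pupil-center estimate via OpenCV's circular Hough transform (illustrative sketch).
import cv2

def pupil_center_hough(gray):
    """Return (x, y, r) of the strongest circle found in an infrared eye image,
    or None when no circle is detected."""
    blurred = cv2.medianBlur(gray, 5)                 # suppress eyelash/specular noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray.shape[0] // 2,
                               param1=100, param2=30,
                               minRadius=15, maxRadius=80)
    if circles is None:
        return None
    x, y, r = circles[0, 0]                           # strongest accumulator peak
    return int(x), int(y), int(r)

# img = cv2.imread('eye.png', cv2.IMREAD_GRAYSCALE)
# print(pupil_center_hough(img))
```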

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, and signature verification, together with miscellaneous topics covering management policies for biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems and behavioural characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems, and at the same time covers state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Human-Centric Machine Vision

    Recently, algorithms for processing visual information have greatly evolved, providing efficient and effective solutions for coping with the variability and complexity of real-world environments. These achievements have led to the development of Machine Vision systems that go beyond typical industrial applications, where environments are controlled and tasks are very specific, towards innovative solutions that address the everyday needs of people. Human-Centric Machine Vision can help solve problems raised by the needs of our society, e.g. security and safety, health care, medical imaging, and human-machine interfaces. Such applications must handle changing, unpredictable and complex situations, and must take into account the presence of humans.

    Palmprint Gender Classification Using Deep Learning Methods

    Gender identification is an important technique that can improve the performance of authentication systems by reducing the search space and speeding up the matching process. Several biometric traits have been used to ascertain human gender. Among them, the human palmprint possesses several discriminating features, such as principal lines, wrinkles, ridges, and minutiae, that offer cues for gender identification. The goal of this work is to develop novel deep-learning techniques to determine gender from palmprint images. The PolyU and CASIA palmprint databases, with 90,000 and 5,502 images respectively, were used for training and testing in this research. After ROI extraction and data augmentation, various convolutional and deep learning-based classification approaches were empirically designed, optimized, and tested. Gender classification accuracies as high as 94.87% on the PolyU palmprint database and 90.70% on the CASIA palmprint database were achieved. Optimal performance was obtained by combining two different pre-trained and fine-tuned deep CNNs (VGGNet and DenseNet) through score-level average fusion. In addition, Gradient-weighted Class Activation Mapping (Grad-CAM) was implemented to ascertain which regions of the palmprint are most discriminative for gender classification.
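    A minimal sketch of the score-level average fusion described above, assuming PyTorch/torchvision backbones (vgg16 and densenet121 stand in for the paper's fine-tuned VGGNet and DenseNet; input sizes and weights are assumptions):

```python
# Score-level average fusion of two CNNs for binary (gender) classification.
import torch
import torch.nn.functional as F
from torchvision import models

def build_binary_classifier(backbone):
    """Replace the final layer so the network outputs two class scores."""
    if isinstance(backbone, models.VGG):
        backbone.classifier[-1] = torch.nn.Linear(4096, 2)
    else:  # DenseNet
        backbone.classifier = torch.nn.Linear(backbone.classifier.in_features, 2)
    return backbone

vgg = build_binary_classifier(models.vgg16(weights='IMAGENET1K_V1'))
densenet = build_binary_classifier(models.densenet121(weights='IMAGENET1K_V1'))

@torch.no_grad()
def fused_prediction(roi_batch):
    """Average the two networks' softmax scores, then take the arg-max label."""
    p1 = F.softmax(vgg(roi_batch), dim=1)
    p2 = F.softmax(densenet(roi_batch), dim=1)
    return ((p1 + p2) / 2).argmax(dim=1)              # 0/1 gender label per image

# batch = torch.randn(4, 3, 224, 224)   # pre-processed palmprint ROIs
# print(fused_prediction(batch))
```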

    Eye Detection using Helmholtz Principle

    Eye detection is used in many applications, such as pattern recognition, biometrics, surveillance systems and many other systems. In this paper, a new method is presented to detect and extract the overall shape of one eye from an image, based on two principles: Helmholtz and Gestalt. According to the Helmholtz principle of perception, any observed geometric shape is perceptually "meaningful" if its expected number of occurrences is very small in an image with a random distribution. Complementing this, the Gestalt principle states that humans perceive things either by grouping their similar elements or by recognizing patterns; in general, humans perceive things through a general description of those things. This paper utilizes these two principles to recognize and extract the eye region from an image. The Java programming language and the OpenCV image-processing library were used together for this purpose. The proposed method obtained good results: a detection rate of 88.89%, with an average execution time of about 0.23 seconds.
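    The Helmholtz principle is usually operationalised as an a-contrario test: a candidate shape is declared meaningful when its expected number of chance occurrences (the number of false alarms, NFA) is small. The sketch below is a generic illustration of that test, not the paper's Java/OpenCV pipeline, and all numbers in the example are hypothetical.

```python
# Generic a-contrario meaningfulness test in the spirit of the Helmholtz principle.
from math import comb

def binomial_tail(n, k, p):
    """P(at least k successes out of n Bernoulli(p) trials)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def is_meaningful(n_tests, n, k, p, epsilon=1.0):
    """A candidate with k agreeing observations among n is 'meaningful' if its
    expected number of occurrences in a random image (NFA) is below epsilon,
    i.e. the configuration is too regular to appear by chance."""
    nfa = n_tests * binomial_tail(n, k, p)
    return nfa < epsilon

# Hypothetical example: 10**5 candidate shapes tested; 20 of 25 boundary pixels
# agree with the expected eye contour direction, each agreeing by chance with p = 1/8.
print(is_meaningful(n_tests=10**5, n=25, k=20, p=1/8))   # True: very unlikely by chance
```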