8 research outputs found

    A novel method for low-constrained iris boundary localization

    Iris recognition systems are strongly dependent on their segmentation processes, which have traditionally assumed rigid experimental constraints to achieve good performance, but are now moving towards less constrained environments. This work presents a novel method for iris segmentation that covers the localization of the pupillary and limbic iris boundaries. The method consists of an energy minimization procedure posed as a multilabel one-directional graph, followed by a model fitting process and the use of physiological priors. Accurate segmentations are achieved even in the presence of clutter, lenses, glasses, motion blur, and variable illumination. The contributions of this paper are a fast and reliable method for the accurate localization of the iris boundaries in low-constrained conditions, and a novel database for iris segmentation incorporating challenging iris images, which has been publicly released to the research community. The proposed method has been evaluated over three different databases, showing higher performance in comparison to traditional techniques.
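The "multilabel one-directional graph" formulation suggests a dynamic-programming search over radius labels in a polar unwrapping of the eye region. The paper's actual energy terms are not given here; the following is a minimal illustrative sketch under that assumption, with all function and parameter names invented:

```python
import numpy as np

def localize_boundary(polar_edges, smooth_penalty=1.0):
    """Dynamic-programming boundary search on a polar (angle x radius)
    edge-strength map: pick one radius label per angle column so that
    total edge strength is maximized (energy minimized) while penalizing
    radius jumps between adjacent angles -- a multilabel one-directional
    graph solved column by column."""
    n_angles, n_radii = polar_edges.shape
    cost = -polar_edges.astype(float)          # strong edges -> low cost
    radii = np.arange(n_radii)
    # pairwise smoothness term between consecutive angle columns
    jump = smooth_penalty * np.abs(radii[:, None] - radii[None, :])
    acc = cost[0].copy()
    back = np.zeros((n_angles, n_radii), dtype=int)
    for a in range(1, n_angles):
        total = acc[:, None] + jump            # [prev_radius, cur_radius]
        back[a] = np.argmin(total, axis=0)     # best predecessor per label
        acc = total[back[a], radii] + cost[a]
    # backtrack the minimum-energy path
    path = np.zeros(n_angles, dtype=int)
    path[-1] = int(np.argmin(acc))
    for a in range(n_angles - 1, 0, -1):
        path[a - 1] = back[a][path[a]]
    return path
```

Each angle column acts as one layer of the one-directional graph, each radius as a label; the smoothness penalty plays the role of the pairwise energy term.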

    Robust Face Localization Using Dynamic Time Warping Algorithm


    Eye detection in video images with complex Background

    Detection of the human eye is a significant but difficult task. This paper presents an efficient eye detection approach for video images with complex backgrounds. The proposed method has two main phases for finding the eye pair: locating the face and eye region, and finding the eyes. In the first phase, a novel approach to quickly locating the face and eye region is developed. In the second phase, knowledge-directed eye finding is introduced in detail. Both phases were developed using Matlab 7.5. The proposed method is robust against moderate rotations, cluttered backgrounds, partial face occlusion and glasses. We demonstrate the efficiency of the proposed method for detecting eyes against complex backgrounds in both indoor and outdoor environments.
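As a rough illustration of the two-phase idea (first bound the face/eye region, then search for the eyes inside it), here is a toy sketch on a grayscale array. It is not the paper's method; the thresholding, projections and names are all placeholder assumptions:

```python
import numpy as np

def locate_face_region(gray):
    """Phase 1 (illustrative): bound the face by thresholding against the
    mean intensity and taking the extent of foreground rows/columns."""
    mask = gray < gray.mean()                      # crude dark-foreground mask
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[0], rows[-1], cols[0], cols[-1]    # top, bottom, left, right

def find_eyes(gray):
    """Phase 2 (illustrative): inside the upper half of the face box, take
    the darkest column on each side as the left/right eye x-position."""
    t, b, l, r = locate_face_region(gray)
    upper = gray[t:t + (b - t) // 2, l:r + 1]
    col_darkness = upper.min(axis=0)               # darkest value per column
    mid = len(col_darkness) // 2
    left_eye = l + int(np.argmin(col_darkness[:mid]))
    right_eye = l + mid + int(np.argmin(col_darkness[mid:]))
    return left_eye, right_eye
```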

    Innovative local texture descriptors with application to eye detection

    Local Binary Patterns (LBP), which is one of the well-known texture descriptors, has broad applications in pattern recognition and computer vision. The attractive properties of LBP are its tolerance to illumination variations and its computational simplicity. However, LBP only compares a pixel with those in its own neighborhood and encodes little information about the relationship of the local texture with the features. This dissertation introduces a new Feature Local Binary Patterns (FLBP) texture descriptor that can compare a pixel with those in its own neighborhood as well as in other neighborhoods and encodes the information of both local texture and features. The features encoded in FLBP are broadly defined, such as edges, Gabor wavelet features, and color features. Specifically, a binary image is first derived by extracting feature pixels from a given image, and then a distance vector field is obtained by computing the distance vector between each pixel and its nearest feature pixel defined in the binary image. Based on the distance vector field and the FLBP parameters, the FLBP representation of the given image is derived. The feasibility of the proposed FLBP is demonstrated on eye detection using the BioID and the FERET databases. Experimental results show that the FLBP method significantly improves upon the LBP method in terms of both the eye detection rate and the eye center localization accuracy. As LBP is sensitive to noise especially in near-uniform image regions, Local Ternary Patterns (LTP) was proposed to address this problem by extending LBP to three-valued codes. However, further research reveals that both LTP and LBP achieve similar results for face and facial expression recognition, while LTP has a higher computational cost than LBP. To improve upon LTP, this dissertation introduces another new local texture descriptor: Local Quaternary Patterns (LQP) and its extension, Feature Local Quaternary Patterns (FLQP). 
LQP encodes four relationships of local texture and therefore captures more local texture information than LBP and LTP. FLQP, which encodes both local and feature information, is expected to perform even better than LQP for texture description and pattern analysis. LQP and FLQP are applied to eye detection on the BioID database. Experimental results show that both FLQP and LQP achieve better eye detection performance than FLTP, LTP, FLBP and LBP. The FLQP method achieves the highest eye detection rate.
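The baseline LBP operator that FLBP, LTP and LQP all build on can be sketched directly. This is the standard 3x3, 8-neighbour formulation only, not the FLBP/FLQP extensions described above:

```python
import numpy as np

def lbp_8(gray):
    """Basic 3x3 Local Binary Patterns: each interior pixel is compared
    with its 8 neighbours; a neighbour >= centre contributes a set bit,
    yielding an 8-bit code per pixel."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]                               # centre pixels
    # neighbour offsets in clockwise order, paired with bit positions
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

FLBP, per the abstract, additionally compares each pixel against neighbourhoods around its nearest feature pixel (from the distance vector field); LTP/LQP replace the binary comparison with three- and four-valued codes.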

    Sonar sensor interpretation for ectogeneous robots

    We have developed four generations of sonar scanning systems to automatically interpret the surrounding environment. The first two are stationary 3D air-coupled ultrasound scanning systems, and the last two are packaged as sensor heads for mobile robots. Template matching analysis is applied to distinguish simple indoor objects: the tested echo is compared with reference echoes, important features are extracted and drawn in a phase plane, and the computer then analyzes them and automatically selects the best match for the tested echo. For cylindrical objects outdoors, an algorithm is presented to distinguish trees from smooth circular poles based on analysis of backscattered sonar echoes. The echo data is acquired by a mobile robot with a 3D air-coupled ultrasound scanning system packaged as the sensor head. Four major steps are conducted. The final Average Asymmetry vs. Average Squared Euclidean Distance phase plane is segmented to tell a tree from a pole by the location of the data points for the objects of interest. For extended objects outdoors, we successfully distinguished seven objects on campus by taking a sequence of scans along each object, obtaining the corresponding backscatter vs. scan angle plots, performing deformable template matching, extracting feature vectors and then categorizing them in a hyper-plane. We have also successfully taught the robot to distinguish three pairs of outdoor objects. Multiple scans are conducted at different distances. A two-step feature extraction is conducted based on the amplitude vs. scan angle plots. The final Slope1 vs. Slope2 phase plane not only separates the rectangular objects from the corresponding cylindrical
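The echo-vs-reference comparison at the heart of the template matching step can be illustrated with a simple normalized-correlation classifier. The real system's features (asymmetry, squared Euclidean distance, slopes) are not reproduced here, and the labels and names below are placeholders:

```python
import numpy as np

def classify_echo(test_echo, references):
    """Illustrative template matching: score a tested echo against each
    labelled reference echo with normalized correlation and return the
    label of the best match along with all scores."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        x = x - x.mean()                 # remove DC offset
        n = np.linalg.norm(x)
        return x / n if n else x
    t = norm(test_echo)
    scores = {label: float(norm(ref) @ t) for label, ref in references.items()}
    return max(scores, key=scores.get), scores
```

Normalizing out mean and amplitude makes the score insensitive to gain and offset, so it measures only echo shape, which is the property the reference templates encode.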

    A framework for context-aware driver status assessment systems

    The automotive industry is actively supporting research and innovation to meet manufacturers' requirements related to safety, performance and the environment. The Green ITS project is among the efforts in that regard. Safety is a major customer and manufacturer concern, so much effort has been directed toward developing cutting-edge technologies able to assess driver status in terms of alertness and suitability to drive. To that end, this thesis aims to create a framework for a context-aware driver status assessment system. Context-aware means that the machine uses background information about the driver and environmental conditions to better ascertain and understand driver status. The system also relies on multiple sensors, mainly video and audio. Using context and multi-sensor data, we perform multi-modal analysis and data fusion in order to infer as much knowledge as possible about the driver. Finally, the project is to be continued by other students, so the system should be modular and well-documented. With this in mind, a driving simulator integrating multiple sensors was built. This simulator is a starting point for experimentation related to driver status assessment, and a prototype of real-time driver status assessment software is integrated into the platform. To make the system context-aware, we designed a driver identification module based on audio-visual data fusion: at the beginning of a driving session, the user is identified and background knowledge about them is loaded to better understand and analyze their behavior. A driver status assessment system was then constructed from two modules. The first is for driver fatigue detection, based on an infrared camera; fatigue is inferred via percentage of eye closure, which is the best indicator of fatigue for vision systems. The second is a driver distraction recognition system, based on a Kinect sensor.
Using body, head, and facial expressions, a fusion strategy is employed to deduce the type of distraction a driver is subject to. Of course, fatigue and distraction are only a fraction of all possible driver states, but these two aspects have been studied here primarily because of their dramatic impact on traffic safety. Through experimental results, we show that our system is efficient for driver identification and driver inattention detection tasks. Nevertheless, it is also very modular and could be further complemented by driver status analysis, context or additional sensor acquisition.
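The percentage-of-eye-closure measure used by the fatigue module (commonly called PERCLOS) amounts to counting closed-eye frames over a time window. A minimal sketch, assuming a per-frame eye-openness score in [0, 1] and a placeholder closure threshold:

```python
import numpy as np

def perclos(eye_openness, closed_threshold=0.2):
    """PERCLOS (illustrative): fraction of frames in a window whose
    measured eye openness falls below a 'closed' threshold. Sustained
    high values are a standard vision-based fatigue indicator."""
    eye_openness = np.asarray(eye_openness, dtype=float)
    return float((eye_openness < closed_threshold).mean())
```

In a real pipeline the openness score would come from the infrared camera's eyelid measurements, and the window length and threshold would be calibrated per setup.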

    Iris detection using intensity and edge information

    In this paper we propose a new algorithm to detect the irises of both eyes from a face image. The algorithm first detects the face region in the image and then extracts intensity valleys from the face region. Next, the algorithm extracts iris candidates from the valleys using the feature template of Lin and Wu (IEEE Trans. Image Process. 8 (6) (1999) 834) and the separability filter of Fukui and Yamaguchi (Trans. IEICE Japan J80-D-II (8) (1997) 2170). Finally, using the costs for pairs of iris candidates proposed in this paper, the algorithm selects the pair of iris candidates corresponding to the irises. The costs are computed using the Hough transform, the separability filter and template matching. In the experiments, the iris detection rate of the proposed algorithm was 95.3% for 150 face images of 15 persons without spectacles in the University of Bern database and 96.8% for 63 images of 21 persons without spectacles in the AR database. © 2002 Pattern Recognition Society
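The Hough-transform component of the candidate cost can be illustrated with a minimal circular Hough accumulator for a fixed radius. This is a generic sketch, not the paper's implementation; the point format and sampling density are assumptions:

```python
import numpy as np

def hough_circle_votes(edge_points, radius, shape):
    """Minimal circular Hough accumulator (illustrative): every edge
    point (y, x) votes for all centres lying at distance `radius` from
    it; the accumulator peak is the best circle centre for that radius."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes in place
    return acc
```

A full iris detector would sweep the radius as well and combine the Hough score with the separability-filter and template-matching terms when ranking candidate pairs.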