
    Noisy iris segmentation with boundary regularization and reflections removal

    The paper presents an algorithm for segmenting the iris in noisy images, with boundary regularization and removal of any reflections present. The method extracts the iris pattern from eye images acquired at visible wavelengths in an uncontrolled environment, where reflections and occlusions may occur and subjects may be on the move and at variable distances. Segmentation proceeds in three main steps. The first step locates the centers of the pupil and the iris in the input image. Two image strips containing the iris boundaries are then extracted and linearized. The last step locates the iris boundary points in the strips and regularizes them by excluding outliers and interpolating missing points. The resulting curves are mapped back into the original image space to produce a first segmentation output. Occlusions such as reflections and eyelashes are then identified and removed from the final segmented area. Results indicate that the presented approach is effective and well suited to iris acquisition in noisy environments. The proposed algorithm ranked seventh in the international Noisy Iris Challenge Evaluation (NICE.I).
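
    As a rough illustration of the regularization step, the sketch below excludes outlier boundary points with a median-absolute-deviation test and fills the gaps by periodic interpolation. It is a minimal numpy version; the per-angle radius representation and the thresholds are assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def regularize_boundary(radii, mad_k=3.0):
    """Regularize per-angle iris boundary radii: drop outliers via a
    median-absolute-deviation test, then fill gaps by periodic linear
    interpolation (illustrative thresholds, not the paper's)."""
    radii = np.asarray(radii, dtype=float)
    angles = np.arange(radii.size)
    med = np.nanmedian(radii)
    mad = np.nanmedian(np.abs(radii - med)) + 1e-9
    ok = np.abs(radii - med) < mad_k * 1.4826 * mad  # NaNs compare False
    # period= makes np.interp wrap around the circular angle axis
    return np.interp(angles, angles[ok], radii[ok], period=radii.size)

# Toy usage: a 50 px boundary with noise, reflection outliers, and a gap
r = 50 + np.random.randn(360)
r[100:105] = 90          # specular-reflection outliers
r[200:220] = np.nan      # angles occluded by an eyelid
smooth = regularize_boundary(r)
```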

    Seeing the World through Your Eyes

    The reflective nature of the human eye is an underappreciated source of information about what the world around us looks like. By imaging the eyes of a moving person, we can collect multiple views of a scene outside the camera's direct line of sight through the reflections in the eyes. In this paper, we reconstruct a 3D scene beyond the camera's line of sight using portrait images containing eye reflections. This task is challenging due to 1) the difficulty of accurately estimating eye poses and 2) the entangled appearance of the eye's iris and the scene reflections. Our method jointly refines the cornea poses, the radiance field depicting the scene, and the observer's iris texture. We further propose a simple regularization prior on the iris texture pattern to improve reconstruction quality. Through various experiments on synthetic and real-world captures featuring people with varied eye colors, we demonstrate the feasibility of our approach to recovering 3D scenes from eye reflections. (CVPR 2024. First two authors contributed equally. Project page: https://world-from-eyes.github.io)
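
    The geometric core of approaches like this is treating the cornea as a curved mirror. The sketch below shows only that standard spherical-mirror approximation (the 7.8 mm radius is a typical anatomical value, an assumption here, not a value from the paper); the paper's joint optimization of cornea poses, radiance field, and iris texture is not reproduced.

```python
import numpy as np

def reflect_off_cornea(origin, direction, center, radius=7.8e-3):
    """Reflect a camera ray off a cornea modeled as a mirror sphere.
    Returns the reflection point and the reflected ray direction,
    or None if the ray misses the sphere."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius ** 2)
    if disc < 0:
        return None                   # ray misses the cornea sphere
    t = -b - np.sqrt(disc)            # nearest intersection distance
    p = origin + t * d                # point on the cornea surface
    n = (p - center) / radius         # outward surface normal
    r = d - 2.0 * np.dot(d, n) * n    # mirror reflection of the ray
    return p, r
```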

    Graph-based skin lesion segmentation of multispectral dermoscopic images

    Accurate skin lesion segmentation is critical for automated early skin cancer detection and diagnosis. We present a novel method to detect skin lesion borders in multispectral dermoscopy images. First, hairs are detected on infrared images and removed by inpainting the visible-spectrum images. Second, the skin lesion is pre-segmented using a clustering of a superpixel partition. Finally, the pre-segmentation is globally regularized at the superpixel level and locally regularized in a narrow band at the pixel level.
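
    A minimal sketch of a superpixel-level pre-segmentation in the spirit of the second step, assuming a generic SLIC partition and k-means clustering of superpixel mean colors; the paper's exact clustering and regularization are not reproduced here.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def presegment(rgb, n_segments=400, n_clusters=2):
    """Pre-segment a lesion by clustering superpixel mean colors
    (generic SLIC + k-means stand-in for the paper's method)."""
    labels = slic(rgb, n_segments=n_segments, start_label=0)
    means = np.array([rgb[labels == s].mean(axis=0)
                      for s in range(labels.max() + 1)])
    cluster = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(means)
    return cluster[labels]  # map each superpixel's cluster back to pixels
```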

    Fusion Iris and Periocular Recognitions in Non-Cooperative Environment

    The performance of iris recognition in non-cooperative environments suffers when the resolution of the iris images is low, which can cause failures in determining the eye center and the limbic and pupillary boundaries during iris segmentation. Hence, a combination with periocular features is suggested to increase the robustness of the recognition system. However, periocular texture features are easily affected by background complications, while periocular colour features remain limited by spatial information and quantization effects. These problems arise from varying distances between the sensor and the subject during iris acquisition, as well as from differences in image size and orientation. The proposed periocular feature extraction combines a rotation-invariant uniform local binary pattern for the texture features with colour moments for the colour features. A hue-saturation-value channel is selected to avoid loss of discriminative information in the eye image. The proposed combination of texture and colour features provides the highest accuracy for periocular recognition, exceeding 71.5% on the UBIRIS.v2 dataset and 85.7% on the UBIPr dataset. For the fused recognition, the proposed method achieves the highest accuracy, exceeding 85.9% on UBIRIS.v2 and 89.7% on UBIPr.
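
    A minimal sketch of the described feature combination, pairing a rotation-invariant uniform LBP histogram with per-channel HSV color moments; the parameter values (neighborhood size, radius, moment set) are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def periocular_features(bgr, P=8, R=1):
    """Concatenate a rotation-invariant uniform LBP histogram (texture)
    with HSV color moments (mean, std, skewness) per channel."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P, R, method="uniform")  # riu2 codes
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(float)
    mu = hsv.mean(axis=0)
    sd = hsv.std(axis=0) + 1e-9
    skew = ((hsv - mu) ** 3).mean(axis=0) / sd ** 3
    return np.concatenate([hist, mu, sd, skew])
```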

    Accurate Detection of Non-Iris Occlusions

    Accurate detection of occlusions such as eyelids and reflections is a prerequisite for accurate iris recognition, in both near-infrared and visible-spectrum measurements; undetected iris occlusions otherwise dramatically decrease the iris recognition rate. This paper presents a fast multispectral iris occlusion detection method based on an underlying multispectral spatial probabilistic iris textural model and adaptive thresholding. The model adaptively learns its parameters on the iris texture part of the image and subsequently checks for reflections, eyelashes, and eyelids using recursive prediction analysis. Our method obtains better accuracy than the methods previously evaluated in the Noisy Iris Challenge Evaluation contest, ranking first among the 97+2 alternative methods on this large colour iris database.
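
    As a loose illustration of prediction-based occlusion flagging with adaptive thresholding, the sketch below substitutes a simple local-mean predictor for the paper's multispectral probabilistic textural model; the window sizes and threshold rule are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def occlusion_mask(iris, win=7, k=3.0):
    """Flag pixels whose texture-prediction error is anomalously large.
    The local mean stands in for a learned textural model; the
    adaptive-threshold rule is illustrative."""
    iris = iris.astype(float)
    pred = uniform_filter(iris, size=win)            # toy texture predictor
    err = np.abs(iris - pred)                        # prediction residual
    # Adaptive threshold from local residual statistics, not a global cutoff
    mu = uniform_filter(err, size=4 * win)
    var = np.maximum(uniform_filter(err ** 2, size=4 * win) - mu ** 2, 0)
    return err > mu + k * np.sqrt(var)               # occlusion candidates
```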

    Motion Segmentation from Clustering of Sparse Point Features Using Spatially Constrained Mixture Models

    Motion is one of the strongest cues available for segmentation. While motion segmentation finds wide-ranging applications in object detection, tracking, surveillance, robotics, image and video compression, scene reconstruction, and video editing, it faces various challenges: accurate motion recovery from noisy data, the varying complexity of the models required to describe the computed image motion, the dynamic nature of scenes that may include many independently moving objects undergoing occlusions, and the need to make high-level decisions when dealing with long image sequences. With sparse point features as the pivotal element, this thesis presents three distinct approaches that address some of these motion segmentation challenges.

    The first part deals with the detection and tracking of sparse point features in image sequences. A framework is proposed in which point features can be tracked jointly. Traditionally, sparse features have been tracked independently of one another. Combining ideas from Lucas-Kanade and Horn-Schunck, this thesis presents a technique in which the estimated motion of a feature is influenced by the motion of neighboring features. The joint feature tracking algorithm improves on the standard Lucas-Kanade tracking approach, especially when tracking features in untextured regions.

    The second part concerns motion segmentation using sparse point feature trajectories. The approach uses a spatially constrained mixture model framework and a greedy EM algorithm to group point features. In contrast to previous work, the algorithm is incremental and allows an arbitrary number of objects traveling at different relative speeds to be segmented, eliminating the need to initialize the number of groups explicitly. The primary parameter is the amount of evidence that must accumulate before features are grouped. A statistical goodness-of-fit test monitors the change in a group's motion parameters over time in order to update the reference frame automatically. The approach runs in real time and segments various challenging sequences, captured from still and moving cameras, that contain multiple independently moving objects and motion blur.

    The third part of the thesis deals with specialized models for motion segmentation. Articulated human motion is chosen as a representative example that requires a complex model to be described accurately. A motion-based approach for segmentation, tracking, and pose estimation of articulated bodies is presented. The human body is represented by the trajectories of a number of sparse points. A novel motion descriptor encodes the spatial relationships of the motion vectors representing various parts of the person and can discriminate between articulated and non-articulated motions, as well as between various poses and view angles. Furthermore, a nearest-neighbor search for the closest motion descriptor in the labeled training data, consisting of the human gait cycle in multiple views, is performed, and this distance is fed to a hidden Markov model defined over multiple poses and viewpoints to obtain temporally consistent pose estimates. Experimental results on various sequences of walking subjects with multiple viewpoints and scales demonstrate the effectiveness of the approach. In particular, the purely motion-based approach is able to track people in night-time sequences, even when appearance-based cues are unavailable.

    Finally, an application of image segmentation is presented in the context of iris segmentation. The iris is a widely used biometric for recognition and is known to be highly accurate when the segmentation of the iris region is near perfect. Non-ideal situations arise when the iris is occluded by eyelashes or eyelids, or when the quality of the segmented iris is degraded by illumination changes or out-of-plane rotation of the eye. The proposed iris segmentation approach combines the appearance and the geometry of the eye to segment iris regions from non-ideal images. The image is modeled as a Markov random field, and a graph-cuts-based energy minimization algorithm labels each pixel as eyelash, pupil, iris, or background using texture and image intensity information. The iris shape is modeled as an ellipse and is used to refine the pixel-based segmentation. The results indicate the effectiveness of the segmentation algorithm in handling non-ideal iris images.
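
    For reference, the sketch below shows the standard independent Lucas-Kanade tracking baseline (via OpenCV) that the first part's joint tracker improves upon; the Horn-Schunck-style neighborhood coupling itself is not reproduced here.

```python
import cv2

def track_features(prev_gray, next_gray, max_corners=200):
    """Baseline independent pyramidal Lucas-Kanade tracking of sparse
    corners between two 8-bit grayscale frames. Returns the surviving
    (previous, next) point pairs as (M, 2) arrays."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1            # keep only successfully tracked points
    return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
```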

    Motion tracking of iris features to detect small eye movements

    The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence, or even the existence, of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). Because current methods often rely on precisely localizing the pupil and/or corneal reflection on successive frames, microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a new video-based eye-tracking methodology that can reliably detect small eye movements over 0.2 degrees (12 arcmin) with very high confidence. Our method tracks the motion of iris features to estimate velocity rather than position, yielding a better record of microsaccades. By relying on more stable, higher-order features (such as local features of the iris texture) instead of lower-order features (such as the pupil center and corneal reflection), which are sensitive to noise and drift, we provide a more robust, detailed record of miniature eye movements.
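
    A minimal sketch of the velocity-from-feature-displacements idea, assuming iris feature points have already been selected on the previous frame; the calibration constants (frame rate, pixels per degree) are hypothetical values, not the paper's.

```python
import cv2
import numpy as np

def eye_velocity(prev_gray, next_gray, pts, fps=500.0, px_per_deg=100.0):
    """Estimate eye velocity from tracked iris-feature displacements
    rather than an absolute pupil/CR position. `pts` is an (N, 1, 2)
    float32 array of iris feature locations in prev_gray."""
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1
    disp = (nxt[ok] - pts[ok]).reshape(-1, 2)   # per-feature motion, pixels
    v = np.median(disp, axis=0)                 # robust frame-to-frame shift
    return v * fps / px_per_deg                 # (x, y) velocity in deg/s
```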

    Techniques for Ocular Biometric Recognition Under Non-ideal Conditions

    The use of the ocular region as a biometric cue has gained considerable traction due to recent advances in automated iris recognition. However, a multitude of factors can negatively impact ocular recognition performance under unconstrained conditions (e.g., non-uniform illumination, occlusions, motion blur, and low image resolution). This dissertation develops techniques for iris and ocular recognition under such challenging conditions. The first contribution is an image-level fusion scheme to improve iris recognition performance in low-resolution videos. Information fusion is facilitated by the Principal Components Transform (PCT), requiring only modest computational effort. The proposed approach improves recognition accuracy when low-resolution iris images are compared against high-resolution iris images. The second contribution is a study demonstrating the effectiveness of the ocular region in improving face recognition after plastic surgery. A score-level fusion approach that combines information from the face and ocular regions is proposed. Unlike previous methods for this application, the proposed approach is not learning-based and has modest computational requirements while achieving better recognition performance. The third contribution is a study on matching ocular regions extracted from RGB face images against near-infrared iris images; face and iris images are typically acquired using sensors operating in the visible and near-infrared wavelengths, respectively. To this end, a sparse representation approach that generates a joint dictionary from corresponding pairs of face and iris images is designed. The proposed joint dictionary approach is observed to outperform classical ocular recognition techniques. In summary, the techniques presented in this dissertation can be used to improve iris and ocular recognition in practical, unconstrained environments.
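
    A minimal sketch of a non-learning-based score-level fusion rule of the kind described in the second contribution, using min-max normalization and a weighted sum; the equal weighting is an illustrative assumption, not the dissertation's setting.

```python
import numpy as np

def fuse_scores(face_scores, ocular_scores, w=0.5):
    """Fuse face and ocular matcher scores: min-max normalize each
    modality's scores, then combine with a fixed weighted sum."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-9)
    return w * norm(face_scores) + (1.0 - w) * norm(ocular_scores)
```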