15,754 research outputs found

    Human eye localization using edge projections

    In this paper, a human eye localization algorithm for images and video is presented for faces with frontal pose and upright orientation. A given face region is filtered by the high-pass filter of a wavelet transform. In this way, edges of the region are highlighted and a caricature-like representation is obtained. After analyzing horizontal projections and profiles of edge regions in the high-pass filtered image, candidate points for each eye are detected. All candidate points are then classified using a support vector machine based classifier. The location of each eye is estimated from the most probable of the candidate points. It is experimentally observed that our eye localization method provides promising results for both image and video processing applications.

    Edge projections for eye localization

    An algorithm for human-eye localization in images is presented for faces with frontal pose and upright orientation. A given face region is filtered by a highpass wavelet-transform filter. In this way, edges of the region are highlighted, and a caricature-like representation is obtained. Candidate points for each eye are detected after analyzing horizontal projections and profiles of edge regions in the highpass-filtered image. All the candidate points are then classified using a support vector machine. The location of each eye is estimated from the most probable of the candidate points. It is experimentally observed that our eye localization method provides promising results for image-processing applications. © 2008 Society of Photo-Optical Instrumentation Engineers.
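
    The projection step described in the two abstracts above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it stands in for the wavelet high-pass filter with a one-level Haar-style row difference, and the function names and the choice of two candidate rows are assumptions.

```python
import numpy as np

def haar_highpass(face: np.ndarray) -> np.ndarray:
    """One-level Haar-style high-pass: difference adjacent row pairs.
    Only edge energy survives, giving the "caricature-like" edge map
    the abstract describes (a crude stand-in for the wavelet
    transform's detail band)."""
    detail = face[0::2, :].astype(float) - face[1::2, :].astype(float)
    return np.abs(detail)

def eye_row_candidates(edge_map: np.ndarray, k: int = 2) -> list:
    """Horizontal projection: sum edge energy across each row and
    return the k strongest rows as candidate eye heights. The paper
    then refines such candidates with vertical profiles and a
    support-vector-machine classifier."""
    projection = edge_map.sum(axis=1)
    return sorted(np.argsort(projection)[-k:].tolist())
```

    Note that the candidate rows come out at half the original vertical resolution, because the Haar step pairs rows.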

    Circle-based Eye Center Localization (CECL)

    We propose an improved eye center localization method based on the Hough transform, called Circle-based Eye Center Localization (CECL), which is simple and robust, and achieves accuracy on a par with typically more complex state-of-the-art methods. The CECL method relies on color and shape cues that distinguish the iris from other facial structures. The accuracy of the CECL method is demonstrated through a comparison with 15 state-of-the-art eye center localization methods against five error thresholds, as reported in the literature. The CECL method achieved an accuracy of 80.8% to 99.4% and ranked first for 2 of the 5 thresholds. It is concluded that the CECL method offers an attractive alternative to existing methods for automatic eye center localization.

    Comment: Published and presented at The 14th IAPR International Conference on Machine Vision Applications, 2015. http://www.mva-org.jp/mva2015
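
    The circle-Hough idea at the core of CECL can be illustrated with a toy accumulator. A hedged sketch, not the published implementation: it assumes a single known iris radius and a precomputed binary edge map, whereas CECL also exploits colour cues and searches over a radius range.

```python
import numpy as np

def hough_circle_center(edge: np.ndarray, radius: int) -> tuple:
    """Toy circular Hough transform for one known radius: every edge
    pixel votes for all centers lying exactly `radius` away, and the
    accumulator peak is taken as the circle (iris) center."""
    h, w = edge.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
    for y, x in zip(*np.nonzero(edge)):
        # Candidate centers: step back `radius` along every direction.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered vote accumulation
    cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return int(cy), int(cx)
```

    Votes from all points on a circle of the assumed radius coincide at its center, so the peak is sharp even with moderate edge noise.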

    Contributions of MyD88-dependent receptors and CD11c-positive cells to corneal epithelial barrier function against Pseudomonas aeruginosa.

    Previously we reported that corneal epithelial barrier function against Pseudomonas aeruginosa was MyD88-dependent. Here, we explored contributions of MyD88-dependent receptors using vital mouse eyes and confocal imaging. Uninjured IL-1R (-/-) or TLR4 (-/-) corneas, but not TLR2 (-/-), TLR5 (-/-), TLR7 (-/-), or TLR9 (-/-), were more susceptible to P. aeruginosa adhesion than wild-type (3.8-fold, 3.6-fold respectively). Bacteria adherent to the corneas of IL-1R (-/-) or TLR5 (-/-) mice penetrated beyond the epithelial surface only if the cornea was superficially-injured. Bone marrow chimeras showed that bone marrow-derived cells contributed to IL-1R-dependent barrier function. In vivo, but not ex vivo, stromal CD11c+ cells responded to bacterial challenge even when corneas were uninjured. These cells extended processes toward the epithelial surface, and co-localized with adherent bacteria in superficially-injured corneas. While CD11c+ cell depletion reduced IL-6, IL-1β, CXCL1, CXCL2 and CXCL10 transcriptional responses to bacteria, and increased susceptibility to bacterial adhesion (>3-fold), the epithelium remained resistant to bacterial penetration. IL-1R (-/-) corneas also showed down-regulation of IL-6 and CXCL1 genes with and without bacterial challenge. These data show complex roles for TLR4, TLR5, IL-1R and CD11c+ cells in constitutive epithelial barrier function against P. aeruginosa, with details dependent upon in vivo conditions.

    "Sitting too close to the screen can be bad for your ears": A study of audio-visual location discrepancy detection under different visual projections

    In this work, we look at the perception of event locality under conditions of disparate audio and visual cues. We address an aspect of the so-called “ventriloquism effect” relevant for multi-media designers; namely, how auditory perception of event locality is influenced by the size and scale of the accompanying visual projection of those events. We observed that recalibration of the visual axes of an audio-visual animation (by resizing and zooming) exerts a recalibrating influence on auditory space perception. In particular, sensitivity to audio-visual discrepancies (between a centrally located visual stimulus and a laterally displaced audio cue) increases near the edge of the screen on which the visual cue is displayed. In other words, discrepancy detection thresholds are not fixed for a particular pair of stimuli, but are influenced by the size of the display space. Moreover, the discrepancy thresholds are influenced by scale as well as size. That is, the boundary of auditory space perception is not rigidly fixed on the boundaries of the screen; it also depends on the spatial relationship depicted. For example, the ventriloquism effect will break down within the boundaries of a large screen if zooming is used to exaggerate the proximity of the audience to the events. The latter effect appears to be much weaker than the former.

    Automatic landmark annotation and dense correspondence registration for 3D human facial images

    Dense surface registration of three-dimensional (3D) human facial images holds great potential for studies of human trait diversity, disease genetics, and forensics. Non-rigid registration is particularly useful for establishing dense anatomical correspondences between faces. Here we describe a novel non-rigid registration method for fully automatic 3D facial image mapping. The method comprises two steps: first, seventeen facial landmarks are automatically annotated, mainly via PCA-based feature recognition following 3D-to-2D data transformation. Second, an efficient thin-plate spline (TPS) protocol is used to establish the dense anatomical correspondence between facial images, under the guidance of the predefined landmarks. We demonstrate that this method is robust and highly accurate, even across different ethnicities. The average face is calculated for individuals of Han Chinese and Uyghur origins. Fully automatic and computationally efficient, this method enables high-throughput analysis of human facial feature variation.

    Comment: 33 pages, 6 figures, 1 table
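
    Step two, the thin-plate spline warp, can be sketched directly from its standard closed form. This is an assumed 2-D illustration, not the paper's 3D implementation: fit the TPS kernel-plus-affine system on landmark pairs, then evaluate the fitted map at arbitrary points; all function names here are hypothetical.

```python
import numpy as np

def _tps_kernel(d: np.ndarray) -> np.ndarray:
    """TPS radial basis U(r) = r^2 log r, with U(0) = 0."""
    out = np.zeros_like(d)
    m = d > 0
    out[m] = d[m] ** 2 * np.log(d[m])
    return out

def tps_fit(src: np.ndarray, dst: np.ndarray):
    """Solve the standard TPS system [K P; P^T 0][w; c] = [dst; 0]
    so the warp maps each source landmark exactly onto its target."""
    n = len(src)
    K = _tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])  # affine (polynomial) part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b), src

def tps_warp(fitted, points: np.ndarray) -> np.ndarray:
    """Apply a fitted TPS (kernel part plus affine part) to points."""
    coef, src = fitted
    U = _tps_kernel(np.linalg.norm(points[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((len(points), 1)), points])
    return U @ coef[: len(src)] + P @ coef[len(src):]
```

    Because the system includes a first-degree polynomial, any purely affine landmark correspondence (e.g. a translation) is reproduced exactly with zero kernel weights; denser landmark sets drive genuinely non-rigid deformation.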
