
    A detailed measure of apparent modulation induced by Cornsweet edges on a digital image display system

    A digital image display based system was designed and implemented to perform visual measurements of apparent modulation induced by the Cornsweet illusion. The system was used to examine results reported in previous experiments designed to measure the modulation transfer function of the human visual system. The results of experiments on induced modulation performed by Dooley and Greenfield were re-examined, and data not reported in that work were collected on the range of modulations over which observers perceived linear images. The system has the flexibility to be adapted for many different types of experiments. Visual experiments involving human observers reported in the literature have often required the construction of specialized, complex mechanical devices to collect data about the visual system; in contrast, the system described here can be easily adapted to a broad range of experiments. In addition to the Cornsweet perception study, the system was also configured to study an experiment performed by Lowry and DePalma involving the perception of Mach bands.
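
    The stimulus at the heart of this work is the Cornsweet edge: a luminance profile whose two flanks share the same mean level but appear different in brightness because of opposing ramps at their common border. As a minimal sketch (not taken from the thesis; the ramp amplitude, decay constant, and mean level are arbitrary placeholder values), a profile of this kind could be generated for a digital display as follows:

```python
import numpy as np

def cornsweet_edge(width=512, amplitude=0.2, decay=40.0, mean_level=0.5):
    """Return a 1-D Cornsweet luminance profile.

    The profile steps up on one side of the center and down on the other,
    then decays back toward the same mean level, so both flanks have
    identical average luminance even though they appear different in
    brightness.
    """
    x = np.arange(width) - width / 2.0              # position relative to the edge
    ramp = amplitude * np.exp(-np.abs(x) / decay)   # decay away from the edge
    profile = mean_level + np.where(x >= 0, ramp, -ramp)
    return np.clip(profile, 0.0, 1.0)

# Render as a 2-D image by repeating the 1-D profile on every row.
image = np.tile(cornsweet_edge(), (256, 1))
```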

    Motion tracking of iris features to detect small eye movements

    The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence or even the existence of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). Because current methods rely on precisely localizing the pupil and/or corneal reflection on successive frames, current microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a new video-based eye-tracking methodology that can reliably detect small eye movements over 0.2 degrees (12 arcmin) with very high confidence. Our method tracks the motion of iris features to estimate velocity rather than position, yielding a better record of microsaccades. By relying on more stable, higher-order features (such as local features of iris texture) instead of lower-order features (such as the pupil center and corneal reflection), which are sensitive to noise and drift, we provide a more robust, detailed record of miniature eye movements.
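
    The key idea is to work in velocity space: frame-to-frame displacement of iris texture features yields an eye-velocity trace, and microsaccades appear as brief excursions above the noise floor. The sketch below illustrates only that last step with a generic velocity-threshold detector in the style of Engbert and Kliegl applied to an already-computed velocity signal; the sampling rate, threshold factor, and minimum duration are assumptions, not the parameters used in this work.

```python
import numpy as np

def detect_microsaccades(velocity_deg_s, fs=500.0,
                         threshold_factor=6.0, min_duration_ms=6.0):
    """Flag candidate (micro)saccades in a 1-D eye-velocity trace.

    velocity_deg_s : eye speed in deg/s for each sample (e.g., frame-to-frame
                     iris-feature displacement divided by the frame interval).
    The threshold is a multiple of a robust (median-based) noise estimate,
    and events shorter than min_duration_ms are discarded.
    """
    v = np.asarray(velocity_deg_s, dtype=float)
    noise = np.median(np.abs(v - np.median(v))) + 1e-9   # robust spread estimate
    above = v > threshold_factor * noise

    min_samples = int(round(min_duration_ms * fs / 1000.0))
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):   # trailing False closes any open event
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i))                 # [onset, offset) sample indices
            start = None
    return events
```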

    The wearable eyetracker: a tool for the study of high-level visual tasks

    Even as the sophistication and power of computer-based vision systems are growing, the human visual system remains unsurpassed in many visual tasks. Vision delivers a rich representation of the environment without conscious effort, but the perception of a high-resolution, wide field-of-view scene is largely an illusion made possible by the concentration of visual acuity near the center of gaze, coupled with a large, low-acuity periphery. Human observers are typically unaware of this extreme anisotropy because the visual system is equipped with a sophisticated oculomotor system that rapidly moves the eyes to sample the retinal image several times every second. These eye movements are programmed and executed at a level below conscious awareness, so self-report is an unreliable way to learn how trained observers perform complex visual tasks. Eye movements have been studied extensively under controlled laboratory conditions, but as a metric of visual performance in complex, real-world tasks they offer a powerful, under-utilized tool for the study of high-level visual processes. Recorded gaze patterns provide externally visible markers of the spatial and temporal deployment of attention to objects and actions. In order to study vision in the real world, we have developed a self-contained, wearable eyetracker for monitoring complex tasks. The eyetracker can be worn for an extended period of time, does not restrict natural movements or behavior, and preserves peripheral vision. The wearable eyetracker can be used to study performance in a range of visual tasks, from situational awareness to directed visual search.

    Portable Eyetracking: A Study of Natural Eye Movements

    Visual perception, operating below conscious awareness, effortlessly provides the experience of a rich representation of the environment, continuous in space and time. Conscious visual perception is made possible by the 'foveal compromise,' the combination of the high-acuity fovea and a sophisticated suite of eye movements. Our illusory visual experience cannot be understood by introspection, but monitoring eye movements lets us probe the processes of visual perception. Four tasks representing a wide range of complexity were used to explore visual perception: image quality judgments, map reading, model building, and hand-washing. Very short fixation durations were observed in all tasks, some as short as 33 msec. While some tasks showed little variation in eye movement metrics, differences in eye movement patterns and high-level strategies were observed in the model-building and hand-washing tasks. Performance in the hand-washing task revealed a new type of eye movement: 'planful' eye movements made to objects well in advance of a subject's interaction with those objects. Often occurring in the middle of another task, they provide 'overlapping' temporal information about the environment, offering a mechanism that helps produce our conscious visual experience.

    Spatio-Velocity CSF as a Function of Retinal Velocity Using Unstabilized Stimuli

    LCD televisions have LC response times and hold-type data cycles that contribute to the appearance of blur when objects are in motion on the screen. New algorithms based on studies of the human visual system's sensitivity to motion are being developed to compensate for these artifacts. This paper describes a series of experiments that incorporate eyetracking in the psychophysical determination of spatio-velocity contrast sensitivity in order to build on the 2D spatio-velocity contrast sensitivity function (CSF) model first described by Kelly and later refined by Daly. We explore whether the velocity of the eye has an additional effect on sensitivity and whether the model can be used to predict sensitivity to more complex stimuli. A total of five experiments were performed in this research. The first four experiments utilized Gabor patterns with three different spatial and temporal frequencies and were used to investigate and/or populate the 2D spatio-velocity CSF. The fifth experiment utilized a disembodied edge and was used to validate the model. All experiments used a two-interval forced-choice (2IFC) method of constant stimuli guided by a QUEST routine to determine thresholds. The results showed that sensitivity to motion was determined by the retinal velocity produced by the Gabor patterns regardless of the type of motion of the eye. Based on the results of these experiments, the parameters of the spatio-velocity CSF model were optimized for our experimental conditions.
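
    For context, the Kelly model as parameterized by Daly expresses contrast sensitivity as a function of spatial frequency and retinal velocity. The sketch below uses commonly cited literature constants rather than the values re-optimized in these experiments, so it should be read as an approximation of the model's general shape, not as this paper's fitted result.

```python
import numpy as np

def spatio_velocity_csf(rho, v, c0=1.14, c1=0.67, c2=1.7,
                        s1=6.1, s2=7.3, p1=45.9):
    """Kelly's spatio-velocity CSF in the form refined by Daly.

    rho : spatial frequency in cycles/degree
    v   : retinal velocity in degrees/second
    The c/s/p constants are commonly cited literature values, not the
    parameters re-optimized in the experiments described above.
    """
    v = np.maximum(v, 1e-3)                        # avoid log(0) for static stimuli
    k = s1 + s2 * np.abs(np.log10(c2 * v / 3.0)) ** 3
    rho_max = p1 / (c2 * v + 2.0)                  # peak frequency falls as velocity grows
    return (k * c0 * c2 * v * (c1 * 2.0 * np.pi * rho) ** 2
            * np.exp(-c1 * 4.0 * np.pi * rho / rho_max))

# Example: sensitivity to a 4 cyc/deg pattern at 2 deg/s retinal velocity.
print(spatio_velocity_csf(4.0, 2.0))
```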

    Collecting and Analyzing Eye-Tracking Data in Outdoor Environments

    Natural outdoor conditions pose unique obstacles for researchers, above and beyond those inherent to all mobile eye-tracking research. During analyses of a large set of eye-tracking data collected from geologists examining outdoor scenes, we found that calibration, pupil identification, fixation detection, and gaze analysis all require procedures different from those typically used in indoor studies. Here, we discuss each of these challenges and present solutions, which together define a general method for investigations relying on outdoor eye-tracking data. We also offer recommendations for improving the available tools, to further increase the accuracy and utility of outdoor eye-tracking data.
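
    Fixation detection is one of the steps that had to be adapted for outdoor data. As a point of comparison only, the following is a minimal dispersion-threshold (I-DT) detector of the kind commonly used for indoor studies; the frame rate, dispersion limit, and minimum duration are placeholder values, and this is not the modified procedure developed for the outdoor data.

```python
import numpy as np

def idt_fixations(x_deg, y_deg, fs=30.0, dispersion_deg=1.0, min_duration_ms=100.0):
    """Classify fixations in a gaze trace with a simple dispersion threshold (I-DT).

    x_deg, y_deg : gaze position in degrees, one sample per video frame.
    A window counts as a fixation while (max-min in x) + (max-min in y) stays
    under dispersion_deg and the window lasts at least min_duration_ms.
    """
    x, y = np.asarray(x_deg, float), np.asarray(y_deg, float)
    win = int(round(min_duration_ms * fs / 1000.0))

    def dispersion(a, b):
        return (x[a:b].max() - x[a:b].min()) + (y[a:b].max() - y[a:b].min())

    fixations, i = [], 0
    while i + win <= len(x):
        j = i + win
        if dispersion(i, j) <= dispersion_deg:
            while j < len(x) and dispersion(i, j + 1) <= dispersion_deg:
                j += 1                              # grow the window while it stays compact
            fixations.append((i, j, float(x[i:j].mean()), float(y[i:j].mean())))
            i = j
        else:
            i += 1
    return fixations                                # (onset, offset, mean x, mean y) per fixation
```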

    SR 66 Storm Sewer Tunnel Project in Evansville

    Classroom Interpreting and Visual Information Processing in Mainstream Education for Deaf Students: Live or Memorex?

    This study examined visual information processing and learning in classrooms that included both deaf and hearing students. Of particular interest were the effects on deaf students’ learning of live (three-dimensional) versus video-recorded (two-dimensional) sign language interpreting, and the visual attention strategies of more and less experienced deaf signers exposed to simultaneous, multiple sources of visual information. Results from three experiments consistently indicated no differences in learning between three-dimensional and two-dimensional presentations among hearing or deaf students. Analyses of students’ allocation of visual attention and the influence of various demographic and experimental variables suggested considerable flexibility in deaf students’ receptive communication skills. Nevertheless, the findings also revealed a robust learning advantage in favor of hearing students.

    Using Human Observer Eye Movements in Automatic Image Classifiers

    We explore the way in which people look at images of different semantic categories (e.g., handshake, landscape) and directly relate those results to computational approaches for automatic image classification. Our hypothesis is that the eye movements of human observers differ for images of different semantic categories, and that this information can be effectively used in automatic content-based classifiers. First, we present eye-tracking experiments that show the variations in eye movements (i.e., fixations and saccades) across different individuals for images in five categories: handshakes (two people shaking hands), crowds (cluttered scenes with many people), landscapes (nature scenes without people), main object in an uncluttered background (e.g., an airplane flying), and miscellaneous (people and still lifes). The eye-tracking results suggest that similar viewing patterns occur when different subjects view different images in the same semantic category. Using these results, we examine how empirical data obtained from eye-tracking experiments across different semantic categories can be integrated with existing computational frameworks, or used to construct new ones. In particular, we examine the Visual Apprentice, a system in which image classifiers are learned (using machine learning) from user input as the user defines a multiple-level object-definition hierarchy based on an object and its parts (scene, object, object-part, perceptual area, region) and labels examples for specific classes (e.g., handshake). The resulting classifiers are applied to automatically classify new images (e.g., as handshake/non-handshake). Although many eye-tracking experiments have been performed, to our knowledge this is the first study that specifically compares eye movements across categories and links category-specific eye-tracking results to automatic image classification techniques.
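
    One way to picture the link between gaze data and automatic classification is to summarize each viewing record as a small feature vector (fixation counts and durations, saccade amplitudes) and train a standard classifier on those features. The sketch below does exactly that with synthetic placeholder data; it is not the Visual Apprentice, and the feature set, labels, and classifier choice are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gaze_features(fix_durations_ms, sac_amplitudes_deg):
    """Summarize one viewing record as a fixed-length feature vector."""
    fix = np.asarray(fix_durations_ms, float)
    sac = np.asarray(sac_amplitudes_deg, float)
    return np.array([len(fix), fix.mean(), fix.std(),
                     len(sac),
                     sac.mean() if len(sac) else 0.0,
                     sac.max() if len(sac) else 0.0])

# X: one feature vector per viewed image, y: its semantic category label.
# Both are random placeholders here, so accuracy should sit near chance.
X = np.vstack([gaze_features(np.random.gamma(2, 120, 20), np.random.rayleigh(3, 19))
               for _ in range(60)])
y = np.repeat(["handshake", "crowd", "landscape"], 20)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```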