    Motion from Fixation

    We study the problem of estimating rigid motion from a sequence of monocular perspective images obtained by navigating around an object while fixating a particular feature point. The motivation comes from the mechanics of the human eye, which either smoothly pursues some fixation point in the scene or "saccades" between different fixation points. In particular, we are interested in understanding whether fixation helps the process of estimating motion in the sense that it makes it more robust, better conditioned or simpler to solve. We cast the problem in the framework of "dynamic epipolar geometry", and propose an implicit dynamical model for recursively estimating motion from fixation. This allows us to compare directly the quality of the estimates of motion obtained by imposing the fixation constraint, or by assuming a general rigid motion, simply by changing the geometry of the parameter space while maintaining the same structure of the recursive estimator. We also present a closed-form static solution from two views, and a recursive estimator of the absolute attitude between the viewer and the scene. One important issue is how the estimates degrade in the presence of disturbances in the tracking procedure. We describe a simple fixation control that converges exponentially, complemented by an image shift-registration step for achieving sub-pixel accuracy, and assess how small deviations from perfect tracking affect the estimates of motion.
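    The abstract does not spell out the shift-registration algorithm used to reach sub-pixel tracking accuracy. As a hedged illustration only, the sketch below implements one standard choice, phase correlation with parabolic refinement of the correlation peak; the function name and the NumPy formulation are assumptions, not the authors' code.

        import numpy as np

        def subpixel_shift(a, b):
            """Estimate the displacement s with b(x) ~ a(x - s), i.e. how far
            image b is shifted relative to image a, to sub-pixel accuracy:
            phase correlation gives the integer peak, and a parabolic fit
            through the peak and its two neighbours refines each axis."""
            R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
            R /= np.abs(R) + 1e-12                  # keep phase only
            r = np.fft.ifft2(R).real
            py, px = np.unravel_index(np.argmax(r), r.shape)
            ny, nx = r.shape

            def refine(y_minus, y_0, y_plus):
                # vertex of the parabola through three neighbouring samples
                denom = y_minus - 2.0 * y_0 + y_plus
                return 0.0 if denom == 0.0 else 0.5 * (y_minus - y_plus) / denom

            dy = py + refine(r[(py - 1) % ny, px], r[py, px], r[(py + 1) % ny, px])
            dx = px + refine(r[py, (px - 1) % nx], r[py, px], r[py, (px + 1) % nx])
            if dy > ny / 2: dy -= ny                # wrap to signed shifts
            if dx > nx / 2: dx -= nx
            return dy, dx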

    Motion from "X" by Compensating "Y"

    This paper analyzes the geometry of the visual motion estimation problem in relation to transformations of the input (images) that stabilize particular output functions, such as the motion of a point, a line or a plane in the image. By casting the problem within the popular "epipolar geometry", we provide a common framework for including constraints such as point, line or plane fixation by just considering "slices" of the parameter manifold. The models we provide can be used for estimating motion from a batch using the preferred optimization techniques, or for defining dynamic filters that estimate motion from a causal sequence. We discuss methods for performing the necessary compensation by either controlling the support of the camera or by pre-processing the images. The compensation algorithms may also be used for recursively fitting a plane in 3-D, either from point-features or directly from brightness. Conversely, they may be used for estimating motion relative to the plane independently of its parameters.
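    As a concrete, batch stand-in for the plane-fitting step mentioned above, the following sketch fits a plane to 3-D point features by total least squares; the recursive and brightness-based variants described in the paper are not reproduced here, and the function name is illustrative.

        import numpy as np

        def fit_plane(points):
            """Total-least-squares fit of a plane n . x = d to an (N, 3)
            array of 3-D point features: the unit normal n is the singular
            vector of the centred data with the smallest singular value."""
            c = points.mean(axis=0)
            _, _, vt = np.linalg.svd(points - c)
            n = vt[-1]
            return n, float(n @ c)

        pts = np.array([[0.0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
        n, d = fit_plane(pts)   # n = (0, 0, +/-1), d = +/-1: the plane z = 1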

    Miniature Eye Movements Enhance Fine Spatial Details

    Our eyes are constantly in motion. Even during visual fixation, small eye movements continually jitter the location of gaze. It is known that visual percepts tend to fade when retinal image motion is eliminated in the laboratory. However, it has long been debated whether, during natural viewing, fixational eye movements have functions in addition to preventing the visual scene from fading. In this study, we analysed the influence in humans of fixational eye movements on the discrimination of gratings masked by noise that has a power spectrum similar to that of natural images. Using a new method of retinal image stabilization, we selectively eliminated the motion of the retinal image that normally occurs during the intersaccadic intervals of visual fixation. Here we show that fixational eye movements improve discrimination of high spatial frequency stimuli, but not of low spatial frequency stimuli. This improvement originates from the temporal modulations introduced by fixational eye movements in the visual input to the retina, which emphasize the high spatial frequency harmonics of the stimulus. In a natural visual world dominated by low spatial frequencies, fixational eye movements appear to constitute an effective sampling strategy by which the visual system enhances the processing of spatial detail. (National Institutes of Health; National Science Foundation)
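    A toy calculation makes the stated mechanism concrete: jittering a sinusoidal grating across a receptor converts spatial structure into temporal modulation, and the modulation grows with spatial frequency. The random-walk drift model and all parameter values below are illustrative assumptions, not the stimuli or stabilization method of the study.

        import numpy as np

        rng = np.random.default_rng(0)
        drift = np.cumsum(rng.normal(0.0, 0.2, 2000))   # ocular drift, arcmin
        drift -= drift.mean()

        for cpd in (0.5, 2.0, 8.0):                     # grating spatial frequency, cycles/deg
            k = 2 * np.pi * cpd / 60.0                  # radians per arcmin
            lum = np.sin(k * drift)                     # luminance seen by one receptor
            print(cpd, lum.std())                       # temporal modulation rises with cpd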

    Attenuation of perceived motion smear during vergence and pursuit tracking

    When the eyes move, the images of stationary objects sweep across the retina. Despite this motion of the retinal image and the substantial integration of visual signals across time, physically stationary objects typically do not appear smeared during eye movements. Previous studies indicated that the extent of perceived motion smear is smaller when a stationary target is presented during pursuit or saccadic eye movements than when comparable motion of the retinal image occurs during steady fixation. In this study, we compared the extent of perceived motion smear for a stationary target during smooth pursuit and vergence eye movements with that for a physically moving target during fixation. For a target duration of 100 ms or longer, perceived motion smear is substantially less when the motion of the retinal image results from vergence or pursuit eye movements than when it results from the motion of a target during fixation. The reduced extent of perceived motion smear during eye movements compared to fixation cannot be accounted for by different spatio-temporal interactions between visual targets or by unequal attention to the moving test spot under these two types of conditions. We attribute the highly similar attenuation of perceived smear during vergence and pursuit to a comparable action of the extra-retinal signals for disjunctive and conjugate eye movements.
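    For scale, the baseline prediction that makes this result striking can be worked out in one line: with full temporal integration, a retinal image sweeping at v degrees per second for t seconds should smear over v*t degrees. The speed below is an arbitrary illustrative value, not one used in the study.

        v = 8.0       # retinal image speed, deg/s (illustrative assumption)
        t = 0.100     # the 100 ms target duration cited above, in seconds
        print(v * t)  # predicted smear with full integration: 0.8 deg, far
                      # more than observers report during pursuit or vergence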

    Oculometric Indices of Simulator and Aircraft Motion

    In a series of three experiments on the effects of both simulator and aircraft motion on eye-scan behavior, the sensitivity of an oculometric measure to motion effects was demonstrated. Fixation time, defined as the time the eyes spend at a particular location before moving on (saccading) to another fixation point, was found to be sensitive to motion effects in each of the experiments conducted. The first experiment studied differences between simulator motion and no-motion conditions during a series of simulated Instrument Landing System (ILS) approaches. The mean fixation time for the no-motion condition was found to be significantly longer than for the motion condition for the five pilots tested. This was particularly true for the Flight Director, the instrument supplying attitude and deviation-from-glideslope information. A second experiment investigated eye-scan parameters based on data collected in flight, with the oculometer in the NASA Transport Systems Research Vehicle (TSRV), and in the fixed-base TSRV simulator. The results of this study were similar to those of the first study, and showed fixation time and rate measures to be sensitive to motion (flight) and no-motion conditions. Motion effects were most evident when the subject was viewing a display supplying attitude and flight path information. A third study addressed the question of the nature of the information provided by motion. Utilizing a part-task (monitoring one instrument) with motion in only one dimension (pitch), ten subjects were tested in no-motion, correct-motion, and reversed-direction-motion conditions. The mean fixation times for the no-motion condition were significantly longer than for either motion condition, while the two motion conditions did not differ significantly. The results of the present series of experiments support the hypothesis that motion serves an alerting function, providing a cue or clue to the pilot that "something happened". The results do not support the hypothesis that direction of motion is conveyed through this type of motion information. The results suggest that simulation without motion cues may represent an understatement of the true capacity of the pilot.
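    Fixation time as defined here is straightforward to compute from raw gaze samples. The sketch below uses a simple dispersion-threshold detector; the thresholds and function name are assumptions for illustration, not the oculometer's actual processing.

        import numpy as np

        def mean_fixation_time(t, x, y, max_disp=1.0, min_dur=0.1):
            """Mean fixation duration (s) from gaze samples: a window grows
            until its dispersion exceeds max_disp degrees (a saccade), and
            windows lasting at least min_dur seconds count as fixations."""
            durations, start = [], 0
            for end in range(1, len(t)):
                xs, ys = x[start:end + 1], y[start:end + 1]
                disp = (xs.max() - xs.min()) + (ys.max() - ys.min())
                if disp > max_disp:                    # gaze left the window
                    if t[end - 1] - t[start] >= min_dur:
                        durations.append(t[end - 1] - t[start])
                    start = end
            if t[-1] - t[start] >= min_dur:            # close the last window
                durations.append(t[-1] - t[start])
            return float(np.mean(durations)) if durations else 0.0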

    Volitional control of anticipatory ocular smooth pursuit after viewing, but not pursuing, a moving target: evidence for a re-afferent velocity store

    Although human subjects cannot normally initiate smooth eye movements in the absence of a moving target, previous experiments have established that such movements can be evoked if the subject is required to pursue a regularly repeated, transient target motion stimulus. We sought to determine whether active pursuit was necessary to evoke such an anticipatory response or whether it could be induced after merely viewing the target motion. Subjects were presented with a succession of ramp target motion stimuli of identical velocity and alternating direction in the horizontal axis. In initial experiments, the target was exposed for only 120 ms as it passed through centre, with a constant interval between presentations. Ramp velocity was varied from +/- 9 to 45 degrees/s in one set of trials; the interval between ramp presentations was varied from 640 to 1920 ms in another. Subjects were instructed either to pursue the moving target from the first presentation or to hold fixation on another, stationary target during the first one, two or three presentations of the moving display. Without fixation, the first smooth movement was initiated with a mean latency of 95 ms after target onset, but with repeated presentations anticipatory smooth movements started to build up before target onset. In contrast, when the subjects fixated the stationary target for three presentations of the moving target, the first movement they made was already anticipatory and had a peak velocity that was significantly greater than that of the first response without prior fixation. The conditions of experiment 1 were repeated in experiment 3 with a longer duration of target exposure (480 ms), to allow higher eye velocities to build up. Again, after three prior fixations, the anticipatory velocity measured at 100 ms after target onset (when visual feedback would be expected to start) was not significantly different from that evoked after the subjects had made three active pursuit responses to the same target motion, reaching a mean of 20 degrees/s for a 50 degrees/s target movement. In a further experiment, we determined whether subjects could use stored information from prior active pursuit to generate anticipatory pursuit in darkness if there was a high expectancy that the target would reappear with identical velocity. Subjects made one predictive response immediately after target disappearance, but very little response thereafter until the time at which they expected the target to reappear, when they were again able to revitalize the anticipatory response before target appearance. The findings of these experiments provide evidence that information related to target velocity can be stored and used to generate future anticipatory responses even in the absence of eye movement. This suggests that the information for storage is probably derived from a common pre-motor drive signal that is inhibited during fixation, rather than from an efference copy of eye movement itself. Furthermore, a high level of expectancy of target appearance can facilitate the release of this stored information in darkness.
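    The key measurement in these experiments, anticipatory eye velocity around 100 ms after target onset (before visual feedback can contribute), can be sketched as a slope fit over a short window of the eye position trace. The 40 ms window and the function name are assumptions for illustration.

        import numpy as np

        def eye_velocity_at(t, pos, t0, window=0.04):
            """Eye velocity (deg/s) at time t0, taken as the least-squares
            slope of the position trace over t0 +/- window/2 seconds."""
            m = (t >= t0 - window / 2) & (t <= t0 + window / 2)
            slope = np.polyfit(t[m], pos[m], 1)[0]
            return float(slope)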

    Visuomotor transformation for interception: catching while fixating

    Catching a ball involves a dynamic transformation of visual information about ball motion into motor commands for moving the hand to the right place at the right time. We previously formulated a neural model for this transformation to account for the consistent leftward movement biases observed in our catching experiments. According to the model, these biases arise within the representation of target motion as well as within the transformation from a gaze-centered to a body-centered movement command. Here, we examine the validity of the latter aspect of our model in a catching task involving gaze fixation. Gaze fixation should systematically influence biases in catching movements, because in the model movement commands are only generated in the direction perpendicular to the gaze direction. Twelve participants caught balls while gazing at a fixation point positioned either straight ahead or 14° to the right. Four participants were excluded because they could not adequately maintain fixation. We again observed a consistent leftward movement bias, but the catching movements were unaffected by fixation direction. This result refutes our proposal that the leftward bias partly arises within the visuomotor transformation, and suggests instead that the bias predominantly arises within the early representation of target motion, specifically through an imbalance in the represented radial and azimuthal target motion.
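    The model assumption being tested, that movement commands are generated only perpendicular to the gaze direction, can be made concrete with a small projection sketch. The 14° rotation mirrors the fixation conditions of the experiment, but the coordinate conventions, vectors, and names below are illustrative assumptions.

        import numpy as np

        def perpendicular_to_gaze(v, gaze):
            """Component of a desired hand displacement v that lies in the
            plane perpendicular to the (normalized) gaze direction."""
            g = gaze / np.linalg.norm(gaze)
            return v - (v @ g) * g

        ahead = np.array([0.0, 0.0, 1.0])              # fixation straight ahead
        a = np.deg2rad(14.0)
        right = np.array([np.sin(a), 0.0, np.cos(a)])  # fixation 14 deg right
        v = np.array([0.3, 0.0, 0.6])                  # hypothetical displacement
        print(perpendicular_to_gaze(v, ahead))         # under the model, commands
        print(perpendicular_to_gaze(v, right))         # differ between conditions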

    Effects of Intraframe Distortion on Measures of Cone Mosaic Geometry from Adaptive Optics Scanning Light Ophthalmoscopy

    Purpose: To characterize the effects of intraframe distortion due to involuntary eye motion on measures of cone mosaic geometry derived from adaptive optics scanning light ophthalmoscope (AOSLO) images. Methods: We acquired AOSLO image sequences from 20 subjects at 1.0, 2.0, and 5.0° temporal from fixation. An expert grader manually selected 10 minimally distorted reference frames from each 150-frame sequence for subsequent registration. Cone mosaic geometry was measured in all registered images (n = 600) using multiple metrics, and the repeatability of these metrics was used to assess the impact of the distortions from each reference frame. In nine additional subjects, we compared AOSLO-derived measurements to those from adaptive optics (AO) fundus images, which do not contain system-imposed intraframe distortions. Results: We observed substantial variation across subjects in the repeatability of density (1.2%–8.7%), inter-cell distance (0.8%–4.6%), percentage of six-sided Voronoi cells (0.8%–10.6%), and Voronoi cell area regularity (VCAR) (1.2%–13.2%). With the exception of VCAR, the average values of the metrics extracted from AOSLO images were not significantly different from those derived from AO fundus images, though there was variability between individual images. Conclusions: Our data demonstrate that the intraframe distortion found in AOSLO images can affect the accuracy and repeatability of cone mosaic metrics. It may be possible to use multiple images from the same retinal area to approximate a "distortionless" image, though more work is needed to evaluate the feasibility of this approach. Translational Relevance: Even in subjects with good fixation, images from AOSLOs contain intraframe distortions due to eye motion during scanning. The existence of these artifacts emphasizes the need for caution when interpreting results derived from scanning instruments.
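    Two of the metrics reported above, the percentage of six-sided Voronoi cells and Voronoi cell area regularity (VCAR, taken here as mean cell area divided by its standard deviation), can be computed from cone centre coordinates as sketched below. Excluding unbounded edge cells is a common convention; the paper's exact exclusion rules and VCAR definition may differ.

        import numpy as np
        from scipy.spatial import Voronoi

        def voronoi_metrics(cone_xy):
            """Percentage of six-sided cells and VCAR for an (N, 2) array
            of cone centres, using only bounded Voronoi cells."""
            vor = Voronoi(cone_xy)
            sides, areas = [], []
            for region_idx in vor.point_region:
                region = vor.regions[region_idx]
                if len(region) == 0 or -1 in region:   # unbounded edge cell
                    continue
                px, py = vor.vertices[region, 0], vor.vertices[region, 1]
                # shoelace formula for the polygon area
                area = 0.5 * abs(px @ np.roll(py, 1) - py @ np.roll(px, 1))
                sides.append(len(region))
                areas.append(area)
            sides, areas = np.array(sides), np.array(areas)
            return 100.0 * np.mean(sides == 6), areas.mean() / areas.std()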