
    Incorporating Prediction in Models for Two-Dimensional Smooth Pursuit

    A predictive component can contribute to the command signal for smooth pursuit. This is readily demonstrated by the fact that low-frequency sinusoidal target motion can be tracked with zero time delay or even with a small lead. The objective of this study was to characterize the predictive contributions to pursuit tracking more precisely by developing analytical models for predictive smooth pursuit. Subjects tracked a small target moving in two dimensions. In the simplest case, the periodic target motion was composed of the sums of two sinusoidal motions (SS) along both the horizontal and the vertical axes. Motions following the same or similar paths, but having a richer spectral composition, were produced by having the target follow the same path but at a constant speed (CS), and by combining the horizontal SS velocity with the vertical CS velocity and vice versa. Several different quantitative models were evaluated. The predictive contribution to the eye tracking command signal could be modeled as a low-pass filtered target acceleration signal with a time delay. This predictive signal, when combined with retinal image velocity at the same time delay, as in classical models for the initiation of pursuit, gave a good fit to the data. The weighting of the predictive acceleration component differed across experimental conditions, being largest when target motion was simplest, following the SS velocity profiles.
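
    A minimal sketch of this model structure, assuming simplified first-order filtering and illustrative parameter values rather than the fitted ones, is given below in Python. Retinal image velocity is approximated here by target velocity (an open-loop simplification), and the function and parameter names are hypothetical.

        import numpy as np

        def predictive_pursuit_command(target_pos, dt, tau=0.1, fc=2.0,
                                       w_vel=0.9, w_acc=0.05):
            """Pursuit command = delayed retinal-slip velocity plus a delayed,
            low-pass filtered target-acceleration term (illustrative sketch)."""
            vel = np.gradient(target_pos, dt)   # target velocity (stands in for retinal slip)
            acc = np.gradient(vel, dt)          # target acceleration
            # first-order low-pass filter of acceleration with cutoff fc (Hz)
            alpha = dt / (dt + 1.0 / (2.0 * np.pi * fc))
            acc_lp = np.zeros_like(acc)
            for i in range(1, len(acc)):
                acc_lp[i] = acc_lp[i - 1] + alpha * (acc[i] - acc_lp[i - 1])
            # common processing delay tau applied to both signals
            d = int(round(tau / dt))
            def delayed(x):
                return np.concatenate([np.zeros(d), x[:-d]]) if d > 0 else x
            return w_vel * delayed(vel) + w_acc * delayed(acc_lp)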

    Extraction of visual motion information for the control of eye and head movement during head-free pursuit

    We investigated how effectively briefly presented visual motion could be assimilated and used to track future target motion with head and eyes during target disappearance. Without vision, continuation of eye and head movement is controlled by internal (extra-retinal) mechanisms, but head movement stimulates compensatory vestibulo-ocular reflex (VOR) responses that must be countermanded for gaze to remain in the direction of target motion. We used target exposures of 50–200 ms at the start of randomised step-ramp stimuli, followed by >400 ms of target disappearance, to investigate the ability to sample target velocity and subsequently generate internally controlled responses. Subjects could appropriately grade gaze velocity to different target velocities without visual feedback, but responses were fully developed only when exposure was >100 ms. Gaze velocities were sustained or even increased during target disappearance, especially when there was expectation of target reappearance, but they were always less than for controls, where the target was continuously visible. Gaze velocity remained in the direction of target motion throughout target extinction, implying that compensatory (VOR) responses were suppressed by internal drive mechanisms. Regression analysis revealed that the underlying compensatory response remained active, but with gain slightly less than unity (0.85), resulting in head-free gaze responses that were very similar to, but slightly greater than, head-fixed responses. The sampled velocity information was also used to grade head velocity, but in contrast to gaze, head velocity was similar whether the target was briefly or continuously presented, suggesting that head motion was controlled by internal mechanisms alone, without direct influence of visual feedback.
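
    One plausible way such a compensatory gain slightly below unity could be estimated is sketched below: during target disappearance, gaze velocity is regressed on head velocity and the gain is read off the slope. The regression form, the assumption that internal drive is uncorrelated with head velocity, and the function name are illustrative, not the authors' exact analysis.

        import numpy as np

        def estimate_vor_gain(gaze_vel, head_vel):
            """If gaze = internal_drive + (1 - g) * head, and internal drive is
            (assumed) uncorrelated with head velocity, the slope of gaze velocity
            regressed on head velocity estimates (1 - g). Illustrative sketch only."""
            slope, intercept = np.polyfit(head_vel, gaze_vel, 1)
            return 1.0 - slope, intercept   # (estimated VOR gain, mean internal drive)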

    Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets

    Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information-gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades when planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied depending on the spacing between tokens (in the first experiment) or the size of the tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively.
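
    The contrast between strategies can be illustrated with a toy, one-step sketch: an "ideal" rule that picks the fixation maximizing the expected post-saccadic probability of identifying the target, versus a simpler rule that fixates the currently most probable location. The eccentricity-dependent sensitivity function, the psychometric link, and all names below are assumptions for illustration, not the study's model.

        import numpy as np

        def choose_fixation_ideal(post, locs, dprime):
            """Pick the next fixation maximizing the expected probability of a
            correct identification afterwards (one-step toy version)."""
            def p_correct_after(fix):
                ecc = np.linalg.norm(locs - fix, axis=1)    # eccentricity of each token
                p_id = 1.0 / (1.0 + np.exp(-dprime(ecc)))   # assumed psychometric link
                return np.sum(post * p_id)                  # expectation over target location
            scores = [p_correct_after(f) for f in locs]
            return locs[int(np.argmax(scores))]

        def choose_fixation_map(post, locs):
            """Contrast: simply fixate the currently most probable location."""
            return locs[int(np.argmax(post))]

        # Toy usage with three tokens and an assumed sensitivity falloff
        locs = np.array([[0.0, 0.0], [4.0, 0.0], [8.0, 0.0]])
        post = np.array([0.2, 0.3, 0.5])
        dprime = lambda ecc: 3.0 / (1.0 + ecc)
        print(choose_fixation_ideal(post, locs, dprime), choose_fixation_map(post, locs))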

    Covert Tracking: A Combined ERP and Fixational Eye Movement Study

    Attention can be directed to particular spatial locations, or to objects that appear at anticipated points in time. While most work has focused on spatial or temporal attention in isolation, we investigated covert tracking of smoothly moving objects, which requires continuous coordination of both. We tested two propositions about the neural and cognitive basis of this operation: first, that covert tracking is a right hemisphere function, and second, that pre-motor components of the oculomotor system are responsible for driving covert spatial attention during tracking. We simultaneously recorded event-related potentials (ERPs) and eye position while participants covertly tracked dots that moved leftward or rightward at 12 or 20°/s. ERPs were sensitive to the direction of target motion. Topographic development for leftward motion was a mirror image of that for rightward motion, suggesting that both hemispheres contribute equally to covert tracking. Small shifts in eye position were also lateralized according to the direction of target motion, implying covert activation of the oculomotor system. The data address two outstanding questions about the nature of visuospatial tracking. First, covert tracking relies upon a symmetrical frontoparietal attentional system, rather than being right-lateralized. Second, this same system controls both pursuit eye movements and covert tracking.

    Does oculomotor inhibition of return influence fixation probability during scene search?

    Oculomotor inhibition of return (IOR) is believed to facilitate scene scanning by decreasing the probability that gaze will return to a previously fixated location. This “foraging” hypothesis was tested during scene search and in response to sudden-onset probes at the immediately previous (one-back) fixation location. The latencies of saccades landing within 1° of the previous fixation location were elevated, consistent with oculomotor IOR. However, there was no decrease in the likelihood that the previous location would be fixated relative to distance-matched controls or an a priori baseline. Saccades exhibit an overall forward bias, but this is due to a general bias to move in the same direction and for the same distance as the last saccade (saccadic momentum) rather than to a spatially specific tendency to avoid previously fixated locations. We find no evidence that oculomotor IOR has a significant impact on return probability during scene search.

    Dissociable Modulation of Overt Visual Attention in Valence and Arousal Revealed by Topology of Scan Path

    Emotional stimuli have evolutionary significance for the survival of organisms; therefore, they are attention-grabbing and are processed preferentially. The neural underpinnings of two principal emotional dimensions in affective space, valence (degree of pleasantness) and arousal (intensity of evoked emotion), have been shown to be dissociable in the olfactory, gustatory and memory systems. However, the separable roles of valence and arousal in scene perception are poorly understood. In this study, we asked how these two emotional dimensions modulate overt visual attention. Twenty-two healthy volunteers freely viewed images from the International Affective Picture System (IAPS) that were graded for affective levels of valence and arousal (high, medium, and low). Subjects' heads were immobilized and eye movements were recorded by camera to track overt shifts of visual attention. Algebraic graph-based approaches were introduced to model scan paths as weighted undirected path graphs, generating global topology metrics that characterize the algebraic connectivity of scan paths. Our data suggest that human subjects show different scanning patterns to stimuli with different affective ratings. Valence-salient stimuli (with neutral arousal) elicited faster and larger shifts of attention, while arousal-salient stimuli (with neutral valence) elicited local scanning, dense attention allocation and deep processing. Furthermore, our model revealed that the modulatory effect of valence was linearly related to the valence level, whereas the relation between the modulatory effect and the level of arousal was nonlinear. Hence, visual attention seems to be modulated by mechanisms that are separate for valence and arousal.
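
    As an illustration of the graph-based measure, the sketch below treats a sequence of fixations as a weighted undirected path graph and computes its algebraic connectivity (the second-smallest eigenvalue of the graph Laplacian). The inverse-distance edge weighting and the toy scan paths are assumptions, not necessarily the paper's exact construction.

        import numpy as np

        def scanpath_algebraic_connectivity(fixations):
            """Build a weighted path graph over successive fixations and return its
            algebraic connectivity (lambda_2 of the graph Laplacian)."""
            fixations = np.asarray(fixations, dtype=float)
            n = len(fixations)
            W = np.zeros((n, n))
            for i in range(n - 1):
                d = np.linalg.norm(fixations[i + 1] - fixations[i])
                w = 1.0 / (d + 1e-9)             # closer consecutive fixations -> stronger edge
                W[i, i + 1] = W[i + 1, i] = w
            L = np.diag(W.sum(axis=1)) - W       # graph Laplacian
            return np.linalg.eigvalsh(L)[1]      # second-smallest eigenvalue

        # A tight, local scan path yields higher connectivity than a sprawling one
        local = [(0, 0), (1, 0), (1, 1), (0, 1)]
        sprawl = [(0, 0), (10, 0), (10, 10), (0, 10)]
        print(scanpath_algebraic_connectivity(local), scanpath_algebraic_connectivity(sprawl))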

    Listening to music reduces eye movements

    Listening to music can change the way that people visually experience the environment, probably as a result of an inwardly directed shift of attention. We investigated whether this attentional shift can be demonstrated by reduced eye movement activity, and if so, whether that reduction depends on absorption. Participants listened to their preferred music, to unknown neutral music, or to no music while viewing a visual stimulus (a picture or a film clip). Preference and absorption were significantly higher for the preferred music than for the unknown music. Participants exhibited longer fixations, fewer saccades, and more blinks when they listened to music than when they sat in silence. However, no differences emerged between the preferred music condition and the neutral music condition. Thus, music significantly reduces eye movement activity, but an attentional shift from the outer to the inner world (i.e., to the emotions and memories evoked by the music) emerged as only one potential explanation. Other explanations, such as a shift of attention from visual to auditory input, are discussed.