
    Illusory Motion Reveals Velocity Matching, Not Foveation, Drives Smooth Pursuit of Large Objects

    When small objects move in a scene, we keep them foveated with smooth pursuit eye movements. Although large objects such as people and animals are common, it is nonetheless unknown how we pursue them, since they cannot be entirely foveated. It might be that the brain calculates an object's centroid and centers the eyes on it during pursuit, as a foveation mechanism would. Alternatively, the brain might merely match the object's velocity via motion integration. We tested these alternatives with an illusory motion stimulus that translates at a speed different from its retinal motion. The stimulus was a Gabor array that translated at a fixed velocity, with component Gabors that drifted with motion consistent or inconsistent with the translation. Velocity matching predicts different pursuit behaviors across drift conditions, while centroid matching predicts no difference. We also tested whether pursuit can segregate and ignore irrelevant local drifts when motion and centroid information are consistent, by surrounding the Gabors with solid frames. Finally, observers judged the global translational speed of the Gabors to determine whether smooth pursuit and motion perception share mechanisms. We found that consistent Gabor motion enhanced pursuit gain while inconsistent, opposite motion diminished it, drawing the eyes away from the center of the stimulus and supporting a motion-based pursuit drive. Catch-up saccades tended to counter the position offset, directing the eyes opposite to the deviation caused by the pursuit gain change. Surrounding the Gabors with visible frames canceled both the gain increase and the compensatory saccades. Perceived speed was modulated analogously to pursuit gain. The results suggest that smooth pursuit of large stimuli depends on the magnitude of integrated retinal motion information, not its retinal location, and that the position system might be unnecessary for generating smooth velocity to large pursuit targets.
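    The core contrast can be stated in a few lines. The sketch below is a rough illustration, not the authors' analysis: the translation speed, drift speeds, and the weighted-sum integration rule are all assumed. It simply shows how the two hypotheses make different pursuit-gain predictions when the local Gabor drift is consistent with or opposite to the envelope translation.

```python
# A rough illustration of the two hypotheses, not the authors' analysis.
# Translation speed, drift speeds, and the weighted-sum rule are all assumed.

def velocity_matching_prediction(translation, carrier_drift, weight=0.3):
    """If pursuit is driven by integrated retinal motion, local Gabor drift biases
    the drive; drift > 0 is consistent with the translation, drift < 0 is opposite."""
    integrated = translation + weight * carrier_drift
    return integrated / translation              # predicted pursuit gain

def centroid_matching_prediction(translation, carrier_drift):
    """If pursuit centers the eyes on the stimulus centroid, local drift is irrelevant."""
    return 1.0                                   # predicted pursuit gain is unchanged

translation = 10.0                               # deg/s envelope translation (assumed)
for drift in (+5.0, 0.0, -5.0):                  # consistent, static, and opposite carriers
    print(f"drift {drift:+.0f} deg/s: "
          f"velocity matching -> {velocity_matching_prediction(translation, drift):.2f}, "
          f"centroid matching -> {centroid_matching_prediction(translation, drift):.2f}")
```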

    A Subconscious Interaction Between Fixation and Anticipatory Pursuit

    Ocular smooth pursuit and fixation are typically viewed as separate systems, yet there is evidence that the brainstem fixation system inhibits pursuit. Here we present behavioral evidence that the fixation system modulates pursuit behavior outside of conscious awareness. Human observers (male and female) either pursued a small spot that translated across a screen, or fixated it as it remained stationary. As shown previously, pursuit trials potentiated the oculomotor system, producing anticipatory eye velocity on the next trial, before the target moved, that mimicked the stimulus-driven velocity. Randomly interleaving fixation trials reduced anticipatory pursuit, suggesting that a potentiated fixation system interacted with pursuit to suppress eye velocity on upcoming pursuit trials. The reduction was not due to passive decay of the potentiated pursuit signal, because interleaving “blank” trials in which no target appeared did not reduce anticipatory pursuit. Interspersed short fixation trials reduced anticipation on long pursuit trials, suggesting that fixation potentiation was stronger than pursuit potentiation. Furthermore, adding more pursuit trials to a block did not restore anticipatory pursuit, suggesting that fixation potentiation was not overridden by certainty of an imminent pursuit trial but rather was immune to conscious intervention. To directly test whether cognition can override fixation suppression, we alternated pursuit and fixation trials to perfectly specify trial identity. Still, anticipatory pursuit did not rise above that observed with an equal number of random fixation trials. The results suggest that potentiated fixation circuitry interacts with pursuit circuitry at a subconscious level to inhibit pursuit. SIGNIFICANCE STATEMENT: When an object moves, we view it with smooth pursuit eye movements. When an object is stationary, we view it with fixational eye movements. Pursuit and fixation have historically been regarded as controlled by different neural circuitry, and alternating between them is thought to be guided by a conscious decision. However, our results show that pursuit is actively suppressed by prior fixation of a stationary object. This suppression is involuntary and cannot be avoided even when observers are certain that the object will move. The results suggest that the neural fixation circuitry is potentiated by engaging stationary objects and interacts with pursuit outside of conscious awareness.
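    A toy trial-history simulation helps make the logic concrete. Everything below, the potentiation increment, the stronger fixation suppression, and the small passive decay on blank trials, is an illustrative assumption rather than the authors' model; it only shows why interleaved fixation trials, but not blank trials, would reduce anticipatory pursuit velocity.

```python
# A toy trial-history model, not the authors' model. The increments, the stronger
# fixation suppression, and the small decay on blank trials are illustrative assumptions.

def simulate_anticipation(trials, pursuit_step=0.2, fixation_suppression=0.4, blank_decay=0.05):
    """trials: list of 'pursuit', 'fixation', or 'blank'.
    Returns the anticipatory eye velocity (arbitrary units) before each trial."""
    potentiation = 0.0
    anticipation = []
    for trial in trials:
        anticipation.append(round(max(potentiation, 0.0), 2))
        if trial == 'pursuit':
            potentiation += pursuit_step           # pursuit trials potentiate pursuit
        elif trial == 'fixation':
            potentiation -= fixation_suppression   # fixation trials suppress more strongly
        else:                                      # blank trials: only passive decay
            potentiation -= blank_decay
    return anticipation

print(simulate_anticipation(['pursuit'] * 6))                 # anticipation builds
print(simulate_anticipation(['pursuit', 'fixation'] * 3))     # interleaved fixation suppresses it
print(simulate_anticipation(['pursuit', 'blank'] * 3))        # blanks barely reduce it
```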

    Motion Integration for Ocular Pursuit Does Not Hinder Perceptual Segregation of Moving Objects

    When confronted with a complex moving stimulus, the brain can integrate local element velocities to obtain a single motion signal, or segregate the elements to maintain awareness of their identities. The integrated motion signal can drive smooth-pursuit eye movements (Heinen and Watamaniuk, 1998), whereas the segregated signal guides attentive tracking of individual elements in multiple-object tracking tasks (MOT; Pylyshyn and Storm, 1988). It is evident that these processes can occur simultaneously, because we can effortlessly pursue ambulating creatures while inspecting disjoint moving features, such as arms and legs, but the underlying mechanism is unknown. Here, we provide evidence that separate neural circuits perform the mathematically opposed operations of integration and segregation, by demonstrating with a dual-task paradigm that the two processes do not share attentional resources. Human observers attentively tracked a subset of target elements composing a small MOT stimulus, while pursuing it ocularly as it translated across a computer display. Integration of the multidot stimulus yielded optimal pursuit. Importantly, performing MOT while pursuing the stimulus did not degrade performance on either task compared with when each was performed alone, indicating that they did not share attention. A control experiment showed that pursuit was not driven by integration of only the nontargets, leaving the MOT targets free for segregation. Nor was a predictive strategy used to pursue the stimulus, because sudden changes in its global velocity were accurately followed. The results suggest that separate neural mechanisms can simultaneously segregate and integrate the same motion signals.
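    The distinction between the two operations can be expressed compactly. The sketch below assumes the common vector-average rule for motion integration and uses made-up element velocities; it illustrates the integrate-versus-segregate contrast, not the study's stimulus or analysis.

```python
# A minimal sketch assuming the common vector-average rule for integration;
# the element velocities are made up, not the study's stimulus parameters.
import numpy as np

element_velocities = np.array([       # (vx, vy) of each dot in deg/s
    [10.0,  2.0],
    [10.0, -2.0],
    [10.0,  0.5],
    [10.0, -0.5],
])

# Integration: a single global motion signal that could drive smooth pursuit.
pursuit_drive = element_velocities.mean(axis=0)

# Segregation: each element keeps its own identity for attentive (MOT) tracking.
tracked_targets = {i: v.tolist() for i, v in enumerate(element_velocities[:2])}

print("integrated pursuit drive (deg/s):", pursuit_drive)
print("segregated MOT targets:", tracked_targets)
```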

    The Use of the Four Square Step Test and the Y Balance Test to Assess Balance in Typical Children Ages 6-10 Years

    The Bruininks-Oseretsky Test of Motor Proficiency, 2nd edition (BOT-2) is a widely used standardized tool to assess gross motor function, including balance, in children ages 4-21. The Four Square Step Test (FSST) was developed as a reliable tool to assess fall risk in the geriatric population; however, there is limited research on its use in the pediatric population. The Y-Balance Test (YBT) was developed to detect functional deficits in the athletic population; it is unknown whether it is a reliable test in the pediatric population.
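    For reference, the sketch below applies the conventional scoring for the two tests: YBT reach distances normalized to limb length, and the FSST scored as the time to complete the stepping sequence. The formulas follow standard clinical convention, and the example values are hypothetical, not data from this study.

```python
# Conventional scoring for the two tests; the example values are hypothetical,
# not data from this study.

def ybt_composite(anterior_cm, posteromedial_cm, posterolateral_cm, limb_length_cm):
    """Y Balance Test composite: summed reach distances normalized to limb length (%)."""
    return (anterior_cm + posteromedial_cm + posterolateral_cm) / (3 * limb_length_cm) * 100

def fsst_score(seconds):
    """Four Square Step Test: the score is the time taken to complete the stepping sequence."""
    return seconds

print(f"YBT composite: {ybt_composite(62, 75, 70, 78):.1f}%")   # hypothetical child
print(f"FSST: {fsst_score(8.4):.1f} s")
```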

    A Covered Eye Fails To Follow an Object Moving in Depth

    To clearly view approaching objects, the eyes rotate inward (vergence) and the intraocular lenses focus (accommodation). Current ocular control models assume both eyes are driven by unitary vergence and unitary accommodation commands that causally interact. The models typically describe discrete gaze shifts to non-accommodative targets performed under laboratory conditions. We probed these unitary signals using a physical stimulus moving in depth on the midline while recording vergence and accommodation simultaneously from both eyes in normal observers. Under monocular viewing, retinal disparity is removed, leaving only monocular cues for interpreting the object's motion in depth. The viewing eye always followed the target's motion. However, the occluded eye did not follow the target and, surprisingly, rotated out of phase with it. In contrast, accommodation in both eyes was synchronized with the target under monocular viewing. The results challenge existing theories of a unitary vergence command and of a causal accommodation-vergence linkage.
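    The stimulus-side demands in this kind of experiment follow simple geometry. The sketch below is an illustration only; the interpupillary distance is an assumed typical value, not a measurement from the study. It computes the vergence angle and accommodative demand for a target on the midline at several viewing distances.

```python
# Geometric vergence and accommodation demands for a midline target moving in depth.
# The interpupillary distance is an assumed typical value, not a measurement from this study.
import math

def vergence_demand_deg(distance_m, ipd_m=0.063):
    """Angle between the two lines of sight for a target on the midline."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

def accommodation_demand_D(distance_m):
    """Accommodative demand in diopters is the reciprocal of the viewing distance."""
    return 1.0 / distance_m

for d in (2.0, 1.0, 0.5, 0.25):    # target approaching along the midline
    print(f"{d:.2f} m: vergence {vergence_demand_deg(d):5.2f} deg, "
          f"accommodation {accommodation_demand_D(d):.2f} D")
```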

    Optometric Measurements Predict Performance but Not Comfort on a Virtual Object Placement Task With a Stereoscopic 3D Display

    Twelve participants were tested on a simple virtual object precision placement task while viewing a stereoscopic 3D (S3D) display. Inclusion criteria included uncorrected or best-corrected vision of 20/20 or better in each eye and stereopsis of at least 40 arc sec on the Titmus stereo test. Additionally, binocular function was assessed, including measurements of distant and near phoria (horizontal and vertical) and distant and near horizontal fusion ranges, using standard optometric clinical techniques. Before each of six 30-minute experimental sessions, measurements of phoria and fusion ranges were repeated using a Keystone View Telebinocular and an S3D display, respectively. All participants completed experimental sessions in which the task required the precision placement of a virtual object in depth at the same location as a target object. Subjective discomfort was assessed using the Simulator Sickness Questionnaire (SSQ). Individual placement accuracy in S3D trials was significantly correlated with several of the binocular screening outcomes: viewers with larger convergent fusion ranges (measured at near distance), larger total fusion ranges (convergent plus divergent ranges, measured at near distance), and/or lower (better) stereoscopic acuity thresholds were more accurate on the placement task. No screening measures were predictive of subjective discomfort, perhaps due to the low levels of discomfort induced.
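    The analysis reported here is essentially a per-participant correlation between a screening measure and task accuracy. The sketch below uses entirely hypothetical numbers, not the study's data, just to show the pattern: near convergent fusion range against mean placement error, with a Pearson correlation.

```python
# Analysis pattern only: hypothetical per-participant values, not the study's data.
import numpy as np
from scipy import stats

near_convergent_fusion_pd = np.array([18, 22, 25, 30, 14, 27, 20, 32, 16, 24, 28, 21])  # prism diopters
placement_error_mm = np.array([9.5, 8.1, 7.2, 5.9, 11.0, 6.8, 8.6, 5.1, 10.2, 7.5, 6.3, 8.9])

r, p = stats.pearsonr(near_convergent_fusion_pd, placement_error_mm)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # larger fusion range, smaller placement error
```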

    Integration across time determines path deviation discrimination for moving objects.

    Background: Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task that allows us either to intercept moving objects or to avoid them if they pose a threat. Methodology/Principal Findings: Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of the change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of the motion path, and object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a 'scale-invariant' model in which any two stimuli that possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size, or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance: Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects. Funding: Wellcome Trust, Leverhulme Trust, NI
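    The contrast between the two accounts is easy to state computationally. The sketch below is an illustration only; the functional forms and constants are assumptions, not the fitted models from the paper. It shows how a scale-invariant rule ties the static threshold to the length/width geometry, whereas the moving-object threshold depends only on path duration.

```python
# Illustrative functional forms and constants; not the fitted models from the paper.

def static_threshold(path_length_deg, line_width_deg, k=0.5):
    """Scale-invariant account: performance depends only on the 2-D geometry (length/width)."""
    return k * (line_width_deg / path_length_deg)     # deviation threshold, arbitrary units

def moving_threshold(path_duration_s, k=0.8):
    """Moving-object account: the threshold depends simply on the duration of the motion path."""
    return k / path_duration_s

# Two motion paths with very different geometry but the same 1 s duration are predicted
# to yield the same threshold by the duration account, but not by the scale-invariant one.
print(static_threshold(path_length_deg=4, line_width_deg=0.5),
      static_threshold(path_length_deg=16, line_width_deg=0.5))
print(moving_threshold(1.0), moving_threshold(1.0))
```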