Gaze fixation improves the stability of expert juggling
Novice and expert jugglers employ different visuomotor strategies: whereas novices look at the balls around their zeniths, experts tend to fixate their gaze at a central location within the pattern (the so-called gaze-through strategy). A gaze-through strategy may reflect visuomotor parsimony, i.e., the use of simpler visuomotor (oculomotor and/or attentional) strategies afforded by superior tossing accuracy and error corrections. In addition, the more stable gaze during a gaze-through strategy may result in more accurate movement planning by providing a stable base for gaze-centered neural coding of ball motion and movement plans, or for shifts in attention. To determine whether a stable gaze might indeed have such beneficial effects on juggling, we examined juggling variability during 3-ball cascade juggling with and without constrained gaze fixation (at various depths) in expert performers (n = 5). Novice jugglers (n = 5) were included for comparison, even though our predictions pertained specifically to expert juggling. We indeed observed that experts, but not novices, juggled with significantly less variability when fixating compared to unconstrained viewing. Thus, while visuomotor parsimony might still contribute to the emergence of a gaze-through strategy, this study highlights an additional role for improved movement planning. This role may be engendered by gaze-centered coding and/or attentional control mechanisms in the brain.
Temporal estimation with two moving objects: overt and covert pursuit
The current study examined temporal estimation in a prediction motion task in which participants were cued to overtly pursue one of two moving objects, which could arrive either first (i.e., shortest time to contact, TTC) or second (i.e., longest TTC) after a period of occlusion. Participants were instructed to estimate the TTC of the first-arriving object only, making it necessary to overtly pursue the cued object while covertly pursuing the other (non-cued) object. A control (baseline) condition was also included in which participants estimated the TTC of a single, overtly pursued object. Results showed that participants were able to estimate the arrival order of the two objects with very high accuracy irrespective of whether they had overtly or covertly pursued the first-arriving object. However, compared to the single-object baseline, participants' temporal estimation of the covert object was impaired when it arrived 500 ms before the overtly pursued object. In terms of eye movements, participants exhibited significantly more switches in gaze location during occlusion from the cued to the non-cued object, but only when the latter arrived first. Still, comparison of trials with and without a switch in gaze location when the non-cued object arrived first indicated no advantage for temporal estimation. Taken together, our results indicate that overt pursuit is sufficient but not necessary for accurate temporal estimation. Covert pursuit can enable representation of a moving object's trajectory, and thereby accurate temporal estimation, provided the object moves close to the overt attentional focus.
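The TTC logic underlying the prediction motion task can be sketched as follows: once an object is occluded, an observer (or model) must extrapolate its remaining travel from the distance and speed at occlusion. This is a minimal illustrative sketch; the function names and numeric values are assumptions for illustration, not the study's materials or data.

```python
# Hedged sketch of first-order TTC extrapolation during occlusion.
# All names and values are illustrative, not from the study itself.

def time_to_contact(distance_at_occlusion, speed):
    """First-order TTC estimate: remaining distance / constant speed."""
    return distance_at_occlusion / speed

def arrival_order(ttc_a, ttc_b):
    """Return which of two objects arrives first given their TTC estimates."""
    return "A" if ttc_a < ttc_b else "B"

# Example: object A is 10 cm from the contact point at 20 cm/s,
# object B is 15 cm away at the same speed.
ttc_a = time_to_contact(10.0, 20.0)   # 0.5 s
ttc_b = time_to_contact(15.0, 20.0)   # 0.75 s
print(arrival_order(ttc_a, ttc_b))    # A arrives 250 ms earlier
```

A first-order model of this kind assumes constant velocity during occlusion; the study's finding that covert pursuit supports accurate estimation near the attentional focus suggests such extrapolation can run on covertly tracked trajectories as well.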
Keeping an eye on noisy movements: On different approaches to perceptual-motor skill research and training
Contemporary theorising on the complementary nature of perception and action in expert performance has led to the emergence of different emphases in studying movement coordination and gaze behaviour. On the one hand, coordination research has examined the role that variability plays in movement control, evidencing that variability facilitates individualised adaptations during both learning and performance. On the other hand, and at odds with this principle, the majority of gaze behaviour studies have tended to average data over participants and trials, proposing the importance of universal 'optimal' gaze patterns in a given task, for all performers, irrespective of stage of learning. In this article, new lines of inquiry are considered with the aim of reconciling these two distinct approaches. The role that inter- and intra-individual variability may play in gaze behaviours is considered, before directions for future research are suggested.
Gaze-grasp coordination in obstacle avoidance: differences between binocular and monocular viewing
Most adults can skillfully avoid potential obstacles when acting in everyday cluttered scenes. We examined how gaze and hand movements are normally coordinated for obstacle avoidance and whether they are altered when binocular depth information is unavailable. Visual fixations and hand movement kinematics were recorded simultaneously while 13 right-handed subjects reached to precision-grasp a cylindrical household object, presented alone or with a potential obstacle (a wine glass) located to its left (the thumb's grasp side), to its right or just behind it (both closer to the finger's grasp side), using binocular or monocular vision. Gaze and hand movement strategies differed significantly by view and obstacle location. With binocular vision, initial fixations fell near the target's centre of mass (COM) around the time of hand movement onset, but usually shifted to end just above the thumb's grasp site at initial object contact, which was mainly made by the thumb, consistent with selecting this digit to guide the grasp. This strategy was associated with faster hand movements and better end-point grip precision across all binocular trials than with monocular viewing, during which subjects usually continued to fixate the target closer to its COM despite a similar prevalence of thumb-first contacts. Subjects looked directly at the obstacle at each location on a minority of trials, and their overall fixations on the target were somewhat biased towards the grasp side nearest to it; these gaze behaviours were particularly marked on monocular-vision trials with the obstacle behind the target, which also commonly ended in finger-first contact. Subjects avoided colliding with the wine glass under both views when it was on the right (finger side) of the workspace by producing slower and straighter reaches, with this and the behind-obstacle location also resulting in 'safer' (i.e., narrower) peak grip apertures and longer deceleration times than when the goal object was alone or the obstacle was on its thumb side. However, monocular reach paths were more variable, and deceleration times were selectively prolonged on finger-side and behind-obstacle trials, with the latter condition further producing selectively increased grip closure times and corrections. Binocular vision thus provided added advantages for collision avoidance, known to require intact dorsal cortical stream processing, particularly when the target of the grasp and a potential obstacle to it were fairly closely separated in depth. Different accounts of the altered monocular gaze behaviour converged on the conclusion that additional perceptual and/or attentional resources are likely engaged when continuous binocular depth information is unavailable. Implications for people lacking binocular stereopsis are briefly considered.
Bottlenecks of motion processing during a visual glance: the leaky flask model
Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique, delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay, and we used the time-constant of the exponential decay to demarcate sensory memory from VSTM. We then decomposed performance into quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck at the stimulus-encoding stage is dominated by the selection rather than the filtering function of attention. We also found that the filtering function of attention operates mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. Supported by R01 EY018165 and P30 EY007551 from the National Institutes of Health (NIH).
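The demarcation step described above, using the time-constant of an exponential decay to separate sensory memory from VSTM, can be sketched with a simple fit. This is a hedged illustration assuming a model of the form a·exp(-t/tau) + asymptote, where the asymptote stands for the VSTM plateau; the function names and all numbers are illustrative, not the study's actual model or data.

```python
# Hedged sketch: estimating the decay time-constant of partial-report
# performance. Model assumed: score(t) = a * exp(-t / tau) + asymptote,
# where the asymptote represents the stable VSTM component.
import math

def fit_decay_tau(delays, scores, asymptote):
    """Estimate tau by log-linear least squares.

    Subtracting the assumed VSTM asymptote and taking the log makes the
    model linear in t, so the ordinary least-squares slope equals -1/tau.
    """
    xs = list(delays)
    ys = [math.log(s - asymptote) for s in scores]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope

# Synthetic data: a 70% iconic-memory component decaying with tau = 300 ms
# onto a 25% VSTM plateau (values chosen for illustration only).
tau_true = 300.0
delays = [0, 100, 200, 400, 800]                        # cue delays in ms
scores = [0.70 * math.exp(-t / tau_true) + 0.25 for t in delays]
print(round(fit_decay_tau(delays, scores, asymptote=0.25)))  # recovers ~300
```

In practice the asymptote would itself be a free parameter fit alongside a and tau (e.g., with a nonlinear least-squares routine); fixing it here keeps the sketch self-contained while showing how the fitted tau marks the hand-off from sensory memory to VSTM.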