100 research outputs found
Speed has an effect on multiple-object tracking independently of the number of close encounters between targets and distractors
Multiple-object tracking (MOT) studies have shown that tracking ability declines as object speed increases. However, this might be attributed solely to the increased number of times that target and distractor objects usually pass close to each other ("close encounters") when speed is increased, resulting in more target–distractor confusions. The present study investigates whether speed itself affects MOT ability by using displays in which the number of close encounters is held constant across speeds. Observers viewed several pairs of disks, and each pair rotated about the pair's midpoint and, also, about the center of the display at varying speeds. Results showed that even with the number of close encounters held constant across speeds, increased speed impairs tracking performance, and the effect of speed is greater when the number of targets to be tracked is large. Moreover, neither the effect of number of distractors nor the effect of target–distractor distance was dependent on speed, when speed was isolated from the typical concomitant increase in close encounters. These results imply that increased speed does not impair tracking solely by increasing close encounters. Rather, they support the view that speed affects MOT capacity by requiring more attentional resources to track at higher speeds.
Visual Learning in Multiple-Object Tracking
Tracking moving objects in space is important for the maintenance of spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested using the Multiple Object Tracking (MOT) task, where participants track a subset of moving objects with attention over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves in a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning. Participants first conducted attentive tracking on trials with repeated motion trajectories for a short session. In a transfer phase we used the same motion trajectories but changed the role of tracking targets and nontargets. We found that, compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were mixed up. However, learning was not specific to the trained temporal order, as it transferred to trials where the motion was played backwards. These findings suggest that a demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although learning is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers learned the motion path of each trajectory independently of the exact temporal order.
How voluntary actions modulate time perception
Distortions of time perception are generally explained either by variations in the rate of pacing signals of an "internal clock", or by lag-adaptation mechanisms that recalibrate the perceived time of one event relative to another. This study compares these accounts directly for one temporal illusion: the subjective compression of the interval between voluntary actions and their effects, known as "intentional binding". Participants discriminated whether two cutaneous stimuli presented after voluntary or passive movements were simultaneous or successive. In other trials, they judged the temporal interval between their movement and an ensuing tone. Temporal discrimination was impaired following voluntary movements compared to passive movements early in the action-tone interval. In a control experiment, active movements without subsequent tones produced no impairment in temporal discrimination. These results suggest that voluntary actions transiently slow down an internal clock during the action-effect interval. This in turn leads to intentional binding, and links the effects of voluntary actions to the self.
Temporal estimation with two moving objects: overt and covert pursuit
The current study examined temporal estimation in a prediction motion task where participants were cued to overtly pursue one of two moving objects, which could arrive either first (i.e., shortest time to contact, TTC) or second (i.e., longest TTC) after a period of occlusion. Participants were instructed to estimate the TTC of the first-arriving object only, thus making it necessary to overtly pursue the cued object while at the same time covertly pursuing the other (non-cued) object. A control (baseline) condition was also included in which participants had to estimate the TTC of a single, overtly pursued object. Results showed that participants were able to estimate the arrival order of the two objects with very high accuracy irrespective of whether they had overtly or covertly pursued the first-arriving object. However, compared to the single-object baseline, participants' temporal estimation of the covert object was impaired when it arrived 500 ms before the overtly pursued object. In terms of eye movements, participants exhibited significantly more switches in gaze location during occlusion from the cued to the non-cued object, but only when the latter arrived first. Still, comparison of trials with and without a switch in gaze location when the non-cued object arrived first indicated no advantage for temporal estimation. Taken together, our results indicate that overt pursuit is sufficient but not necessary for accurate temporal estimation. Covert pursuit can enable representation of a moving object's trajectory and thereby accurate temporal estimation, provided the object moves close to the overt attentional focus.
The influence of visual flow and perceptual load on locomotion speed
Visual flow is used to perceive and regulate movement speed during locomotion. We assessed the extent to which variation in flow from the ground plane, arising from static visual textures, influences locomotion speed under conditions of concurrent perceptual load. In two experiments, participants walked over a 12-m projected walkway that consisted of stripes oriented orthogonal to the walking direction. In the critical conditions, the frequency of the stripes increased or decreased. We observed small but consistent effects on walking speed: participants walked more slowly when the frequency increased than when it decreased. This basic effect suggests that participants interpreted the change in visual flow in these conditions as at least partly due to a change in their own movement speed, and counteracted such a change by speeding up or slowing down. Critically, these effects were magnified under conditions of low perceptual load and a locus of attention near the ground plane. Our findings suggest that the contribution of vision to the control of ongoing locomotion is relatively fluid and dependent on ongoing perceptual (and perhaps more generally cognitive) task demands.
Gaze-grasp coordination in obstacle avoidance: differences between binocular and monocular viewing
Most adults can skillfully avoid potential obstacles when acting in everyday cluttered scenes. We examined how gaze and hand movements are normally coordinated for obstacle avoidance and whether these are altered when binocular depth information is unavailable. Visual fixations and hand movement kinematics were simultaneously recorded, while 13 right-handed subjects reached-to-precision grasp a cylindrical household object presented alone or with a potential obstacle (wine glass) located to its left (thumb's grasp side), right or just behind it (both closer to the finger's grasp side) using binocular or monocular vision. Gaze and hand movement strategies differed significantly by view and obstacle location. With binocular vision, initial fixations were near the target's centre of mass (COM) around the time of hand movement onset, but usually shifted to end just above the thumb's grasp site at initial object contact, this mainly being made by the thumb, consistent with selecting this digit for guiding the grasp. This strategy was associated with faster binocular hand movements and improved end-point grip precision across all trials than with monocular viewing, during which subjects usually continued to fixate the target closer to its COM despite a similar prevalence of thumb-first contacts. While subjects looked directly at the obstacle at each location on a minority of trials and their overall fixations on the target were somewhat biased towards the grasp side nearest to it, these gaze behaviours were particularly marked on monocular vision-obstacle behind trials, which also commonly ended in finger-first contact. Subjects avoided colliding with the wine glass under both views when it was on the right (finger side) of the workspace by producing slower and straighter reaches, with this and the behind obstacle location also resulting in 'safer' (i.e. narrower) peak grip apertures and longer deceleration times than when the goal object was alone or the obstacle was on its thumb side. But monocular reach paths were more variable and deceleration times were selectively prolonged on finger-side and behind obstacle trials, with this latter condition further resulting in selectively increased grip closure times and corrections. Binocular vision thus provided added advantages for collision avoidance, known to require intact dorsal cortical stream processing mechanisms, particularly when the target of the grasp and the potential obstacle to it were fairly closely separated in depth. Different accounts of the altered monocular gaze behaviour converged on the conclusion that additional perceptual and/or attentional resources are likely engaged compared to when continuous binocular depth information is available. Implications for people lacking binocular stereopsis are briefly considered.
Shared attention for action selection and action monitoring in goal-directed reaching
Dual-task studies have shown higher sensitivity for stimuli presented at the targets of upcoming actions. We examined whether attention is directed to action targets for the purpose of action selection, or if attention is directed to these locations because they are expected to provide feedback about movement outcomes. In our experiment, endpoint accuracy feedback was spatially separated from the action targets to determine whether attention would be allocated to (a) the action targets, (b) the expected source of feedback, or (c) both locations. Participants reached towards a location indicated by an arrow while identifying a discrimination target that could appear in any one of eight possible locations. Discrimination target accuracy was used as a measure of attention allocation. Participants were unable to see their hand during reaching and were provided with a small monetary reward for each accurate movement. Discrimination target accuracy was best at action targets but was also enhanced at the spatially separated feedback locations. Separating feedback from the reaching targets did not diminish discrimination accuracy at the movement targets, but did result in delayed movement initiation and reduced reaching accuracy relative to when feedback was presented at the reaching target. The results suggest that attention is required both for action planning and for monitoring movement outcomes. Dividing attention between these functions negatively impacts action performance.
Distract yourself: prediction of salient distractors by own actions and external cues.
Distracting sensory events can capture attention, interfering with the performance of the task at hand. We asked: is our attention captured by such events if we cause them ourselves? To examine this, we employed a visual search task with an additional salient singleton distractor, where the distractor was predictable either by the participant's own (motor) action or by an endogenous cue; accordingly, the task was designed to isolate the influence of motor and non-motor predictive processes. We found both types of prediction, cue- and action-based, to attenuate the interference of the distractor, which is at odds with the "attentional white bear" hypothesis, which states that prediction of distracting stimuli mandatorily directs attention towards them. Further, there was no difference between the two types of prediction. We suggest this pattern of results may be better explained by theories postulating general predictive mechanisms, such as the framework of predictive processing, than by accounts proposing a special role for action-effect prediction, such as theories based on optimal motor control. However, rather than permitting a definitive decision between competing theories, our study highlights a number of open questions, to be answered by these theories, with regard to how exogenous attention is influenced by predictions deriving from the environment versus our own actions.