
    Visual Learning in Multiple-Object Tracking

    Tracking moving objects in space is important for maintaining spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested with the Multiple Object Tracking (MOT) task, in which participants attentively track a subset of moving objects over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves within a short training session with repeated trajectories has rarely been investigated. In this study we examined the role of visual learning in multiple-object tracking and characterized how varieties of attention interact with visual learning. Participants first performed attentive tracking on trials with repeated motion trajectories for a short session. In a transfer phase we used the same motion trajectories but changed the roles of tracking targets and nontargets. We found that, compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were mixed up. However, learning was not specific to the trained temporal order, as it transferred to trials where the motion was played backwards. These findings suggest that the demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although it is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers learned the motion path of each trajectory independently of the exact temporal order.

    Speed has an effect on multiple-object tracking independently of the number of close encounters between targets and distractors

    Multiple-object tracking (MOT) studies have shown that tracking ability declines as object speed increases. However, this might be attributed solely to the increased number of times that target and distractor objects usually pass close to each other (“close encounters”) when speed is increased, resulting in more target–distractor confusions. The present study investigates whether speed itself affects MOT ability by using displays in which the number of close encounters is held constant across speeds. Observers viewed several pairs of disks, each pair rotating both about its own midpoint and about the center of the display at varying speeds. Results showed that even with the number of close encounters held constant across speeds, increased speed impairs tracking performance, and the effect of speed is greater when the number of targets to be tracked is large. Moreover, neither the effect of the number of distractors nor the effect of target–distractor distance was dependent on speed, once speed was isolated from the typical concomitant increase in close encounters. These results imply that increased speed does not impair tracking solely by increasing close encounters. Rather, they support the view that speed affects MOT capacity by requiring more attentional resources to track at higher speeds.

    How voluntary actions modulate time perception

    Distortions of time perception are generally explained either by variations in the rate of pacing signals of an “internal clock”, or by lag-adaptation mechanisms that recalibrate the perceived time of one event relative to another. This study compares these accounts directly for one temporal illusion: the subjective compression of the interval between voluntary actions and their effects, known as ‘intentional binding’. Participants discriminated whether two cutaneous stimuli presented after voluntary or passive movements were simultaneous or successive. In other trials, they judged the temporal interval between their movement and an ensuing tone. Temporal discrimination was impaired following voluntary movements compared to passive movements early in the action–tone interval. In a control experiment, active movements without subsequent tones produced no impairment in temporal discrimination. These results suggest that voluntary actions transiently slow down an internal clock during the action–effect interval. This in turn leads to intentional binding, and links the effects of voluntary actions to the self.

    Temporal estimation with two moving objects: overt and covert pursuit

    The current study examined temporal estimation in a prediction motion task in which participants were cued to overtly pursue one of two moving objects, which could arrive either first (i.e., shortest time to contact [TTC]) or second (i.e., longest TTC) after a period of occlusion. Participants were instructed to estimate the TTC of the first-arriving object only, thus making it necessary to overtly pursue the cued object while covertly pursuing the other (non-cued) object. A control (baseline) condition was also included in which participants had to estimate the TTC of a single, overtly pursued object. Results showed that participants were able to estimate the arrival order of the two objects with very high accuracy, irrespective of whether they had overtly or covertly pursued the first-arriving object. However, compared to the single-object baseline, participants’ temporal estimation of the covert object was impaired when it arrived 500 ms before the overtly pursued object. In terms of eye movements, participants exhibited significantly more switches in gaze location during occlusion from the cued to the non-cued object, but only when the latter arrived first. Still, comparison of trials with and without a switch in gaze location when the non-cued object arrived first indicated no advantage for temporal estimation. Taken together, our results indicate that overt pursuit is sufficient but not necessary for accurate temporal estimation. Covert pursuit can enable representation of a moving object’s trajectory, and thereby accurate temporal estimation, provided the object moves close to the overt attentional focus.

    Shared attention for action selection and action monitoring in goal-directed reaching

    Dual-task studies have shown higher sensitivity for stimuli presented at the targets of upcoming actions. We examined whether attention is directed to action targets for the purpose of action selection, or whether attention is directed to these locations because they are expected to provide feedback about movement outcomes. In our experiment, endpoint accuracy feedback was spatially separated from the action targets to determine whether attention would be allocated to (a) the action targets, (b) the expected source of feedback, or (c) both locations. Participants reached towards a location indicated by an arrow while identifying a discrimination target that could appear in any one of eight possible locations. Discrimination target accuracy was used as a measure of attention allocation. Participants were unable to see their hand during reaching and were provided with a small monetary reward for each accurate movement. Discrimination target accuracy was best at action targets but was also enhanced at the spatially separated feedback locations. Separating feedback from the reaching targets did not diminish discrimination accuracy at the movement targets, but it did result in delayed movement initiation and reduced reaching accuracy, relative to when feedback was presented at the reaching target. The results suggest that attention is required both for action planning and for monitoring movement outcomes. Dividing attention between these functions negatively impacts action performance.

    The influence of visual flow and perceptual load on locomotion speed

    Visual flow is used to perceive and regulate movement speed during locomotion. We assessed the extent to which variation in flow from the ground plane, arising from static visual textures, influences locomotion speed under conditions of concurrent perceptual load. In two experiments, participants walked over a 12-m projected walkway that consisted of stripes oriented orthogonal to the walking direction. In the critical conditions, the frequency of the stripes increased or decreased. We observed small but consistent effects on walking speed: participants walked more slowly when the frequency increased than when it decreased. This basic effect suggests that participants interpreted the change in visual flow in these conditions as at least partly due to a change in their own movement speed, and counteracted such a change by speeding up or slowing down. Critically, these effects were magnified under conditions of low perceptual load and a locus of attention near the ground plane. Our findings suggest that the contribution of vision to the control of ongoing locomotion is relatively fluid and dependent on ongoing perceptual (and perhaps more generally cognitive) task demands.

    Distract yourself: prediction of salient distractors by own actions and external cues

    Distracting sensory events can capture attention, interfering with the performance of the task at hand. We asked: is our attention captured by such events if we cause them ourselves? To examine this, we employed a visual search task with an additional salient singleton distractor, where the distractor was predictable either by the participant's own (motor) action or by an endogenous cue; accordingly, the task was designed to isolate the influence of motor and non-motor predictive processes. We found both types of prediction, cue- and action-based, to attenuate the interference of the distractor, which is at odds with the "attentional white bear" hypothesis, according to which prediction of distracting stimuli mandatorily directs attention towards them. Further, there was no difference between the two types of prediction. We suggest this pattern of results may be better explained by theories postulating general predictive mechanisms, such as the framework of predictive processing, than by accounts proposing a special role for action-effect prediction, such as theories based on optimal motor control. However, rather than permitting a definitive decision between competing theories, our study highlights a number of open questions, to be answered by these theories, with regard to how exogenous attention is influenced by predictions deriving from the environment versus our own actions.

    Bottlenecks of motion processing during a visual glance: the leaky flask model

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus-encoding stage through a partial-report technique, delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay, and we used the time constant of the exponential decay to demarcate sensory memory from VSTM. We then decomposed performance into quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than in the memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck at the stimulus-encoding stage is dominated by the selection function of attention rather than its filtering function. We also found that the filtering function of attention operates mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information-processing and storage bottlenecks for motion processing. Supported by R01 EY018165 and P30 EY007551 from the National Institutes of Health (NIH).
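    The analysis described for the second experiment, fitting an exponential decay to performance as a function of cue delay and using its time constant to demarcate sensory memory from VSTM, can be illustrated with a minimal sketch. All data values, parameter names, and the grid-search fitting routine below are illustrative assumptions for demonstration, not the authors' actual method or data.

    ```python
    import math

    def decay(t, p_inf, p0, tau):
        """Exponential decay toward an asymptote: P(t) = p_inf + (p0 - p_inf) * exp(-t / tau)."""
        return p_inf + (p0 - p_inf) * math.exp(-t / tau)

    def fit_tau(delays_ms, accuracy, p0, p_inf, taus=range(50, 2001, 10)):
        """Grid-search the time constant tau (ms) that minimizes squared error.

        A real analysis would use nonlinear least squares and also fit p0 and
        p_inf; a coarse grid search keeps the sketch dependency-free.
        """
        def sse(tau):
            return sum((decay(t, p_inf, p0, tau) - a) ** 2
                       for t, a in zip(delays_ms, accuracy))
        return min(taus, key=sse)

    # Hypothetical accuracies at increasing cue delays (ms), generated from
    # tau = 400 ms with p0 = 0.9 and p_inf = 0.6 (made-up values).
    delays = [0, 100, 300, 600, 1000, 2000]
    acc = [decay(t, 0.6, 0.9, 400) for t in delays]
    print(fit_tau(delays, acc, p0=0.9, p_inf=0.6))  # prints 400
    ```

    Cue delays shorter than the recovered tau would, on this logic, probe sensory (iconic) memory, while longer delays probe VSTM.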