
    Visual Learning in Multiple-Object Tracking

    Tracking moving objects in space is important for the maintenance of spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested using the Multiple Object Tracking (MOT) task, in which participants attentively track a subset of moving objects over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves within a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning. Participants first performed attentive tracking on trials with repeated motion trajectories in a short training session. In a transfer phase we used the same motion trajectories but changed the roles of tracking targets and nontargets. We found that, compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were intermixed. However, learning was not specific to the trained temporal order, as it transferred to trials in which the motion was played backwards. These findings suggest that the demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although it is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers learned the motion path of each trajectory independently of the exact temporal order.
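    To make the transfer logic concrete, here is a minimal sketch (in Python, with hypothetical names and data shapes; the paper itself reports no code) of how the four trial types described above could be derived from one stored set of trained trajectories:

```python
import numpy as np

def make_transfer_conditions(trajectories, target_ids, rng=None):
    """Derive the four trial types from one trained trial.

    trajectories -- array of shape (n_objects, n_frames, 2): xy paths
    target_ids   -- indices of the objects tracked during training
    (Both names and shapes are illustrative assumptions.)
    """
    rng = rng or np.random.default_rng(0)
    n_objects = trajectories.shape[0]
    nontarget_ids = [i for i in range(n_objects) if i not in target_ids]
    half = len(target_ids) // 2
    return {
        # identical trajectories, identical target subset
        "same_targets": (trajectories, list(target_ids)),
        # identical trajectories, targets and nontargets swap roles
        "role_switched": (trajectories, nontarget_ids),
        # identical trajectories, target subset drawn from both old roles
        "mixed": (trajectories,
                  list(rng.choice(target_ids, half, replace=False)) +
                  list(rng.choice(nontarget_ids, half, replace=False))),
        # same paths played backwards in time, same target subset
        "reversed": (trajectories[:, ::-1], list(target_ids)),
    }
```

    On this reading, only the same-targets and time-reversed conditions showed a learning benefit, which is what motivates the relational-coding interpretation.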

    Changing Human Visual Field Organization from Early Visual to Extra-Occipital Cortex

    BACKGROUND: The early visual areas have a clear topographic organization, such that adjacent parts of the cortical surface represent distinct yet adjacent parts of the contralateral visual field. We examined whether cortical regions outside occipital cortex show a similar organization. METHODOLOGY/PRINCIPAL FINDINGS: The BOLD responses to discrete visual field locations that varied in both polar angle and eccentricity were measured using two different tasks. As described previously, numerous occipital regions are both selective for the contralateral visual field and show topographic organization within that field. Extra-occipital regions are also selective for the contralateral visual field but possess little (or no) topographic organization. A regional analysis demonstrates that this weak topography is not due to increased receptive field size in extra-occipital areas. CONCLUSIONS/SIGNIFICANCE: A number of extra-occipital areas are identified that are sensitive to visual field location. Neurons in these areas corresponding to different locations in the contralateral visual field do not demonstrate any regular or robust topographic organization but appear instead to be intermixed on the cortical surface. This suggests a shift from processing that is predominantly local in visual space, in occipital areas, to global, in extra-occipital areas. Global processing fits with a role for these extra-occipital areas in selecting a spatial locus for attention and/or eye movements.

    Speed has an effect on multiple-object tracking independently of the number of close encounters between targets and distractors

    Multiple-object tracking (MOT) studies have shown that tracking ability declines as object speed increases. However, this might be attributed solely to the increased number of times that target and distractor objects usually pass close to each other ("close encounters") when speed is increased, resulting in more target–distractor confusions. The present study investigates whether speed itself affects MOT ability by using displays in which the number of close encounters is held constant across speeds. Observers viewed several pairs of disks; each pair rotated both about its own midpoint and about the center of the display at varying speeds. Results showed that, even with the number of close encounters held constant across speeds, increased speed impaired tracking performance, and the effect of speed was greater when the number of targets to be tracked was large. Moreover, neither the effect of the number of distractors nor the effect of target–distractor distance was dependent on speed when speed was isolated from the typical concomitant increase in close encounters. These results imply that increased speed does not impair tracking solely by increasing close encounters. Rather, they support the view that speed affects MOT capacity by requiring more attentional resources to track at higher speeds.
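    As a concrete illustration of the display geometry (a sketch under assumptions; the parameter names are mine, not the authors'), each disk's position can be written as a rotation about the pair's midpoint composed with the pair's own rotation about the display center:

```python
import numpy as np

def disk_positions(t, pair_angle0, pair_radius, orbit_angle0, orbit_radius,
                   pair_speed, orbit_speed):
    """Positions of the two disks in one pair at time t (seconds).

    The pair's midpoint circles the display center (the 'orbit') while
    the two disks spin about that midpoint. All parameters and units
    here are illustrative assumptions, not the study's actual values.
    """
    # midpoint of the pair, circling the display center
    orbit_angle = orbit_angle0 + orbit_speed * t
    mid = orbit_radius * np.array([np.cos(orbit_angle), np.sin(orbit_angle)])
    # the two disks sit opposite each other about the midpoint
    pair_angle = pair_angle0 + pair_speed * t
    offset = pair_radius * np.array([np.cos(pair_angle), np.sin(pair_angle)])
    return mid + offset, mid - offset
```

    Because the separation within each pair is fixed by pair_radius, the rotation speeds can be increased without necessarily changing how often targets and distractors approach one another, which is how the design decouples speed from close encounters.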

    Distribution of Attention Modulates Salience Signals in Early Visual Cortex

    Previous research has shown that the extent to which people spread attention across the visual field plays a crucial role in visual selection and in the occurrence of bottom-up driven attentional capture. Consistent with previous findings, we show that when attention was diffusely distributed across the visual field while searching for a shape singleton, an irrelevant salient color singleton captured attention. However, with the very same displays and task, no capture was observed when observers initially focused their attention at the center of the display. Using event-related fMRI, we examined the modulation of retinotopic activity related to attentional capture in early visual areas. Because the sensory display characteristics were identical in both conditions, we were able to isolate the brain activity associated with exogenous attentional capture. The results show that spreading of attention leads to increased bottom-up exogenous capture and increased activity in visual area V3, but not in V2 or V1.

    Efficient Visual Search from Synchronized Auditory Signals Requires Transient Audiovisual Events

    BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical for observing audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps) we show that abrupt visual events are required for this search efficiency to occur and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only benefit visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals that occur close together in time.
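    The gradual-versus-transient distinction is easy to picture numerically. The following sketch (illustrative frequencies and frame rate only; these are not the study's stimulus parameters) contrasts a sinusoidal with a square-wave modulation profile and shows where the frame-to-frame change concentrates:

```python
import numpy as np

fs = 60.0                      # assumed display refresh rate (Hz)
t = np.arange(0, 2.0, 1 / fs)  # two seconds of modulation
f = 1.0                        # illustrative modulation frequency (Hz)

sine = 0.5 + 0.5 * np.sin(2 * np.pi * f * t)             # gradual change
square = (np.sin(2 * np.pi * f * t) > 0).astype(float)   # abrupt steps

# largest frame-to-frame change: small for the sine wave,
# the full range for the square wave
print(np.abs(np.diff(sine)).max(), np.abs(np.diff(square)).max())
```

    The square wave packs its entire change into abrupt steps, the kind of transient the authors argue is required for synchrony-driven binding.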

    Towards a framework for attention cueing in instructional animations: Guidelines for research and design

    This paper examines the transferability of successful cueing approaches from text and static-visualization research to animations. Theories of visual attention and learning, as well as empirical evidence for the instructional effectiveness of attention cueing, are reviewed, and, based on Mayer's theory of multimedia learning, a framework is developed for classifying three functions of cueing: (1) selection, where cues guide attention to specific locations; (2) organization, where cues emphasize structure; and (3) integration, where cues explicate relations between and within elements. The framework is used to structure the discussion of studies on cueing in animations. It is concluded that attentional cues may facilitate the selection of information in animations and sometimes improve learning, whereas organizational and relational cueing requires more consideration of how to enhance understanding. Consequently, it is suggested that cues be developed specifically for animations rather than borrowed from effective cues in static representations. Guidelines for future research on attention cueing in animations are presented.

    A thalamic reticular networking model of consciousness

    BACKGROUND: It is reasonable to consider the thalamus a primary candidate for the location of consciousness, given that the thalamus has been referred to as the gateway of nearly all sensory inputs to the corresponding cortical areas. Interestingly, in an early stage of brain development, communicative innervations between the dorsal thalamus and telencephalon must pass through the ventral thalamus, the major derivative of which is the thalamic reticular nucleus (TRN). The TRN occupies a striking control position in the brain, sending inhibitory axons back to the thalamus, roughly to the same regions from which it receives afferents. HYPOTHESES: The present study hypothesizes that the TRN plays a pivotal role in dynamic attention by controlling thalamocortical synchronization. The TRN is thus viewed as a functional networking filter that regulates conscious perception, possibly embedded in thalamocortical networks. Based on the anatomical structures and connections, modality-specific sectors of the TRN and the thalamus appear to be responsible for modality-specific perceptual representation. Furthermore, the coarsely overlapped topographic maps of the TRN appear to be associated with cross-modal or unitary conscious awareness. Throughout the latticework structure of the TRN, conscious perception could be accomplished and elaborated through accumulating intercommunicative processing across the first-order input signal and the higher-order signals from its functionally associated cortices. As the higher-order relay signals run cumulatively through the relevant thalamocortical loops, conscious awareness becomes more refined and sophisticated. CONCLUSIONS: I propose that thalamocortical integrative communication across first- and higher-order information circuits, together with repeated feedback looping, may account for our conscious awareness. This TRN-modulation hypothesis for conscious awareness provides a comprehensive rationale for previously reported psychological phenomena and neurological symptoms such as blindsight, neglect, the priming effect, the threshold/duration problem, and TRN impairment resembling coma. This hypothesis can be tested by neurosurgical investigations of thalamocortical loops via the TRN, while simultaneously evaluating the degree to which conscious perception depends on the severity of impairment in a TRN-modulated network.

    Dynamic Spatial Coding within the Dorsal Frontoparietal Network during a Visual Search Task

    To what extent are the left and right visual hemifields spatially coded in the dorsal frontoparietal attention network? In many experiments with neglect patients, the left hemisphere shows a contralateral hemifield preference, whereas the right hemisphere represents both hemifields. This pattern of spatial coding is often used to explain the right-hemispheric dominance of lesions causing hemispatial neglect. However, the pathophysiological mechanisms of hemispatial neglect are controversial, because recent experiments on healthy subjects have produced conflicting results regarding the spatial coding of the visual hemifields. We used an fMRI paradigm that allowed us to distinguish two attentional subprocesses during a visual search task. Within either the left or the right hemifield, subjects first attended to stationary locations (spatial orienting) and then shifted their attentional focus to search for a target line. Dynamic changes in the spatial coding of the left and right hemifields were observed within subregions of the dorsal frontoparietal network: during stationary spatial orienting, we found the well-known spatial pattern described above, with a bilateral hemifield representation in the right hemisphere and a contralateral preference in the left hemisphere. During search, however, the right hemisphere had a contralateral preference and the left hemisphere represented both hemifields equally. This finding opens novel perspectives on models of visuospatial attention and hemispatial neglect.

    A competitive integration model of exogenous and endogenous eye movements

    We present a model of the eye movement system in which the programming of an eye movement is the result of the competitive integration of information in the superior colliculi (SC). The SC receive input from occipital cortex, the frontal eye fields, and the dorsolateral prefrontal cortex, on the basis of which they compute the location of the next saccadic target. Two critical assumptions in the model are that cortical inputs are not only excitatory but can also inhibit saccades to specific locations, and that the SC continue to influence the trajectory of a saccade while it is being executed. With these assumptions, we account for many neurophysiological and behavioral findings from eye movement research. Interactions within the saccade map are shown to account for the effects of distractors on saccadic reaction time (SRT) and saccade trajectory, including the global effect and oculomotor capture. In addition, the model accounts for express saccades, the gap effect, saccadic reaction times for antisaccades, and recorded responses from neurons in the SC and frontal eye fields in these tasks.
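    A toy implementation can convey the core idea of competitive integration. This is a minimal sketch of a 1D saccade map with short-range excitation and longer-range inhibition, not the authors' published model; all parameters and the usage scenario are illustrative:

```python
import numpy as np

def competitive_integration(exo, endo_excite, endo_inhibit,
                            n_steps=200, sigma=5.0):
    """Toy 1D saccade map: stimulus-driven (exo) and goal-driven
    excitatory/inhibitory inputs compete; the most active location
    after settling is taken as the saccade target."""
    n = len(exo)
    d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    # short-range excitation minus broader inhibition (Mexican hat)
    w = (np.exp(-d**2 / (2 * sigma**2))
         - 0.5 * np.exp(-d**2 / (2 * (3 * sigma)**2)))
    a = np.zeros(n)
    for _ in range(n_steps):
        a += 0.1 * (-a + w @ np.maximum(a, 0)
                    + exo + endo_excite - endo_inhibit)
    return int(np.argmax(a))

# hypothetical usage: stimuli at locations 30 and 50; inhibiting the
# distractor's location shifts the winner to the other stimulus
n = 100
exo = np.zeros(n); exo[30] = 1.0; exo[50] = 1.2
print(competitive_integration(exo, np.zeros(n), np.zeros(n)))  # -> near 50
inhibit = np.zeros(n); inhibit[50] = 2.0
print(competitive_integration(exo, np.zeros(n), inhibit))      # -> near 30
```

    The two assumptions from the abstract map directly onto the endo_inhibit term (cortical inputs can suppress specific locations) and onto the iterative settling (the map keeps evolving while a saccade unfolds).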

    Manipulable Objects Facilitate Cross-Modal Integration in Peripersonal Space

    Previous studies have shown that tool use often modifies one's peripersonal space, i.e., the space directly surrounding the body. Given our profound experience with manipulable objects (e.g., a toothbrush, a comb, or a teapot), in the present study we hypothesized that observing pictures of manipulable objects would result in a remapping of peripersonal space as well. Subjects were required to report the location of vibrotactile stimuli delivered to the right hand while ignoring visual distractors superimposed on pictures of everyday objects. Pictures could represent objects of high manipulability (e.g., a cell phone), medium manipulability (e.g., a soap dispenser), or low manipulability (e.g., a computer screen). In the first experiment, when subjects attended to the action associated with the objects, a strong cross-modal congruency effect (CCE) was observed for pictures of medium- and high-manipulability objects, reflected in faster reaction times when the vibrotactile stimulus and the visual distractor were in the same location, whereas no CCE was observed for low-manipulability objects. This finding was replicated in a second experiment in which subjects attended to the visual properties of the objects. These findings suggest that the observation of manipulable objects facilitates cross-modal integration in peripersonal space.
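    For reference, the CCE itself is just a reaction-time difference. A minimal sketch follows (hypothetical function and numbers, not data from the study):

```python
import numpy as np

def cce(rts, congruent):
    """Cross-modal congruency effect: mean RT on incongruent trials
    (distractor and vibrotactile stimulus at different locations)
    minus mean RT on congruent trials (same location)."""
    rts = np.asarray(rts, dtype=float)
    congruent = np.asarray(congruent, dtype=bool)
    return rts[~congruent].mean() - rts[congruent].mean()

# illustrative numbers only: a positive CCE (in ms) means mismatched
# distractor locations slowed tactile localization
print(cce([512, 498, 560, 575], [True, True, False, False]))  # -> 62.5
```

    A larger CCE for pictures of readily graspable objects is what the abstract reads as evidence of remapped peripersonal space.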