    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests that Gestalt grouping is not used as a strategy in these tasks, lending further weight to the argument that objects are stored in, and retrieved from, a pre-attentional store during this task.
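
    A minimal sketch (hypothetical code, not from the paper) of the spoke-shift manipulation described above, assuming rectangle positions are expressed in degrees of visual angle relative to central fixation at the origin:

```python
import numpy as np

def shift_along_spokes(positions, shift_deg=1.0, rng=None):
    """Shift each (x, y) position radially along the imaginary spoke
    joining it to central fixation at (0, 0), by +/- shift_deg
    (degrees of visual angle), with the sign chosen at random per item."""
    rng = np.random.default_rng() if rng is None else rng
    positions = np.asarray(positions, dtype=float)
    r = np.linalg.norm(positions, axis=1, keepdims=True)  # eccentricity
    unit = positions / r                                  # spoke direction
    signs = rng.choice([-1.0, 1.0], size=(len(positions), 1))
    return positions + signs * shift_deg * unit

# Eight rectangles on a ring at 5 deg eccentricity (hypothetical layout).
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = 5.0 * np.column_stack([np.cos(angles), np.sin(angles)])
shifted = shift_along_spokes(ring)
```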

    Anchoring The Cognitive Map To The Visual World

    To interact rapidly and effectively with the environment, the mammalian brain needs a representation of the spatial layout of the external world (or a “cognitive map”). A person might need to know where she is standing to find her way home, for instance, or might need to know where she is looking to reach for her out-of-sight keys. For many behaviors, however, simply possessing a map is not enough; in order for a map to be useful in a dynamic world, it must be anchored to stable environmental cues. The goal of the present research is to address this spatial anchoring problem in two different domains: navigation and vision. In the first part of the thesis, which comprises Chapters 1-3, we examine how navigators use perceptual information to re-anchor their cognitive map after becoming lost, a process known as spatial reorientation. Using a novel behavioral paradigm with rodents, in Chapter 2 we show that the cognitive map is reoriented by dissociable inputs for identifying where one is and recovering which way one is facing. The findings presented in Chapter 2 also highlight the importance of environmental boundaries, such as the walls of a room, for anchoring the cognitive map. We thus predicted that there might exist a brain region that is selectively involved in boundary perception during navigation. Accordingly, in Chapter 3, we combine transcranial magnetic stimulation and virtual-reality navigation to reveal the existence of such a boundary perception region in humans. In the second part of this thesis, Chapter 4, we explore whether the same mechanisms that support the cognitive map of navigational space also mediate a map of visual space (i.e., where one is looking). Using functional magnetic resonance imaging and eye tracking, we show that human entorhinal cortex supports a map-like representation of visual space that obeys the same principles of boundary-anchoring previously observed in rodent maps of navigational space. Together, this research elucidates how mental maps are anchored to the world, allowing the mammalian brain to form durable spatial representations across body and eye movements.

    Visual attention deficits in schizophrenia can arise from inhibitory dysfunction in thalamus or cortex

    Schizophrenia is associated with diverse cognitive deficits, including disorders of attention-related oculomotor behavior. At the structural level, schizophrenia is associated with abnormal inhibitory control in the circuit linking cortex and thalamus. We developed a spiking neural network model that demonstrates how dysfunctional inhibition can degrade attentive gaze control. Our model revealed that perturbations of two functionally distinct classes of cortical inhibitory neurons, or of the inhibitory thalamic reticular nucleus, disrupted processing vital for sustained attention to a stimulus, leading to distractibility. Because perturbation at each circuit node led to comparable but qualitatively distinct disruptions in attentive tracking or fixation, our findings support the search for new eye movement metrics that may index distinct underlying neural defects. Moreover, because the cortico-thalamic circuit is a common motif across sensory, association, and motor systems, the model and its extensions can be broadly applied to study normal function and the neural bases of other cognitive deficits in schizophrenia. Funding: R01 MH057414 (NIMH); R01 MH101209 (NIMH); R01 NS024760 (NINDS).
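
    A toy rate-model sketch of the circuit logic described above; this is an illustration under assumed parameters and equations, not the authors' spiking network. Weakening the feedback-inhibition term stands in for interneuron dysfunction and increases the cortical response driven by the distractor input:

```python
def run(i1_scale=1.0, i2_scale=1.0, trn_scale=1.0, steps=500, dt=1e-3):
    """Toy rate model: a thalamic relay cell gated by the TRN drives a
    cortical excitatory population E, which is checked by feedback (I1)
    and feedforward (I2) inhibition. Scaling an inhibitory term below 1
    models dysfunctional inhibition at that node."""
    E = relay = 0.0
    target, distractor = 1.0, 0.8
    for _ in range(steps):
        trn = trn_scale * relay                  # TRN tracks relay output
        relay += dt / 0.01 * (-relay + max(target + distractor - trn, 0.0))
        fb = i1_scale * E                        # feedback inhibition
        ff = i2_scale * distractor               # feedforward inhibition
        E += dt / 0.02 * (-E + max(relay - fb - ff, 0.0))
    return E

print(f"intact: {run():.3f}, weakened feedback inhibition: {run(i1_scale=0.3):.3f}")
```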

    The Role of the Dorsal Premotor and Superior Parietal Cortices in Decoupled Visuomotor Transformations

    In order to successfully interact with objects located within our environment, the brain must be capable of combining visual information with the appropriate felt limb position (i.e., proprioception) in order to compute a coordinated muscle plan for accurate motor control. Eye-hand coordination is essential to our independence as a species and relies heavily on the reciprocally-connected regions of the parieto-frontal reach network. The dorsal premotor cortex (PMd) and the superior parietal lobule (SPL) remain prime candidates within this network for controlling the transformations required during visually-guided reaching movements. Our brains are primed to reach directly towards a viewed object, a situation that has been termed a “standard” or coupled reach. Such direct eye-hand coordination is common across species and is crucial for basic survival. Humans, however, have developed the capacity for tool use and have thus learned to interact indirectly with objects. In such “non-standard” or decoupled situations, the directions of gaze and arm movement are spatially decoupled, and the reach relies both on the implementation of a cognitive rule and on online feedback of the decoupled limb. The studies included within this dissertation were designed to further characterize the roles of PMd and SPL in situations in which a reach requires a spatial transformation between the actions of the eyes and the hand. More specifically, we were interested in examining whether regions within PMd (PMdr, PMdc) and SPL (PEc, MIP) responded differently during coupled versus decoupled visuomotor transformations. To address the relative contributions of these cortical regions during decoupled reaching movements, we trained two female rhesus macaques on both coupled and decoupled visually-guided reaching tasks. We recorded the neural activity (single units and local field potentials) within each region while the animals performed each condition. We found that two separate networks emerged, each contributing in distinct ways to the performance of coupled versus decoupled eye-hand reaches. While PMdr and PEc showed enhanced activity during decoupled reach conditions, PMdc and MIP showed greater activity during coupled reaches. Taken together, the data presented here provide further evidence for the existence of alternate task-dependent neural pathways for visuomotor integration.

    Probabilistic modeling of eye movement data during conjunction search via feature-based attention

    Where the eyes fixate during search is not random; rather, gaze reflects the combination of information about the target and the visual input. It is not clear, however, what information about a target is used to bias the underlying neuronal responses. Here we engage subjects in a variety of simple conjunction search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that observers fixate, given the target, on or near an item in the display sharing a specific feature with the target. We use these probabilities to infer which features were biased by top-down attention: color appears to be the dominant stimulus dimension for guiding search, followed by object size, and lastly orientation. We use the number of fixations it took to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to the average data of multiple subjects or to individual subjects, and small variations of a few key parameters account well for the intersubject differences. The model is compatible with neurophysiological findings on V4 and frontal eye field (FEF) neurons and predicts the gain modulation of these cells.
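
    A sketch of the core idea under assumed weights (the paper fits its model to eye-movement data; the weights and softmax rule here are illustrative): the probability of fixating an item grows with the features it shares with the target, with color weighted most heavily, then size, then orientation:

```python
import numpy as np

# Hypothetical hierarchical weights: color > size > orientation.
FEATURE_WEIGHTS = {"color": 3.0, "size": 1.5, "orientation": 0.5}

def fixation_probs(items, target):
    """items: list of dicts with 'color', 'size', 'orientation' keys.
    Returns the probability of fixating each item, via a softmax over
    weighted feature-match scores against the target."""
    scores = np.array([
        sum(w for f, w in FEATURE_WEIGHTS.items() if item[f] == target[f])
        for item in items
    ])
    exp_s = np.exp(scores - scores.max())
    return exp_s / exp_s.sum()

target = {"color": "red", "size": "big", "orientation": "vertical"}
display = [
    {"color": "red", "size": "small", "orientation": "horizontal"},
    {"color": "green", "size": "big", "orientation": "vertical"},
    {"color": "red", "size": "big", "orientation": "vertical"},  # the target
]
print(fixation_probs(display, target))
```

    Under these weights, the red distractor draws more fixations than the big vertical one, mirroring the color dominance reported above.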

    Spatial Transformations in Frontal Cortex During Memory-Guided Head-Unrestrained Gaze Shifts

    We constantly orient our line of sight (i.e., gaze) to external objects in our environment. One of the central questions in sensorimotor neuroscience concerns how visual input (registered on the retina) is transformed into the appropriate signals that drive gaze shifts, comprising coordinated movements of the eyes and head. In this dissertation I investigated the function of a node in frontal cortex, known as the frontal eye field (FEF), by characterizing the spatial transformations that occur within it. The FEF is implicated as a key node in gaze control and is part of the working memory network. I recorded the activity of single FEF neurons in head-unrestrained monkeys as they performed a simple memory-guided gaze task requiring gaze shifts, delayed by a few hundred milliseconds, towards remembered visual stimuli. Using an analysis method that fits spatial models to neuronal response fields, I identified the spatial code embedded in neuronal activity related to vision (visual response), memory (delay response), and gaze shifts (movement response). First (Chapter 2), the spatial transformations that occur within the FEF were identified by comparing the spatial codes of the visual and movement responses. I showed eye-centered dominance in both responses (excluding head- and space-centered coding); however, whereas the visual response encoded target position, the movement response encoded the position of the imminent gaze shift (and not its independent eye and head components), and this was observed even within single neurons. In Chapter 3, I characterized the time course of this target-to-gaze transition by identifying the spatial code during the intervening delay period. The results highlighted two major transitions within the FEF: a gradual transition across the visual-delay-movement span of delay-responsive neurons, followed by a discrete transition between delay-responsive neurons and pre-saccadic neurons that fire exclusively around the time of the gaze movement. These results show that the FEF is involved in memory-based transformations for gaze control; rather than encoding specific movement parameters (eye and head), it encodes the desired gaze endpoint. The representations of the movement goal are subject to noise, and this noise accumulates at different stages through different mechanisms.
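
    A simplified sketch of the model-comparison logic (hypothetical Gaussian response fields; the dissertation's fitting method is more elaborate): because gaze lands near, but not exactly on, the remembered target, target-coding and gaze-coding models can be dissociated by asking which better predicts the movement response:

```python
import numpy as np

rng = np.random.default_rng(0)

def response_field(pos, center=np.array([8.0, -3.0]), width=6.0):
    """Hypothetical Gaussian response field over a 2-D position (deg)."""
    return np.exp(-np.sum((pos - center) ** 2, axis=1) / (2 * width ** 2))

n = 200
target = rng.uniform(-15, 15, size=(n, 2))            # remembered target
gaze = target + rng.normal(0, 3, size=(n, 2))         # gaze lands with error
rate = response_field(gaze) + rng.normal(0, 0.05, n)  # simulated movement burst

# Residual error of each candidate spatial model (lower = better fit).
print("target model:", np.mean((rate - response_field(target)) ** 2))
print("gaze model:  ", np.mean((rate - response_field(gaze)) ** 2))
```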

    Neural models of inter-cortical networks in the primate visual system for navigation, attention, path perception, and static and kinetic figure-ground perception

    Vision provides the primary means by which many animals distinguish foreground objects from their background and coordinate locomotion through complex environments. The present thesis focuses on mechanisms within the visual system that afford figure-ground segregation and self-motion perception. These processes are modeled as emergent outcomes of dynamical interactions among neural populations in several brain areas. This dissertation specifies and simulates how border-ownership signals emerge in cortex, and how the medial superior temporal area (MSTd) represents path of travel and heading in the presence of independently moving objects (IMOs). Neurons in visual cortex that signal border-ownership, the perception that a border belongs to a figure and not its background, have been identified, but the underlying mechanisms have been unclear. A model is presented that demonstrates that inter-areal interactions across model visual areas V1-V2-V4 afford border-ownership signals similar to those reported in electrophysiology for visual displays containing figures defined by luminance contrast. Competition between model neurons with different receptive field sizes is crucial for resolving the occlusion of one object by another. The model is extended to determine border-ownership when object borders are kinetically defined, and to detect the location and size of shapes despite the curvature of their boundary contours. Navigation in the real world requires humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature. In primates, MSTd has been implicated in heading perception. A model of V1, the middle temporal area (MT), and MSTd is developed herein that demonstrates how MSTd neurons can simultaneously encode path curvature and heading. Human judgments of heading are accurate in rigid environments, but are biased in the presence of IMOs. The model presented here explains the bias through recurrent connectivity in MSTd and avoids the use of differential motion detectors, which, although employed in existing models to discount the motion of an IMO relative to its background, are not biologically plausible. Reported modulation of the MSTd population due to attention is explained through competitive dynamics between subpopulations responding to bottom-up and top-down signals.
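
    One computation attributed to MSTd above can be illustrated compactly: for pure observer translation, the expansive flow field vanishes at the focus of expansion (FOE), which specifies heading. A sketch (assumed planar-scene flow model, not the neural model itself) recovering the FOE by least squares:

```python
import numpy as np

# Synthetic expansive flow: v(x) = k * (x - FOE), plus noise.
rng = np.random.default_rng(1)
foe_true = np.array([2.0, -1.0])
pts = rng.uniform(-10, 10, size=(400, 2))
flow = 0.3 * (pts - foe_true) + rng.normal(0.0, 0.05, size=pts.shape)

# Solve vx = k*x - bx, vy = k*y - by for (k, bx, by); FOE = (bx, by) / k.
n = len(pts)
X = np.zeros((2 * n, 3))
X[0::2, 0] = pts[:, 0]; X[0::2, 1] = -1.0
X[1::2, 0] = pts[:, 1]; X[1::2, 2] = -1.0
y = flow.reshape(-1)                    # interleaved (vx, vy) per point
k, bx, by = np.linalg.lstsq(X, y, rcond=None)[0]
print("estimated FOE:", np.array([bx, by]) / k)
```

    With many noisy flow vectors the regression closely recovers the true FOE; the thesis instead reads heading out of a recurrent MSTd population, which additionally tolerates IMOs and curved paths.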

    Dorsal Premotor Neurons Encode the Relative Position of the Hand, Eye, and Goal during Reach Planning

    When reaching to grasp an object, we often move our arm and orient our gaze together. How are these movements coordinated? To investigate this question, we studied neuronal activity in the dorsal premotor area (PMd) and the medial intraparietal area (area MIP) of two monkeys while systematically varying the starting positions of the hand and eye during reaching. PMd neurons encoded the relative positions of the target, hand, and eye. MIP neurons encoded target location with respect to the eye only. These results indicate that whereas MIP encodes target locations in an eye-centered reference frame, PMd uses a relative position code that specifies the differences in locations among all three variables. Such a relative position code may play an important role in coordinating hand and eye movements by computing their relative positions.
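
    A sketch of the distinction between the two codes (1-D positions and hypothetical tuning weights, for brevity): an eye-centered model uses only target-minus-eye, while a relative position model spans all pairwise differences among target (T), hand (H), and eye (E):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
T, H, E = (rng.uniform(-20, 20, n) for _ in range(3))

# Hypothetical "PMd-like" neuron mixing all pairwise differences,
# with a strong hand-related component.
rate = 0.3 * (T - E) + 1.0 * (T - H) - 1.0 * (H - E) + rng.normal(0, 0.5, n)

def r_squared(design):
    beta, *_ = np.linalg.lstsq(design, rate, rcond=None)
    resid = rate - design @ beta
    return 1.0 - resid.var() / rate.var()

eye_centered = np.column_stack([T - E, np.ones(n)])
# (T-H, H-E) spans the full relative-position space: T-E = (T-H) + (H-E).
relative = np.column_stack([T - H, H - E, np.ones(n)])
print(f"eye-centered R^2: {r_squared(eye_centered):.3f}, "
      f"relative R^2: {r_squared(relative):.3f}")
```

    For this simulated cell the eye-centered model explains almost none of the variance while the relative position model explains nearly all of it; an MIP-like cell would show the reverse pattern.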

    Social Saliency: Visual Psychophysics and Single-Neuron Recordings in Humans

    My thesis studies how people pay attention to other people and the environment. How does the brain determine what is important, and what are the neural mechanisms underlying attention? What is special about salient social cues compared to salient non-social cues? In Chapter I, I review social cues that attract attention, with an emphasis on the neurobiology of these social cues. I also review neurological and psychiatric links: the relationship between saliency, the amygdala, and autism. The first empirical chapter then begins by noting that people constantly move in the environment. In Chapter II, I study the spatial cues that attract attention during locomotion using a cued speeded discrimination task. I found that when the motion was expansive, attention was attracted towards the singular point of the optic flow (the focus of expansion, FOE) in a sustained fashion. The more ecologically valid the motion features became (e.g., temporal expansion of each object, spatial depth structure implied by the distribution of object sizes), the stronger the attentional effects. However, compared to inanimate objects and cues, people preferentially attend to animals and faces, a process in which the amygdala is thought to play an important role. To directly compare social and non-social cues in the same experiment and investigate the neural structures processing social cues, in Chapter III, I employ a change detection task and test four rare patients with bilateral amygdala lesions. All four amygdala patients showed a normal pattern of reliably faster and more accurate detection of animate stimuli, suggesting that advantageous processing of social cues can be preserved even without the amygdala, a key structure of the “social brain”. People not only attend to faces, but also pay attention to others’ facial emotions and analyze faces in great detail. Humans have a dedicated system for processing faces, and the amygdala has long been associated with a key role in recognizing facial emotions. In Chapter IV, I study the neural mechanisms of emotion perception and find that single neurons in the human amygdala are selective for subjective judgments of others’ emotions. Lastly, people typically pay special attention to faces and people, but people with autism spectrum disorders (ASD) might not. To further study social attention and explore possible deficits of social attention in autism, in Chapter V, I employ a visual search task and show that people with ASD have reduced attention, especially social attention, to target-congruent objects in the search array. This deficit cannot be explained by low-level visual properties of the stimuli and is independent of the amygdala, but it is dependent on task demands. Overall, through visual psychophysics with concurrent eye-tracking, my thesis identified and analyzed socially salient cues and compared social vs. non-social cues and healthy vs. clinical populations. Neural mechanisms underlying social saliency were elucidated through electrophysiology and lesion studies. I finally propose further research questions based on the findings in my thesis and introduce my follow-up studies and preliminary results, beyond the scope of this thesis, in the final section, Future Directions.

    The reentry hypothesis: The putative interaction of the frontal eye field, ventrolateral prefrontal cortex, and areas V4, IT for attention and eye movement

    Attention is known to play a key role in perception, including action selection, object recognition, and memory. Despite findings revealing competitive interactions among cell populations, attention remains difficult to explain. The central purpose of this paper is to link a large number of findings within a single computational approach. Our simulation results suggest that attention can be well explained at the network level, involving many areas of the brain. We argue that attention is an emergent phenomenon that arises from reentry and competitive interactions. We hypothesize that guided visual search requires the use of an object-specific template in prefrontal cortex to sensitize V4 and IT cells whose preferred stimuli match the target template. This induces a feature-specific bias and provides guidance for eye movements. Prior to an eye movement, spatially organized reentry from oculomotor centers, specifically the movement cells of the frontal eye field, occurs and modulates the gain of V4 and IT cells. The processes involved are elucidated by quantitatively comparing the time course of simulated neural activity with experimental data. Using visual search tasks as an example, we provide clear and empirically testable predictions for the participation of IT, V4, and the frontal eye field in attention. Finally, we explain a possible physiological mechanism that can lead to non-flat search slopes as the result of a slow, parallel discrimination process.
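
    A schematic sketch of the two gain modulations proposed above (illustrative gain values, not those of the paper's simulations): a V4/IT cell's response is boosted when its preferred feature matches the prefrontal target template, and boosted again by spatial reentry from FEF movement cells just before a saccade to its location:

```python
def v4_response(drive, matches_template, at_saccade_goal,
                feature_gain=0.5, reentry_gain=0.8):
    """Stimulus drive scaled by two additive gain terms (values assumed):
    a feature-specific bias from the prefrontal target template, and a
    spatially organized reentry signal from FEF movement cells."""
    gain = 1.0
    if matches_template:   # feature-specific bias from prefrontal cortex
        gain += feature_gain
    if at_saccade_goal:    # pre-saccadic spatial reentry from FEF
        gain += reentry_gain
    return drive * gain

for match in (False, True):
    for goal in (False, True):
        print(f"template match={match}, saccade goal={goal}: "
              f"{v4_response(1.0, match, goal):.2f}")
```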