
    Learning shapes cortical dynamics to enhance integration of relevant sensory input

    Adaptive sensory behavior is thought to depend on processing in recurrent cortical circuits, but how dynamics in these circuits shape the integration and transmission of sensory information is not well understood. Here, we study neural coding in recurrently connected networks of neurons driven by sensory input. We show analytically how the information available in the network output varies with the alignment between feedforward input and the integrating modes of the circuit dynamics. In light of this theory, we analyzed neural population activity in the visual cortex of mice that learned to discriminate visual features. We found that over learning, slow patterns of network dynamics realigned to better integrate input relevant to the discrimination task. This realignment of network dynamics could be explained by changes in excitatory-inhibitory connectivity among neurons tuned to relevant features. These results suggest that learning tunes the temporal dynamics of cortical circuits to optimally integrate relevant sensory input.
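The alignment idea in this abstract can be illustrated with a minimal sketch. This toy model (my own construction, not the paper's network or parameters) uses a two-dimensional linear recurrent network x[t+1] = A·x[t] + u: a constant input aligned with the slow, near-unity eigenmode is integrated and amplified far more than the same input aligned with a fast-decaying mode.

```python
import numpy as np

# Hypothetical 2D linear network with one slow ("integrating") mode and one
# fast ("leaky") mode; eigenvalues 0.95 and 0.20 are illustrative choices.
v_slow = np.array([1.0, 1.0]) / np.sqrt(2)   # integrating mode
v_fast = np.array([1.0, -1.0]) / np.sqrt(2)  # leaky mode
A = 0.95 * np.outer(v_slow, v_slow) + 0.20 * np.outer(v_fast, v_fast)

def steady_state_gain(u, steps=500):
    """Norm of the network state after integrating a constant input u."""
    x = np.zeros(2)
    for _ in range(steps):
        x = A @ x + u
    return np.linalg.norm(x)

gain_aligned = steady_state_gain(v_slow)     # input along the slow mode
gain_misaligned = steady_state_gain(v_fast)  # input along the fast mode
print(gain_aligned, gain_misaligned)         # gains approach 1/(1 - eigenvalue)
```

The steady-state gain along an eigenmode is 1/(1 − λ), so the slow mode amplifies aligned input roughly 20-fold versus 1.25-fold for the fast mode, which is why realigning feedforward input toward integrating modes increases the information available downstream.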

    Mouse visual cortex contains a region of enhanced spatial resolution.

    The representation of space in mouse visual cortex was thought to be relatively uniform. Here we reveal, using population receptive-field (pRF) mapping techniques, that mouse visual cortex contains a region in which pRFs are considerably smaller. This region, the "focea," represents a location in space in front of, and slightly above, the mouse. Using two-photon imaging we show that the smaller pRFs are due to lower scatter of receptive fields at the focea and an over-representation of binocular regions of space. We show that receptive fields of single neurons in areas LM and AL are smaller at the focea and that mice have improved visual resolution in this region of space. Furthermore, freely moving mice make compensatory eye movements to hold this region in front of them. Our results indicate that mice have spatial biases in their visual processing, a finding that has important implications for the use of the mouse model of vision.

    Noise Correlations Have Little Influence on the Coding of Selective Attention in Area V1

    Neurons in the primary visual cortex (area V1) code not only simple features but also whether image elements are attended or not. These attentional signals are weaker than the feature-selective responses, and their reliability may therefore be limited by the noisiness of neuronal responses. Here we show that it is possible to decode the locus of attention on a single trial from the activity of a small population of neurons in area V1. Previous studies suggested that correlations between the activities of neurons that are part of a population limit the information gain, but here we report that the impact of these noise correlations depends on the relative position of the neurons' receptive fields. Correlations reduce the benefit of pooling neuronal responses evoked by the same object but actually enhance the advantage of pooling responses evoked by different objects. These opposing effects cancelled each other at the population level, so that the net effect of the noise correlations was negligible and attention could be decoded reliably. Our results suggest that noise correlations are caused by large-scale fluctuations in cortical excitability, which can be removed by a comparison of the response strengths evoked by different objects.
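The cancellation argument in this abstract can be reproduced with a small simulation (my own toy model, not the recorded V1 data): a global excitability fluctuation shared by all neurons inflates the variance of a same-object pooled response, but drops out when responses to different objects are subtracted.

```python
import numpy as np

# Illustrative numbers: 4 simulated neurons, a shared "excitability" noise
# source (std 1.0), and weaker private noise (std 0.5) per neuron.
rng = np.random.default_rng(0)
n_trials = 20000
global_noise = rng.normal(0.0, 1.0, n_trials)      # shared across all neurons
private = rng.normal(0.0, 0.5, (4, n_trials))      # independent per neuron

# Neurons 0,1 respond to one object (mean 1), neurons 2,3 to another (mean 0);
# every neuron also rides the shared global fluctuation.
rates = np.array([1.0, 1.0, 0.0, 0.0])[:, None] + global_noise + private

pooled_same = rates[0] + rates[1]                            # within one object
pooled_diff = (rates[0] + rates[1]) - (rates[2] + rates[3])  # across objects

print(np.var(pooled_same), np.var(pooled_diff))
```

The shared noise adds coherently within a pool (variance ≈ 4.5 here) but cancels in the between-object comparison (variance ≈ 1.0), leaving the mean difference of 2 easily decodable, consistent with the abstract's conclusion.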

    Multineuron representations of visual attention

    Recently, techniques have become available that allow for simultaneous recordings from multiple neurons in awake behaving higher primates. These recordings can be analyzed with multivariate statistical methods, such as Fisher's linear discriminant method or support vector machines, to determine how much information is represented in the activity of a population of neurons. We have applied these techniques to recordings from groups of neurons in primary visual cortex (area V1). Neurons in this area are not only tuned to basic stimulus features, but also reflect whether image elements are attended or not. These attentional signals are weaker than the feature-selective responses, and it might be suspected that the reliability of attentional signals in area V1 is limited by the noisiness of neuronal responses as well as by the tuning of the neurons to low-level features. Our surprising finding is that the locus of attention can be decoded on a single trial from the activity of a small population of neurons in area V1. One critical factor that determines how well information from multiple neurons is combined is the correlation of the response variability, or noise correlation, across neurons. It has been suggested that correlations between the activities of neurons that are part of a population limit the information gain, and we find that the correlations indeed reduce the benefit of pooling neuronal responses evoked by the same object, but they actually also enhance the advantage of pooling responses evoked by different objects. At the population level these opposing effects cancel each other, so that the net effect of the noise correlations is negligible and attention can be decoded reliably. We next investigated if it is possible to decode attention if we introduce large variations in luminance contrast, because luminance contrast has a strong effect on the activity of V1 neurons and therefore may disrupt the coding of attention. However, we find that some neurons in area V1 are modulated strongly by attention and others only by luminance contrast, so that attention and contrast are represented by separable codes. These results demonstrate the advantages of multineuron representations of visual attention.
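A Fisher linear discriminant of the kind named in this abstract is easy to sketch on simulated data (illustrative parameters throughout, not the actual recordings): even when each neuron's attentional modulation is much weaker than its trial-to-trial noise, the population readout decodes attention on single trials well above chance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_train, n_test = 10, 2000, 2000
modulation = rng.uniform(0.1, 0.3, n_neurons)   # weak attention effect per neuron

def trials(attended, n):
    """Simulated population responses; noise std 1.0 dwarfs the modulation."""
    return modulation * attended + rng.normal(0.0, 1.0, (n, n_neurons))

X0, X1 = trials(0, n_train), trials(1, n_train)
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
cov = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T)

# Fisher discriminant: w = cov^-1 (mu1 - mu0), classify by a midpoint threshold.
w = np.linalg.solve(cov, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2

test = np.vstack([trials(0, n_test), trials(1, n_test)])
labels = np.repeat([0, 1], n_test)
accuracy = ((test @ w > threshold).astype(int) == labels).mean()
print(accuracy)   # well above the 0.5 chance level
```

Pooling across neurons is what rescues the weak signal: the discriminant's sensitivity grows with the summed squared modulations, so ten noisy neurons together support single-trial decoding that no single neuron could.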

    Two Distinct Types of Eye-Head Coupling in Freely Moving Mice.

    Animals actively interact with their environment to gather sensory information. There is conflicting evidence about how mice use vision to sample their environment. During head restraint, mice make rapid eye movements coupled between the eyes, similar to conjugate saccadic eye movements in humans. However, when mice are free to move their heads, eye movements are more complex and often non-conjugate, with the eyes moving in opposite directions. We combined head and eye tracking in freely moving mice and found that both observations are explained by two types of eye-head coupling, both associated with vestibular mechanisms. The first type comprised non-conjugate eye movements, which compensate for changes in head tilt to maintain a similar visual field relative to the horizontal ground plane. The second type of eye movements was conjugate and coupled to head yaw rotation to produce a "saccade and fixate" gaze pattern. During head-initiated saccades, the eyes moved together in the direction of the head, but during subsequent fixation moved in the opposite direction to the head to compensate for head rotation. This saccade and fixate pattern is similar to that of humans, who use eye movements (with or without head movement) to rapidly shift gaze, but in mice it relies on combined head and eye movements. Both couplings were maintained during social interactions and visually guided object tracking. Even in head-restrained mice, eye movements were invariably associated with attempted head motion. Our results reveal that mice combine head and eye movements to sample their environment and highlight similarities and differences between eye movements in mice and humans.

    Separable codes for attention and luminance contrast in the primary visual cortex

    The visual system encodes the features of visual stimuli as well as their behavioral relevance. Stimuli with a high luminance contrast evoke more activity in the visual cortex than stimuli with a low contrast. At the same time, attended stimuli evoke more activity than nonattended stimuli. There is a debate about how visual features and attention jointly determine neuronal activity in the visual cortex. Some studies suggested that attention increases apparent contrast (Reynolds et al., 2000), others that attention amplifies responses by a constant factor (Williford and Maunsell, 2006), and yet others that attention and contrast have largely additive effects (Buracas and Boynton, 2007; Thiele et al., 2009). The influence of attention on contrast sensitivity differs between neurons, raising the possibility that attention and contrast could be coded conjointly in a population of neurons. Here we investigate this possibility by recording neuronal activity at multiple sites in the primary visual cortex of macaque monkeys using multielectrode recording techniques and support vector machines to decode attended stimuli as well as stimulus contrast. We find that many, but not all, V1 neurons are influenced by attention and that the effects of attention and contrast are additive on average. Stimulus contrast can be decoded from neuronal responses not strongly modulated by attention, whereas the attended stimulus can be decoded as the difference in activity of cells that are influenced by attention and cells that are not. The success of the approach suggests that visual attention and stimulus contrast are represented by largely separable codes.
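The separable-code scheme described here can be sketched as a toy model (all cell groups, gains, and thresholds below are my own illustrative assumptions): one simulated cell group is driven by both contrast and attention, another by contrast alone, so contrast is read from the attention-insensitive cells and attention from the difference between the two groups.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 5000
contrast = rng.choice([0.2, 0.8], n_trials)      # low vs high contrast
attended = rng.choice([0.0, 1.0], n_trials)      # unattended vs attended

def noise():
    return rng.normal(0.0, 0.05, n_trials)

# Additive effects, as in the abstract: one group carries both signals,
# the other carries contrast only.
cells_att = contrast + 0.3 * attended + noise()  # attention-modulated cells
cells_con = contrast + noise()                   # contrast-only cells

contrast_readout = cells_con                     # unaffected by attention
attention_readout = cells_att - cells_con        # contrast cancels out

contrast_acc = ((contrast_readout > 0.5) == (contrast > 0.5)).mean()
attention_acc = ((attention_readout > 0.15) == (attended > 0.5)).mean()
print(contrast_acc, attention_acc)
```

Because the attention readout is a difference between groups, the shared contrast drive cancels exactly, so each variable can be decoded without knowing the other, which is the sense in which the two codes are separable.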

    Texture Segregation Causes Early Figure Enhancement and Later Ground Suppression in Areas V1 and V4 of Visual Cortex

    Segregation of images into figures and background is fundamental for visual perception. Cortical neurons respond more strongly to figural image elements than to background elements, but the mechanisms of figure-ground modulation (FGM) are only partially understood. It is unclear whether FGM in early and mid-level visual cortex is caused by an enhanced response to the figure, a suppressed response to the background, or both. We studied neuronal activity in areas V1 and V4 in monkeys performing a texture segregation task. We compared texture-defined figures with homogeneous textures and found an early enhancement of the figure representation, and a later suppression of the background. Across neurons, the strength of figure enhancement was independent of the strength of background suppression. We also examined activity in the different V1 layers. Both figure enhancement and ground suppression were strongest in superficial and deep layers and weaker in layer 4. The current-source density profiles suggested that figure enhancement was caused by stronger synaptic inputs in feedback-recipient layers 1, 2, and 5, and ground suppression by weaker inputs in these layers, suggesting an important role for feedback connections from higher-level areas. These results provide new insights into the mechanisms of figure-ground organization.

    The Segmentation of Proto-Objects in the Monkey Primary Visual Cortex.

    During visual perception, the brain enhances the representations of image regions that belong to figures and suppresses those that belong to the background. Natural images contain many regions that initially appear to be part of a figure when analyzed locally (proto-objects) but are actually part of the background if the whole image is considered. These proto-grounds must be correctly assigned to the background to allow correct shape identification and guide behavior. To understand how the brain resolves this conflict between local and global processing, we recorded neuronal activity from the primary visual cortex (V1) of macaque monkeys while they discriminated between n/u shapes that have a central proto-ground region. We studied the fine-grained spatiotemporal profile of neural activity evoked by the n/u shape and found that the neural representation of the object proceeded from a coarse to a fine resolution. Approximately 100 ms after stimulus onset, the representation of the proto-ground region was enhanced together with the rest of the n/u surface, but after ~115 ms, the proto-ground was suppressed back to the level of the background. Suppression of the proto-ground was only present in animals that had been trained to perform the shape-discrimination task, and it predicted the choice of the animal on a trial-by-trial basis. Attention enhanced figure-ground modulation, but it had no effect on the strength of proto-ground suppression. The results indicate that the accuracy of scene segmentation is sharpened by a suppressive process that resolves local ambiguities by assigning proto-grounds to the background.