
    Visuo-auditory interactions in the primary visual cortex of the behaving monkey: Electrophysiological evidence

    Background: Visual, tactile and auditory information is processed from the periphery to the cortical level through separate channels that target the primary sensory cortices, from which it is further distributed to functionally specialized areas. Multisensory integration is classically assigned to higher hierarchical cortical areas, but there is growing electrophysiological evidence in humans and monkeys of multimodal interactions in areas thought to be unimodal, interactions that can occur at very short latencies. Such fast timing rules out an origin in polymodal areas mediated through back-projections and instead favors heteromodal connections, such as the direct projections observed in the monkey from auditory areas (including the primary auditory cortex, AI) to the primary visual cortex, V1. Based on the existence of these AI-to-V1 projections, we looked for modulation of neuronal visual responses in V1 by an auditory stimulus in the awake behaving monkey.

    Results: Behavioral and electrophysiological data were obtained from two behaving monkeys. One monkey was trained to maintain passive central fixation while a peripheral visual (V) or visuo-auditory (AV) stimulus was presented. In a population of 45 V1 neurons, there was no difference in the mean latency or strength of visual responses between the V and AV conditions. In a second, active task, the monkey was required to orient its gaze toward the visual or visuo-auditory stimulus. In a population of 49 cells recorded during this saccadic task, we observed a significant reduction in response latency in the visuo-auditory condition compared with the visual condition (mean 61.0 vs. 64.5 ms), but only when the visual stimulus was at mid-level contrast; no effect was observed at high contrast.

    Conclusion: Our data show that single neurons in a primary sensory cortex such as V1 can integrate sensory information from a different modality, a result that argues against a strictly hierarchical model of multisensory integration. In our experiment, multisensory interaction in V1 is expressed as a significant reduction in visual response latency, specifically under suboptimal conditions and depending on task demands. This suggests that the neuronal mechanisms of multisensory integration are specific and adapted to the perceptual features of behavior.
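
    The latency result above is in essence a two-sample comparison of response latencies across conditions. The sketch below shows one standard way to pose such a test; it is only an illustration, not the authors' analysis, and the synthetic latency values and the choice of a Mann-Whitney test are our assumptions.

    # Minimal sketch (not the authors' analysis): comparing visual response
    # latencies between visual-only (V) and visuo-auditory (AV) conditions.
    # All values are fabricated placeholders seeded from the reported means.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    lat_v = rng.normal(64.5, 5.0, size=49)   # hypothetical V-condition latencies (ms)
    lat_av = rng.normal(61.0, 5.0, size=49)  # hypothetical AV-condition latencies (ms)

    # nonparametric test for shorter latencies in the AV condition
    u, p = stats.mannwhitneyu(lat_av, lat_v, alternative='less')
    print(f"mean V = {lat_v.mean():.1f} ms, mean AV = {lat_av.mean():.1f} ms, p = {p:.4f}")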

    A robotics approach for interpreting the gaze-related modulation of the activity of premotor neurons during reaching

    This paper deals with modeling the activity of premotor neurons associated with the execution of a visually guided reaching movement in primates. We address this question from a robotics point of view, by considering a simplified kinematic model of the head, eye and arm joints. Using the formalism of visual servoing, we show that once the hand-target difference vector is expressed in eye-centered coordinates, the hand controller necessarily depends on the direction of the head and the eye. Based on this result, we propose a new interpretation of previous electrophysiological recordings in the monkey showing a gaze-related modulation of the activity of premotor neurons during reaching. This approach sheds new light on a phenomenon which, so far, has not been clearly understood.
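
    The core geometric point, that a controller fed an eye-centered difference vector inherits a gaze dependence, can be made concrete with a toy kinematic chain. The sketch below is our illustration under simplified assumptions (pure rotations about a single axis, invented angles and positions), not the paper's model.

    # Toy sketch: re-expressing the hand-target difference vector in
    # eye-centered coordinates. R_head and R_eye are assumed rotation
    # matrices (trunk-to-head and head-to-eye); all values are illustrative.
    import numpy as np

    def rot_z(angle_rad):
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    R_head = rot_z(np.deg2rad(20))   # head orientation relative to the trunk
    R_eye = rot_z(np.deg2rad(-10))   # eye orientation relative to the head

    target = np.array([0.4, 0.2, 0.0])   # target position, trunk coordinates (m)
    hand = np.array([0.3, -0.1, 0.0])    # hand position, trunk coordinates (m)

    # The same physical difference vector, re-expressed in eye coordinates:
    # a controller driving this vector to zero therefore depends on the head
    # and eye directions, i.e. it shows a gaze-related modulation.
    d_trunk = target - hand
    d_eye = R_eye.T @ R_head.T @ d_trunk
    print(d_eye)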

    Feature detection using spikes: the greedy approach

    A goal of low-level neural processes is to build an efficient code that extracts the relevant information from the sensory input. It is believed that this is implemented in cortical areas by elementary inferential computations that dynamically extract the most likely parameters corresponding to the sensory signal. We explore here a neuro-mimetic feed-forward model of the primary visual area (V1) solving this problem in the case where the signal may be described by a robust linear generative model. This model uses an over-complete dictionary of primitives which provides a distributed probabilistic representation of input features. Relying on an efficiency criterion, we derive an algorithm as an approximate solution which uses incremental greedy inference processes. This algorithm is similar to 'Matching Pursuit' and mimics the parallel architecture of neural computations. We propose here a simple implementation using a network of spiking integrate-and-fire neurons which communicate through lateral interactions. Numerical simulations show that this Sparse Spike Coding strategy provides an efficient model for representing visual data from a set of natural images. Even though it is simplistic, this transformation of spatial data into a spatio-temporal pattern of binary events provides an accurate description of some complex neural patterns observed in the spiking activity of biological neural networks.

    Comment: This work links Matching Pursuit with Bayesian inference by providing the underlying hypotheses (linear model, uniform prior, Gaussian noise model). A parallel with the parallel and event-based nature of neural computations is explored, and we show an application to modeling the primary visual cortex / image processing. http://incm.cnrs-mrs.fr/perrinet/dynn/LaurentPerrinet/Publications/Perrinet04tau
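
    For readers unfamiliar with Matching Pursuit, the greedy loop at the heart of such schemes is compact. Below is a toy sketch of plain Matching Pursuit over a random over-complete dictionary, as an illustration only; it omits the spiking network, lateral interactions and efficiency criterion of the model described above.

    # Toy Matching Pursuit over an over-complete dictionary (an
    # illustration of the greedy step, not the authors' implementation).
    import numpy as np

    rng = np.random.default_rng(1)
    D = rng.normal(size=(64, 256))           # over-complete dictionary: 256 atoms in R^64
    D /= np.linalg.norm(D, axis=0)           # unit-norm atoms

    signal = 1.5 * D[:, 7] + 0.8 * D[:, 42]  # toy signal built from two known atoms
    residual = signal.copy()

    for step in range(5):
        corr = D.T @ residual                # match every atom against the residual
        best = np.argmax(np.abs(corr))       # greedy choice: the most correlated atom
        coef = corr[best]
        residual -= coef * D[:, best]        # subtract that atom's contribution
        print(f"step {step}: atom {best}, coefficient {coef:.2f}, "
              f"residual energy {np.linalg.norm(residual):.3f}")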

    Activity of Pursuit-Related Neurons in Medial Superior Temporal Area (MST) during Static Roll-Tilt

    Recent studies have shown that rhesus macaques can perceive visual motion direction in earth-centered coordinates as accurately as humans. To better understand the role of the medial superior temporal area (MST) in coordinating smooth pursuit, we tested whether the coordinate frames representing smooth pursuit and/or visual motion signals in MST are earth-centered. In two Japanese macaques, we compared the preferred directions (relative to the monkeys' head-trunk axis) of pursuit and/or visual motion responses of MSTd neurons while upright and during static whole-body roll-tilt. In the majority (41/51 = 80%) of neurons tested, preferred directions were not significantly different while upright and during 40° static roll-tilt. Preferred directions of the remaining 20% of neurons (n = 10) were shifted beyond the range expected from ocular counter-rolling; the maximum shift was 14° and the mean shift was 12°. These shifts, however, were still less than half the shift expected if MST signals were coded in earth-centered coordinates. Virtually all tested neurons (44/46 = 96%) failed to exhibit a significant difference in resting discharge rate between upright and static roll-tilt while fixating a stationary spot. These results suggest that the smooth pursuit and/or visual motion signals of MST neurons are not coded in earth-centered coordinates; our results favor head- and/or trunk-centered coordinates.
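
    A common way to estimate a preferred direction from responses to a set of tested motion directions is the circular (vector) mean. The sketch below illustrates that estimate on synthetic tuning curves; the tuning shapes and the 10° shift are invented for illustration and are not the study's recordings.

    # Illustrative sketch (not the study's analysis): estimating a neuron's
    # preferred direction via the circular mean, upright vs. tilted.
    import numpy as np

    dirs = np.deg2rad(np.arange(0, 360, 45))     # 8 tested motion directions

    def preferred_direction(rates, dirs):
        # circular mean of directions weighted by firing rate
        return np.angle(np.sum(rates * np.exp(1j * dirs)))

    rates_upright = 10 + 8 * np.cos(dirs - np.deg2rad(90))   # synthetic tuning, peak at 90°
    rates_tilted = 10 + 8 * np.cos(dirs - np.deg2rad(100))   # synthetic peak shifted by 10°

    shift = np.rad2deg(preferred_direction(rates_tilted, dirs)
                       - preferred_direction(rates_upright, dirs))
    print(f"preferred-direction shift: {shift:.1f}° "
          f"(an earth-centered code would predict ~40° for a 40° roll-tilt)")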

    Neuronal activity in medial superior temporal area (MST) during memory-based smooth pursuit eye movements in monkeys

    We recently examined the neuronal substrates of predictive pursuit using a memory-based smooth pursuit task that distinguishes discharge related to the memory of visual motion direction from that related to movement preparation. We found that the supplementary eye fields (SEF) contain separate signals coding the memory and assessment of visual motion direction, the decision not to pursue, and the preparation for pursuit. Since the medial superior temporal area (MST) is essential for visual motion processing and projects to the SEF, we examined whether MST carries similar signals. We analyzed the discharge of 108 MSTd neurons responding to visual motion stimuli. The majority (69/108 = 64%) were also modulated during smooth pursuit. However, in nearly all of the MSTd neurons tested (104/108 = 96%), there was no significant discharge modulation during the delay periods that required memory of visual motion direction or preparation for smooth pursuit or for not pursuing. Only 4 of the 108 neurons (4%) exhibited significantly higher discharge rates during the delay periods, and their responses were non-directional and not instruction-specific. The representative signals in MSTd clearly differed from those in the SEF during memory-based smooth pursuit. MSTd neurons are therefore unlikely to provide signals for the memory of visual motion direction or for the preparation of smooth pursuit eye movements.
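
    The delay-period analysis amounts to asking whether discharge during the delay differs reliably from a baseline. The following sketch shows one standard way to pose that test on synthetic spike counts; the paired Wilcoxon test and all numbers are our assumptions, not the authors' pipeline.

    # Hedged sketch, not the authors' analysis: testing a neuron's
    # delay-period discharge against its baseline on paired trials.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    baseline = rng.poisson(12, size=40)   # hypothetical fixation-period spike counts
    delay = rng.poisson(12, size=40)      # hypothetical delay-period spike counts

    w, p = stats.wilcoxon(delay - baseline)
    print(f"delay-period modulation: p = {p:.3f} "
          f"({'significant' if p < 0.05 else 'not significant'} at alpha = 0.05)")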

    Noise Correlations Have Little Influence on the Coding of Selective Attention in Area V1

    Neurons in the primary visual cortex (area V1) code not only simple features but also whether image elements are attended. These attentional signals are weaker than the feature-selective responses, and their reliability may therefore be limited by the noisiness of neuronal responses. Here we show that it is possible to decode the locus of attention on a single trial from the activity of a small population of neurons in area V1. Previous studies suggested that correlations between the activities of neurons in a population limit the information gain, but here we report that the impact of these noise correlations depends on the relative position of the neurons' receptive fields. Correlations reduce the benefit of pooling neuronal responses evoked by the same object but actually enhance the advantage of pooling responses evoked by different objects. These opposing effects cancelled each other at the population level, so that the net effect of the noise correlations was negligible and attention could be decoded reliably. Our results suggest that noise correlations are caused by large-scale fluctuations in cortical excitability, which can be removed by comparing the response strengths evoked by different objects.
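
    The cancellation argument can be reproduced in a few lines of simulation: a gain fluctuation shared across the population inflates within-pool correlations but drops out when the pooled responses to the two objects are compared. The toy simulation below is our illustration with invented parameters, not the study's decoder.

    # Toy simulation (not the study's decoder): shared excitability
    # fluctuations add correlated noise within each pool, which cancels
    # when the two pools (responses to different objects) are compared.
    import numpy as np

    rng = np.random.default_rng(3)
    n_neurons, n_trials = 50, 1000
    attn_gain = 1.05                     # hypothetical attentional response boost

    for correlated in (False, True):
        shared = rng.normal(0, 1.0, size=n_trials) if correlated else 0.0
        # pool A: RFs on the attended object; pool B: RFs on the other object
        pool_a = attn_gain * 10 + shared + rng.normal(0, 2, (n_neurons, n_trials))
        pool_b = 10 + shared + rng.normal(0, 2, (n_neurons, n_trials))
        # decode the locus of attention by comparing the two pooled averages
        correct = np.mean(pool_a.mean(axis=0) > pool_b.mean(axis=0))
        print(f"correlated noise = {correlated}: decoding accuracy = {correct:.3f}")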

    Fast Coding of Orientation in Primary Visual Cortex

    Understanding how populations of neurons encode sensory information is a major goal of systems neuroscience. Attempts to answer this question have focused on responses measured over several hundred milliseconds, a duration much longer than that frequently used by animals to make decisions about the environment. How reliably sensory information is encoded on briefer time scales, and how best to extract this information, is unknown. Although it has been proposed that neuronal response latency provides a major cue for fast decisions in the visual system, this hypothesis has not been tested systematically or quantitatively. Here we use a simple 'race to threshold' readout mechanism to quantify how much information the spike-time latencies of primary visual cortex (V1) cells carry about stimulus orientation. We find that many V1 cells show pronounced tuning of their spike latency to stimulus orientation and that almost as much information can be extracted from spike latencies as from firing rates measured over much longer durations. To extract this information, stimulus onset must be estimated accurately. We show that the responses of cells with weak latency tuning can provide a reliable onset detector. We find that spike latency information can be pooled from a large neuronal population, provided that the decision threshold is scaled linearly with the population size, yielding a processing time on the order of a few tens of milliseconds. Our results provide a novel mechanism for extracting information from neuronal populations over the very brief time scales in which behavioral judgments must sometimes be made.
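
    A minimal version of a race-to-threshold readout can be simulated directly: each candidate orientation accumulates the earliest spikes of its pool, and the first pool whose k-th spike arrives wins. The sketch below uses synthetic latencies and an assumed threshold rule; it illustrates the scheme rather than reproducing the paper's analysis.

    # Minimal 'race to threshold' sketch (synthetic latencies; the latency
    # tuning and the threshold rule here are our assumptions).
    import numpy as np

    rng = np.random.default_rng(4)
    n_per_pool = 100
    true_orientation = 1                 # index of the presented orientation

    def first_spike_latencies(preferred, stimulus, n):
        # cells tuned to the stimulus fire earlier, with trial-to-trial jitter
        base = 40 if preferred == stimulus else 55        # ms, hypothetical
        return base + rng.exponential(10, size=n)

    # one accumulator per candidate orientation; each counts early spikes,
    # and the first pool to reach threshold determines the decision
    threshold = int(0.2 * n_per_pool)    # threshold scaled with population size
    decision_times = {}
    for candidate in (0, 1):
        lat = np.sort(first_spike_latencies(candidate, true_orientation, n_per_pool))
        decision_times[candidate] = lat[threshold - 1]    # time of the k-th spike

    winner = min(decision_times, key=decision_times.get)
    print(f"decision: orientation {winner} at t = {decision_times[winner]:.1f} ms")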

    Incremental grouping of image elements in vision

    One important task for the visual system is to group image elements that belong to an object and to segregate them from other objects and the background. Here we present an incremental grouping theory (IGT) that addresses the role of object-based attention in perceptual grouping at a psychological level and, at the same time, outlines the mechanisms for grouping at the neurophysiological level. The IGT proposes that there are two processes for perceptual grouping. The first, base grouping, is fast, occurs in parallel across the visual scene, and relies on neurons that are tuned to feature conjunctions; however, not all possible feature conjunctions can be coded as base groupings. If there are no neurons tuned to the relevant feature conjunctions, a second process called incremental grouping comes into play. Incremental grouping is a time-consuming and capacity-limited process that requires the gradual spread of enhanced neuronal activity across the representation of an object in the visual cortex. This spread of enhanced activity corresponds to the labeling of image elements with object-based attention.
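
    At the algorithmic level, the spread of enhanced activity across an object's representation behaves like a serial flood fill constrained to the object. The sketch below is our schematic reading of that idea on a binary pixel grid, not an implementation of the IGT itself.

    # Schematic illustration (our sketch, not the IGT model): incremental
    # grouping as an iterative spread of an attentional "label" between
    # connected image elements, i.e. a flood fill over a binary grid.
    import numpy as np

    image = np.array([[0, 1, 1, 0, 0],
                      [0, 1, 0, 0, 1],
                      [0, 1, 1, 0, 1],
                      [0, 0, 0, 0, 1]])  # two separate "objects" made of 1-pixels

    label = np.zeros_like(image, dtype=bool)
    label[0, 1] = True                   # attention seeds one element of one object

    # enhanced activity spreads step by step to 4-connected neighbours that
    # belong to the same object; the process is serial and capacity-limited
    while True:
        grown = label.copy()
        grown[1:, :] |= label[:-1, :]
        grown[:-1, :] |= label[1:, :]
        grown[:, 1:] |= label[:, :-1]
        grown[:, :-1] |= label[:, 1:]
        grown &= image.astype(bool)      # spread only across the object's pixels
        if (grown == label).all():
            break
        label = grown

    print(label.astype(int))             # only the attended object ends up labeled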