
    Learning place cells, grid cells and invariances with excitatory and inhibitory plasticity

    Neurons in the hippocampus and adjacent brain areas show a large diversity in their tuning to location and head direction, and the underlying circuit mechanisms are not yet resolved. In particular, it is unclear why certain cell types are selective to one spatial variable but invariant to another. For example, place cells are typically invariant to head direction. We propose that all observed spatial tuning patterns – in both their selectivity and their invariance – arise from the same mechanism: excitatory and inhibitory synaptic plasticity driven by the spatial tuning statistics of synaptic inputs. Using simulations and a mathematical analysis, we show that combined excitatory and inhibitory plasticity can lead to localized, grid-like or invariant activity. Combinations of different input statistics along different spatial dimensions reproduce all major spatial tuning patterns observed in rodents. Our proposed model is robust to changes in parameters, develops patterns on behavioral timescales and makes distinctive experimental predictions. Funding: BMBF, 01GQ1201, Lernen und Gedächtnis in balancierten Systemen.
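    The balance-seeking ingredient of this mechanism can be sketched in a few lines. The toy below is illustrative, not the authors' model: a single rate neuron receives the same randomly tuned inputs through fixed excitatory weights and plastic inhibitory weights, and a Vogels-Sprekeler-style homeostatic rule (an assumption here) potentiates inhibition whenever the output exceeds a target rate, driving the output toward that target at every visited location.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 50        # spatially tuned presynaptic rates (same pool drives E and I)
n_locations = 8      # discrete positions visited during "exploration"
target_rate = 1.0    # homeostatic target for the output neuron
eta = 0.05           # inhibitory learning rate

# One fixed input pattern per location; excitatory weights fixed, inhibitory plastic
patterns = rng.uniform(0.0, 1.0, (n_locations, n_inputs))
w_exc = rng.uniform(0.5, 1.5, n_inputs)
w_inh = np.zeros(n_inputs)

for epoch in range(300):
    for x in patterns:
        rate = max(w_exc @ x - w_inh @ x, 0.0)   # rectified output rate
        # Inhibition potentiates when the output exceeds its target,
        # pushing the neuron toward E/I balance at every location
        w_inh = np.clip(w_inh + eta * x * (rate - target_rate), 0.0, None)

rates = [max(w_exc @ x - w_inh @ x, 0.0) for x in patterns]
```

    In the full model it is the spatial statistics of the excitatory and inhibitory inputs, not the balance alone, that determine whether the emerging tuning is place-like, grid-like or invariant.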

    The representation of auditory space in the auditory cortex of anesthetized and awake mice

    The ability to localize sounds is of profound importance for animals, as it enables them to detect prey and predators. In the horizontal plane, sound localization is achieved by means of binaural cues, which are processed and interpreted by the ascending auditory pathway. The auditory cortex (AC), as its primary cortical relay station, has traditionally been thought to represent the contralateral hemifield of auditory space broadly and statically. Because prior research on space representation in the mammalian AC relied heavily on anesthetized preparations, the manner in which anesthesia influences this representation has remained elusive. Performing chronic two-photon calcium imaging in the AC of awake and anesthetized mice, I characterized the effects of anesthesia on auditory space representation. First, anesthesia was found to impair the spatial sensitivity of neurons. Second, anesthesia consistently suppressed the representation of frontal locations, biasing spatial tuning to the contralateral side. In both conditions (awake and anesthetized), the neuronal population maintained a stable representation of auditory space, while single-cell spatial tuning was found to be highly dynamic. Importantly, under both conditions no evidence for a topographical map of auditory space was found. This study is the first to chronically probe spatial tuning in the AC and likewise the first to directly assess the effects of anesthesia on single-cell spatial tuning and the population code, emphasizing the need for a shift towards awake preparations.

    Inner retinal inhibition shapes the receptive field of retinal ganglion cells in primate

    The centre-surround organisation of receptive fields is a feature of most retinal ganglion cells (RGCs) and is critical for spatial discrimination and contrast detection. Although lateral inhibitory processes are known to be important in generating the receptive field surround, the contribution of each of the two synaptic layers in the primate retina remains unclear. Here we studied the spatial organisation of excitatory and inhibitory synaptic inputs onto ON and OFF ganglion cells in the primate retina. All RGCs showed an increase in excitation in response to a stimulus of the preferred polarity. Inhibition onto RGCs comprised two types of response to the preferred polarity: some RGCs showed an increase in inhibition whilst others showed removal of tonic inhibition. Excitatory inputs were strongly spatially tuned, but inhibitory inputs showed more variable organisation: in some neurons they were as strongly tuned as excitation, and in others inhibitory inputs showed no spatial tuning. We targeted one source of inner retinal inhibition by functionally ablating spiking amacrine cells with bath application of tetrodotoxin (TTX). TTX significantly reduced the spatial tuning of excitatory inputs. In addition, TTX reduced inhibition onto those RGCs where a stimulus of the preferred polarity increased inhibition. Reconstruction of the spatial tuning properties by somatic injection of excitatory and inhibitory synaptic conductances verified that TTX-sensitive inhibition onto bipolar cells increases the strength of the surround in RGC spiking output. These results indicate that in the primate retina inhibitory mechanisms in the inner plexiform layer sharpen the spatial tuning of ganglion cells. © 2013 The Physiological Society
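    The centre-surround organisation described here is commonly summarised by a difference-of-Gaussians receptive field. The sketch below, with illustrative (not measured) widths and amplitudes, shows the signature effect of the inhibitory surround: once the surround term is included, a stimulus covering the whole field evokes a weaker response than a centre-sized spot — the surround suppression that removing inhibition (as TTX does) weakens.

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 401)                  # retinal position, arbitrary units
dx = x[1] - x[0]
centre   = np.exp(-x**2 / (2 * 0.2**2))          # narrow excitatory centre
surround = 0.2 * np.exp(-x**2 / (2 * 0.8**2))    # broad, weaker inhibitory surround
dog = centre - surround                          # difference-of-Gaussians profile

def response(profile, spot_radius):
    """Linear response to a bright spot of the given radius centred on the RF."""
    stim = (np.abs(x) <= spot_radius).astype(float)
    return float((profile * stim).sum() * dx)

small_dog, large_dog = response(dog, 0.3), response(dog, 2.0)      # with surround
small_ctr, large_ctr = response(centre, 0.3), response(centre, 2.0)  # centre only
```

    With the surround present, enlarging the spot beyond the centre reduces the response; with the centre alone, the response only grows with stimulus size.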

    Cortical transformation of spatial processing for solving the cocktail party problem: a computational model.

    In multisource, "cocktail party" sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem. Funding: R01 DC000100, NIDCD NIH HHS. Published version.
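    The core spatial interaction in such a network — lateral inhibition between cortical units driven by different midbrain spatial channels — can be caricatured as follows. This is a hypothetical sketch with made-up gains, not the published network: a lone source drives its channel unimpeded (broad monitoring of acoustic space), while in a mixture the stronger target suppresses the masker's channel.

```python
import numpy as np

# Five cortical units, each fed by a midbrain spatial channel at one azimuth
azimuths = np.array([-90, -45, 0, 45, 90])

def cortical_response(channel_drive, inhibition=0.6):
    """Each unit keeps its own drive minus lateral inhibition from all other channels."""
    total = channel_drive.sum()
    return np.maximum(channel_drive - inhibition * (total - channel_drive), 0.0)

# A lone source at -45 deg: no competition, so its channel responds fully
alone = cortical_response(np.array([0.0, 1.0, 0.0, 0.0, 0.0]))

# Target at 0 deg with a quieter masker at -45 deg: the masker is suppressed
mixture = cortical_response(np.array([0.0, 0.7, 1.0, 0.0, 0.0]))
```

    The same subtractive interaction thus yields broad tuning without competition and sharp, target-dominated tuning with it, mirroring the cortical responses the model aims to reproduce.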

    A physiologically inspired model for solving the cocktail party problem.

    At a cocktail party, we can broadly monitor the entire acoustic scene to detect important cues (e.g., our names being called, or the fire alarm going off), or selectively listen to a target sound source (e.g., a conversation partner). It has recently been observed that individual neurons in the avian field L (analog to the mammalian auditory cortex) can display broad spatial tuning to single targets and selective tuning to a target embedded in spatially distributed sound mixtures. Here, we describe a model inspired by these experimental observations and apply it to process mixtures of human speech sentences. This processing is realized in the neural spiking domain. It converts binaural acoustic inputs into cortical spike trains using a multi-stage model composed of a cochlear filter-bank, a midbrain spatial-localization network, and a cortical network. The output spike trains of the cortical network are then converted back into an acoustic waveform, using a stimulus reconstruction technique. The intelligibility of the reconstructed output is quantified using an objective measure of speech intelligibility. We apply the algorithm to single and multi-talker speech to demonstrate that the physiologically inspired algorithm is able to achieve intelligible reconstruction of an "attended" target sentence embedded in two other non-attended masker sentences. The algorithm is also robust to masker level and displays performance trends comparable to humans. The ideas from this work may help improve the performance of hearing assistive devices (e.g., hearing aids and cochlear implants), speech-recognition technology, and computational algorithms for processing natural scenes cluttered with spatially distributed acoustic objects. Funding: R01 DC000100, NIDCD NIH HHS. Published version.

    Separate representations of target and timing cue locations in the supplementary eye fields

    When different stimuli indicate where and when to make an eye movement, the brain areas involved in oculomotor control must selectively plan an eye movement to the stimulus that encodes the target position and also encode the information available from the timing cue. This could pose a challenge to the oculomotor system since the representation of the timing stimulus location in one brain area might be interpreted by downstream neurons as a competing motor plan. Evidence from diverse sources has suggested that the supplementary eye fields (SEF) play an important role in behavioral timing, so we recorded single-unit activity from SEF to characterize how target and timing cues are encoded in this region. Two monkeys performed a variant of the memory-guided saccade task, in which a timing stimulus was presented at a randomly chosen eccentric location. Many spatially tuned SEF neurons encoded only the location of the target and not the timing stimulus, whereas several other SEF neurons encoded the location of the timing stimulus and not the target. The SEF population therefore encoded the location of each stimulus with largely distinct neuronal subpopulations. For comparison, we recorded a small population of lateral intraparietal (LIP) neurons in the same task. We found that most LIP neurons that encoded the location of the target also encoded the location of the timing stimulus after its presentation, but selectively encoded the intended eye movement plan in advance of saccade initiation. These results suggest that SEF, by conditionally encoding the location of instructional stimuli depending on their meaning, can help identify which movement plan represented in other oculomotor structures, such as LIP, should be selected for the next eye movement.

    Dynamic Spatial Tuning Patterns of Shoulder Muscles with Volunteers in a Driving Posture

    Computational human body models (HBMs) of drivers for pre-crash simulations need active shoulder muscle control, and volunteer data are lacking. The goal of this paper was to build shoulder muscle dynamic spatial tuning patterns, with a secondary focus to present shoulder kinematic evaluation data. Eight male and nine female volunteers sat in a driver posture, with their torso restrained, and were exposed to upper arm dynamic perturbations in eight directions perpendicular to the humerus. A dropping 8-kg weight connected to the elbow through pulleys applied the loads; the exact timing and direction were unknown to the volunteers. Activity in 11 shoulder muscles was measured using surface electrodes, and upper arm kinematics were measured with three cameras. We found directionally specific muscle activity and present dynamic spatial tuning patterns for each muscle separated by sex. The preferred directions, i.e. the vector mean of a spatial tuning pattern, were similar between males and females, with the largest difference of 31° in the pectoralis major muscle. Males and females had similar elbow displacements. The maximum elbow displacement in the loading plane for males was 189 ± 36 mm during flexion loading, and for females it was 196 ± 36 mm during adduction loading. The data presented here can be used to design shoulder muscle controllers for HBMs and to evaluate the performance of shoulder models.
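    The preferred direction defined above — the vector mean of a spatial tuning pattern — can be computed directly from per-direction muscle activity. The EMG amplitudes below are hypothetical, not the paper's data:

```python
import numpy as np

directions_deg = np.arange(0, 360, 45)   # the eight perturbation directions
# Hypothetical normalized EMG amplitude of one muscle in each direction
emg = np.array([0.9, 0.6, 0.3, 0.1, 0.1, 0.2, 0.5, 0.8])

theta = np.deg2rad(directions_deg)
resultant = emg @ np.exp(1j * theta)               # amplitude-weighted vector sum
preferred_deg = np.rad2deg(np.angle(resultant)) % 360
```

    The magnitude of the resultant (relative to the summed amplitudes) additionally indicates how sharply directional the tuning pattern is.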

    Egocentric and allocentric representations in auditory cortex

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. We also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves, but that a minority of cells can represent sound location in the world independent of our own position.
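    The coordinate-frame ambiguity hinges on a simple transform. A head-centred (egocentric) unit's tuning is fixed in the quantity computed below, whereas a world-centred (allocentric) unit's tuning is fixed in world azimuth and therefore shifts in head-centred coordinates whenever the head turns — which is why only a freely moving subject can distinguish the two. A minimal sketch:

```python
def egocentric_azimuth(source_world_deg, head_world_deg):
    """Sound azimuth relative to the head, wrapped to (-180, 180] degrees."""
    d = (source_world_deg - head_world_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

# A fixed source at world azimuth 30 deg, before and after a head turn:
ego_before = egocentric_azimuth(30.0, 0.0)    # head facing 0 deg
ego_after  = egocentric_azimuth(30.0, 90.0)   # head turned to 90 deg
```

    In a head-fixed recording the two quantities are indistinguishable, since head and world azimuth differ only by a constant.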