
    Correlation Maps Allow Neuronal Electrical Properties to be Predicted from Single-cell Gene Expression Profiles in Rat Neocortex

    The computational power of the neocortex arises from interactions of multiple neurons, which display a wide range of electrical properties. The gene expression profiles underlying this phenotypic diversity are unknown. To explore this relationship, we combined whole-cell electrical recordings with single-cell multiplex RT-PCR of rat (P13-16) neocortical neurons to obtain cDNA libraries of 26 ion channels (including voltage-activated potassium channels, Kv1.1/2/4/6, Kvβ1/2, Kv2.1/2, Kv3.1/2/3/4, Kv4.2/3; sodium/potassium-permeable hyperpolarization-activated channels, HCN1/2/3/4; the calcium-activated potassium channel, SK2; voltage-activated calcium channels, Caα1A/B/G/I, Caβ1/3/4), three calcium-binding proteins (calbindin, parvalbumin and calretinin) and GAPDH. We found a previously unreported clustering of ion channel genes around the three calcium-binding proteins. We further determined that cells similar in their expression patterns were also similar in their electrical properties. Subsequent regression modeling with statistical resampling yielded a set of coefficients that reliably predicted electrical properties from the expression profile of individual neurons. This is the first report of a consistent relationship between the co-expression of a large profile of ion channel and calcium-binding protein genes and the electrical phenotype of individual neocortical neurons.
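
    As an illustration of the final step, the sketch below bootstraps a linear regression that predicts one electrical property from single-cell expression levels. The simulated dataset, the number of bootstrap draws, and the use of ordinary least squares are assumptions for demonstration, not the authors' actual pipeline.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical dataset: expression counts for 26 genes in 50 cells,
        # and one electrical property (e.g. spike width) per cell.
        n_cells, n_genes = 50, 26
        X = rng.poisson(5, size=(n_cells, n_genes)).astype(float)
        true_w = rng.normal(0, 1, n_genes)
        y = X @ true_w + rng.normal(0, 1, n_cells)

        X1 = np.column_stack([np.ones(n_cells), X])  # add intercept column

        # Statistical resampling: refit the regression on bootstrapped cells
        # and keep the distribution of coefficients.
        n_boot = 1000
        coefs = np.empty((n_boot, n_genes + 1))
        for b in range(n_boot):
            idx = rng.integers(0, n_cells, n_cells)  # sample cells with replacement
            coefs[b], *_ = np.linalg.lstsq(X1[idx], y[idx], rcond=None)

        mean_w = coefs.mean(axis=0)                       # point estimates
        ci = np.percentile(coefs, [2.5, 97.5], axis=0)    # 95% bootstrap intervals

        # A coefficient whose interval excludes zero is a candidate reliable predictor.
        reliable = (ci[0] > 0) | (ci[1] < 0)
        print("reliable predictor genes:", np.flatnonzero(reliable[1:]))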

    Continuous Attractors with Morphed/Correlated Maps

    Continuous attractor networks are used to model the storage and representation of analog quantities, such as the position of a visual stimulus. The storage of multiple continuous attractors in the same network has previously been studied in the context of self-position coding: several uncorrelated maps of environments are stored in the synaptic connections, and a position in a given environment is represented by a localized pattern of neural activity in the corresponding map, driven by a spatially tuned input. Here we analyze networks storing a pair of correlated maps, or a morph sequence between two uncorrelated maps. We find a novel state in which the network activity is simultaneously localized in both maps. In this state, a fixed cue presented to the network does not uniquely determine the location of the bump, i.e., the response is unreliable, with neurons not always responding when their preferred input is present. When the tuned input varies smoothly in time, the neuronal responses become reliable and selective for the environment: the subset of neurons responsive to a moving input in one map changes almost completely in the other map. This form of remapping is a non-trivial transformation between the tuned input to the network and the resulting tuning curves of the neurons. The new state of the network could be related to the formation of direction selectivity in one-dimensional environments and to hippocampal remapping. The applicability of the model is not confined to self-position representations; we show an instance of the network solving a simple delayed discrimination task.
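
    For context, the sketch below implements the classic multichart setup that the abstract builds on: two ring maps (here uncorrelated, for simplicity) stored in one weight matrix, with a cue tuned in one map. All parameter values are illustrative assumptions, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 200
        theta1 = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred positions, map 1
        theta2 = rng.permutation(theta1)                       # remapped positions, map 2

        # Recurrent weights: uniform inhibition plus a cosine interaction per map.
        J0, J1 = -0.5, 1.2
        W = (J0 + J1 * np.cos(theta1[:, None] - theta1[None, :])
                + J1 * np.cos(theta2[:, None] - theta2[None, :])) / N

        def simulate(cue, steps=500, dt=0.1):
            """Relax rate dynamics dr/dt = -r + [W r + I]_+ under a tuned cue."""
            r = np.zeros(N)
            I = 0.2 * np.cos(theta1 - cue)   # input tuned in map-1 coordinates
            for _ in range(steps):
                r += dt * (-r + np.maximum(W @ r + I, 0.0))
            return r

        def concentration(theta, r):
            """Resultant length: 1 = perfectly localized bump, 0 = dispersed."""
            return abs(np.sum(r * np.exp(1j * theta))) / np.sum(r)

        r = simulate(cue=np.pi)
        # Activity is localized in the cued map but dispersed in the other.
        print(f"localization: map 1 = {concentration(theta1, r):.2f}, "
              f"map 2 = {concentration(theta2, r):.2f}")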

    Neural network model of the primary visual cortex: From functional architecture to lateral connectivity and back

    The role of intrinsic cortical dynamics is a debated issue. A recent optical imaging study (Kenet et al., 2003) found that activity patterns similar to orientation maps (OMs) emerge in the primary visual cortex (V1) even in the absence of sensory input, suggesting an intrinsic mechanism of OM activation. To better understand these results and shed light on intrinsic V1 processing, we propose a neural network model in which OMs are encoded by the intrinsic lateral connections. The proposed connectivity pattern depends on the preferred orientation and, unlike previous models, on the degree of orientation selectivity of the interconnected neurons. We prove that the network has a ring attractor composed of approximated versions of the OMs. Consequently, OMs emerge spontaneously when the network is presented with an unstructured noisy input. Simulations show that the model can be applied to experimental data and generate realistic OMs. We study a variation of the model with spatially restricted connections, and show that it gives rise to states composed of several OMs. We hypothesize that these states can represent local properties of the visual scene.
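
    A minimal sketch of the connectivity idea follows, with all equations and parameters as illustrative assumptions rather than the paper's exact formulation: lateral weights depend on both preferred orientation and selectivity, and an unstructured noisy input settles onto an OM-like state.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 400
        phi = rng.uniform(0, np.pi, N)   # preferred orientations
        s = rng.uniform(0.2, 1.0, N)     # degree of orientation selectivity

        # Lateral weights depend on preferred orientation AND selectivity
        # (orientation has period pi, hence the factor 2 in the cosine).
        J0, J2 = 1.0, 8.0                # illustrative inhibition/excitation strengths
        W = (-J0 + J2 * s[:, None] * s[None, :]
                  * np.cos(2 * (phi[:, None] - phi[None, :]))) / N

        # Unstructured noisy drive; threshold-linear dynamics settle onto the attractor.
        r = np.zeros(N)
        for _ in range(3000):
            drive = W @ r + 1.0 + 0.1 * rng.normal(size=N)
            r += 0.05 * (-r + np.maximum(drive, 0.0))

        # Compare the emergent state with idealized, selectivity-scaled OMs.
        angles = np.linspace(0, np.pi, 60, endpoint=False)
        corr = [np.corrcoef(r, s * np.cos(2 * (phi - a)))[0, 1] for a in angles]
        best = angles[int(np.argmax(corr))]
        print(f"state best matches the OM at {best:.2f} rad (corr = {max(corr):.2f})")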

    An algorithm for the analysis of temporally structured multidimensional measurements

    Analysis of multichannel recordings acquired with contemporary imaging or electrophysiological methods in neuroscience is often difficult due to the high dimensionality of the data and the low signal-to-noise ratio. We developed a method that addresses both problems by utilizing prior information about the temporal structure of the signal and the noise. This information is expressed mathematically in terms of sets of correlation matrices, a versatile approach that allows the treatment of a large class of signal and noise sources, including non-stationary sources or correlated signal and noise sources. We present a mathematical analysis of the algorithm, as well as its application to an artificial dataset, and show that the algorithm is tolerant of inaccurate assumptions about the temporal structure of the data. We suggest that the algorithm, which we name temporally structured component analysis, can be highly beneficial to various multichannel measurement techniques, such as fMRI or optical imaging.
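
    One common way to exploit such temporal priors is sketched below, in the spirit of the described algorithm but not necessarily the authors' exact formulation: contrast a correlation matrix computed under a "the signal is slow" prior with the raw-data correlation matrix via a generalized eigendecomposition. The simulated data, smoothing filter, and dimensions are illustrative.

        import numpy as np
        from scipy.linalg import eigh
        from scipy.signal import lfilter

        rng = np.random.default_rng(3)
        n_ch, n_t = 20, 5000
        t = np.arange(n_t)

        source = np.sin(2 * np.pi * t / 200.0)   # slow, temporally structured signal
        mixing = rng.normal(size=n_ch)
        X = np.outer(mixing, source) + rng.normal(size=(n_ch, n_t))  # low-SNR mixture

        # The prior "the signal is slow" expressed as a moving-average filter:
        # correlations of the smoothed data emphasize slow components.
        kernel = np.ones(25) / 25.0
        Xs = lfilter(kernel, [1.0], X, axis=1)

        C_signal = Xs @ Xs.T / n_t   # correlation matrix under the signal prior
        C_total = X @ X.T / n_t      # correlation matrix of the raw data

        # Generalized eigenvectors maximize signal-structured variance relative
        # to total variance; the top component is the extracted signal direction.
        evals, evecs = eigh(C_signal, C_total)
        w = evecs[:, -1]             # eigenvector with the largest ratio
        recovered = w @ X

        corr = np.corrcoef(recovered, source)[0, 1]
        print(f"correlation with true source: {abs(corr):.2f}")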

    Dynamics of memory representations in networks with novelty-facilitated synaptic plasticity

    The ability to associate some stimuli while differentiating between others is an essential characteristic of biological memory. Theoretical models identify memories as attractors of neural network activity, with learning based on Hebb-like synaptic modifications. Our analysis shows that when network inputs are correlated, this mechanism results in overassociations, even up to several memories "merging" into one. To counteract this tendency, we introduce a learning mechanism that involves novelty-facilitated modifications, accentuating synaptic changes proportionally to the difference between the network input and stored memories. This mechanism introduces a dependency of synaptic modifications on previously acquired memories, enabling a wide spectrum of memory associations, ranging from absolute discrimination to complete merging. The model predicts that memory representations should be sensitive to learning order, consistent with recent psychophysical studies of face recognition and electrophysiological experiments on hippocampal place cells. The proposed mechanism is compatible with a recent biological model of novelty-facilitated learning in hippocampal circuitry.
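
    A minimal sketch of the flavor of such a rule follows, with the novelty measure and all parameters as assumptions rather than the paper's exact definitions: Hebbian updates in a Hopfield-style network are scaled by how much the presented pattern differs from what the network currently retrieves.

        import numpy as np

        rng = np.random.default_rng(4)
        N = 200

        def retrieve(W, x, steps=20):
            """Iterate sign dynamics toward the nearest stored attractor."""
            for _ in range(steps):
                x = np.sign(W @ x + 1e-9)   # tiny offset avoids sign(0)
            return x

        W = np.zeros((N, N))
        base = rng.choice([-1, 1], N)

        # Present a sequence of correlated patterns (each flips 10% of `base`).
        for _ in range(5):
            p = base.copy()
            p[rng.random(N) < 0.1] *= -1

            recalled = retrieve(W, p)
            novelty = np.mean(p != recalled)   # fraction of mismatched units

            # Novelty-facilitated Hebbian update: larger change for more novel input;
            # familiar (well-recalled) patterns barely modify the weights.
            W += novelty * np.outer(p, p) / N
            np.fill_diagonal(W, 0.0)
            print(f"novelty = {novelty:.2f}")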