35 research outputs found

    Developmental and evolutionary constraints on olfactory circuit selection

    Significance: In this work, we explore the hypothesis that biological neural networks optimize their architecture, through evolution, for learning. We study early olfactory circuits of mammals and insects, which have relatively similar structure but a huge diversity in size. We approximate these circuits as three-layer networks and estimate, analytically, the scaling of the optimal hidden-layer size with input-layer size. We find that both longevity and information in the genome constrain the hidden-layer size, so a range of allometric scalings is possible. However, the experimentally observed allometric scalings in mammals and insects are consistent with biologically plausible values. This analysis should pave the way for a deeper understanding of both biological and artificial networks.

    Evolution of neural activity in circuits bridging sensory and abstract knowledge

    The ability to associate sensory stimuli with abstract classes is critical for survival. How are these associations implemented in brain circuits? And what governs how neural activity evolves during abstract knowledge acquisition? To investigate these questions, we consider a circuit model that learns to map sensory input to abstract classes via gradient-descent synaptic plasticity. We focus on typical neuroscience tasks (simple, and context-dependent, categorization), and study how both synaptic connectivity and neural activity evolve during learning. To make contact with the current generation of experiments, we analyze activity via standard measures such as selectivity, correlations, and tuning symmetry. We find that the model is able to recapitulate experimental observations, including seemingly disparate ones. We determine how, in the model, the behaviour of these measures depends on details of the circuit and the task. These dependencies make experimentally testable predictions about the circuitry supporting abstract knowledge acquisition in the brain.
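
    As a rough illustration of the kind of circuit model described above (not the authors' code; the network sizes, task statistics, and selectivity measure below are our own minimal choices), gradient-descent synaptic plasticity on a simple two-class categorization task can be sketched in NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal circuit: sensory input -> hidden layer -> abstract-class readout.
n_in, n_hid = 20, 100
W = rng.normal(0, 1 / np.sqrt(n_in), (n_hid, n_in))   # input-to-hidden weights
w = rng.normal(0, 1 / np.sqrt(n_hid), n_hid)          # hidden-to-readout weights

# Two sensory prototypes, one per abstract class.
protos = rng.normal(size=(2, n_in))

eta = 0.05
for step in range(2000):
    c = rng.integers(2)                                # class label
    x = protos[c] + 0.3 * rng.normal(size=n_in)        # noisy stimulus
    h = np.tanh(W @ x)                                 # hidden activity
    y = w @ h                                          # scalar readout
    err = y - (1.0 if c == 1 else -1.0)
    # Gradient-descent plasticity on both layers (squared-error loss).
    w -= eta * err * h
    W -= eta * err * np.outer(w * (1 - h**2), x)

# A standard activity measure: per-neuron class selectivity in the hidden layer.
h0, h1 = np.tanh(W @ protos[0]), np.tanh(W @ protos[1])
selectivity = (h1 - h0) / (np.abs(h1) + np.abs(h0) + 1e-9)
```

    Tracking measures like this selectivity index over training steps is the kind of comparison with recorded neural activity that the abstract refers to.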

    On the Stability and Scalability of Node Perturbation Learning

    To survive, animals must adapt synaptic weights based on external stimuli and rewards. And they must do so using local, biologically plausible, learning rules – a highly nontrivial constraint. One possible approach is to perturb neural activity (or use intrinsic, ongoing noise to perturb it), determine whether performance increases or decreases, and use that information to adjust the weights. This algorithm – known as node perturbation – has been shown to work on simple problems, but little is known about either its stability or its scalability with respect to network size. We investigate these issues both analytically, in deep linear networks, and numerically, in deep nonlinear ones. We show analytically that in deep linear networks with one hidden layer, both learning time and performance depend very weakly on hidden layer size. However, unlike stochastic gradient descent, when there is model mismatch between the student and teacher networks, node perturbation is always unstable. The instability is triggered by weight diffusion, which eventually leads to very large weights. This instability can be suppressed by weight normalization, at the cost of bias in the learning rule. We confirm numerically that a similar instability, and to a lesser extent scalability, exist in deep nonlinear networks trained on both a motor control task and image classification tasks. Our study highlights the limitations and potential of node perturbation as a biologically plausible learning rule in the brain.
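
    The perturbation-based update described above is simple to write down. Below is a minimal NumPy sketch, assuming a one-hidden-layer linear student learning a linear teacher; only the input weights are trained here, and all parameter values are illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer linear student: y = W2 @ (W1 @ x).
n_in, n_hid, n_out = 10, 50, 1
W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_hid, n_in))
W2 = rng.normal(0, 1 / np.sqrt(n_hid), (n_out, n_hid))

# A fixed linear teacher defines the target mapping.
T = rng.normal(0, 1 / np.sqrt(n_in), (n_out, n_in))

eta, sigma = 0.01, 1e-3          # learning rate, perturbation amplitude
for step in range(5000):
    x = rng.normal(size=n_in)
    target = T @ x

    h = W1 @ x                                   # unperturbed hidden activity
    loss = 0.5 * np.sum((W2 @ h - target) ** 2)

    xi = rng.normal(size=n_hid)                  # perturb the hidden nodes
    loss_pert = 0.5 * np.sum((W2 @ (h + sigma * xi) - target) ** 2)

    # Node perturbation: the loss change, correlated with the injected noise,
    # gives a stochastic estimate of the gradient w.r.t. hidden activity.
    delta = (loss_pert - loss) / sigma
    W1 -= eta * delta * np.outer(xi, x)
```

    Monitoring np.linalg.norm(W1) during training is one way to observe the weight diffusion that, under student-teacher mismatch, drives the instability discussed in the abstract.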

    Synaptic dynamics and learning: How do the biological mechanisms of plasticity realize efficient learning rules that enable neural information processing?

    Type of degree: Doctoral degree (by coursework). Examination committee: (Chief examiner) Visiting Professor Tomoki Fukai, The University of Tokyo; Professor Akinao Nose, The University of Tokyo; Professor Masato Okada, The University of Tokyo; Associate Professor Tatsuhiro Hisatsune, The University of Tokyo; Lecturer Yasutoshi Makino, The University of Tokyo. University of Tokyo (東京大学).

    Rapid Bayesian learning in the mammalian olfactory system

    How can rodents make sense of the olfactory environment without supervision? Here, the authors formulate olfactory learning as an integrated Bayesian inference problem, then derive a set of synaptic plasticity rules and neural dynamics that enables near-optimal learning of odor identification.

    Interplay between Short- and Long-Term Plasticity in Cell-Assembly Formation

    Various hippocampal and neocortical synapses of the mammalian brain show both short-term plasticity and long-term plasticity, which are considered to underlie learning and memory in the brain. According to Hebb's postulate, synaptic plasticity encodes memory traces of past experiences into cell assemblies in cortical circuits. However, it remains unclear how the various forms of long-term and short-term synaptic plasticity cooperatively create and reorganize such cell assemblies. Here, we investigate the mechanism by which the three forms of synaptic plasticity known in cortical circuits, i.e., spike-timing-dependent plasticity (STDP), short-term depression (STD) and homeostatic plasticity, cooperatively generate, retain and reorganize cell assemblies in a recurrent neuronal network model. We show that multiple cell assemblies generated by external stimuli can survive noisy spontaneous network activity for an adequate range of the strength of STD. Furthermore, our model predicts that a symmetric temporal window of STDP, such as that observed under dopaminergic modulation of hippocampal neurons, is crucial for the retention and integration of multiple cell assemblies. These results may have implications for the understanding of cortical memory processes.
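
    Two of the synaptic mechanisms whose interplay the paper studies can each be written in a few lines. Below is a minimal sketch with illustrative parameter values (the paper's actual values and its homeostatic rule are omitted; u_sd is the release probability that also appears in the figure captions further down):

```python
import numpy as np

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau=20.0, symmetric=False):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).

    Asymmetric (Hebbian) window: potentiate pre-before-post, depress the reverse.
    Symmetric window: potentiate any near-coincident pairing, regardless of order.
    """
    if symmetric:
        return A_plus * np.exp(-abs(dt) / tau)
    return A_plus * np.exp(-dt / tau) if dt > 0 else -A_minus * np.exp(dt / tau)

def std_step(x, spiked, u_sd=0.2, tau_rec=200.0, dt=1.0):
    """Short-term depression: each presynaptic spike consumes a fraction u_sd
    of the available synaptic resources x, which recover with time constant tau_rec."""
    x += (1.0 - x) * dt / tau_rec
    if spiked:
        x *= 1.0 - u_sd
    return x

# A symmetric window rewards both spike orderings; an asymmetric one does not:
print(stdp_dw(-5.0), stdp_dw(-5.0, symmetric=True))
```

    The contrast printed on the last line is the qualitative difference between the Hebbian and symmetric windows compared in the figure captions below.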

    Mixed Signal Learning by Spike Correlation Propagation in Feedback Inhibitory Circuits

    The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is the candidate synaptic-level mechanism. Because sensory inputs typically have spike correlation, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlation in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning through STDP, or whether it is beneficial for efficient spike-based learning from uncertain stimuli. To explore these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits implement. We derive analytical conditions under which neurons detect minor signals with STDP, and show that, depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by including excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory.
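
    The basic competition underlying such signal detection, correlated inputs outcompeting independent noise inputs under STDP, can be reproduced with a single postsynaptic unit and exponential spike traces. The sketch below omits the lateral inhibitory circuitry that is the paper's focus and uses our own illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# 100 Poisson inputs at ~10 Hz; the first 20 additionally share correlated
# "signal" events, the rest fire independently (the masking noise).
n_in, n_sig = 100, 20
rate, p_event, T = 0.01, 0.005, 100_000         # per-ms probabilities; T in ms

w = np.full(n_in, 0.5)                          # feedforward weights
pre_trace = np.zeros(n_in)
post_trace = 0.0
tau, A_plus, A_minus = 20.0, 0.005, 0.00525     # slight depression bias

for t in range(T):
    pre = rng.random(n_in) < rate
    if rng.random() < p_event:                  # shared event correlates the signal group
        pre[:n_sig] |= rng.random(n_sig) < 0.5
    post = rng.random() < min(1.0, 0.02 * w[pre].sum())

    # Exponential traces implement the pair-based STDP window online.
    decay = np.exp(-1.0 / tau)
    pre_trace *= decay
    post_trace *= decay
    pre_trace[pre] += 1.0
    if post:
        post_trace += 1.0
        w += A_plus * pre_trace                 # pre-before-post: potentiation
    w[pre] -= A_minus * post_trace              # post-before-pre: depression
    np.clip(w, 0.0, 1.0, out=w)

# If correlation-based learning succeeds, weights onto the correlated group
# should end up larger than weights onto the independent background inputs.
print(w[:n_sig].mean(), w[n_sig:].mean())
```

    This reproduces only the feedforward part of the story; the paper's lateral excitatory and inhibitory STDP would additionally shape how such detectors divide up mixed sources.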

    Retention of cell assemblies by weak STD.

    (A) A first external input activates 20% of excitatory neurons (ca1, blue shaded area), and a second input then activates another 20% of excitatory neurons (ca2, green area). Neurons not stimulated by the external inputs are regarded as background (bg). (B) Time evolution of the relative synaptic weight w_2. The blue shade indicates the interval of the first stimulus and the green shade that of the second. We defined the retention time of a cell assembly as the time at which w_2 crosses the threshold from above (w_2 = 0.015: dotted line). (C) Time evolution of the average synaptic weight for three values of u_sd. The weights were averaged separately over synapses within and between different cell assemblies and background neurons. In the left and middle panels, the black lines for bg-to-bg connections are hidden behind the purple lines. (D) Raster plots of spiking activity corresponding to the three cases shown in C. Color codes are the same as in Figure 2C (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0101535#pone-0101535-g002). The first 500 neurons belong to the first assembly and the next 500 neurons to the second assembly. (E) Synaptic weight matrices of excitatory connections for the above three cases. (F), (G) The relative synaptic weight w_2 and the retention time of ca2 as functions of the release probability u_sd. (H) Relationship between the input duration to ca1 and the relative synaptic weight w_2 at t = 30 min.

    The retention of cell assemblies with Hebbian and symmetric STDP windows.

    (A) An asymmetric STDP window calculated for J_ij^EE = 0.15. (B) The retention time varies significantly with the release probability of STD. We defined the retention time as the period with sufficiently large relative weights: w_p > 0.1 J_EE. (C) Raster plot of spiking activity for the Hebbian STDP rule shown in A. (D) A symmetric STDP window calculated for J_ij^EE = 0.15. (E) Dynamics of the average synaptic weights at u_sd = 0.2 within (blue) and between (black) assemblies. (F) Raster plot of spiking activity for the symmetric STDP rule shown in D. (G) Relationship between the release probability u_sd and the relative weight w_p at t = 30 min. (H) (top) Histogram of the number of activations over all cell assemblies shown in F. The abscissa shows the number of activations of each assembly normalized by the average number of activations over all assemblies. (middle) Histogram of the occurrence of all possible 20 (5×4) sequential transitions between two assemblies. The occurrence count of each transition was normalized by the average occurrence count over all transitions. (bottom) Histograms of triplet transitions, such as assembly 1 → 2 → 1 (left) and 1 → 2 → 3 (right), normalized over all possible 80 (5×4 + 5×4×3) triplet transition patterns. All three histograms were obtained from the results of five simulation trials.