52 research outputs found

    Nonlinear slow-timescale mechanisms in synaptic plasticity

    Learning and memory rely on synapses changing their strengths in response to neural activity. However, there is a substantial gap between the timescales of neural electrical dynamics (1-100 ms) and organism behaviour during learning (seconds-minutes). What mechanisms bridge this timescale gap? What are the implications for theories of brain learning? Here I first cover experimental evidence for slow-timescale factors in plasticity induction. Then I review possible underlying cellular and synaptic mechanisms, and insights from recent computational models that incorporate such slow-timescale variables. I conclude that future progress in understanding brain learning across timescales will require both experimental and computational modelling studies that map out the nonlinearities implemented by both fast and slow plasticity mechanisms at synapses, and crucially, their joint interactions. [Abstract copyright: Copyright © 2023 Elsevier Ltd. All rights reserved.]

    Implications of stochastic ion channel gating and dendritic spine plasticity for neural information processing and storage

    On short timescales, the brain represents, transmits, and processes information through the electrical activity of its neurons. On long timescales, the brain stores information in the strength of the synaptic connections between its neurons. This thesis examines the surprising implications of two separate, well-documented microscopic processes — the stochastic gating of ion channels and the plasticity of dendritic spines — for neural information processing and storage. Electrical activity in neurons is mediated by many small membrane proteins called ion channels. Although single ion channels are known to open and close stochastically, the macroscopic behaviour of populations of ion channels is often approximated as deterministic. This is based on the assumption that the intrinsic noise introduced by stochastic ion channel gating is so weak as to be negligible. In this study we take advantage of newly developed efficient computer simulation methods to examine cases where this assumption breaks down. We find that ion channel noise can mediate spontaneous action potential firing in small nerve fibres, and explore its possible implications for neuropathic pain disorders of peripheral nerves. We then characterise the magnitude of ion channel noise for single neurons in the central nervous system, and demonstrate through simulation that channel noise is sufficient to corrupt synaptic integration, spike timing and spike reliability in dendritic neurons. The second topic concerns neural information storage. Learning and memory in the brain have long been believed to be mediated by changes in the strengths of synaptic connections between neurons — a phenomenon termed synaptic plasticity. Most excitatory synapses in the brain are hosted on small membrane structures called dendritic spines, and plasticity of these synapses is dependent on calcium concentration changes within the dendritic spine. In the last decade, it has become clear that spines are highly dynamic structures that appear and disappear, and can shrink and enlarge on rapid timescales. It is also clear that this spine structural plasticity is intimately linked to synaptic plasticity: small spines host weak synapses, and large spines host strong synapses. Because spine size is one factor that determines synaptic calcium concentration, it is likely that spine structural plasticity influences the rules of synaptic plasticity. We theoretically study the consequences of this observation, and find that different spine-size to synaptic-strength relationships can lead to qualitative differences in long-term synaptic strength dynamics and information storage. This novel theory unifies much disparate existing data, including the unimodal distribution of synaptic strength, the saturation of synaptic plasticity, and the stability of strong synapses.
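
    As an illustration of the kind of stochastic channel gating discussed in the thesis, the sketch below simulates a population of two-state ion channels with binomial updates and compares the fluctuations in open fraction for small versus large populations. The rate constants, channel counts, and time step are arbitrary assumptions chosen for illustration, not values or code from the thesis.

        # Illustrative sketch only (not the thesis's simulation code): binomial update of a
        # two-state ion channel population; smaller populations show larger fluctuations.
        import numpy as np

        rng = np.random.default_rng(0)
        alpha, beta = 0.5, 1.0        # opening / closing rates per ms (assumed values)
        dt, t_max = 0.01, 50.0        # time step and duration in ms
        steps = int(t_max / dt)

        def open_fraction(n_channels):
            """Return the open fraction over time for a population of n_channels."""
            n_open = 0
            trace = np.empty(steps)
            for t in range(steps):
                # Each closed channel opens with prob alpha*dt; each open channel closes with prob beta*dt
                opened = rng.binomial(n_channels - n_open, alpha * dt)
                closed = rng.binomial(n_open, beta * dt)
                n_open += opened - closed
                trace[t] = n_open / n_channels
            return trace

        small, large = open_fraction(100), open_fraction(10_000)
        print(f"steady state p = {alpha / (alpha + beta):.2f}, "
              f"sd with 100 channels = {small[steps//2:].std():.3f}, "
              f"sd with 10,000 channels = {large[steps//2:].std():.3f}")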

    Adaptive Estimators Show Information Compression in Deep Neural Networks

    To improve how neural networks function, it is crucial to understand their learning process. The information bottleneck theory of deep learning proposes that neural networks achieve good generalization by compressing their representations to disregard information that is not relevant to the task. However, empirical evidence for this theory is conflicting, as compression was only observed when networks used saturating activation functions. In contrast, networks with non-saturating activation functions achieved comparable levels of task performance but did not show compression. In this paper we develop more robust mutual information estimation techniques that adapt to the hidden activity of neural networks and produce more sensitive measurements of activations from all functions, especially unbounded ones. Using these adaptive estimation techniques, we explore compression in networks with a range of different activation functions. With two improved estimation methods, we first show that saturation of the activation function is not required for compression, and that the amount of compression varies between different activation functions. We also find that there is a large amount of variation in compression between different network initializations. Second, we see that L2 regularization leads to significantly increased compression while preventing overfitting. Finally, we show that only compression of the last layer is positively correlated with generalization. Comment: Accepted as a poster presentation at ICLR 2019 and reviewed on OpenReview (available at https://openreview.net/forum?id=SkeZisA5t7). Pages: 11.
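
    The core idea of activity-adapted binning can be sketched as follows. This is not the paper's estimator: it is a minimal plug-in estimate of I(T;Y) in which the bin edges follow the empirical quantiles of each hidden unit, so unbounded (e.g. ReLU) activations are covered as well as saturating ones. The bin count and the toy data are assumptions.

        # Minimal sketch, assuming quantile ("adaptive") binning; not the paper's estimators.
        import numpy as np

        def adaptive_mi(hidden, labels, n_bins=10):
            """Plug-in estimate of I(T;Y) after per-unit quantile binning of hidden activity."""
            hidden = np.asarray(hidden)
            # Bin edges adapt to each unit's empirical activity distribution
            edges = np.quantile(hidden, np.linspace(0, 1, n_bins + 1), axis=0)
            codes = np.stack([np.digitize(hidden[:, j], edges[1:-1, j])
                              for j in range(hidden.shape[1])], axis=1)
            t = np.unique(codes, axis=0, return_inverse=True)[1].ravel()  # one discrete state per sample
            y = np.unique(labels, return_inverse=True)[1].ravel()
            joint = np.zeros((t.max() + 1, y.max() + 1))
            np.add.at(joint, (t, y), 1)
            p = joint / joint.sum()
            pt, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
            nz = p > 0
            return float((p[nz] * np.log2(p[nz] / (pt @ py)[nz])).sum())

        # Toy usage; note that plug-in estimates like this are biased upward for small samples
        rng = np.random.default_rng(1)
        h = rng.normal(size=(5000, 2))                               # stand-in "hidden layer" activity
        y = (h[:, 0] + 0.5 * rng.normal(size=5000) > 0).astype(int)  # stand-in labels
        print(f"I(T;Y) ~ {adaptive_mi(h, y):.2f} bits")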

    Heterosynaptic plasticity rules induce small-world network topologies

    Heterosynaptic plasticity is a form of ‘off-target’ synaptic plasticity where unstimulated synapses change strength. Here we propose that one purpose of heterosynaptic plasticity is to encourage small-world connectivity [6, 7]. We compare different plasticity rules in abstract weighted graphs, finding that they yield distinct network architectures.
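
    The kind of comparison described can be sketched with standard graph tools. The rule below (strengthen one stimulated synapse per node and proportionally depress that node's other weights) is only a caricature of a heterosynaptic rule, chosen to show how the topology produced by a rule can be scored against a density-matched random graph; it is not a rule from the paper, and the graph size, iteration count, and sparsification threshold are assumptions.

        # Toy sketch, not the paper's model: a homo/heterosynaptic rule on a random weighted
        # graph, followed by clustering and path-length statistics from networkx.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)
        n = 60
        W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)

        for _ in range(500):
            i = int(rng.integers(n))
            j = int(np.argmax(W[i]))          # "stimulated" synapse: node i's strongest partner
            total = W[i].sum()
            W[i, j] += 0.5                    # homosynaptic potentiation at the stimulated synapse
            W[i] *= total / W[i].sum()        # heterosynaptic depression keeps node i's total weight fixed
            W[:, i] = W[i]                    # keep the matrix symmetric

        # Binarise by keeping each node's k strongest connections, then compare topologies
        k = 6
        A = np.zeros_like(W, dtype=bool)
        for i in range(n):
            A[i, np.argsort(W[i])[-k:]] = True
        G = nx.from_numpy_array((A | A.T).astype(int))
        R = nx.gnm_random_graph(n, G.number_of_edges(), seed=0)   # density-matched random graph

        for name, g in [("plasticity-shaped", G), ("random", R)]:
            if nx.is_connected(g):
                print(name, "clustering:", round(nx.average_clustering(g), 3),
                      "path length:", round(nx.average_shortest_path_length(g), 3))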

    Neural circuit function redundancy in brain disorders

    Redundancy is a ubiquitous property of the nervous system. This means that vastly different configurations of cellular and synaptic components can enable the same neural circuit functions. However, until recently, very little brain disorder research has considered the implications of this characteristic when designing experiments or interpreting data. Here, we first summarise the evidence for redundancy in healthy brains, explaining redundancy and three related sub-concepts: sloppiness, dependencies and multiple solutions. We then lay out key implications for brain disorder research, covering recent examples of redundancy effects in experimental studies on psychiatric disorders. Finally, we give predictions for future experiments based on these concepts.

    Signatures of Bayesian inference emerge from energy efficient synapses

    Biological synaptic transmission is unreliable, and this unreliability likely degrades neural circuit performance. While there are biophysical mechanisms that can increase reliability, for instance by increasing vesicle release probability, these mechanisms cost energy. We examined four such mechanisms along with the associated scaling of their energetic costs. We then embedded these energetic costs for reliability in artificial neural networks (ANNs) with trainable stochastic synapses, and trained these networks on standard image classification tasks. The resulting networks revealed a tradeoff between circuit performance and the energetic cost of synaptic reliability. Additionally, the optimised networks exhibited two testable predictions consistent with pre-existing experimental data. Specifically, synapses with lower variability tended to have 1) higher input firing rates and 2) lower learning rates. Surprisingly, these predictions also arise when synapse statistics are inferred through Bayesian inference. Indeed, we were able to find a formal, theoretical link between the performance-reliability cost tradeoff and Bayesian inference. This connection suggests two incompatible possibilities: evolution may have chanced upon a scheme for implementing Bayesian inference by optimising energy efficiency, or alternatively, energy-efficient synapses may display signatures of Bayesian inference without actually using Bayes to reason about uncertainty. Comment: 29 pages, 11 figures
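
    A minimal sketch of the general setup (not the paper's implementation or its cost functions) is given below: a linear layer whose weights are resampled on every forward pass, trained with an added penalty that grows as synaptic variability shrinks. The 1/variance cost form, layer sizes, penalty weight, and training data here are assumptions made for illustration.

        # Minimal sketch, assuming a generic 1/variance "reliability cost"; not the paper's code.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class StochasticLinear(nn.Module):
            """Linear layer with noisy synapses: each weight is sampled anew on every forward pass."""
            def __init__(self, n_in, n_out):
                super().__init__()
                self.mu = nn.Parameter(0.1 * torch.randn(n_out, n_in))          # mean synaptic weight
                self.log_sigma = nn.Parameter(torch.full((n_out, n_in), -1.0))  # log std of synaptic noise

            def forward(self, x):
                sigma = self.log_sigma.exp()
                w = self.mu + sigma * torch.randn_like(sigma)   # reparameterised noisy weights
                return F.linear(x, w)

            def energy_cost(self):
                # Assumed cost: more reliable (lower-variance) synapses are more expensive
                return (1.0 / self.log_sigma.exp().pow(2)).mean()

        # Toy training loop on random data, trading task loss against the reliability cost
        layer = StochasticLinear(20, 2)
        opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
        x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
        for _ in range(200):
            loss = F.cross_entropy(layer(x), y) + 1e-3 * layer.energy_cost()
            opt.zero_grad(); loss.backward(); opt.step()
        print(f"final loss {loss.item():.3f}")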

    Topological and simplicial features in reservoir computing networks

    Reservoir computing is a framework which uses the nonlinear internal dynamics of a recurrent neural network to perform complex non-linear transformations of the input. This enables reservoirs to carry out a variety of tasks involving the processing of time-dependent or sequential signals. Reservoirs are particularly suited for tasks that require memory or the handling of temporal sequences, common in areas such as speech recognition, time series prediction, and signal processing. Learning is restricted to the output layer and can be thought of as “reading out” or “selecting from” the states of the reservoir. With all but the output weights fixed, reservoirs do not have the costly and difficult training associated with deep neural networks. However, while the reservoir computing framework shows a lot of promise in terms of efficiency and capability, it can be unreliable: existing studies show that small changes in hyperparameters can markedly affect the network’s performance. Here we studied the role of network topology in reservoir computing on three conceptually different tasks: working memory, perceptual decision making, and chaotic time-series prediction. We implemented three different network topologies (ring, lattice, and random) and tested reservoir performance on these tasks. We then used the algebraic topological tools of directed simplicial cliques to study deeper connections between network topology and function, making comparisons across performance and linking with existing reservoir research.
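
    The setup being described can be sketched as an echo state network in which only the readout is trained: the recurrent matrix is built from either a ring or a random topology, and the readout is fit by ridge regression on a simple delayed-recall (working-memory style) task. All hyperparameters below are assumptions, and the lattice topology and simplicial analysis from the study are omitted.

        # Sketch of a reservoir-topology comparison; not the study's code or parameters.
        import numpy as np

        def make_reservoir(n, topology, rng, spectral_radius=0.9):
            if topology == "ring":
                W = np.zeros((n, n))
                for i in range(n):
                    W[i, (i + 1) % n] = rng.standard_normal()       # each unit drives its neighbour
            else:                                                   # sparse random connectivity
                W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.1)
            return W * spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

        def run(W, W_in, u):
            x, states = np.zeros(W.shape[0]), []
            for u_t in u:                                           # fixed recurrent dynamics
                x = np.tanh(W @ x + W_in * u_t)
                states.append(x.copy())
            return np.array(states)

        rng = np.random.default_rng(3)
        n, T, delay = 200, 2000, 10
        u = rng.uniform(-1, 1, T)
        target = np.roll(u, delay)                                  # recall the input from 10 steps back
        W_in = rng.uniform(-0.5, 0.5, n)
        for topo in ("ring", "random"):
            X = run(make_reservoir(n, topo, rng), W_in, u)[delay:]
            y = target[delay:]
            W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ y)   # train readout only (ridge)
            print(topo, "delayed-recall correlation:", round(np.corrcoef(X @ W_out, y)[0, 1], 3))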

    Random and biological network connectivity for reservoir computing: Random Reservoirs Rule! (at Remembering)

    Reservoir computing is a framework where a fixed recurrent neural network (RNN) is used to process input signals and perform computations. Reservoirs are typically randomly initialised, but it is not fully known how connectivity affects performance, and whether particular structures might yield advantages on specific or generic tasks. Simpler topologies often perform as well as more complex networks on prediction tasks. We compare reservoir performance on four task types using the connectomes of C. elegans and the Drosophila larval mushroom body, in comparison with varying degrees of randomisation.
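
    One way to set up the comparison described here is sketched below, assuming the connectome is available as a weighted adjacency matrix (a random surrogate stands in for it). Two controls are shown: shuffling the weights over the existing wiring, and placing the same weights on entirely random connections. Each matrix would then be rescaled and used as the fixed recurrent weights of a reservoir, as in the sketch above. None of this is the study's code; the surrogate matrix, density, and randomisation schemes are assumptions.

        # Sketch of connectome vs randomised reservoir matrices; the "connectome" here is a
        # random surrogate, and the randomisation schemes are assumed, not the study's.
        import numpy as np

        rng = np.random.default_rng(4)
        A = rng.gamma(2.0, 1.0, (100, 100)) * (rng.random((100, 100)) < 0.05)   # stand-in connectome

        def weight_shuffled(A, rng):
            """Keep the wiring diagram, permute the synaptic weights among existing edges."""
            B = A.copy()
            idx = np.nonzero(B)
            B[idx] = rng.permutation(B[idx])
            return B

        def fully_randomised(A, rng):
            """Same number of edges and same weights, placed uniformly at random."""
            flat = np.zeros(A.size)
            w = A[np.nonzero(A)]
            flat[rng.choice(A.size, size=w.size, replace=False)] = rng.permutation(w)
            return flat.reshape(A.shape)

        def as_reservoir(W, spectral_radius=0.9):
            """Rescale a connectivity matrix for use as fixed recurrent reservoir weights."""
            return W * spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

        for name, M in [("connectome", A), ("weight-shuffled", weight_shuffled(A, rng)),
                        ("fully random", fully_randomised(A, rng))]:
            W = as_reservoir(M)
            print(name, "edges:", int((M != 0).sum()),
                  "spectral radius:", round(float(np.max(np.abs(np.linalg.eigvals(W)))), 2))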