The Role of Constraints in Hebbian Learning
Models of unsupervised, correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamic effects of such constraints.
Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, typically leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a "graded" receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is "sharpened" to a subset of maximally correlated inputs. If two equivalent input populations (e.g., two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated, whereas subtractive enforcement allows segregation under these circumstances.
These results may be used to understand constraints both over output cells and over input cells. A variety of rules that can implement constrained dynamics are discussed.
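To make the two regimes concrete, here is a minimal numerical sketch (not the paper's code; the correlation matrix, learning rate, and bounds are illustrative assumptions) contrasting multiplicative and subtractive enforcement of a conserved total synaptic strength:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    idx = np.arange(n)
    # assumed positive, distance-dependent input correlations
    C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)

    def run(mode, steps=5000, dt=0.01, w_max=1.0):
        w = rng.uniform(0.4, 0.6, n)
        total = w.sum()                    # total strength to be conserved
        for _ in range(steps):
            dw = C @ w                     # unconstrained Hebbian growth term
            if mode == "multiplicative":
                w = np.clip(w + dt * dw, 0.0, w_max)
                w *= total / w.sum()       # one common rescaling factor
            else:                          # subtractive
                dw -= dw.mean()            # subtract the same amount everywhere
                w = np.clip(w + dt * dw, 0.0, w_max)
        return w

    print(run("multiplicative"))  # graded weights ~ principal eigenvector of C
    print(run("subtractive"))     # most weights pinned at 0 or w_max

In this toy setting the multiplicative run should stay graded across the correlated inputs, while the subtractive run saturates, matching the qualitative distinction drawn in the abstract.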
Slowness: An Objective for Spike-Timing-Dependent Plasticity?
Slow Feature Analysis (SFA) is an efficient algorithm for learning input-output functions that extract the most slowly varying features from a quickly varying signal. It has been successfully applied to the unsupervised learning of translation-, rotation-, and other invariances in a model of the visual system, to the learning of complex cell receptive fields, and, combined with a sparseness objective, to the self-organized formation of place cells in a model of the hippocampus.
In order to arrive at a biologically more plausible implementation of this learning rule, we consider analytically how SFA could be realized in simple linear continuous and spiking model neurons. It turns out that for the continuous model neuron, SFA can be implemented by means of a modified version of standard Hebbian learning. In this framework we provide a connection to the trace learning rule for invariance learning. We then show that for Poisson neurons, spike-timing-dependent plasticity (STDP) with a specific learning window can learn the same weight distribution as SFA. Surprisingly, we find that the appropriate learning rule reproduces the typical STDP learning window. Both the shape and the timescale are in good agreement with what has been measured experimentally. This offers a completely novel interpretation of the functional role of spike-timing-dependent plasticity in physiological neurons.
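For readers who want the optimization problem behind the abstract, a minimal sketch of linear SFA (my illustration, not the authors' code; the toy signal and mixing matrix are assumptions) reduces it to a generalized eigenvalue problem:

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 5000)
    slow = np.sin(2 * np.pi * 0.2 * t)           # slowly varying source
    fast = np.sin(2 * np.pi * 5.0 * t)           # quickly varying source
    x = np.stack([slow, fast]).T @ rng.normal(size=(2, 2))  # mixed signal
    x -= x.mean(axis=0)

    x_dot = np.diff(x, axis=0)                   # discrete time derivative
    C = x.T @ x / len(x)                         # signal covariance
    C_dot = x_dot.T @ x_dot / len(x_dot)         # derivative covariance

    # minimize <(w . x_dot)^2> subject to <(w . x)^2> = 1:
    # generalized eigenproblem C_dot w = lambda C w; smallest lambda wins
    eigvals, eigvecs = eigh(C_dot, C)
    w_slow = eigvecs[:, 0]
    y = x @ w_slow                               # extracted slowest feature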
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines, a class of neural network models that uses synaptic stochasticity as a means of Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling and of a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an online fashion. Synaptic sampling machines perform equally well using discrete-time artificial units (as in Hopfield networks) or continuous-time leaky integrate-and-fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and to synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based synaptic sampling machines outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for online learning in brain-inspired hardware.
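As a rough sketch of the sampling mechanism described here (my own illustration, not the paper's implementation; the DropConnect-style mask, layer sizes, transmission probability, and logistic units are assumptions, and the event-driven contrastive divergence rule is omitted):

    import numpy as np

    rng = np.random.default_rng(2)
    n_v, n_h, p = 16, 8, 0.5                  # layer sizes, transmission prob.
    W = 0.1 * rng.normal(size=(n_v, n_h))     # bidirectional weights

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    def stochastic_step(v):
        """One v -> h -> v pass with unreliable (randomly masked) synapses."""
        mask = rng.random(W.shape) < p        # which synapses transmit now
        h = (rng.random(n_h) < sigmoid(v @ (W * mask) / p)).astype(float)
        mask = rng.random(W.shape) < p        # fresh mask on the way back
        v = (rng.random(n_v) < sigmoid((W * mask) @ h / p)).astype(float)
        return v, h

    v = (rng.random(n_v) < 0.5).astype(float)
    for _ in range(100):                      # the random masks make this a
        v, h = stochastic_step(v)             # Monte Carlo sampler over states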
Molecular Players in Preserving Excitatory-Inhibitory Balance in the Brain
Information processing in the brain relies on a functional balance between excitation and inhibition, the disruption of which leads to network destabilization and many neurodevelopmental disorders, such as autism spectrum disorders. One of the homeostatic mechanisms that maintains the excitatory-inhibitory balance is called synaptic scaling: neurons dynamically modulate postsynaptic receptor abundance through activity-dependent gene transcription and protein synthesis. In the first part of my thesis work, I discuss our findings that the chromatin reader protein L3mbtl1 is involved in synaptic scaling. We observed that knockout and knockdown of L3mbtl1 cause a lack of synaptic downscaling of glutamate receptors in hippocampal primary neurons and organotypic slice cultures. Genome-wide mapping of L3mbtl1 protein occupancies on chromatin identified Ctnnb1 and Gabra2 as downstream target genes of L3mbtl1-mediated transcriptional regulation. Importantly, partial knockdown of Ctnnb1 by itself prevents synaptic downscaling. Another aspect of maintaining E/I balance centers on GABAergic inhibitory neurons. In the next part of my thesis work, we address the role of the scaffold protein Shank1 in excitatory synapses onto inhibitory interneurons. We showed that parvalbumin-expressing interneurons lacking Shank1 display reduced excitatory synaptic inputs and decreased levels of inhibitory outputs to pyramidal neurons. As a consequence, pyramidal neurons in Shank1 mutant mice exhibit an increased E/I ratio, accompanied by reduced expression of the inhibitory synapse scaffolding protein gephyrin. These results provide novel insights into the roles of chromatin reader molecules and synaptic scaffold molecules in synaptic function and neuronal homeostasis.
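The thesis itself is experimental, but the synaptic scaling concept it builds on has a standard computational caricature; the following toy rule (entirely my illustration; the rate model, target rate, and time constants are assumptions) multiplicatively scales all of a neuron's excitatory weights toward a target activity level, preserving their relative strengths:

    import numpy as np

    rng = np.random.default_rng(3)
    w = rng.uniform(0.5, 1.5, size=20)   # excitatory weights onto one neuron
    r_target, eta = 5.0, 1e-3            # target rate and scaling speed
    r_avg = r_target                     # slow running estimate of activity

    for step in range(20000):
        drive = rng.poisson(2.0, size=w.size)   # presynaptic spike counts
        r = max(float(w @ drive), 0.0)          # crude rectified rate proxy
        r_avg += 0.01 * (r - r_avg)             # slow activity sensor
        # scale every synapse by the same factor: up when too quiet,
        # down when too active, preserving relative weights
        w *= 1.0 + eta * (r_target - r_avg) / r_target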
Towards a Brain-inspired Information Processing System: Modelling and Analysis of Synaptic Dynamics
Biological neural systems (BNS) in general, and the central nervous system (CNS) in particular, exhibit strikingly efficient computational power along with an extremely flexible and adaptive basis for acquiring and integrating new knowledge. Gaining more insight into the actual mechanisms of information processing within the BNS and their computational capabilities is a core objective of modern computer science, computational science and neuroscience. Among the main reasons for this drive to understand the brain is to help improve the quality of life of people suffering from partial or complete loss of brain or spinal cord functions. Brain-computer interfaces (BCI), neural prostheses and other similar approaches are potential solutions, either helping these patients through therapy or advancing rehabilitation. There is, however, a significant lack of knowledge regarding basic information processing within the CNS. Without a better understanding of the fundamental operations or sequences leading to cognitive abilities, applications like BCI or neural prostheses will keep struggling to find a proper and systematic way to help patients in this regard. To gain more insight into these basic information processing methods, this thesis presents an approach that makes a formal distinction between the essence of being intelligent (as in the brain) and the classical class of artificial intelligence, e.g. expert systems. This approach investigates the underlying mechanisms that allow the CNS to perform a massive number of computational tasks with sustained efficiency and flexibility; this is the essence of being intelligent, i.e. being able to learn, adapt and invent.

The approach used in the thesis at hand is based on the hypothesis that the brain, or specifically a biological neural circuit in the CNS, is a dynamic system (network) that features emergent capabilities. These capabilities can be imported into spiking neural networks (SNN) by emulating the dynamic neural system. Emulating the dynamic system requires simulating both the inner workings of the system and the framework in which the information processing tasks are performed. Thus, this work comprises two main parts. The first part introduces a novel dynamic synaptic model as a vital constituent of the inner workings of the dynamic neural system. This model strikes a balance between the necessary biophysical detail and computational cost: biophysical grounding is important so that the abilities of the target dynamic system can be inherited, while simplicity is needed to allow for large-scale simulations and future hardware implementation. In addition, the energy-related aspects of synaptic dynamics are studied and linked to the behaviour of networks seeking stable states of activity. The second part of the thesis is consequently concerned with importing the processing framework of the dynamic system into the environment of SNN. This part of the study investigates the well-established concept of binding by synchrony as a solution to the information binding problem and proposes the concept of synchrony states within SNN. The concept of computing with states is extended to investigate a computational model based on finite-state machines and reservoir computing. Biologically plausible validations of the introduced model and frameworks are performed. The results of these validations indicate that this study represents a significant advance in our knowledge of the mechanisms underpinning the computational power of the CNS. Furthermore, it outlines a roadmap for adopting biological computational capabilities in computational science in general and in biologically inspired spiking neural networks in particular. Large-scale simulations and the development of neuromorphic hardware are work in progress and future work. Among the applications of the introduced work are neural prostheses and bionic automation systems.
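The abstract does not name its synaptic model, so as a hedged stand-in for what a dynamic synapse balancing biophysical detail against computational cost can look like, here is a sketch of the widely used Tsodyks-Markram short-term plasticity equations (all parameter values are generic assumptions, not the thesis's):

    import numpy as np

    tau_rec, tau_fac = 0.8, 0.05   # recovery / facilitation time constants (s)
    U, A = 0.2, 1.0                # baseline utilization, absolute efficacy

    def psc_train(isi, n_spikes):
        """Per-spike response amplitudes for a regular presynaptic train."""
        x, u, out = 1.0, 0.0, []   # available resources, utilization
        for _ in range(n_spikes):
            u = u + U * (1.0 - u)  # a spike transiently raises utilization
            out.append(A * u * x)  # response amplitude ~ u * x
            x = x - u * x          # the spike depletes available resources
            # exponential relaxation until the next spike
            x = 1.0 - (1.0 - x) * np.exp(-isi / tau_rec)
            u = u * np.exp(-isi / tau_fac)
        return out

    # per-spike amplitudes at 20 Hz (brief facilitation, then depression)
    print(psc_train(isi=0.05, n_spikes=8))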
On the Role of Sensory Cancellation and Corollary Discharge in Neural Coding and Behavior
Studies of cerebellum-like circuits in fish have demonstrated that synaptic plasticity shapes the motor corollary discharge responses of granule cells into highly specific predictions of self-generated sensory input. However, the functional significance of such predictions, known as negative images, has not been directly tested. Here we provide evidence for improvements in neural coding and behavioral detection of prey-like stimuli due to negative images. In addition, we find that manipulating synaptic plasticity leads to specific changes in circuit output that disrupt neural coding and detection of prey-like stimuli. These results link synaptic plasticity, neural coding, and behavior, and also provide a circuit-level account of how combining external sensory input with internally generated predictions enhances sensory processing. In addition, the mammalian dorsal cochlear nucleus (DCN) integrates auditory nerve input with a diverse array of sensory and motor signals processed within circuitry similar to that of the cerebellum. Yet how the DCN contributes to early auditory processing has been a longstanding puzzle. Using electrophysiological recordings in mice during licking behavior, we show that DCN neurons are largely unaffected by self-generated sounds while remaining sensitive to external acoustic stimuli. Recordings in deafened mice, together with neural activity manipulations, indicate that self-generated sounds are cancelled by non-auditory signals conveyed by mossy fibers. In addition, DCN neurons exhibit gradual reductions in their responses to acoustic stimuli that are temporally correlated with licking. Together, these findings suggest that the DCN may act as an adaptive filter for cancelling self-generated sounds. Adaptive filtering has been established previously for cerebellum-like sensory structures in fish, suggesting a conserved function for such structures across vertebrates.
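A minimal sketch of the adaptive-filter idea underlying both halves of this work (my illustration, not the study's model; the signals, basis, and learning rate are assumptions): corollary discharge inputs learn a negative image that cancels the predictable, self-generated component of the sensory input, here via an LMS-style update standing in for the anti-Hebbian plasticity described in such circuits:

    import numpy as np

    rng = np.random.default_rng(4)
    n_basis, T = 50, 200
    G = rng.random((T, n_basis))      # corollary-discharge basis over time
    w = np.zeros(n_basis)             # plastic weights onto the output cell
    predictable = np.sin(np.linspace(0, 2 * np.pi, T))  # self-generated input
    eta = 0.05

    for trial in range(500):
        external = 0.3 * rng.normal(size=T)   # unpredictable "prey" signal
        sensory = predictable + external
        negative_image = G @ w                # learned prediction
        output = sensory - negative_image     # response after cancellation
        # grow the prediction wherever the response remains correlated
        # with the corollary discharge (LMS / delta-rule update)
        w += eta * G.T @ output / T

    # after learning, `output` carries mostly the external component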