
    Impact of Second Messenger Modulation on Activity-Dependent and Basal Properties of Excitatory Synapses

    Cognitive processing in the central nervous system relies on accurate information propagation; neurotransmission is the fundamental mechanism underlying network information flow. Because network information is coded by the timing and the strength of neuronal activity, the synaptic properties that translate neuronal activity into synaptic output profoundly determine the precision of information transfer. Synaptic properties are in turn shaped by changes in network activity to ensure appropriate synaptic output. Activity-dependent adjustment of synaptic properties is often initiated by second messenger signals. Understanding how second messengers sculpt synaptic properties and produce changes in synaptic output is key to elucidating the interplay between network activity and synaptic properties. We studied the effect of second messenger modulation on activity-dependent and basal properties of rat hippocampal excitatory synapses using electrophysiological and optical approaches. We focused on two second-messenger pathways that potentiate transmission: cAMP and diacylglycerol (DAG) signals. In parallel, we also compared the effects of manipulating calcium influx, which is known to potentiate synaptic transmission by increasing release probability (Pr). During high-frequency stimulation, we found that both cAMP and DAG signals potentiated phasic transmission, as previously characterized. In parallel with increasing phasic transmission, the modulators also enhanced high-frequency-associated asynchronous transmission, which emerges late during stimulus trains and is relatively long-lasting. However, such parallel potentiation of phasic and asynchronous transmission was not seen in elevated calcium; high calcium preferentially promoted asynchronous transmission. With low-frequency stimulation, we found that cAMP and high calcium enhanced synaptic output by potentiating synapses with basally high Pr. Conversely, DAG signals recruited neurotransmission from both high-Pr and low-Pr terminals, including presynaptically quiescent synapses. Taken together, these results suggest that second messenger modulation differentially shapes the basal properties of synapses; second messengers also fine-tune activity-dependent synaptic responses differently from manipulations of calcium influx. These results likely have physiological relevance to second messenger-dependent sculpting of the temporal and spatial properties of synapses.
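
    To make the role of release probability concrete, the sketch below simulates a generic binomial quantal model of transmitter release in Python. This is an illustrative model only: the site counts, Pr values and the 1.5x potentiation factor are assumptions for demonstration, not parameters measured in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_epsc(n_sites, p_release, q=1.0, n_trials=10_000):
    """Mean response under a binomial quantal model: N release sites,
    release probability Pr, quantal size q (arbitrary units)."""
    return q * rng.binomial(n_sites, p_release, size=n_trials).mean()

# A modulator modelled as a 1.5x increase in Pr adds the most absolute
# output at high-Pr synapses; recruiting low-Pr (quiescent) terminals
# instead adds output where there was almost none.
for p in (0.05, 0.2, 0.6):
    base = mean_epsc(10, p)
    boosted = mean_epsc(10, min(1.0, 1.5 * p))
    print(f"Pr={p:.2f}: baseline {base:.2f} -> potentiated {boosted:.2f}")
```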

    Structure, Dynamics and Self-Organization in Recurrent Neural Networks: From Machine Learning to Theoretical Neuroscience

    At first glance, artificial neural networks, with engineered learning algorithms and carefully chosen nonlinearities, are nothing like the complicated self-organized spiking neural networks studied by theoretical neuroscientists. Yet both adapt to their inputs, keep information from the past in their state space, and are capable of learning, implying that some information processing principles should be common to both. In this thesis we study those principles by incorporating notions from systems theory, statistical physics and graph theory into artificial neural networks and theoretical neuroscience models. The starting point for this thesis is reservoir computing (RC), a learning paradigm used both in machine learning [jaeger2004harnessing] and in theoretical neuroscience [maass2002real]. A neural network in RC consists of two parts: a reservoir – a directed and weighted network of neurons that projects the input time series onto a high-dimensional space – and a readout, which is trained to read the state of the neurons in the reservoir and combine them linearly to give the desired output. In classical RC, the reservoir is randomly initialized and left untrained, which alleviates the training costs in comparison to other recurrent neural networks. However, this lack of training implies that reservoirs are not adapted to specific tasks, and thus their performance is often lower than that of other neural networks. Our contribution has been to show how knowledge about a task can be integrated into the reservoir architecture, so that reservoirs can be tailored to specific problems without training. We do this design by identifying two features that are useful for machine learning: the memory of the reservoir and its power spectrum. First, we show that correlations between neurons limit the capacity of the reservoir to retain traces of previous inputs, and demonstrate that those correlations are controlled by the moduli of the eigenvalues of the adjacency matrix of the reservoir. Second, we prove that when the reservoir resonates at the frequencies present in the desired output signal, the performance of the readout increases. Knowing which features of the reservoir dynamics we need, the next question is how to impose them. The simplest way to design a network that resonates at a certain frequency is to add cycles, which act as feedback loops, but this also induces correlations and hence modifies the memory. To disentangle the frequency design from the memory design, we studied how the addition of cycles modifies the eigenvalues of the adjacency matrix of the network. Surprisingly, the shape of the eigenvalues is quite beautiful [aceituno2019universal] and can be characterized using random matrix theory tools. Combining this knowledge with our result relating eigenvalues and correlations, we designed a heuristic that tailors reservoirs to specific tasks and showed that it improves upon state-of-the-art RC in three different machine learning tasks. Although this idea works in the machine learning version of RC, there is one fundamental problem when we try to translate it to the world of theoretical neuroscience: the proposed frequency adaptation requires prior knowledge of the task, which might not be plausible in a biological neural network.
The questions that follow are therefore whether those resonances can emerge through unsupervised learning, and what kind of learning rules would be required. Remarkably, these resonances can be induced by the well-known Spike-Timing-Dependent Plasticity (STDP) combined with homeostatic mechanisms. We show this by deriving two self-consistent equations: one in which the activity of every neuron can be calculated from its synaptic weights and its external inputs, and a second in which the synaptic weights can be obtained from the neural activity. By considering spatio-temporal symmetries in our inputs, we obtained two families of solutions to those equations in which a periodic input is enhanced by the neural network after STDP. This approach shows that periodic and quasiperiodic inputs can induce resonances that agree with the aforementioned RC theory. Those results, although rigorous, are expressed in the language of statistical physics and cannot easily be tested or verified on real, scarce data. To make them more accessible to the neuroscience community, we showed that latency reduction, a well-known effect of STDP [song2000competitive] that has been experimentally observed [mehta2000experience], generates neural codes that agree with the self-consistency equations and their solutions. In particular, this analysis shows that metabolic efficiency, synchronization and prediction can emerge from the same phenomenon of latency reduction, thus closing the loop with our original machine learning problem. To summarize, this thesis exposes principles of learning in recurrent neural networks that are consistent with adaptation in the nervous system and also improve current machine learning methods. This is done by leveraging features of the dynamics of recurrent neural networks, such as resonances and correlations, in machine learning problems, and then imposing the required dynamics on reservoir computing through control theory notions such as feedback loops and spectral analysis. We then assessed the plausibility of such adaptation in biological networks, deriving solutions from self-organizing processes that are biologically plausible and align with the machine learning prescriptions. Finally, we relate those processes to learning rules in biological neurons, showing how small local adaptations of spike times can lead to neural codes that are efficient and can be interpreted in machine learning terms.
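
    To make the reservoir computing setup described above concrete, here is a minimal echo state network sketch in Python: a random, untrained reservoir whose weight matrix is rescaled to a target spectral radius (echoing the link drawn in the thesis between eigenvalue moduli and memory), plus a ridge-regression readout trained on a toy delay task. All sizes and parameter values are illustrative assumptions, not settings from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reservoir: random recurrent weights, rescaled so the largest
# eigenvalue modulus (spectral radius) is rho < 1 for stable dynamics.
n, rho = 200, 0.9
W = rng.normal(0, 1, (n, n))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(0, 1, n)

def run_reservoir(u):
    """Drive the reservoir with the input series u and collect its states."""
    x, states = np.zeros(n), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy memory task: reconstruct the input from 5 steps ago.
u = rng.uniform(-1, 1, 1000)
X = run_reservoir(u)[200:]          # discard the initial transient
y = u[195:-5]                       # target: u(t - 5)
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ y)  # ridge readout
print("training MSE:", np.mean((X @ w_out - y) ** 2))
```

    Only the readout weights w_out are trained; the reservoir itself stays fixed, which is what makes classical RC cheap to train compared with fully trained recurrent networks.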

    Anatomy and physiology of the thick-tufted layer 5 pyramidal neuron

    The thick-tufted layer 5 (TTL5) pyramidal neuron is one of the most extensively studied neuron types in the mammalian neocortex and has become a benchmark for understanding information processing in excitatory neurons. By virtue of having the widest local axonal and dendritic arborization, the TTL5 neuron encompasses various local neocortical neurons and thereby defines the dimensions of neocortical microcircuitry. The TTL5 neuron integrates input across all neocortical layers and is the principal output pathway funneling information flow to subcortical structures. Several studies over the past decades have investigated the anatomy, physiology, synaptology, and pathophysiology of the TTL5 neuron. This review summarizes key discoveries and identifies potential avenues of research to facilitate an integrated and unifying understanding of the role of this central neuron in the neocortex.

    Functional Implications of Synaptic Spike Timing Dependent Plasticity and Anti-Hebbian Membrane Potential Dependent Plasticity

    A central hypothesis of neuroscience is that changes in the strength of synaptic connections between neurons are the basis for learning in the animal brain. However, the rules underlying activity-dependent change, as well as their functional consequences, are not well understood. This thesis develops and investigates several different quantitative models of synaptic plasticity. In the first part, the Contribution Dynamics model of Spike Timing Dependent Plasticity (STDP) is presented. It is shown to provide a better fit to experimental data than previous models. Additionally, investigation of the response properties of the model synapse to oscillatory neuronal activity shows that synapses are sensitive to theta oscillations (4-10 Hz), which are known to boost learning in behavioral experiments. In the second part, a novel Membrane Potential Dependent Plasticity (MPDP) rule is developed, which can be used to train neurons to fire precisely timed output activity. Previously, this could only be achieved with artificial supervised learning rules, whereas MPDP is a local activity-dependent mechanism that is supported by experimental results.
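
    For orientation, the sketch below implements the standard pair-based exponential STDP window in Python. This is a generic textbook rule, not the Contribution Dynamics model developed in the thesis; the amplitudes and time constants are assumed values of the usual order of magnitude.

```python
import numpy as np

A_PLUS, A_MINUS = 0.010, 0.012     # potentiation / depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 17.0, 34.0   # decay time constants in ms (assumed)

def stdp_dw(dt_ms):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt_ms > 0:   # pre fires before post: potentiation
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    else:           # post fires before pre: depression
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(dt):+.4f}")
```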

    Computational models of intracellular signalling in cerebellar Purkinje cells

    In spite of the regular and well-characterised anatomy of the cerebellum, its function is still not clear. To understand the function of the cerebellum, it is necessary to understand the behaviour of a single cerebellar Purkinje cell. The behaviour of Purkinje cells is determined by their intracellular calcium dynamics, and by the network of intracellular signalling molecules that control the calcium dynamics. The aim of this thesis is to contribute to an understanding of the intracellular signalling network that is linked to the activation of metabotropic glutamate receptors (mGluRs) in a cerebellar Purkinje cell. In the thesis, ten different computational models of the mGluR signalling network are mathematically analysed and numerically integrated. The main result of this thesis is that the mGluR signalling network can implement an adaptive time delay between the activation of the mGluRs by glutamate and the release of calcium from intracellular stores. The adaptation of the time delay…
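
    As a generic illustration of how a signalling cascade can implement an adjustable time delay (this is not the thesis's mGluR model), the Python sketch below integrates a linear chain of first-order steps driven by a step input. Lengthening the chain, or slowing the shared rate constant, lengthens the delay between stimulus and output.

```python
import numpy as np

def cascade_half_max_time(n_steps, k=1.0, t_end=30.0, dt=0.01):
    """Euler-integrate dx_i/dt = k*(x_{i-1} - x_i) with a step input x_0 = 1;
    return the time at which the final stage first reaches half-maximum."""
    t = np.arange(0.0, t_end, dt)
    x = np.zeros(n_steps)
    out = []
    for _ in t:
        upstream = np.concatenate(([1.0], x[:-1]))  # step stimulus feeds stage 1
        x = x + dt * k * (upstream - x)
        out.append(x[-1])
    out = np.array(out)
    return t[out >= 0.5][0]

# The mean delay of an n-step chain with rate k is roughly n/k,
# so the network can tune the delay by changing n or k.
for n in (2, 5, 10):
    print(f"{n} steps: half-max reached at t ~ {cascade_half_max_time(n):.2f}")
```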

    Cellular forgetting, desensitisation, stress and aging in signalling networks. When do cells refuse to learn more?

    Recent findings show that single, non-neuronal cells are also able to learn signalling responses, developing cellular memory. In cellular learning, nodes of signalling networks strengthen their interactions, e.g. by the conformational memory of intrinsically disordered proteins, protein translocation, miRNAs, lncRNAs, chromatin memory and signalling cascades. This can be described by a generalized, unicellular Hebbian learning process, in which those signalling connections that participate in learning become stronger. Here we review scenarios in which cellular signalling is not merely repeated a few times (when learning occurs), but becomes too frequent, too large, or too complex and overloads the cell. This leads to desensitisation of signalling networks by decoupling of signalling components, receptor internalization, and consequent downregulation. These molecular processes are examples of anti-Hebbian learning and forgetting in signalling networks. Stress can be perceived as a signalling overload inducing the desensitisation of signalling pathways. Aging occurs through the summative effects of cumulative stress downregulating signalling. We propose that cellular learning, desensitisation, stress and aging may be placed along the same axis of increasingly intensive (prolonged or repeated) signalling. We discuss how cells might discriminate between repeated and unexpected signals, and highlight the Hebbian and anti-Hebbian mechanisms behind fold-change detection in the NF-κB signalling pathway. We list drug design methods using Hebbian learning (such as chemically induced proximity) and clinical treatment modalities that induce desensitisation (in cancer, drug allergies) or avoid drug-induced desensitisation. A better discrimination between cellular learning, desensitisation and stress may open novel directions in drug design, e.g. by helping to overcome drug resistance.
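
    Fold-change detection of the kind highlighted here is commonly modelled with an incoherent feed-forward loop; the Python sketch below is such a generic model (not the paper's NF-κB circuit). A slow internal variable adapts to the signal, and the output responds to the ratio of the signal to its recent baseline, so steps with the same fold change produce the same peak response.

```python
import numpy as np

def iffl_peak(baseline, step, t_end=30.0, dt=0.01):
    """Incoherent feed-forward loop sketch: x slowly adapts toward the
    signal s, while the output y = s / x spikes on relative changes."""
    x, s, peak = baseline, step, 0.0
    for _ in np.arange(0.0, t_end, dt):
        peak = max(peak, s / x)   # output is driven by the ratio s/x
        x += dt * (s - x)         # slow adaptation of the internal variable
    return peak

# Same 2-fold step at very different absolute levels -> same peak output.
print("step 1 -> 2:   peak =", round(iffl_peak(1.0, 2.0), 3))
print("step 10 -> 20: peak =", round(iffl_peak(10.0, 20.0), 3))
print("step 1 -> 4:   peak =", round(iffl_peak(1.0, 4.0), 3))   # larger fold
```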

    An investigation into the role of proteinase-activated receptor 2 on neuronal excitability and synaptic transmission in the hippocampus

    Proteinase-activated receptor 2 (PAR-2) belongs to a novel family of G-protein coupled receptors that are unique in their activation mechanism, by which proteolytic cleavage at the N-terminus by a proteinase reveals a ‘tethered ligand’ that activates the receptor. Albeit at a low level, PAR-2 is extensively expressed in normal and pathological brains, including the hippocampus. Qualitative studies of the expression of PAR-2 in several disease conditions, including ischaemia, HIV-associated dementia, Parkinson’s disease, Alzheimer’s disease, as well as multiple sclerosis, have suggested that PAR-2 plays either a degenerative or a protective role depending on the cell type in which an increase in PAR-2 expression is observed. However, its potential roles in modulating neuronal excitability, synaptic transmission and network activities remain to be determined. Utilising the whole-cell patch clamp recording technique, I demonstrate, for the first time, that the activation of PAR-2 leads to a depolarisation of cultured hippocampal neurones following application of SLIGRL (100 µM), a selective PAR-2 activating peptide (5.52 ± 1.48 mV, n=16, P<0.05) and, paradoxically, a reduction of spontaneous action potential (AP) frequency (29.63 ± 5.03% of control, n=13, P<0.05). Pharmacological manipulation reveals that the PAR-2-mediated depolarisation is most likely dependent on astrocytic glutamate release acting on ionotropic glutamate receptors. In addition, an overt depression of synaptic transmission among the cultured neurones upon PAR-2 activation is the more likely cause of the reduction of spontaneous APs. In further experiments, I show, for the first time, that the activation of PAR-2 induces a long-term depression (LTD) of glutamatergic synaptic transmission at the Schaffer collateral-to-CA1 synapse in acute hippocampal slices following SLIGRL (100 µM) application (80.75 ± 2.54% of control at 30 minutes, n=12, P<0.05). Additionally, this novel form of LTD is independent of metabotropic glutamate receptors but mediated by NR2B subunit-containing N-methyl-D-aspartic acid (NMDA) receptors. These experiments also suggest that glial-neuronal signalling contributes to this novel form of LTD. In the final set of experiments, by monitoring field potentials in the stratum pyramidale of the CA3 area in acute hippocampal slices, I demonstrate that PAR-2 activation depresses the frequency of epileptiform activities induced by the application of 4-AP/0 Mg2+, an in vitro model of epilepsy (1.53 ± 0.21 Hz to 1.18 ± 0.17 Hz, n=13, P<0.05, 100 µM SLIGRL). In summary, in this thesis I demonstrate that PAR-2 modulates neuronal excitability and depresses excitatory synaptic transmission in the hippocampus. These data indicate that PAR-2 may play a regulatory role in neuronal signalling at the single-cell level, by controlling neuronal intrinsic properties, as well as at the synaptic level, by tuning excitatory synaptic strength, which ultimately affects global excitability in the neural circuits as a whole. Therefore, this investigation suggests a novel physiological/pathophysiological role for PAR-2 in the brain. These data may reveal valuable clues for the development of drugs targeting a novel and potentially promising candidate.