
    Fractionally Predictive Spiking Neurons

    Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike-trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel.
    Comment: 13 pages, 5 figures, in Advances in Neural Information Processing 201
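The abstract's decoding scheme rests on the fact that a power-law kernel can be closely approximated by a weighted sum of exponentials. A minimal sketch of that approximation (illustrative only; the exponent, time constants, and least-squares fit are assumptions, not the authors' procedure):

```python
import numpy as np

def power_law_kernel(t, beta=0.5):
    # Power-law decay t^(-beta); beta is an assumed example exponent
    return t ** (-beta)

def exp_sum_approx(t, beta=0.5, n_terms=8, tau_min=0.1, tau_max=100.0):
    # Log-spaced time constants span the range of decays to be covered
    taus = np.logspace(np.log10(tau_min), np.log10(tau_max), n_terms)
    basis = np.exp(-t[:, None] / taus[None, :])        # shape (T, n_terms)
    target = power_law_kernel(t, beta)
    # Fit mixture weights by linear least squares
    weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return basis @ weights, weights

t = np.linspace(0.5, 50.0, 500)
approx, w = exp_sum_approx(t)
rel_err = np.max(np.abs(approx - power_law_kernel(t)) / power_law_kernel(t))
```

With eight log-spaced exponentials the relative error over this range stays small, which is what makes exponential cascades a practical stand-in for power-law kernels in a decoding neuron.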

    Time Resolution Dependence of Information Measures for Spiking Neurons: Atoms, Scaling, and Universality

    The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes.
    Comment: 20 pages, 6 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/trdctim.ht
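The τ-dependent entropy rate discussed above can be estimated from data by binarizing the spike train at resolution τ and taking block-entropy differences. A rough sketch under simple assumptions (plug-in entropy estimate, Poisson-like test input; not the authors' estimator):

```python
import numpy as np
from collections import Counter

def bin_spikes(spike_times, tau, t_max):
    # Binarize a spike train at time resolution tau (1 = at least one spike)
    bins = np.zeros(int(np.ceil(t_max / tau)), dtype=int)
    idx = (np.asarray(spike_times) / tau).astype(int)
    bins[idx[idx < len(bins)]] = 1
    return bins

def block_entropy(bits, L):
    # Plug-in Shannon entropy (bits) of length-L words
    words = [tuple(bits[i:i + L]) for i in range(len(bits) - L + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def entropy_rate(bits, L=8):
    # h(tau) ~ H(L) - H(L-1): conditional entropy of the next symbol
    return block_entropy(bits, L) - block_entropy(bits, L - 1)

rng = np.random.default_rng(0)
spikes = np.cumsum(rng.exponential(10.0, size=2000))  # memoryless ISIs
bits = bin_spikes(spikes, tau=5.0, t_max=spikes[-1])
h = entropy_rate(bits)
```

Sweeping `tau` and plotting `h` against the firing rate is the kind of time-resolution dependence the paper analyzes; a plug-in estimate like this one is biased for long blocks and serves only as a starting point.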

    Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model

    The occurrence of sleep passed through the evolutionary sieve and is widespread in animal species. Sleep is known to be beneficial to cognitive and mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the importance of the phenomenon, a complete understanding of its functions and underlying mechanisms is still lacking. In this paper, we show interesting effects of deep-sleep-like slow oscillation activity on a simplified thalamo-cortical model which is trained to encode, retrieve and classify images of handwritten digits. During slow oscillations, spike-timing-dependent plasticity (STDP) produces a differential homeostatic process. It is characterized by both a specific unsupervised enhancement of connections among groups of neurons associated to instances of the same class (digit) and a simultaneous down-regulation of stronger synapses created by the training. This hierarchical organization of post-sleep internal representations favours higher performances in retrieval and classification tasks. The mechanism is based on the interaction between top-down cortico-thalamic predictions and bottom-up thalamo-cortical projections during deep-sleep-like slow oscillations. Indeed, when learned patterns are replayed during sleep, cortico-thalamo-cortical connections favour the activation of other neurons coding for similar thalamic inputs, promoting their association. Such a mechanism hints at possible applications to artificial learning systems.
    Comment: 11 pages, 5 figures, v5 is the final version published in the Scientific Reports journal
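The differential homeostasis described above is driven by STDP. As context, a minimal pair-based STDP window (illustrative parameter values; the paper's plasticity model may differ):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    # dt = t_post - t_pre in ms; pre-before-post potentiates,
    # post-before-pre depresses, each with exponential falloff
    if dt >= 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)

# Weight change for a pre spike at 10 ms and post spikes at 15 ms and 5 ms
dw_pot = stdp_dw(15.0 - 10.0)   # pre leads post: potentiation
dw_dep = stdp_dw(5.0 - 10.0)    # post leads pre: depression
```

During slow-oscillation replay, repeatedly co-activated neurons fall on the potentiation side of this window, which is the ingredient behind the within-class association effect the abstract reports.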

    Biologically plausible deep learning -- but how far can we go with shallow networks?

    Training deep neural networks with the error backpropagation algorithm is considered implausible from a biological perspective. Numerous recent publications suggest elaborate models for biologically plausible variants of deep learning, typically defining success as reaching around 98% test accuracy on the MNIST data set. Here, we investigate how far we can go on digit (MNIST) and object (CIFAR10) classification with biologically plausible, local learning rules in a network with one hidden layer and a single readout layer. The hidden layer weights are either fixed (random or random Gabor filters) or trained with unsupervised methods (PCA, ICA or Sparse Coding) that can be implemented by local learning rules. The readout layer is trained with a supervised, local learning rule. We first implement these models with rate neurons. This comparison reveals, first, that unsupervised learning does not lead to better performance than fixed random projections or Gabor filters for large hidden layers. Second, networks with localized receptive fields perform significantly better than networks with all-to-all connectivity and can reach backpropagation performance on MNIST. We then implement two of the networks (fixed, localized random and random Gabor filters in the hidden layer) with spiking leaky integrate-and-fire neurons and spike-timing-dependent plasticity to train the readout layer. These spiking models achieve > 98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation. The performance of our shallow network models is comparable to most current biologically plausible models of deep learning. Furthermore, our results with a shallow spiking network provide an important reference and suggest the use of datasets other than MNIST for testing the performance of future models of biologically plausible deep learning.
    Comment: 14 pages, 4 figures
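The core architecture described above is a fixed random hidden layer plus a readout trained with a supervised, local rule. A self-contained sketch of that idea on synthetic data (a Gaussian toy problem stands in for MNIST; the rate model, learning rate, and delta rule are assumptions, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian classes in 20 dimensions
n, d = 400, 20
X = np.vstack([rng.normal(-1, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Fixed random hidden layer: weights are never trained
n_hidden = 100
W_hidden = rng.normal(0, 1.0 / np.sqrt(d), (d, n_hidden))
H = np.maximum(X @ W_hidden, 0.0)  # ReLU rate neurons

# Supervised, local delta rule on the readout only:
# each weight update uses just its presynaptic rate and the readout error
w_out, b, lr = np.zeros(n_hidden), 0.0, 0.01
for _ in range(50):
    for i in rng.permutation(n):
        z = np.clip(H[i] @ w_out + b, -30, 30)
        err = y[i] - 1.0 / (1.0 + np.exp(-z))   # local error signal
        w_out += lr * err * H[i]
        b += lr * err

z_all = np.clip(H @ w_out + b, -30, 30)
acc = np.mean((1.0 / (1.0 + np.exp(-z_all)) > 0.5) == y)
```

Because only the readout learns, no error signal ever has to propagate back through the hidden layer, which is what makes this family of models biologically plausible in the paper's sense.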

    Repeating Spatial-Temporal Motifs of CA3 Activity Dependent on Engineered Inputs from Dentate Gyrus Neurons in Live Hippocampal Networks.

    Anatomical and behavioral studies, and in vivo and slice electrophysiology of the hippocampus suggest specific functions of the dentate gyrus (DG) and the CA3 subregions, but the underlying activity dynamics and repeatability of information processing remain poorly understood. To approach this problem, we engineered separate living networks of the DG and CA3 neurons that develop connections through 51 tunnels for axonal communication. Growing these networks on top of an electrode array enabled us to determine whether the subregion dynamics were separable and repeatable. We found spontaneous development of polarized propagation of 80% of the activity in the native direction from DG to CA3 and different spike and burst dynamics for these subregions. Spatial-temporal differences emerged when the relationships of target CA3 activity were categorized with respect to the number and timing of inputs from the apposing network. Compared to times of CA3 activity when there was no recorded tunnel input, DG input led to CA3 activity bursts that were 7× more frequent, increased in amplitude and extended in temporal envelope. Logistic regression indicated that a high number of tunnel inputs predicts CA3 activity with 90% sensitivity and 70% specificity. Compared to no tunnel input, patterns of >80% tunnel inputs from DG specified different patterns of first-to-fire neurons in the CA3 target well. Clustering dendrograms revealed repeating motifs of three or more patterns at up to 17 sites in CA3 that were associated with specific spatial-temporal patterns of tunnel activity. The number of these motifs recorded in 3 min was significantly higher than shuffled spike activity and not seen above chance in control networks in which CA3 was apposed to CA3 or DG to DG. Together, these results demonstrate spontaneous input-dependent repeatable coding of distributed activity in CA3 networks driven by engineered inputs from DG networks. These functional configurations at measured times of activation (motifs) emerge from anatomically accurate feed-forward connections from DG through tunnels to CA3.
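The sensitivity/specificity analysis above is a standard logistic-regression readout. A sketch of the same computation on synthetic stand-in data (the data-generating relationship, learning rate, and threshold are assumptions, not the study's recordings or pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: CA3 activity is more likely in windows
# that receive many DG tunnel inputs (assumed relationship)
n = 1000
n_inputs = rng.integers(0, 20, n)                   # tunnel inputs per window
p_active = 1 / (1 + np.exp(-(0.5 * n_inputs - 4)))  # assumed ground truth
active = (rng.random(n) < p_active).astype(int)

# One-dimensional logistic regression fitted by gradient descent
x = (n_inputs - n_inputs.mean()) / n_inputs.std()   # standardize predictor
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-np.clip(w * x + b, -30, 30)))
    w -= 0.5 * np.mean((p - active) * x)
    b -= 0.5 * np.mean(p - active)

pred = 1 / (1 + np.exp(-np.clip(w * x + b, -30, 30))) > 0.5
sensitivity = np.mean(pred[active == 1])   # true-positive rate
specificity = np.mean(~pred[active == 0])  # true-negative rate
```

Reporting both sensitivity and specificity, as the study does, guards against the trivial classifier that always predicts activity when active windows are common.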