    A biologically plausible model of time-scale invariant interval timing

    The temporal durations between events often exert a strong influence over behavior. The details of this influence have been extensively characterized in behavioral experiments in different animal species. A remarkable feature of the data collected in these experiments is that they are often time-scale invariant: response measurements obtained under intervals of different durations coincide when plotted as functions of relative time. Here we describe a biologically plausible model of an interval timing device and show that it is consistent with time-scale invariant behavior over a substantial range of interval durations. The model consists of a set of bistable units that switch from one state to the other at random times. We first use an abstract formulation of the model to derive exact expressions for some key quantities and to demonstrate time-scale invariance for any range of interval durations. We then show how the model could be implemented in the nervous system through a generic and biologically plausible mechanism. In particular, we show that any system that can display noise-driven transitions from one stable state to another can be used to implement the timing device. Our work demonstrates that a biologically plausible model can qualitatively account for a large body of data and thus provides a link between the biology and behavior of interval timing.
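
    The mechanism described above is simple enough to check numerically. The following is a minimal, hypothetical sketch, assuming each bistable unit flips at an exponentially distributed random time whose mean scales with the target interval, and that the fraction of flipped units serves as the timer's readout; the paper's exact formulation may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_switched(duration, n_units=100_000, n_rel=6):
    """Fraction of bistable units that have flipped 0 -> 1 by each
    relative time point, with each unit's switching time drawn from
    an exponential whose mean scales with the target duration."""
    switch_times = rng.exponential(scale=duration, size=n_units)
    rel_time = np.linspace(0.5, 3.0, n_rel)  # time in units of `duration`
    return rel_time, [(switch_times < r * duration).mean() for r in rel_time]

# When the switching rate scales with the interval, the curves for very
# different durations coincide on the relative-time axis -- that is the
# time-scale invariance the paper describes.
for d in (2.0, 20.0, 200.0):
    rel, frac = fraction_switched(d)
    print(f"duration {d:6.1f}:", np.round(frac, 3))
```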

    Slowness: An Objective for Spike-Timing-Dependent Plasticity?

    Slow Feature Analysis (SFA) is an efficient algorithm for learning input-output functions that extract the most slowly varying features from a quickly varying signal. It has been successfully applied to the unsupervised learning of translation, rotation, and other invariances in a model of the visual system, to the learning of complex-cell receptive fields, and, combined with a sparseness objective, to the self-organized formation of place cells in a model of the hippocampus. In order to arrive at a more biologically plausible implementation of this learning rule, we consider analytically how SFA could be realized in simple linear continuous and spiking model neurons. It turns out that for the continuous model neuron SFA can be implemented by means of a modified version of standard Hebbian learning. In this framework we provide a connection to the trace learning rule for invariance learning. We then show that for Poisson neurons spike-timing-dependent plasticity (STDP) with a specific learning window can learn the same weight distribution as SFA. Surprisingly, we find that the appropriate learning rule reproduces the typical STDP learning window; both its shape and its timescale are in good agreement with what has been measured experimentally. This offers a completely novel interpretation for the functional role of spike-timing-dependent plasticity in physiological neurons.
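
    The paper derives online Hebbian and STDP rules; as background, the sketch below solves the same linear-SFA objective in closed form on synthetic data: find the unit-variance projection that minimizes the mean squared temporal derivative. The toy signal and all parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Toy data: one slow sine and two fast noise sources, linearly mixed.
t = np.linspace(0.0, 100.0, 5000)
sources = np.vstack([np.sin(0.2 * t),
                     rng.standard_normal(t.size),
                     rng.standard_normal(t.size)])
x = (rng.standard_normal((3, 3)) @ sources).T  # (samples, dims) mixture
x -= x.mean(axis=0)

C = x.T @ x / len(x)                 # input covariance
dx = np.diff(x, axis=0)
Cdot = dx.T @ dx / len(dx)           # covariance of the temporal derivative

# Linear SFA: the slowest unit-variance output y = x @ w solves the
# generalized eigenproblem Cdot w = lam C w for the smallest lam.
lam, V = eigh(Cdot, C)
y = x @ V[:, 0]

print("slowness of extracted feature:", np.mean(np.diff(y) ** 2))
print("slowness of a raw input dim:  ", np.mean(dx[:, 0] ** 2) / C[0, 0])
```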

    Biologically plausible deep learning -- but how far can we go with shallow networks?

    Training deep neural networks with the error backpropagation algorithm is considered implausible from a biological perspective. Numerous recent publications suggest elaborate models for biologically plausible variants of deep learning, typically defining success as reaching around 98% test accuracy on the MNIST data set. Here, we investigate how far we can go on digit (MNIST) and object (CIFAR10) classification with biologically plausible, local learning rules in a network with one hidden layer and a single readout layer. The hidden layer weights are either fixed (random or random Gabor filters) or trained with unsupervised methods (PCA, ICA or Sparse Coding) that can be implemented by local learning rules. The readout layer is trained with a supervised, local learning rule. We first implement these models with rate neurons. This comparison reveals, first, that unsupervised learning does not lead to better performance than fixed random projections or Gabor filters for large hidden layers. Second, networks with localized receptive fields perform significantly better than networks with all-to-all connectivity and can reach backpropagation performance on MNIST. We then implement two of the networks (fixed localized random filters and random Gabor filters in the hidden layer) with spiking leaky integrate-and-fire neurons and spike-timing-dependent plasticity to train the readout layer. These spiking models achieve > 98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation. The performance of our shallow network models is comparable to most current biologically plausible models of deep learning. Furthermore, our results with a shallow spiking network provide an important reference and suggest the use of datasets other than MNIST for testing the performance of future models of biologically plausible deep learning.
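
    As a rate-based illustration of the simplest condition above, here is a minimal sketch, assuming a fixed random hidden layer and a supervised local (delta-rule) readout on a hypothetical toy dataset standing in for MNIST; the paper's spiking readout trained with STDP is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for MNIST: two linearly separable Gaussian
# classes in 64 dimensions (the real experiments use MNIST/CIFAR10).
n, d, h = 2000, 64, 256
labels = rng.random(n) < 0.5
X = rng.standard_normal((n, d)) + np.where(labels, 0.5, -0.5)[:, None]

# Fixed random hidden layer (no learning), as in the paper's fixed
# random projection condition; ReLU stands in for the neuron nonlinearity.
W_hid = rng.standard_normal((d, h)) / np.sqrt(d)
H = np.maximum(X @ W_hid, 0.0)

# Supervised *local* readout rule (perceptron/delta form): each weight
# update depends only on its presynaptic activity and the postsynaptic
# error; no error is backpropagated into the hidden layer.
w_out = np.zeros(h)
lr = 1e-3
for _ in range(20):
    for i in rng.permutation(n):
        pred = float(H[i] @ w_out > 0.0)
        w_out += lr * (labels[i] - pred) * H[i]

acc = ((H @ w_out > 0.0) == labels).mean()
print(f"training accuracy with fixed random features: {acc:.3f}")
```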

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contributes in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not operate robustly in recurrent networks. Here we address both of these problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural patterns of activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
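
    The paper's rule adjusts the recurrent weights themselves; the sketch below shows the closely related FORCE procedure, which applies the same recursive-least-squares idea to a linear readout of a chaotic random network. Network size, gain, and the target signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

N, g, dt, tau = 300, 1.5, 0.001, 0.01
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # gain g > 1: chaotic regime
w = np.zeros(N)                                   # linear readout weights
P = np.eye(N)                                     # RLS inverse-correlation estimate

T = 5000
target = np.sin(2 * np.pi * np.arange(T) * dt / 0.25)  # hypothetical 4 Hz target

x = 0.5 * rng.standard_normal(N)
err_hist = []
for t in range(T):
    r = np.tanh(x)                    # firing rates
    x += (dt / tau) * (-x + J @ r)    # Euler step of the rate dynamics
    z = w @ r                         # readout
    # FORCE-style recursive-least-squares update of the readout:
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w += (target[t] - z) * k
    err_hist.append(abs(target[t] - z))

print("mean |error|, first vs last 500 steps: "
      f"{np.mean(err_hist[:500]):.3f} vs {np.mean(err_hist[-500:]):.3f}")
```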

    Modeling Pharmacological Clock and Memory Patterns of Interval Timing in a Striatal Beat-Frequency Model with Realistic, Noisy Neurons

    In most species, the capability of perceiving and using the passage of time in the seconds-to-minutes range (interval timing) is not only accurate but also scalar: errors in time estimation are linearly related to the estimated duration. The ubiquity of scalar timing extends over behavioral, lesion, and pharmacological manipulations. For example, in mammals, dopaminergic drugs induce an immediate, scalar change in perceived time (clock pattern), whereas cholinergic drugs induce a gradual, scalar change in perceived time (memory pattern). How do these properties emerge from unreliable, noisy neurons firing in the milliseconds range? Neurobiological data on the brain circuits involved in interval timing provide support for a striatal beat-frequency (SBF) model, in which time is coded by the coincidental activation of striatal spiny neurons by cortical neural oscillators. While biologically plausible, this mechanism is called into question by the impracticality, or outright absence, of perfect oscillators in a brain built from noisy neurons. We explored the computational mechanisms required for the clock and memory patterns in an SBF model with biophysically realistic and noisy Morris–Lecar neurons (SBF–ML). Under the assumption that dopaminergic drugs modulate the firing frequency of cortical oscillators and that cholinergic drugs modulate the memory representation of the criterion time, we show that our SBF–ML model can reproduce the pharmacological clock and memory patterns observed in the literature. Numerical results also indicate that parameter variability (noise), which is ubiquitous in the form of small fluctuations in the intrinsic frequencies of neural oscillators within and between trials and of errors in recording/retrieving stored information related to the criterion time, seems to be critical for the time-scale invariance of the clock and memory patterns.
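
    A schematic of the beat-frequency idea, not the paper's Morris–Lecar implementation: in the sketch below, cosine oscillators phase-reset at trial onset stand in for cortical neurons, a dot product with the phase pattern at the criterion time stands in for striatal coincidence detection, and a jitter parameter mimics between-trial frequency noise. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

freqs = rng.uniform(5.0, 12.0, size=20)  # cortical oscillator frequencies (Hz)
criterion = 10.0                         # reinforced (criterion) duration, seconds
t = np.linspace(0.0, 2.0 * criterion, 4000)

def striatal_output(freq_jitter=0.0):
    """Coincidence detection: oscillators are phase-reset at trial onset;
    the detector's weights are the oscillator phases learned at the
    criterion time, so the summed match peaks again near that time."""
    f_trial = freqs * (1.0 + freq_jitter * rng.standard_normal(freqs.size))
    phases = np.cos(2.0 * np.pi * np.outer(t, f_trial))  # (time, oscillators)
    template = np.cos(2.0 * np.pi * criterion * freqs)   # learned, noise-free
    return phases @ template / freqs.size

clean = striatal_output()
noisy = np.mean([striatal_output(freq_jitter=0.02) for _ in range(100)], axis=0)
print(f"peak, noiseless:     {t[np.argmax(clean)]:.2f} s")
print(f"peak, noisy average: {t[np.argmax(noisy)]:.2f} s")
```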