
    Nonlinear Hebbian learning as a unifying principle in receptive field formation

    The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis, and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely Nonlinear Hebbian Learning. When Nonlinear Hebbian Learning is applied to natural images, receptive field shapes are strongly constrained by the input statistics and preprocessing, but exhibit only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities, such as auditory models or V2 development, leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
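As a toy illustration of the principle (a minimal sketch, not the paper's actual simulations), a nonlinear Hebbian rule applied to a whitened two-source mixture picks out the sparse direction. The cubic nonlinearity, learning rate, source mix, and deterministic initialization below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ICA-style input: an orthogonal mix of one sparse (Laplacian) and one
# Gaussian unit-variance source, so the data are already white.
n = 20000
S = np.vstack([rng.laplace(scale=1 / np.sqrt(2), size=n),
               rng.normal(size=n)])
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = A @ S

def f(u):
    return u**3                            # one possible nonlinearity

w = np.array([1.0, 0.0])                   # deterministic generic initialization
w /= np.linalg.norm(w)
eta = 0.5
for _ in range(200):
    w = w + eta * (X @ f(w @ X)) / n       # averaged nonlinear Hebbian update
    w /= np.linalg.norm(w)                 # norm constraint (stand-in for homeostasis)

# The learned weight should align with the mixing direction of the sparse source.
print(abs(w @ A[:, 0]))
```

The recovered direction depends mainly on the input statistics (the sparse source), while swapping in other expansive nonlinearities changes it only modestly, in line with the abstract's claim.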

    Extracting non-linear integrate-and-fire models from experimental data using dynamic I–V curves

    The dynamic I–V curve method was recently introduced for the efficient experimental generation of reduced neuron models. The method extracts the response properties of a neuron while it is subject to a naturalistic stimulus that mimics in vivo-like fluctuating synaptic drive. The resulting history-dependent transmembrane current is then projected onto a one-dimensional current–voltage relation that provides the basis for a tractable non-linear integrate-and-fire model. An attractive feature of the method is that it can be used in spike-triggered mode to quantify the distinct patterns of post-spike refractoriness seen in different classes of cortical neuron. The method is first illustrated using a conductance-based model and is then applied experimentally to generate reduced models of cortical layer-5 pyramidal cells and interneurons, in injected-current and injected-conductance protocols. The resulting low-dimensional neuron models, of the refractory exponential integrate-and-fire type, provide highly accurate predictions for spike times. The method therefore provides a useful tool for the construction of tractable models and for the rapid experimental classification of cortical neurons.
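The exponential integrate-and-fire model class that the method targets can be sketched as follows. All parameter values are illustrative defaults, not fitted values from the paper, and the fitted post-spike (refractory) dynamics are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Exponential integrate-and-fire (EIF) neuron under fluctuating drive.
tau_m, E_L = 10.0, -65.0        # membrane time constant (ms), resting potential (mV)
delta_T, V_T = 2.0, -50.0       # spike sharpness (mV), soft threshold (mV)
V_spike, V_reset = 0.0, -60.0   # numerical spike cutoff and reset (mV)
mu, sigma = 12.0, 2.0           # mean drive (mV), noise strength (mV / sqrt(ms))
dt, T = 0.05, 2000.0            # Euler step and duration (ms)

V, spikes = E_L, []
for k in range(int(T / dt)):
    # Exponential term models the soft spike-initiation nonlinearity.
    dV = (-(V - E_L) + delta_T * np.exp((V - V_T) / delta_T) + mu) / tau_m
    V += dt * dV + sigma * np.sqrt(dt) * rng.normal()
    if V >= V_spike:
        spikes.append(k * dt)   # record spike time and reset
        V = V_reset
print(len(spikes))
```

In the actual method, the deterministic part of dV/dt is not assumed but read off from the experimentally measured dynamic I–V curve.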

    Nonnormal amplification in random balanced neuronal networks

    In dynamical models of cortical networks, the recurrent connectivity can amplify the input given to the network in two distinct ways. One is induced by the presence of near-critical eigenvalues in the connectivity matrix W, producing large but slow activity fluctuations along the corresponding eigenvectors (dynamical slowing). The other relies on W being nonnormal, which allows the network activity to make large but fast excursions along specific directions. Here we investigate the tradeoff between nonnormal amplification and dynamical slowing in the spontaneous activity of large random neuronal networks composed of excitatory and inhibitory neurons. We use a Schur decomposition of W to separate the two amplification mechanisms. Assuming linear stochastic dynamics, we derive an exact expression for the expected amount of purely nonnormal amplification. We find that amplification is very limited if dynamical slowing must be kept weak. We conclude that, to achieve strong transient amplification with little slowing, the connectivity must be structured. We show that unidirectional connections between neurons of the same type, together with reciprocal connections between neurons of different types, allow for amplification already in the fast dynamical regime. Finally, our results also shed light on the differences between balanced networks in which inhibition exactly cancels excitation and those where inhibition dominates. Comment: 13 pages, 7 figures
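A minimal sketch of purely nonnormal amplification, assuming the simplest possible case: a two-unit linear rate network with a single unidirectional connection (the matrix is already in its Schur/triangular form). The connection strength and integration step are illustrative:

```python
import numpy as np

# Feedforward 2-unit network: both eigenvalues of the effective dynamics are -1
# (no dynamical slowing), yet activity transiently grows along the amplified
# Schur direction before decaying.
w = 8.0
W = np.array([[0.0, w],
              [0.0, 0.0]])        # unidirectional connection 2 -> 1, no feedback
A = W - np.eye(2)                 # leaky linear dynamics dx/dt = A x

dt = 0.001
x = np.array([0.0, 1.0])          # initialize the "source" Schur mode
norms = []
for _ in range(5000):             # Euler integration up to t = 5
    x = x + dt * (A @ x)
    norms.append(np.linalg.norm(x))

peak = max(norms)
print(peak)                       # analytically the peak is near w/e ≈ 2.9, at t ≈ 1
```

A normal matrix with the same eigenvalues would only ever decay from the initial norm of 1; the transient growth here is entirely due to nonnormality.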

    Crossover between Levy and Gaussian regimes in first passage processes

    We propose a new approach to the problem of the first passage time. Our method is applicable not only to the Wiener process but also to non-Gaussian Lévy flights and to more complicated stochastic processes whose distributions are stable. To show the usefulness of the method, we focus in particular on first passage time problems for truncated Lévy flights (the so-called KoBoL processes), in which the arbitrarily large tail of the Lévy distribution is cut off. We find that the asymptotic scaling of the first passage time t distribution changes from a t^{-(α+1)/α} law (non-Gaussian Lévy regime) to a t^{-3/2} law (Gaussian regime) at the crossover point. This result means that an ultra-slow convergence from the non-Gaussian Lévy regime to the Gaussian regime is observed not only in the distribution of the real time step for the truncated Lévy flight but also in the first passage time distribution of the flight. The nature of the crossover in the scaling laws and the scaling relation of the crossover point with respect to the effective cut-off length of the Lévy distribution are discussed. Comment: 18 pages, 7 figures, using RevTeX4, to appear in Phys. Rev.
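A crude Monte Carlo check of the Gaussian-regime scaling only (not the paper's analytic method, and not the truncated Lévy case): the t^{-3/2} first-passage density implies a t^{-1/2} survival law. Walk count, step size, and starting point are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# First-passage times of a Gaussian random walk to the barrier x = 0.
n_walks, n_steps = 4000, 1500
steps = rng.normal(scale=0.5, size=(n_walks, n_steps))
paths = -1.0 + np.cumsum(steps, axis=1)   # all walks start at x0 = -1

def survival(t):
    """Fraction of walks that have not yet crossed the barrier by step t."""
    return 1.0 - np.mean(np.any(paths[:, :t] >= 0.0, axis=1))

print(survival(400) / survival(100))      # t^(-1/2) law predicts (400/100)^(-1/2) = 0.5
```

Replacing the Gaussian steps with heavy-tailed stable increments would instead probe the non-Gaussian Lévy regime, where the exponent depends on the stability index α.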

    An archive of good roads and racial capitalism in North Carolina

    “Good Roads, leading to Winston-Salem, N.C.,” a postcard produced by the national retailer S.H. Kress & Co. to sell in its North Carolina five-and-dime stores, pulls us in different directions at once (Figure 1). The road, well-graded, neatly surfaced, and bright, is the dominant element of the frame, winding gently away from the viewer over the piedmont to the landscape’s vanishing point. In the foreground, an open-topped automobile, seemingly in motion, is about to zoom past, an ambiguous chauffeur at the wheel driving a finely dressed White lady, who appears elegant and composed in the back seat. The dark figures of what appear to be mules stand yoked to the right of the roadbed, while three tiny human figures can be discerned down the road beyond them, perhaps watching the second car go by. The lines of the road, suggesting mobility and flow, contrast with the relative stasis of the forests and fields that the road bisects. In the face of this paradox, the image evokes a neat sense of order and harmony, a timelessness that, like more classical forms of landscape (Cosgrove, 1990), tends to erase the conditions of its production.

    A bio-inspired image coder with temporal scalability

    We present a novel bio-inspired and dynamic coding scheme for static images. Our coder aims at reproducing the main steps of visual stimulus processing in the mammalian retina, taking into account its temporal behavior. The main novelty of this work is to show how the time behavior of the retina cells can be exploited to ensure, in a simple way, scalability and bit allocation. To do so, our main source of inspiration is the biologically plausible retina model called Virtual Retina. Following a similar structure, our model has two stages. The first stage is an image transform, performed by the outer layers of the retina; here it is modelled by filtering the image with a bank of differences of Gaussians with time delays. The second stage is a time-dependent analog-to-digital conversion, performed by the inner layers of the retina. Thanks to this conception, our coder enables scalability and bit allocation across time. Also, our decoded images do not show annoying artefacts such as ringing and block effects. As a whole, this article shows how to capture the main properties of a biological system, here the retina, in order to design a new efficient coder. Comment: 12 pages; Advanced Concepts for Intelligent Vision Systems (ACIVS 2011)
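The first stage, filtering with a bank of differences of Gaussians, can be sketched as follows. The scales and the 1.6 center-surround ratio are illustrative assumptions, and the time-delay aspect of the paper's filter bank is omitted:

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def blur(img, sigma):
    # Separable Gaussian blur: 1-D convolution along rows, then columns.
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def dog_bank(img, sigmas=(1.0, 2.0, 4.0)):
    """Center-surround (difference-of-Gaussians) responses at several scales."""
    return [blur(img, s) - blur(img, 1.6 * s) for s in sigmas]

rng = np.random.default_rng(3)
responses = dog_bank(rng.random((64, 64)))
print(len(responses), responses[0].shape)
```

Each response map is a band-pass version of the image; in the paper's scheme these coefficients are what the second, time-dependent stage subsequently quantizes.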

    Spike neural models (part I): The Hodgkin-Huxley model


    Dynamical response of the Hodgkin-Huxley model in the high-input regime

    The response of the Hodgkin-Huxley neuronal model subjected to stochastic uncorrelated spike trains originating from a large number of inhibitory and excitatory post-synaptic potentials is analyzed in detail. The model is examined in its three fundamental dynamical regimes: silence, bistability and repetitive firing. Its response is characterized in terms of statistical indicators (interspike-interval distributions and their first moments) as well as of dynamical indicators (autocorrelation functions and conditional entropies). In the silent regime, the coexistence of two different coherence resonances is revealed: one occurs at quite low noise and is related to the stimulation of subthreshold oscillations around the rest state; the second one (at intermediate noise variance) is associated with the regularization of the sequence of spikes emitted by the neuron. Bistability in the low noise limit can be interpreted in terms of jumping processes across barriers activated by stochastic fluctuations. In the repetitive firing regime a maximization of incoherence is observed at finite noise variance. Finally, the mechanisms responsible for spike triggering in the various regimes are clearly identified. Comment: 14 pages, 24 figures in eps, submitted to Physical Review
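For concreteness, a minimal simulation of the standard Hodgkin-Huxley model with squid-axon parameters; here a constant suprathreshold current stands in for the stochastic spike-train input analyzed in the paper, placing the model in its repetitive-firing regime:

```python
import numpy as np

# Standard Hodgkin-Huxley point neuron (squid-axon parameter set).
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3   # µF/cm², mS/cm²
E_Na, E_K, E_L = 50.0, -77.0, -54.4         # mV

def rates(V):
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return a_m, b_m, a_h, b_h, a_n, b_n

def simulate(I_ext=10.0, T=500.0, dt=0.01):
    """Euler integration; returns the number of spikes (upward crossings of 0 mV)."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32     # rest state and gating variables
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C
        if V > 0.0 and not above:
            spikes += 1
        above = V > 0.0
    return spikes

print(simulate())
```

Replacing the constant I_ext with a fluctuating synaptic current would reproduce the kind of stochastic drive whose three regimes (silence, bistability, repetitive firing) the paper characterizes.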

    A comparative study of different integrate-and-fire neurons: spontaneous activity, dynamical response, and stimulus-induced correlation

    Stochastic integrate-and-fire (IF) neuron models have found widespread application in computational neuroscience. Here we present results on the white-noise-driven perfect, leaky, and quadratic IF models, focusing on the spectral statistics (power spectra, cross spectra, and coherence functions) in different dynamical regimes (noise-induced and tonic firing regimes with low or moderate noise). We make the models comparable by tuning parameters such that the mean value and the coefficient of variation of the interspike interval match for all of them. We find that, under these conditions, the power spectrum under white-noise stimulation is often very similar, while the response characteristics, described by the cross spectrum between a fraction of the input noise and the output spike train, can differ drastically. We also investigate how the spike trains of two neurons of the same kind (e.g. two leaky IF neurons) correlate if they share a common noise input. We show that, depending on the dynamical regime, either two quadratic IF models or two leaky IF models are more strongly correlated. Our results suggest that, when choosing among simple IF models for network simulations, the details of the model have a strong effect on the correlation and regularity of the output. Comment: 12 pages
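The shared-noise setup can be sketched for a pair of leaky IF neurons; all parameter values are illustrative, and only a simple spike-count correlation (not the full spectral statistics of the paper) is computed:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two leaky IF neurons in the tonic regime, sharing a fraction c of their noise.
def lif_pair(c=0.5, mu=1.2, sigma=0.3, tau=10.0, dt=0.1, T=20000.0):
    steps = int(round(T / dt))
    common = rng.normal(size=steps)
    private = rng.normal(size=(2, steps))
    amp = sigma * np.sqrt(2.0 * dt / tau)        # Euler-Maruyama noise amplitude
    v = np.zeros(2)
    trains = np.zeros((2, steps), dtype=bool)
    for k in range(steps):
        for i in range(2):
            xi = np.sqrt(c) * common[k] + np.sqrt(1.0 - c) * private[i, k]
            v[i] += dt * (mu - v[i]) / tau + amp * xi
            if v[i] >= 1.0:                      # threshold crossing
                trains[i, k] = True
                v[i] = 0.0                       # reset
    return trains

trains = lif_pair()
counts = trains.reshape(2, -1, 500).sum(axis=2)  # spike counts in 50 ms windows
print(np.corrcoef(counts)[0, 1])                 # positive spike-count correlation
```

In this tonic regime the output correlation typically falls below the input correlation c; the paper's point is that how much it falls depends on which IF model is used.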