    Network Model of Spontaneous Activity Exhibiting Synchronous Transitions Between Up and Down States

    Both in vivo and in vitro recordings indicate that neuronal membrane potentials can make spontaneous transitions between distinct up and down states. At the network level, populations of neurons have been observed to make these transitions synchronously. Although synaptic activity and intrinsic neuron properties play an important role, the precise nature of the processes responsible for these phenomena is not known. Using a computational model, we explore the interplay between intrinsic neuronal properties and synaptic fluctuations. Model neurons of the integrate-and-fire type were extended by adding a nonlinear membrane current. Networks of these neurons exhibit large-amplitude synchronous spontaneous fluctuations that make the neurons jump between up and down states, thereby producing bimodal membrane potential distributions. The effect of sensory stimulation on network responses depends on whether the stimulus is applied during an up state or deep within a down state. External noise can be varied to modulate the network continuously between two extreme regimes in which it remains permanently in either the up or the down state.
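
    As a toy illustration of the mechanism described here, the following minimal sketch (assumed parameters and a cubic current of my own choosing, not the paper's exact equations) shows how integrate-and-fire units with an added nonlinear membrane current and partly shared noise can produce bimodal membrane potential distributions:

```python
# Minimal, illustrative sketch (assumptions, not the paper's exact model):
# LIF-like units with an added cubic membrane current whose total drift is
# bistable, plus weak uniform recurrence and partly shared noise.
import numpy as np

rng = np.random.default_rng(0)
N, steps, dt = 200, 50000, 0.1     # units, time steps, ms
tau, v_th, v_reset = 20.0, 1.0, 0.0
g_nl, a = 20.0, 0.3                # cubic-current gain and knee (assumed values)
J = 0.5 / N                        # uniform recurrent weight (assumed)
sig_priv, sig_com = 0.15, 0.15     # private vs common noise amplitude (assumed)

v = rng.uniform(0.0, 0.4, N)
mean_v = np.empty(steps)
for t in range(steps):
    spiked = v >= v_th
    v[spiked] = v_reset
    # drift -v + g v(v-a)(1-v) has stable branches near v = 0 and v ~ 0.9
    drift = -v + g_nl * v * (v - a) * (1.0 - v) + J * tau * spiked.sum()
    noise = sig_priv * rng.standard_normal(N) + sig_com * rng.standard_normal()
    v += drift * dt / tau + noise * np.sqrt(dt / tau)
    mean_v[t] = v.mean()

# A histogram of the population-averaged potential should be bimodal: the
# network dwells in a down state near v = 0 and an up state near the upper
# stable branch, with the shared noise making the jumps synchronous.
print(np.histogram(mean_v, bins=5)[0])
```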

    Emergent Computations in Trained Artificial Neural Networks and Real Brains

    Synaptic plasticity allows cortical circuits to learn new tasks and to adapt to changing environments. How do cortical circuits use plasticity to acquire functions such as decision-making or working memory? Neurons are connected in complex ways, forming recurrent neural networks, and learning modifies the strength of their connections. Moreover, neurons communicate by emitting brief, discrete electric signals. Here we describe how to train recurrent neural networks on tasks like those used to train animals in neuroscience laboratories, and how computations emerge in the trained networks. Surprisingly, artificial networks and real brains can use similar computational strategies.
    Comment: International Summer School on Intelligent Signal Processing for Frontier Research and Industry, INFIERI 2021. Universidad Autónoma de Madrid, Madrid, Spain. 23 August - 4 September 2021
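
    A minimal sketch of this kind of setup (a toy example of my own, not the authors' code): a small rate RNN trained with PyTorch on a laboratory-style perceptual decision task, after which its hidden-state dynamics can be analysed for the emergent computation:

```python
# Toy version of the training paradigm described above: report the sign of
# the mean of a noisy evidence stream (a classic perceptual decision task).
import torch

torch.manual_seed(0)
T, batch, hidden = 50, 64, 64
rnn = torch.nn.RNN(input_size=1, hidden_size=hidden, nonlinearity='tanh')
readout = torch.nn.Linear(hidden, 1)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

for step in range(2000):
    coherence = torch.empty(1, batch, 1).uniform_(-1.0, 1.0)  # signed evidence strength
    x = coherence + torch.randn(T, batch, 1)                  # noisy evidence stream
    h, _ = rnn(x)
    decision = readout(h[-1])                                 # decision read out at trial end
    target = (coherence[0] > 0).float()
    loss = torch.nn.functional.binary_cross_entropy_with_logits(decision, target)
    opt.zero_grad(); loss.backward(); opt.step()

# After training, the hidden trajectories h can be analysed (e.g. with PCA)
# and compared with recorded neural populations -- the comparison the
# abstract draws between trained networks and real brains.
print("final loss:", float(loss))
```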

    Maximization of mutual information in a linear noisy network: a detailed study

    We consider a linear, one-layer feedforward neural network performing a coding task. The goal of the network is to provide a statistical neural representation that conveys as much information as possible about the input stimuli in noisy conditions. We determine the family of synaptic couplings that maximizes the mutual information between the input and output distributions. Optimization is performed under different constraints on the synaptic efficacies. We analyse the dependence of the solutions on the input and output noises. This work goes beyond previous studies of the same problem in that: (i) we perform a detailed stability analysis in order to find the global maxima of the mutual information; (ii) we examine the properties of the optimal synaptic configurations under different constraints; and (iii) we do not assume translational invariance of the input data, as is usually done when the inputs are assumed to be visual stimuli.
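
    For reference, in a standard linear-Gaussian formulation of this problem (my notation; the paper's noise model and constraints may differ), the quantity being maximized has the closed form below, with the optimization over the coupling matrix W carried out under a constraint such as bounded total synaptic efficacy:

```latex
% Linear channel: V = W (U + \nu_{in}) + \nu_{out}, with Gaussian input
% U ~ N(0, C), input noise \nu_{in} ~ N(0, b^2 I) and output noise
% \nu_{out} ~ N(0, s^2 I).  Since both V and V|U are Gaussian, the mutual
% information (in nats) is
I(U;V) \;=\; \tfrac{1}{2} \log \det\!\left[\, I \;+\; W C W^{\top}
\left( b^{2}\, W W^{\top} + s^{2} I \right)^{-1} \right],
% to be maximized over W subject to, e.g., \mathrm{Tr}(W W^{\top}) \le \text{const}.
```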

    Auto and crosscorrelograms for the spike response of LIF neurons with slow synapses

    An analytical description of the response properties of simple but realistic neuron models in the presence of noise is still lacking. We completely determine, up to second order, the firing statistics of a single leaky integrate-and-fire neuron (LIF) and of a pair of LIFs receiving common, slowly filtered white noise. In particular, the auto- and cross-correlation functions of the output spike trains of pairs of cells are obtained from an improvement of the adiabatic approximation introduced in [Mor+04]. These two functions characterize the firing variability and the firing synchronization between neurons, and are of central importance for understanding neuronal communication.
    Comment: 5 pages, 3 figures
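
    An illustrative simulation of this setting (assumed parameters, and a direct numerical estimate rather than the paper's adiabatic formulas): two LIF neurons share a slow Ornstein-Uhlenbeck-filtered noise, and the cross-correlogram of their spike trains exhibits the broad central peak induced by the common slow input:

```python
# Two LIFs driven by a shared OU-filtered ("slow synaptic") noise plus
# private white noise; all parameter values are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.1, 400000              # ms, number of steps
tau_m, tau_s = 10.0, 100.0           # membrane and slow-synapse time constants
mu, sig_c, sig_p = 0.85, 0.35, 0.25  # mean drive, common and private noise
v = np.zeros(2)
z = 0.0                              # shared slow (OU) noise, tau_s >> tau_m
spk = np.zeros((steps, 2))

for t in range(steps):
    z += -z * dt / tau_s + sig_c * np.sqrt(2 * dt / tau_s) * rng.standard_normal()
    v += (mu - v + z) * dt / tau_m + sig_p * np.sqrt(dt / tau_m) * rng.standard_normal(2)
    fired = v >= 1.0
    v[fired] = 0.0
    spk[t] = fired

# Cross-correlogram: covariance of the two spike trains at a range of lags.
c = spk - spk.mean(axis=0)
lags = np.arange(-200, 201, 10)      # lags in units of dt
cc = [np.dot(c[max(0, -k):steps - max(0, k), 0],
             c[max(0, k):steps - max(0, -k), 1]) / steps for k in lags]
print(np.round(np.array(cc) * 1e4, 2))  # central peak from the common input
```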

    Response of Spiking Neurons to Correlated Inputs

    The effect of a temporally correlated afferent current on the firing rate of a leaky integrate-and-fire (LIF) neuron is studied. This current is characterized in terms of rates, auto- and cross-correlations, and the correlation time scale $\tau_c$ of excitatory and inhibitory inputs. The output rate $\nu_{out}$ is calculated in the Fokker-Planck (FP) formalism in the limits of both small and large $\tau_c$ compared to the membrane time constant $\tau$ of the neuron. By simulations we check the analytical results, provide an interpolation valid for all $\tau_c$, and study the neuron's response to rapid changes in the correlation magnitude.
    Comment: 4 pages, 3 figures
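
    For orientation, the $\tau_c \to 0$ baseline around which such calculations expand is the standard white-noise (Siegert) rate of an LIF with threshold $\theta$, reset $V_r$, refractory period $\tau_{ref}$, mean input $\mu$ and noise amplitude $\sigma$ (my notation; the paper's result adds corrections in $\tau_c/\tau$ and $\tau/\tau_c$ around this limit):

```latex
\nu_{out}^{-1} \;=\; \tau_{ref} \;+\; \tau \sqrt{\pi}
\int_{(V_r - \mu)/\sigma}^{(\theta - \mu)/\sigma}
e^{x^{2}} \bigl( 1 + \operatorname{erf} x \bigr)\, dx .
```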

    The mutual information of a stochastic binary channel: validity of the Replica Symmetry Ansatz

    We calculate the mutual information (MI) of a two-layered neural network with noiseless, continuous inputs and binary, stochastic outputs under several assumptions on the synaptic efficacies. The interesting regime corresponds to the limit where the numbers of both input and output units are large but their ratio is kept fixed at a value $\alpha$. We first present a solution for the MI using the replica technique with a replica-symmetric (RS) ansatz. We then find an exact solution for this quantity, valid in a neighborhood of $\alpha = 0$. An analysis of this solution shows that the system must have a phase transition at some finite value of $\alpha$. The transition appears as a singularity in the third derivative of the MI. As the RS solution turns out to be infinitely differentiable, it can be regarded as a smooth approximation to the MI. This is checked numerically in the validity domain of the exact solution.
    Comment: LaTeX, 29 pages, 2 Encapsulated PostScript figures. To appear in Journal of Physics
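
    One standard way to set up the quantity being computed (my notation; details such as the gain function and weight statistics may differ from the paper's): with $N$ continuous inputs $\mathbf{u}$ and $P = \alpha N$ binary stochastic outputs that are conditionally independent given the input,

```latex
% Channel: each output unit fires with probability set by its local field,
P(V_i = 1 \mid \mathbf{u}) \;=\; f(h_i), \qquad
h_i \;=\; \frac{1}{\sqrt{N}} \sum_{j=1}^{N} w_{ij}\, u_j ,
% and the mutual information splits into output and conditional entropies,
I(\mathbf{u}; \mathbf{V}) \;=\; H(\mathbf{V}) \;-\;
\Bigl\langle \sum_{i=1}^{P} H_2\!\bigl( f(h_i) \bigr) \Bigr\rangle_{\mathbf{u}} ,
% where H_2 is the binary entropy.  The replica calculation evaluates the
% output entropy H(V) in the large-N limit at fixed ratio alpha = P/N.
```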

    Transform Invariant Recognition by Association in a Recurrent Network

    Objects can be recognised independently of the view they present, of their position on the retina, or of their scale. It has been suggested that one basic mechanism that makes this possible is a memory effect, or trace, that allows associations to be made between consecutive views of one object. In this work we explore the possibility that this memory trace is provided by the sustained activity of neurons in layers of the visual pathway produced by an extensive recurrent connectivity. We describe a model that contains this high recurrent connectivity, with synaptic efficacies built from contributions of associations between pairs of views, and that is simple enough to be treated analytically. The main result is that there is a change of behavior as the strength of the association between views of the same object, relative to the association with…
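
    A toy sketch of this mechanism (my own construction with assumed parameters, not the paper's analytical model): a Hopfield-like network whose couplings mix autoassociation of each stored view with cross-view associations of strength g. Past a critical g, cueing with a single view retrieves an attractor that overlaps all views of the object:

```python
# Recurrent associative network with cross-view coupling strength g
# (all values assumed for illustration).
import numpy as np

rng = np.random.default_rng(2)
N, objects, views, g = 400, 3, 4, 0.6
xi = rng.choice([-1, 1], size=(objects, views, N))   # random view patterns

# Couplings: autoassociation of each view plus g-weighted associations
# between different views of the same object.
J = np.zeros((N, N))
for mu in range(objects):
    for a in range(views):
        for b in range(views):
            J += (1.0 if a == b else g) * np.outer(xi[mu, a], xi[mu, b])
J /= N
np.fill_diagonal(J, 0.0)

s = xi[0, 0].copy()                     # cue with a single view of object 0
for _ in range(20):                     # zero-temperature recurrent dynamics
    s = np.where(J @ s >= 0, 1, -1)

overlaps = xi[0] @ s / N                # overlap with every view of object 0
print(np.round(overlaps, 2))            # large g: all views overlap strongly
```

    Lowering g below the critical value makes the final state overlap only the cued view, which is the change of behavior the abstract refers to.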