4,703 research outputs found

    How active perception and attractor dynamics shape perceptual categorization: A computational model

    We propose a computational model of perceptual categorization that fuses elements of grounded and sensorimotor theories of cognition with dynamic models of decision-making. We assume that category information consists in anticipated patterns of agent–environment interactions that can be elicited through overt or covert (simulated) eye movements, object manipulation, etc. This information is first encoded when category information is acquired, and then re-enacted during perceptual categorization. Perceptual categorization consists in a dynamic competition between attractors that encode the sensorimotor patterns typical of each category; action-prediction success counts as “evidence” for a given category and contributes to falling into the corresponding attractor. The evidence-accumulation process is guided by an active perception loop, and the active exploration of objects (e.g., visual exploration) aims at eliciting expected sensorimotor patterns that count as evidence for the object category. We present a computational model incorporating these elements and describing action prediction, active perception, and attractor dynamics as key elements of perceptual categorization. We test the model in three simulated perceptual categorization tasks, and we discuss its relevance for grounded and sensorimotor theories of cognition.
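
    As a rough illustration of the competition-plus-evidence idea described above, the sketch below (with made-up parameters, not the paper's) lets two category attractors inhibit each other while a fixed per-step "evidence" term stands in for action-prediction success; the network falls into the attractor whose predictions fit better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (made-up) parameters: two category attractors compete via
# mutual inhibition; `evidence` stands in for per-step action-prediction
# success for each category's anticipated sensorimotor pattern.
dt, tau = 1e-3, 0.02                  # integration step, time constant (s)
w_self, w_inh = 2.0, 2.5              # self-excitation, cross-inhibition
evidence = np.array([0.55, 0.45])     # category 0's predictions fit better

x = np.zeros(2)                       # activation of the two attractors
for _ in range(2000):                 # 2 s of simulated exploration
    drive = w_self * x - w_inh * x[::-1] + evidence
    noise = 0.3 * rng.standard_normal(2)
    x += dt / tau * (-x + np.tanh(drive) + noise)
    x = np.clip(x, 0.0, None)

print("winning category:", int(np.argmax(x)), "activations:", np.round(x, 2))
```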

    Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network.

    Grid cells fire in sequences that represent rapid trajectories in space. During locomotion, theta sequences encode sweeps in position starting slightly behind the animal and ending ahead of it. During quiescence and slow wave sleep, bouts of synchronized activity represent long trajectories called replays, which are well-established in place cells and have recently been reported in grid cells. Theta sequences and replay are hypothesized to facilitate many cognitive functions, but their underlying mechanisms are unknown. One mechanism proposed for grid cell formation is the continuous attractor network. We demonstrate that this established architecture naturally produces theta sequences and replay as distinct consequences of modulating external input. Driving inhibitory interneurons at the theta frequency causes attractor bumps to oscillate in speed and size, which gives rise to theta sequences and phase precession, respectively. Decreasing input drive to all neurons produces traveling wavefronts of activity that are decoded as replays.
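
    A minimal sketch of the bump-attractor ingredient this mechanism rests on, with illustrative parameters rather than the paper's model: a 1-D ring network with local excitation and broad inhibition sustains an activity bump, and driving the inhibitory term at 8 Hz modulates the bump over each theta cycle.

```python
import numpy as np

# Illustrative parameters, not the paper's model: a 1-D ring attractor
# with local excitation and broad inhibition sustains an activity bump.
N, tau, dt = 128, 0.01, 1e-3
pos = 2 * np.pi * np.arange(N) / N
d = np.abs(pos[:, None] - pos[None, :])
d = np.minimum(d, 2 * np.pi - d)                 # distance on the ring
W = 40.0 * np.exp(-(d / 0.3) ** 2) - 10.0        # excitation minus inhibition

r = np.exp(-((pos - np.pi) / 0.3) ** 2)          # seed a bump at pi
sizes = []
for step in range(3000):
    b = 0.2 * (1 + np.sin(2 * np.pi * 8 * step * dt))  # 8 Hz inhibitory drive
    inp = W @ r / N - b
    r += dt / tau * (-r + np.tanh(np.maximum(inp, 0.0)))
    sizes.append(int((r > 0.2).sum()))           # bump size (active cells)

# The bump waxes and wanes with the theta-modulated inhibition
print("bump size over final theta cycle: min", min(sizes[-125:]),
      "max", max(sizes[-125:]))
```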

    Biologically plausible attractor networks

    Attractor networks have shown much promise as a neural network architecture that can describe many aspects of brain function. Much of the field of study around these networks has coalesced around pioneering work done by John Hopfield, and therefore many approaches have been strongly linked to the field of statistical physics. In this thesis I use existing theoretical and statistical notions of attractor networks, and introduce several biologically inspired extensions to an attractor network for which a mean-field solution has been previously derived. This attractor network is a computational neuroscience model that accounts for decision-making in the situation of two competing stimuli. By basing our simulation studies on such a network, we are able to study situations where mean-field solutions have been derived, and use these as the starting case, which we then extend with large-scale integrate-and-fire attractor network simulations. The simulations are large enough to provide evidence that the results apply to networks of the size found in the brain. One factor that has been highlighted by previous research to be very important to brain function is that of noise. Spiking-related noise is seen to be a factor that influences processes such as decision-making, signal detection, short-term memory, and memory recall even with the quite large networks found in the cerebral cortex, and this thesis aims to measure the effects of noise on biologically plausible attractor networks. Our results are obtained using a spiking neural network made up of integrate-and-fire neurons, and we focus our results on the stochastic transition that this network undergoes. In this thesis we examine two such processes that are biologically relevant, but for which no mean-field solutions yet exist: graded firing rates and diluted connectivity. Representations in the cortex are often graded, and we find that noise in these networks may be larger than with binary representations. In further investigations it was shown that diluted connectivity reduces the effects of noise in the situation where the number of synapses onto each neuron is held constant. In this thesis we also use the same attractor network framework to investigate the Communication through Coherence hypothesis, which states that synchronous oscillations, especially in the gamma range, can facilitate communication between neural systems. It is shown that information transfer from one network to a second network occurs for a much lower strength of synaptic coupling between the networks than is required to produce coherence. Thus, information transmission can occur before any coherence is produced, indicating that coherence is not needed for information transmission between coupled networks. This raises a major question about the Communication through Coherence hypothesis. Overall, the results provide substantial contributions towards understanding the operation of attractor neuronal networks in the brain.
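
    A reduced two-pool rate model can stand in for the thesis's integrate-and-fire network to show the kind of stochastic transition it measures; the parameters below are illustrative. With equal evidence, noise alone breaks the symmetry between the pools and determines both which attractor wins and how quickly the transition happens.

```python
import numpy as np

# Reduced two-pool rate model with illustrative parameters (a stand-in
# for the thesis's integrate-and-fire network). With equal evidence,
# noise alone breaks the symmetry and sets the transition time.
rng = np.random.default_rng(1)

def transition_time(sigma, dt=1e-3, tau=0.02, threshold=0.8):
    s = np.zeros(2)
    for step in range(int(5.0 / dt)):            # at most 5 s per trial
        drive = 1.6 * s - 1.8 * s[::-1] + 0.35   # equal input to both pools
        s += dt / tau * (-s + np.maximum(np.tanh(drive), 0.0))
        s += np.sqrt(dt) * sigma * rng.standard_normal(2)
        s = np.clip(s, 0.0, None)
        if s.max() > threshold:                  # fell into an attractor
            return step * dt
    return float("nan")

for sigma in (0.2, 0.5, 1.0):
    times = [transition_time(sigma) for _ in range(30)]
    print(f"noise sigma={sigma}: mean time to attractor {np.nanmean(times):.3f} s")
```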

    Attractor Metadynamics in Adapting Neural Networks

    Slow adaptation processes, like synaptic and intrinsic plasticity, abound in the brain and shape the landscape for the neural dynamics occurring on substantially faster timescales. At any given time the network is characterized by a set of internal parameters, which are adapting continuously, albeit slowly. This set of parameters defines the number and the location of the respective adiabatic attractors. The slow evolution of network parameters hence induces an evolving attractor landscape, a process which we term attractor metadynamics. We study the nature of the metadynamics of the attractor landscape for several continuous-time autonomous model networks. We find both first- and second-order changes in the location of adiabatic attractors and argue that the study of the continuously evolving attractor landscape constitutes a powerful tool for understanding the overall development of the neural dynamics.
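
    The separation of timescales described here is easy to demonstrate in one dimension. In the hypothetical toy system below (not one of the paper's networks), a fast variable x relaxes toward the adiabatic attractor of dx/dt = a·x − x³ + h while the parameter a drifts slowly; as a crosses zero, the attractor's location changes qualitatively, a minimal instance of attractor metadynamics.

```python
# Hypothetical toy system (not from the paper): fast variable x in a
# potential set by a slowly adapting parameter a, dx/dt = a*x - x**3 + h.
dt, tau_fast, tau_slow = 1e-3, 0.01, 5.0
h = 0.01                                # small bias so x follows one branch
x, a = 0.0, -1.0
for step in range(10000):               # 10 s: a drifts from -1 to +1
    x += dt / tau_fast * (a * x - x ** 3 + h)
    a += dt / tau_slow                  # slow drift of the adapting parameter
    if step % 2000 == 0:
        print(f"a = {a:+.2f}   adiabatic attractor x* ~ {x:+.3f}")
```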

    Graded, Dynamically Routable Information Processing with Synfire-Gated Synfire Chains

    Coherent neural spiking and local field potentials are believed to be signatures of the binding and transfer of information in the brain. Coherent activity has now been measured experimentally in many regions of mammalian cortex. Synfire chains are one of the main theoretical constructs that have been invoked to describe coherent spiking phenomena. However, for some time it has been known that synchronous activity in feedforward networks asymptotically either approaches an attractor with fixed waveform and amplitude, or fails to propagate. This has limited their ability to explain graded neuronal responses. Recently, we have shown that pulse-gated synfire chains are capable of propagating graded information coded in mean population current or firing rate amplitudes. In particular, we showed that it is possible to use one synfire chain to provide gating pulses and a second, pulse-gated synfire chain to propagate graded information. We called these circuits synfire-gated synfire chains (SGSCs). Here, we present SGSCs in which graded information can rapidly cascade through a neural circuit, and show a correspondence between this type of transfer and a mean-field model in which gating pulses overlap in time. We show that SGSCs are robust in the presence of variability in population size, pulse timing, and synaptic strength. Finally, we demonstrate the computational capabilities of SGSC-based information coding by implementing a self-contained, spike-based, modular neural circuit that is triggered by streaming input, reads it in, processes it, makes a decision based on the processed information, and then shuts itself down.
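
    A simplified mean-field sketch of pulse-gated transfer, with assumed parameters rather than the paper's equations: each layer integrates its predecessor's rate only while its gating pulse is on, and in this idealized picture a pulse length equal to the integration time constant gives a transfer gain of exactly one, so a graded amplitude is copied down the chain.

```python
import numpy as np

# Simplified mean-field sketch with assumed parameters (not the paper's
# exact equations): layer j+1 integrates layer j's rate only while its
# gating pulse is on; with pulse length T equal to the integration time
# constant tau, the transfer gain is 1 in this idealized picture.
n_layers, T_pulse, dt, tau = 5, 0.01, 1e-4, 0.01
steps = int(T_pulse / dt)

amp = 0.63                            # graded amplitude to be propagated
rates = np.zeros(n_layers)
rates[0] = amp
for j in range(n_layers - 1):
    current = 0.0
    for _ in range(steps):            # gate open for layer j+1 only now
        current += dt / tau * rates[j]
    rates[j + 1] = max(current, 0.0)  # threshold-linear readout
    rates[j] = 0.0                    # pulse moves on; layer j shuts off

print("amplitude after the chain:", round(rates[-1], 3))   # ~0.63
```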