
    Hot coffee: associative memory with bump attractor cell assemblies of spiking neurons

    Networks of spiking neurons can sustain persistently firing, stable bump attractors that represent continuous spaces (such as temperature). This can be achieved with a topology of local excitatory synapses and local surround inhibitory synapses. Activating large ranges of the attractor can lead to multiple bumps that show repeller and attractor dynamics; however, these bumps can be merged by overcoming the repeller dynamics. A simple associative memory can incorporate these bump attractors, allowing continuous variables to be used in such memories, and these associations can be learned by Hebbian rules. The simulations are related to biological networks, showing that this is a step toward a more complete neural cognitive associative memory.
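The local-excitation, surround-inhibition mechanism described above can be illustrated with a minimal rate-based sketch (not the paper's spiking network; the Gaussian kernel, saturating nonlinearity, and all parameter values below are illustrative assumptions): a brief cue switches on a localized bump, and the recurrent connectivity keeps it firing after the cue is removed.

```python
import numpy as np

# Toy 1-D ring with local excitation and broad surround inhibition
# (a "Mexican hat"-like kernel); illustrative parameters only.
N = 100
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2 * np.pi - d)                 # distance on the ring
W = 30.0 * np.exp(-d**2 / 0.1) - 5.0             # local excitation, surround inhibition

r = np.zeros(N)
r[45:55] = 1.0                                   # brief cue activates a local range
for _ in range(500):                             # Euler steps of dr/dt = -r + f(Wr)
    drive = np.clip(W @ r / N, 0.0, 1.0)         # saturating rate nonlinearity
    r += 0.1 * (-r + drive)

print("bump persists after cue removal:", r.max() > 0.5)
print("bump stays localized:", (r > 0.5).sum() < N // 2)
```

Clipping the rate at 1 stands in for spiking saturation; the bump width then settles where recurrent excitation balances the surround inhibition.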

    Multi-bump solutions in a neural field model with external inputs

    Available online 3 March 2016. We study the conditions for the formation of multiple regions of high activity or “bumps” in a one-dimensional, homogeneous neural field with localized inputs. Stable multi-bump solutions of the integro-differential equation have been proposed as a model of a neural population representation of remembered external stimuli. We apply a class of oscillatory coupling functions and first derive criteria for the input width and distance, relative to the synaptic couplings, that guarantee the existence and stability of one and two regions of high activity. These input-induced patterns are attracted by the corresponding stable one-bump and two-bump solutions when the input is removed. We then extend our analytical and numerical investigation to N-bump solutions, showing that the constraints on the input shape derived for the two-bump case can be exploited to generate a memory of N > 2 localized inputs. We discuss the pattern formation process when either the conditions on the input shape are violated or the spatial ranges of the excitatory and inhibitory connections are changed. An important aspect for applications is that the theoretical findings allow us to determine, for a given coupling function, the maximum number of localized inputs that can be stored in a given finite interval. The work received financial support from FCT through a PhD grant (SFRH/BD/41179/2007) and from the EU-FP7 ITN project NETT: Neural Engineering Transformative Technologies (nr. 289146).
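The mechanism in this abstract — localized inputs inducing patterns that persist as self-sustained bumps once the input is removed — can be sketched numerically for an Amari-type field u_t = -u + ∫ w(x-y) H(u(y) - θ) dy + I(x) with an oscillatory coupling function. The kernel, threshold, and input parameters below are illustrative choices, not those of the paper:

```python
import numpy as np

# Neural field with oscillatory coupling w(z) = exp(-b|z|)(b sin|z| + cos z)
# and a Heaviside firing rate with threshold theta; illustrative parameters.
b, theta = 0.5, 0.9
L, dx, dt = 20.0, 0.05, 0.05
x = np.arange(-L, L, dx)
D = x[:, None] - x[None, :]
w = np.exp(-b * np.abs(D)) * (b * np.sin(np.abs(D)) + np.cos(D))
K = w * dx                                # discretized integral operator

u = np.zeros_like(x)                      # field starts at rest
I = 2.0 * (np.exp(-(x - 6.0) ** 2) + np.exp(-(x + 6.0) ** 2))  # two localized inputs

for step in range(1200):
    inp = I if step < 400 else 0.0        # inputs removed at t = 20
    u += dt * (-u + K @ (u > theta).astype(float) + inp)

# count rising edges of the superthreshold set = number of surviving bumps
bumps = int((np.diff((u > theta).astype(int)) == 1).sum())
print("self-sustained bumps after input removal:", bumps)
```

With these (assumed) widths and distances the two input-induced patterns are attracted by the stable two-bump solution; violating such conditions is exactly the regime the paper analyzes.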

    Dopamine, affordance and active inference.

    The role of dopamine in behaviour and decision-making is often cast in terms of reinforcement learning and optimal decision theory. Here, we present an alternative view that frames the physiology of dopamine in terms of Bayes-optimal behaviour. In this account, dopamine controls the precision or salience of (external or internal) cues that engender action. In other words, dopamine balances bottom-up sensory information and top-down prior beliefs when making hierarchical inferences (predictions) about cues that have affordance. In this paper, we focus on the consequences of changing tonic levels of dopamine firing using simulations of cued sequential movements. Crucially, the predictions driving movements are based upon a hierarchical generative model that infers the context in which movements are made. This means that we can confuse agents by changing the context (order) in which cues are presented. These simulations provide a (Bayes-optimal) model of contextual uncertainty and set switching that can be quantified in terms of behavioural and electrophysiological responses. Furthermore, one can simulate dopaminergic lesions (by changing the precision of prediction errors) to produce pathological behaviours that are reminiscent of those seen in neurological disorders such as Parkinson's disease. We use these simulations to demonstrate how a single functional role for dopamine at the synaptic level can manifest in different ways at the behavioural level.
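The core claim — that dopamine sets the precision of sensory evidence relative to prior beliefs — can be illustrated with the standard Gaussian case (a generic sketch of precision-weighted inference, not the paper's hierarchical generative model; the numbers are illustrative):

```python
# Bayes-optimal fusion of a Gaussian prior and a Gaussian sensory cue:
# the posterior mean is a precision-weighted average of the two.
def posterior_mean(mu_prior, pi_prior, mu_sens, pi_sens):
    return (pi_prior * mu_prior + pi_sens * mu_sens) / (pi_prior + pi_sens)

# Scaling sensory precision (the proposed role of tonic dopamine) shifts
# inference from prior-driven toward sensation-driven.
for gain in (0.25, 1.0, 4.0):           # low, normal, high dopaminergic gain
    m = posterior_mean(0.0, 1.0, 2.0, 4.0 * gain)
    print(f"gain={gain}: posterior mean {m:.3f}")
```

Lowering the gain pulls behaviour toward the prior (posterior mean near 0), while raising it makes the sensory cue dominate (posterior mean near 2), which is the behavioural signature the simulations exploit.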

    What drives transient behaviour in complex systems?

    We study transient behaviour in the dynamics of complex systems described by a set of non-linear ODEs. We discuss the destabilizing nature of transient trajectories and its connection with the eigenvalue-based linearization procedure. The complexity is realized as a random matrix drawn from a modified May-Wigner model. Based on the initial response of the system, we identify a novel stable-transient regime. We calculate exact abundances of typical and extreme transient trajectories, finding both Gaussian and Tracy-Widom distributions known from extreme value statistics. We identify the degrees of freedom driving transient behaviour as connected to the eigenvectors and encoded in a non-orthogonality matrix T_0. We accordingly extend the May-Wigner model to contain a phase in which typical transient trajectories are present. An exact norm of the trajectory is obtained in the vanishing T_0 limit, where it describes a normal matrix. Comment: 9 pages, 5 figures.
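A minimal numerical illustration of such a stable-transient regime (illustrative parameters; this does not reproduce the paper's modified May-Wigner model or its exact results): with J = -μI + A for a Gaussian random matrix A, all eigenvalues of J can lie in the left half-plane while the symmetric part of J still has a positive eigenvalue, so some trajectories grow initially before decaying.

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu = 400, 1.2                       # illustrative: 1 < mu < sqrt(2)
A = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # Ginibre matrix, bulk radius ~ 1
J = -mu * np.eye(N) + A                # linearized dynamics x' = J x

stable = np.linalg.eigvals(J).real.max() < 0
print("asymptotically stable:", stable)

# d/dt ||x||^2 at t = 0 equals 2 x0^T S x0, so a positive top eigenvalue of
# the symmetric part S signals initial (transient) growth despite stability.
S = 0.5 * (J + J.T)                    # GOE-like; top eigenvalue ~ sqrt(2) - mu
grow = np.linalg.eigvalsh(S).max()
print("transient growth possible:", grow > 0)
```

The GOE-like statistics of S at its spectral edge are also where Tracy-Widom fluctuations enter this kind of analysis.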

    A unified neural model explaining optimal multi-guidance coordination in insect navigation

    The robust navigation of insects arises from the coordinated action of concurrently functioning and interacting guidance systems. Computational models of specific brain regions can account for isolated behaviours such as path integration or route following, but the neural mechanisms by which their outputs are coordinated remain unknown. In this work, a functional modelling approach was taken to identify and model the elemental guidance subsystems required by homing insects. We then produced realistic adaptive behaviours by integrating the outputs of the different guidance systems in a biologically constrained unified model mapped onto identified neural circuits. Homing paths are quantitatively and qualitatively compared with real ant data in a series of simulation studies replicating key field experiments. Our analysis reveals that insects require independent visual homing and route following capabilities, which we show can be realised by encoding panoramic skylines in the frequency domain, using image-processing circuits in the optic lobe and learning pathways through the Mushroom Bodies (MB) and the Anterior Optic Tubercle (AOTU) to Bulb (BU) pathway respectively, before converging in the Central Complex (CX) steering circuit. Further, we demonstrate that a ring attractor network inspired by firing patterns recorded in the CX can optimally integrate the outputs of the path integration and visual homing systems, guiding simulated ants back to their familiar route, and that a simple non-linear weighting function driven by the output of the MB provides a context-dependent switch, allowing route following strategies to dominate and the learned route to be retraced back to the nest when familiar terrain is encountered.
    The resultant unified model of insect navigation reproduces behavioural data from a series of cue-conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide, and coordinate their outputs to achieve the adaptive behaviours observed in the wild. These results advance the case for a distributed architecture of the insect navigational toolkit. The unified model is then further validated by modelling the olfactory navigation of flies and ants. With simple adaptations of the sensory inputs, the model reproduces the main characteristics of the observed behavioural data, further demonstrating the useful role played by the sensory-processing to CX to motor pathway in generating context-dependent coordination behaviours. In addition, this helps to complete the unified model of insect navigation by adding olfactory cues, which are among the most crucial cues for insects.
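The optimal-integration step above — combining path integration and visual homing according to their reliability — can be summarised by the certainty-weighted vector sum that ring-attractor integration approximates (a schematic sketch; the function name, weights, and angles are illustrative, not taken from the model):

```python
import numpy as np

def integrate_cues(theta_pi, w_pi, theta_vh, w_vh):
    """Combine two directional cues (radians) by certainty-weighted
    vector summation, as a ring attractor approximately does."""
    v = w_pi * np.exp(1j * theta_pi) + w_vh * np.exp(1j * theta_vh)
    return np.angle(v)

# Equally reliable path-integration and visual-homing headings average out;
# boosting one cue's weight pulls the steering output toward that cue.
heading = integrate_cues(0.0, 1.0, np.pi / 2, 1.0)
print(heading)   # pi/4 for equal weights
```

A context-dependent switch of the kind the MB output provides would simply drive one of the weights toward zero when familiar terrain is encountered.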

    Stochastic neural fields as gradient dynamical systems

    Continuous attractor neural networks are used extensively to model a variety of experimentally observed coherent brain states, ranging from cortical waves of activity to stationary activity bumps. The latter are thought to play an important role in various forms of neural information processing, including population coding in primary visual cortex (V1) and working memory in prefrontal cortex. However, one limitation of continuous attractor networks is that the location of the peak of an activity bump (or wave) can diffuse due to intrinsic network noise. This reflects marginal stability of bump solutions with respect to the action of an underlying continuous symmetry group. Previous studies have used perturbation theory to derive an approximate stochastic differential equation for the location of the peak (phase) of the bump. Although this method captures the diffusive wandering of a bump solution, it ignores fluctuations in the amplitude of the bump. In this paper, we show how amplitude fluctuations can be analyzed by reducing the underlying stochastic neural field equation to a finite-dimensional stochastic gradient dynamical system that tracks the stochastic motion of both the amplitude and phase of bump solutions. This allows us to derive exact expressions for the steady-state probability density and its moments, which are then used to investigate two major issues: (i) the input-dependent suppression of neural variability and (ii) noise-induced transitions to bump extinction. We develop the theory by considering the particular example of a ring attractor network with SO(2) symmetry, which is the most common architecture used in attractor models of working memory and population tuning in V1. However, we also extend the analysis to a higher-dimensional spherical attractor network with SO(3) symmetry, which has previously been proposed as a model of orientation and spatial frequency tuning in V1.
    We thus establish how a combination of stochastic analysis and group theoretic methods provides a powerful tool for investigating the effects of noise in continuous attractor networks.
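The reduction described above — a stochastic gradient system tracking bump amplitude while the phase wanders along the marginal direction — can be caricatured with a two-variable Euler-Maruyama simulation (the double-well potential and noise level are illustrative stand-ins, not the paper's derived equations):

```python
import numpy as np

# Caricature of the reduced dynamics: amplitude a relaxes in a potential
# V(a) = (a^2 - 1)^2 / 4 with a stable bump at a = 1, while the phase phi
# diffuses freely (marginal stability along the ring). Illustrative only.
rng = np.random.default_rng(1)
dt, sigma, steps = 1e-3, 0.2, 50_000
a, phi = 1.0, 0.0
amps = np.empty(steps)
for i in range(steps):
    a += -a * (a * a - 1.0) * dt + sigma * np.sqrt(dt) * rng.normal()  # gradient + noise
    phi += sigma * np.sqrt(dt) * rng.normal()                          # pure phase diffusion
    amps[i] = a

print("mean amplitude:", amps.mean())
```

The amplitude fluctuates around the potential minimum at a = 1 while the phase performs a random walk; a rare fluctuation over the barrier toward a = 0 would be the analogue of a noise-induced transition to bump extinction.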