
    Can Attractor Network Models Account for the Statistics of Firing During Persistent Activity in Prefrontal Cortex?

    Persistent activity observed in neurophysiological experiments in monkeys is thought to be the neuronal correlate of working memory. Over the last decade, network modellers have strived to reproduce the main features of these experiments. In particular, attractor network models have been proposed in which a non-selective attractor state with low background activity coexists with selective attractor states in which sub-groups of neurons fire at rates that are higher (but not much higher) than background rates. A recent detailed statistical analysis of the data seems, however, to challenge such attractor models: the data indicate that firing during persistent activity is highly irregular (with an average CV larger than 1), while the models predict a more regular firing process (CV smaller than 1). We discuss here recent proposals that make it possible to reproduce this feature of the experiments.
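
    To make the statistic at issue concrete, here is a minimal sketch (not tied to any specific model in the paper) of how the CV of inter-spike intervals separates Poisson-like irregular firing (CV close to 1) from the more regular firing that simple attractor models tend to predict (CV below 1); the rates and the gamma shape parameter are arbitrary choices for illustration.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of the inter-spike intervals: std(ISI) / mean(ISI)."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(0)

# Poisson spike train (20 Hz for ~100 s): exponential ISIs, CV close to 1.
poisson_spikes = np.cumsum(rng.exponential(1 / 20.0, size=2000))

# More regular train: gamma-distributed ISIs (shape 4) with the same mean rate, CV ~ 0.5.
regular_spikes = np.cumsum(rng.gamma(shape=4.0, scale=1 / 80.0, size=2000))

print(f"irregular (Poisson-like) CV ~ {isi_cv(poisson_spikes):.2f}")
print(f"more regular (gamma)     CV ~ {isi_cv(regular_spikes):.2f}")
```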

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.
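
    As a purely illustrative aside on the power-law scaling discussed here (not an analysis from the essay), the following sketch draws synthetic event sizes from a Pareto distribution and recovers the scaling exponent from the tail of the empirical survival function; in practice maximum-likelihood estimators are preferred over a log-log line fit.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true = 1.5

# Synthetic "event sizes" with a power-law tail: P(S > s) ~ s**(-alpha) for s >= 1.
sizes = rng.pareto(alpha_true, size=50_000) + 1.0

# Empirical survival function, fitted by a straight line on log-log axes over the tail.
s_sorted = np.sort(sizes)
survival = 1.0 - np.arange(1, len(s_sorted) + 1) / len(s_sorted)
mask = (s_sorted > 2.0) & (survival > 0)
slope, _ = np.polyfit(np.log(s_sorted[mask]), np.log(survival[mask]), 1)

print(f"true exponent {alpha_true}, estimated {-slope:.2f}")
```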

    Inter-trial neuronal activity in inferior temporal cortex: a putative vehicle to generate long-term visual associations.

    When monkeys perform a delayed match-to-sample task, some neurons in the anterior inferotemporal cortex show sustained activity following the presentation of specific visual stimuli, typically only those that are shown repeatedly. When sample stimuli are shown in a fixed temporal order, the few images that evoke delay activity in a given neuron are often neighboring stimuli in the sequence, suggesting that this delay activity may be the neural correlate of associative long-term memory. Here we report that stimulus-selective sustained activity is also evident following the presentation of the test stimulus in the same task. We use a neural network model to demonstrate that persistent stimulus-selective activity across the intertrial interval can lead to similar mnemonic representations (distributions of delay activity across the neural population) for neighboring visual stimuli. Thus, inferotemporal cortex may contain neural machinery for generating long-term stimulus-stimulus associations.
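
    The mechanism summarized above can be illustrated with a toy correlated-attractor sketch (illustrative parameters, not the authors' network): patterns are stored in a fixed temporal order, and the learning rule also links each pattern to its predecessor, standing in for stimulus-selective activity that persists across the intertrial interval; the retrieved attractors of neighbouring stimuli then overlap far more than those of distant stimuli.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, a = 500, 10, 0.7            # neurons, patterns shown in sequence, inter-trial link strength

xi = np.where(rng.random((P, N)) < 0.5, 1, -1)       # +/-1 patterns in their temporal order
J = np.zeros((N, N))
for k in range(P):
    J += np.outer(xi[k], xi[k]) / N                  # Hebbian term for the current stimulus
    if k > 0:                                        # link to the preceding stimulus (inter-trial trace)
        J += a * (np.outer(xi[k], xi[k - 1]) + np.outer(xi[k - 1], xi[k])) / N
np.fill_diagonal(J, 0.0)

def retrieve(cue, sweeps=15):
    """Asynchronous +/-1 dynamics; converges to an attractor for symmetric J."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

fixed = np.array([retrieve(xi[k]) for k in range(P)])
nbr = np.mean([fixed[k] @ fixed[k + 1] / N for k in range(P - 1)])
far = np.mean([fixed[k] @ fixed[k + 4] / N for k in range(P - 4)])
print(f"attractor overlap, neighbouring stimuli: {nbr:.2f}")
print(f"attractor overlap, stimuli 4 apart:      {far:.2f}")
```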

    Internal Representation of Task Rules by Recurrent Dynamics: The Importance of the Diversity of Neural Responses

    Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation.
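
    A minimal sketch of the mixed-selectivity idea (the sizes and the XOR-style task are illustrative assumptions, not the paper's setup): a linear readout of raw stimulus and rule inputs cannot produce a context-dependent response, but a readout of randomly connected nonlinear neurons, each mixing stimulus and rule, can.

```python
import numpy as np

rng = np.random.default_rng(3)

# Conditions: [stimulus, rule] coded as +/-1; the correct response is their product (XOR-like).
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
y = X[:, 0] * X[:, 1]

def readout_errors(features, targets):
    """Least-squares linear readout; counts how many of the four conditions it gets wrong."""
    w, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return int(np.sum(np.sign(features @ w) != np.sign(targets)))

# Reading out stimulus and rule directly fails: the task is not linearly separable.
print("errors, readout of raw inputs:        ", readout_errors(X, y))

# A layer of randomly connected nonlinear neurons mixes stimulus and rule ("mixed selectivity").
M = 50
W = rng.standard_normal((2, M))
b = rng.standard_normal(M)
H = np.tanh(X @ W + b)
print("errors, readout of random mixed layer:", readout_errors(H, y))
```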

    How active perception and attractor dynamics shape perceptual categorization: A computational model

    We propose a computational model of perceptual categorization that fuses elements of grounded and sensorimotor theories of cognition with dynamic models of decision-making. We assume that category information consists in anticipated patterns of agent-environment interactions that can be elicited through overt or covert (simulated) eye movements, object manipulation, etc. This information is first encoded when category information is acquired, and then re-enacted during perceptual categorization. Perceptual categorization consists in a dynamic competition between attractors that encode the sensorimotor patterns typical of each category; action prediction success counts as "evidence" for a given category and contributes to falling into the corresponding attractor. The evidence accumulation process is guided by an active perception loop, and the active exploration of objects (e.g., visual exploration) aims at eliciting expected sensorimotor patterns that count as evidence for the object category. We present a computational model incorporating these elements and describing action prediction, active perception, and attractor dynamics as key elements of perceptual categorization. We test the model in three simulated perceptual categorization tasks, and we discuss its relevance for grounded and sensorimotor theories of cognition.
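
    A minimal sketch of the decision-by-attractor-competition ingredient described above (parameters and the simplified evidence signal are illustrative assumptions, not the authors' implementation): two units, one per category, excite themselves and inhibit each other, and a stream of noisy evidence pushes the state into one of the two attractors.

```python
import numpy as np

rng = np.random.default_rng(4)

def categorize(evidence_bias, steps=2000, dt=0.01):
    """Two category units with self-excitation and mutual inhibition, driven by noisy evidence."""
    x = np.zeros(2)
    W = np.array([[1.6, -1.2],
                  [-1.2, 1.6]])
    for _ in range(steps):
        evidence = np.array([0.5 + evidence_bias, 0.5 - evidence_bias])
        evidence += 0.3 * rng.standard_normal(2)          # noisy prediction-success signal
        x += dt * (-x + np.tanh(W @ x + evidence))
    return x

for bias in (0.15, -0.15):
    x = categorize(bias)
    print(f"evidence bias {bias:+.2f} -> final state {np.round(x, 2)}, "
          f"chosen category {'A' if x[0] > x[1] else 'B'}")
```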

    Short-Term Facilitation May Stabilize Parametric Working Memory Trace

    Networks with a continuous set of attractors are considered to be a paradigmatic model of parametric working memory (WM), but they require fine tuning of connections and are thus structurally unstable. Here we analyze a network with a ring attractor, in which connections are not perfectly tuned and the activity state therefore drifts in the absence of the stabilizing stimulus. We derive an analytical expression for the drift dynamics and conclude that the network cannot function as WM over a period of several seconds, a typical delay time in monkey memory experiments. We propose that short-term synaptic facilitation in the recurrent connections significantly improves the robustness of the model by slowing down the drift of the activity bump. Extending the calculation of the drift velocity to a network with synaptic facilitation, we conclude that facilitation can slow down the drift by a large factor, rendering the network suitable as a model of WM.
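
    A toy sketch of the problem and of the proposed remedy (illustrative parameters, not the paper's analytical model): a ring network whose cosine connectivity is corrupted by quenched heterogeneity, so the activity bump drifts on its own; adding a simple activity-dependent facilitation variable on the recurrent output, loosely in the spirit of short-term synaptic facilitation, typically slows that drift.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 120
theta = 2 * np.pi * np.arange(N) / N
W = np.cos(theta[:, None] - theta[None, :])        # ideal ring connectivity
W += 0.1 * rng.standard_normal((N, N))             # quenched heterogeneity (imperfect tuning)

def bump_center(r):
    return np.angle(np.sum(r * np.exp(1j * theta)))

def simulate(facilitation, steps=300, U=0.3, tau_f=50.0, eta=0.02):
    r = np.maximum(np.cos(theta), 0.0)             # initial bump centred at angle 0
    u = np.full(N, U)                              # facilitation variables
    centers = []
    for _ in range(steps):
        out = u * r if facilitation else r         # effective synaptic output
        r = np.maximum(W @ out, 0.0)
        r /= r.max()                               # keep the bump amplitude fixed
        u += (U - u) / tau_f + eta * (1.0 - u) * r # slow, activity-dependent facilitation
        centers.append(bump_center(r))
    return np.unwrap(np.array(centers))

for fac in (False, True):
    c = simulate(fac)
    drift = np.abs(c - c[0])
    print(f"facilitation={str(fac):5s}  drift after 100 steps: {drift[99]:.2f} rad, "
          f"after 300 steps: {drift[-1]:.2f} rad")
```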

    Memories, attractors, space and vowels

    Higher cognitive capacities, such as navigating complex environments or learning new languages, rely on the possibility of memorizing, in the brain, continuous noisy variables. Memories are generally understood to be realized, e.g. in the cortex and in the hippocampus, as configurations of activity towards which specific populations of neurons are "attracted", i.e. towards which they dynamically converge, if properly cued. Distinct memories are thus considered separate attractors of the dynamics, embedded within the same neuronal connectivity structure. But what if the underlying variables are continuous, such as a position in space or the resonant frequency of a phoneme? If such variables are continuous and the experience to be retained in memory has even a minimal temporal duration, highly correlated, yet imprecisely determined, values of those variables will occur at successive time instants. And even if memories are idealized as point-like in time, distinct memories will still be highly correlated. How does the brain self-organize to deal with noisy, correlated memories? In this thesis, we try to approach the question along three interconnected itineraries. In Part II we first ask the opposite: we derive how many uncorrelated memories a network of neurons would be able to store precisely, as discrete attractors, if the neurons were optimally connected. Then, we compare the results with those obtained when memories are allowed to be retrieved imprecisely and connections are based on self-organization. We find that a simple strategy is available to the brain to facilitate the storage of memories: it amounts to making them sparser, i.e. to silencing those neurons which are not very active in the configuration of activity to be memorized. We observe that the more complex the distribution of activity in a memory, the more this strategy increases the number of memories that can be stored, as compared with the maximal load in networks endowed with the theoretically optimal connection weights. In Part III we ask, starting from experimental observations of spatially selective cells in quasi-realistic environments, how the brain can store, as a continuous attractor, complex and irregular spatial information. We find indications that, while continuous attractors per se are too brittle to deal with irregularities, there seem to be other mathematical objects, which we refer to as quasi-attractive continuous manifolds, that may serve this function. Such objects, which emerge as soon as a tiny amount of quenched irregularity is introduced into would-be continuous attractors, seem to persist over a wide range of noise levels and then break up, in a phase transition, when the variability reaches a critical threshold lying just above that seen in the experimental measurements. Moreover, we find that the operational range is squeezed from behind, as it were, by a third phase, in which the spatially selective units cannot dynamically converge towards a localized state. Part IV, which is more exploratory, is motivated by the frequency characteristics of vowels. We hypothesize that phonemes of different languages could also be stored as separate fixed points in the brain, in a sort of two-dimensional cognitive map. In our preliminary results, we show that a continuous quasi-attractor model, trained with noisy recorded vowels, can effectively learn them through a self-organized procedure and retrieve them separately, as fixed points on a quasi-attractive manifold.
    Overall, this thesis attempts to contribute to the search for general principles underlying memory, intended as an emergent collective property of networks in the brain, based on self-organization, imperfections and irregularities.
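
    The sparsification strategy described for Part II can be illustrated with a small sketch (illustrative parameters, not the thesis' derivation): binary patterns are stored with a covariance rule and retrieved with a k-winners-take-all update that keeps the network at the coding level a; at the same memory load, dense patterns degrade while sparse ones are recalled almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(6)
N, P = 300, 120                                     # neurons, stored patterns (a high load)

def recall_quality(a):
    """Store P binary patterns at coding level a; report how well they are recalled."""
    xi = (rng.random((P, N)) < a).astype(float)     # 0/1 patterns, fraction a of units active
    J = (xi - a).T @ (xi - a) / (N * a * (1 - a))   # covariance (Hebbian) learning rule
    np.fill_diagonal(J, 0.0)
    k = int(a * N)
    corrs = []
    for mu in range(P):
        s = xi[mu].copy()
        for _ in range(15):                         # k-winners-take-all retrieval dynamics
            h = J @ s
            s = np.zeros(N)
            s[np.argsort(h)[-k:]] = 1.0
        corrs.append(np.corrcoef(s, xi[mu])[0, 1])
    return np.mean(corrs)

for a in (0.5, 0.1):
    print(f"coding level a = {a}: mean recall correlation ~ {recall_quality(a):.2f}")
```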

    Controlling Chimeras

    Coupled phase oscillators model a variety of dynamical phenomena in nature and in technological applications. Non-local coupling gives rise to chimera states, which are characterized by a distinct group of phase-synchronized oscillators while the remaining ones move incoherently. Here, we apply the idea of control to chimera states: using gradient dynamics to exploit the drift of a chimera, we can steer it to any desired target position. Through control, chimera states become functionally relevant; for example, the controlled position of localized synchrony may encode information and perform computations. Since functional aspects are crucial in (neuro-)biology and technology, the localized synchronization of a chimera state becomes accessible for the development of novel applications. Based on gradient dynamics, our control strategy applies to any suitable observable and can be generalized to arbitrary dimensions. Thus, the applicability of chimera control goes beyond chimera states in non-locally coupled systems.
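
    For context, the following sketch sets up the kind of chimera state being controlled, using the standard non-locally coupled phase-oscillator model with textbook parameter values and the usual chimera-seeding initial condition; the gradient-based control scheme itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 256
x = -np.pi + 2 * np.pi * np.arange(N) / N              # oscillator positions on a ring
A, alpha, omega = 0.995, 1.39, 0.0                     # kernel modulation, phase lag, frequency
G = (1 + A * np.cos(x[:, None] - x[None, :])) / (2 * np.pi)

# Seeding initial condition: nearly uniform phases far from x = 0, scrambled phases near it.
theta = 6.0 * (rng.random(N) - 0.5) * np.exp(-0.76 * x**2)

dt, steps = 0.025, 4000
for _ in range(steps):
    # d(theta_k)/dt = omega - (2*pi/N) * sum_j G_kj * sin(theta_k - theta_j + alpha)
    coupling = np.sum(G * np.sin(theta[:, None] - theta[None, :] + alpha), axis=1)
    theta += dt * (omega - (2 * np.pi / N) * coupling)

# Local order parameter: phase coherence within a 25-site neighbourhood of each oscillator.
z = np.exp(1j * theta)
R = np.abs(np.array([np.mean(z[np.arange(k - 12, k + 13) % N]) for k in range(N)]))
print(f"local coherence across the ring: max {R.max():.2f} (locked group), "
      f"min {R.min():.2f} (drifting group)")
```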

    Converging Neuronal Activity in Inferior Temporal Cortex during the Classification of Morphed Stimuli

    How does the brain dynamically convert incoming sensory data into a representation useful for classification? Neurons in inferior temporal (IT) cortex are selective for complex visual stimuli, but their response dynamics during perceptual classification is not well understood. We studied IT dynamics in monkeys performing a classification task. The monkeys were shown visual stimuli that were morphed (interpolated) between pairs of familiar images. Their ability to classify the morphed images depended systematically on the degree of morph. IT neurons were selected that responded more strongly to one of the two familiar images (the effective image). The responses tended to peak ∼120 ms following stimulus onset with an amplitude that depended almost linearly on the degree of morph. The responses then declined, but remained above baseline for several hundred milliseconds. This sustained component remained linearly dependent on morph level for stimuli more similar to the ineffective image but progressively converged to a single response profile, independent of morph level, for stimuli more similar to the effective image. Thus, these neurons represented the dynamic conversion of graded sensory information into a task-relevant classification. Computational models suggest that these dynamics could be produced by attractor states and firing rate adaptation within the population of IT neurons.
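
    A rough sketch of the interpretation offered in the last sentence (illustrative parameters, not the authors' fitted model): a rate unit with strong positive feedback is driven by a brief input whose amplitude scales with the morph level; the early response is graded with morph level, while the sustained response converges to the same attractor state for all morphs sufficiently close to the effective image.

```python
import numpy as np

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

def response(morph, dt=0.001, t_max=1.0, tau=0.05):
    """Bistable rate unit driven for 0.2 time units by an input proportional to the morph level."""
    r, trace = 0.02, []                                  # start in the low-rate attractor
    for step in range(int(t_max / dt)):
        stim = morph if step * dt < 0.2 else 0.0
        r += dt / tau * (-r + sigmoid(8.0 * r - 4.0 + 4.0 * stim))
        trace.append(r)
    return trace[100], trace[-1]                         # early response vs sustained response

for morph in (0.2, 0.4, 0.6, 0.8, 1.0):
    early, late = response(morph)
    print(f"morph {morph:.1f}: early {early:.2f}, sustained {late:.2f}")
```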