10 research outputs found

    Working memory dynamics and spontaneous activity in a flip-flop oscillations network model with a Milnor attractor

    Many cognitive tasks require the ability to simultaneously maintain and manipulate several chunks of information. Numerous neurobiological observations have shown that this ability, known as working memory, is associated with both a slow oscillation (leading to up and down states) and the presence of the theta rhythm. Furthermore, during the resting state, the spontaneous activity of the cortex exhibits exquisite spatiotemporal patterns that share features with those observed during specific memory tasks. Here, to elucidate the neural underpinnings of working memory within these complicated dynamics, we propose a phenomenological network model with biologically plausible neural dynamics and recurrent connections. Each unit embeds an internal oscillation at the theta rhythm, which can be triggered during the up state of the membrane potential. As a result, the resting state of a single unit is no longer a classical fixed-point attractor but rather a Milnor attractor, and multiple oscillations appear in the dynamics of the coupled system. In conclusion, the interplay between the up and down states and the theta rhythm offers high potential for working memory operation, associated with complexity in spontaneous activity.
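The flip-flop dynamics described above can be sketched numerically. The following toy simulation is an illustrative sketch, not the paper's actual model, and all parameter values are assumptions: a single bistable (double-well) unit whose up state gates an internal theta-band oscillation.

```python
import numpy as np

def simulate_unit(T=2000, dt=1e-3, theta_hz=6.0, tau=0.1, drive=0.0, seed=0):
    """Toy flip-flop unit: a noisy double-well membrane variable v whose
    up state gates an internal theta-band oscillation (illustrative only)."""
    rng = np.random.default_rng(seed)
    v = -1.0       # down state near v = -1, up state near v = +1
    phase = 0.0    # phase of the internal theta oscillation
    out = np.empty(T)
    for t in range(T):
        # double-well drift v - v^3 yields two stable membrane states
        v += dt / tau * (v - v**3 + drive) + np.sqrt(dt) * 0.3 * rng.standard_normal()
        phase = (phase + dt * 2 * np.pi * theta_hz) % (2 * np.pi)
        gate = 1.0 / (1.0 + np.exp(-10 * v))  # ~1 when up, ~0 when down
        out[t] = v + 0.5 * gate * np.sin(phase)  # theta rides on the up state only
    return out

trace = simulate_unit()
print(trace.shape)
```

In this sketch the up and down states come from the double-well drift, while the theta rhythm is only expressed when the sigmoidal gate opens in the up state, mirroring the abstract's coupling of slow oscillation and theta.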

    Robust sequence storage in bistable oscillators

    No full text
    Nanoscale devices, such as magnetic tunnel junctions, have rich dynamics, including oscillatory ones. Such components may relieve us of the burden of integrating nonlinear equations. We propose a phenomenological model of bistable oscillators and show that, in a network, it achieves robust storage of sequences.
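The idea of storing a sequence in a network of bistable units can be illustrated with a classical asymmetric Hebbian rule that maps each stored pattern onto the next. This is a stand-in sketch with assumed sizes and parameters; the abstract's bistable-oscillator dynamics are richer than the sign units used here.

```python
import numpy as np

def store_sequence(patterns):
    """Asymmetric Hebbian rule: the weight matrix maps each stored pattern
    onto the next one in the cycle (a classical sequence-storage scheme)."""
    P, N = patterns.shape
    W = np.zeros((N, N))
    for p in range(P):
        W += np.outer(patterns[(p + 1) % P], patterns[p])
    return W / N

def step(W, state):
    # each unit is bistable: it settles to +1 or -1 depending on its net input
    return np.sign(W @ state)

rng = np.random.default_rng(1)
patterns = rng.choice([-1.0, 1.0], size=(3, 100))  # a cycle of 3 random patterns
W = store_sequence(patterns)

state = patterns[0].copy()
for p in range(1, 7):  # run through the stored cycle twice
    state = step(W, state)
    overlap = float(state @ patterns[p % 3]) / 100
    print(f"step {p}: overlap with expected pattern = {overlap:.2f}")
```

Each update pushes the network from one stored pattern to the next, so the sequence is replayed as an attracting cycle; robustness comes from the sign nonlinearity absorbing crosstalk between patterns.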

    Classes of neuronal dynamics and experience-structured correlations in the visual cortex.

    No full text
    Neuronal activity is often characterized in cognitive neuroscience by the evoked response, but most of the energy used by the brain is devoted to sustaining ongoing dynamics in cortical networks. A combination of classification algorithms (K-means, hierarchical clustering, self-organizing maps) is applied to intracellular recordings from the primary visual cortex of the cat to define classes of neuronal dynamics and to compare them with the activity evoked by a visual stimulus. These dynamics can be studied with simplified models (FitzHugh-Nagumo, hybrid dynamical systems, Wilson-Cowan), for which an analysis is presented. Finally, with simulations of networks composed of columns of spiking neurons, we study the ongoing dynamics in a model of the primary visual cortex and their effect on the response evoked by a stimulus. After a learning period during which visual stimuli are presented, waves of depolarization propagate through the network. The study of correlations in this network shows that the ongoing dynamics reflect the functional properties acquired during the learning period.
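The FitzHugh-Nagumo model mentioned among the simplified systems can be integrated in a few lines. Below is a minimal Euler sketch; the parameter values are standard textbook choices, not taken from the thesis.

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, T=200.0, dt=0.01):
    """Forward-Euler integration of the FitzHugh-Nagumo model:
    fast membrane variable v, slow recovery variable w."""
    n = int(T / dt)
    v, w = -1.0, 1.0
    vs = np.empty(n)
    for i in range(n):
        dv = v - v**3 / 3 - w + I      # cubic fast dynamics
        dw = eps * (v + a - b * w)     # slow linear recovery
        v += dt * dv
        w += dt * dw
        vs[i] = v
    return vs

vs = fitzhugh_nagumo()
print(vs.min(), vs.max())
```

With a sustained input in this range the fixed point is unstable and the trajectory settles onto a relaxation-oscillation limit cycle, the kind of simplified dynamics the abstract compares against the recorded classes.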

    Qualitative modeling of chaotic logical circuits and walking droplets: a dynamical systems approach

    Logical circuits and wave-particle duality were studied throughout most of the 20th century. During the current century, scientists have begun thinking differently about these well-studied systems; in particular, there has been great interest in chaotic logical circuits and hydrodynamic quantum analogs. Traditional logical circuits are designed with minimal uncertainty. While this is straightforward to achieve with electronic logic, other logic families, such as fluidic, chemical, and biological ones, naturally exhibit uncertainties due to their inherent nonlinearity. In recent years, engineers have been designing electronic logical systems via chaotic circuits. While traditional Boolean circuits have easily determined outputs, which renders dynamical models unnecessary, chaotic logical circuits employ components that behave erratically for certain inputs. There has been an equally dramatic paradigm shift in the study of wave-particle systems. In recent years, experiments with droplets (called walkers) bouncing on a vibrating fluid bath have shown that quantum analogs can be studied at the macroscale. These analogs help us ask questions about quantum mechanics that would otherwise have been inaccessible, and they may eventually reveal unforeseen properties of quantum mechanics that close the gap between philosophical interpretations and scientific results. Both chaotic logical circuits and walking droplets have been modeled as differential equations. While many of these models reproduce the behavior observed in experiments very well, the equations are often too complex to analyze in detail, and sometimes too complex even for tractable numerical solution. These problems can be simplified if the models are reduced to discrete dynamical systems. Fortunately, both systems are naturally time-discrete: for the circuits, the states change very rapidly, so the information during the transition is unimportant; and for the walkers, the position at which a wave is produced matters, but the dynamics of the droplets in the air do not. This dissertation is an amalgam of results on chaotic logical circuits and walking droplets, in the form of experimental investigations, mathematical modeling, and dynamical systems analysis. Furthermore, it makes connections between the two topics and the various scientific disciplines involved in their study.
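The reduction of chaotic logic to a discrete dynamical system can be illustrated with a threshold scheme in the spirit of chaotic computing: encode the two binary inputs as perturbations of a chaotic map's state, iterate the map once, and threshold the result. The parameter values below are chosen here to realize a NAND gate and are illustrative, not taken from the dissertation.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, fully chaotic at r = 4."""
    return r * x * (1 - x)

def chaotic_gate(x1, x2, x0=0.3, delta=0.3, thresh=0.6):
    """Threshold-based chaotic logic: the binary inputs shift the initial
    state, one chaotic iteration amplifies the shift, and a threshold
    reads out the logical result. These parameters yield NAND."""
    x = x0 + delta * (x1 + x2)
    return 1 if logistic(x) > thresh else 0

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, chaotic_gate(a, b))
# 0 0 1 / 0 1 1 / 1 0 1 / 1 1 0  -- a NAND truth table
```

Since NAND is universal, tuning only (x0, delta, thresh) lets one and the same chaotic element morph between logic functions, which is the appeal of such circuits; the continuous transient between states is exactly the information the discrete reduction discards.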

    Interpreting multi-stable behaviour in input-driven recurrent neural networks

    Recurrent neural networks (RNNs) are computational models inspired by the brain. Although RNNs stand out as state-of-the-art machine learning models for challenging tasks such as speech recognition, handwriting recognition, and language translation, they are plagued by the so-called vanishing/exploding gradient issue, which prevents training RNNs to learn long-term dependencies in sequential data. Moreover, these models suffer from a problem of interpretability, known as the "black-box issue" of RNNs. We attempt to open the black box by developing a mechanistic interpretation of errors occurring during computation. We do this from a dynamical systems theory perspective, specifically building on the notion of excitable network attractors. Our methodology is effective at least for those tasks where a number of attractors, and a switching pattern between them, must be learned. RNNs can be seen as massively large nonlinear dynamical systems driven by external inputs. When investigating RNNs analytically, the literature often neglects the input-driven property or replaces it with tight constraints on the input driving the dynamics, which do not match the reality of RNN applications. To bridge this gap, we frame RNN dynamics driven by generic input sequences in the context of nonautonomous dynamical systems theory. This led us to inquire deeply into a fundamental principle established for RNNs known as the echo state property (ESP). In particular, we argue that input-driven RNNs can be reliable computational models even without satisfying the classical ESP formulation. We prove a form of input-driven fixed-point theorem and exploit it to (i) demonstrate the existence and uniqueness of a globally attracting solution for strongly (in amplitude) input-driven RNNs, (ii) deduce the existence of multiple responses for certain input signals, which can be reliably exploited for computational purposes, and (iii) study the stability of attracting solutions with respect to input sequences. Finally, we highlight the active role of the input in determining qualitative changes in RNN dynamics, e.g. the number of stable responses, in contrast to the commonly studied qualitative changes due to variations of model parameters.
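The classical ESP intuition, that a contractive input-driven RNN forgets its initial condition and "echoes" the input, can be checked empirically with a standard echo state network update. This is a minimal sketch; the reservoir size, spectral radius, and input length are illustrative choices, not values from the thesis.

```python
import numpy as np

def reservoir_run(W, Win, inputs, x0, alpha=1.0):
    """Leaky-tanh reservoir driven by a scalar input sequence
    (the standard echo state network state update)."""
    x = x0.copy()
    for u in inputs:
        x = (1 - alpha) * x + alpha * np.tanh(W @ x + Win * u)
    return x

rng = np.random.default_rng(0)
N = 100
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.8
Win = rng.standard_normal(N)
inputs = rng.standard_normal(500)

# drive the same reservoir with the same input from two random initial states
xa = reservoir_run(W, Win, inputs, rng.standard_normal(N))
xb = reservoir_run(W, Win, inputs, rng.standard_normal(N))
print(np.linalg.norm(xa - xb))  # small: both trajectories echo the input
```

The two trajectories converge to essentially the same state, so the final state is a function of the input history alone. The thesis's point is subtler: useful input-driven computation can survive even when this uniform forgetting fails and several stable responses coexist for the same input.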

    A Unified Cognitive Model of Visual Filling-In Based on an Emergic Network Architecture

    The Emergic Cognitive Model (ECM) is a unified computational model of visual filling-in based on the Emergic Network architecture. The Emergic Network was designed to help realize systems undergoing continuous change. In this thesis, eight different filling-in phenomena are demonstrated under a regime of continuous eye movement (and under static eye conditions as well). ECM indirectly demonstrates the power of unification inherent in Emergic Networks when cognition is decomposed according to finer-grained functions supporting change. These can interact to raise additional emergent behaviours via cognitive re-use, hence the Emergic prefix throughout. Nevertheless, the model is robust and parameter-free. Differential re-use occurs in the nature of the model's interaction with a particular testing paradigm. ECM has a novel decomposition due to the requirements of handling motion and of supporting unified modelling via finer functional grains; the breadth of phenomenal behaviour covered largely lends credence to this novel decomposition. The Emergic Network architecture is a hybrid between classical connectionism and classical computationalism that facilitates the construction of unified cognitive models. It helps cut functionalism into finer grains distributed over space (by harnessing massive recurrence) and over time (by harnessing continuous change), yet simplifies by using standard computer code to focus on the interaction of information flows. Thus, while the structure of the network looks neurocentric, the dynamics are best understood in flowcentric terms. Surprisingly, dynamical systems analysis (as usually understood) is not involved; an Emergic Network is engineered much like straightforward software or hardware systems that deal with continuously varying inputs. Ultimately, this thesis addresses the problem of reduction and induction over complex systems, and the Emergic Network architecture is merely a tool to assist in this epistemic endeavour. ECM is strictly a sensory model, apart from perception, yet it is informed by phenomenology. It addresses the attribution problem of how much of a phenomenon is best explained at a sensory level of analysis rather than at a perceptual one. As the causal information flows are stable under eye movement, we hypothesize that they are the locus of consciousness, howsoever it is ultimately realized.