
    Learning Shapes Spontaneous Activity Itinerating over Memorized States

    Learning is a process that shapes a neural dynamical system so that an appropriate output pattern is generated for a given input. Such a memory is often considered to reside in one of the attractors of the neural dynamical system, selected by the initial neural state that the input specifies. Neither the neural activity observed in the absence of inputs nor the changes in activity caused when an input is provided were studied extensively in the past. Recent experimental studies, however, have reported the existence of structured spontaneous neural activity and its changes when an input is provided. Against this background, we propose that memory recall occurs when the spontaneous neural activity changes to an appropriate output activity upon the application of an input, a phenomenon known as a bifurcation in dynamical systems theory. We introduce a reinforcement-learning-based layered neural network model with two synaptic time scales; in this network, input-output relations are successively memorized when the difference between the time scales is appropriate. After the learning process is complete, the neural dynamics are shaped so that they change appropriately with each input. As the number of memorized patterns increases, the spontaneous neural activity generated after learning itinerates over the previously learned output patterns. This theoretical finding shows remarkable agreement with recent experimental reports in which spontaneous neural activity in the visual cortex, in the absence of stimuli, itinerates over the patterns evoked by previously applied signals. Our results suggest that itinerant spontaneous activity is a natural outcome of the successive learning of several patterns and that it facilitates bifurcation of the network when an input is provided.
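
    The model's two ingredients, a reward-gated (reinforcement) synaptic update and two synaptic components evolving on well-separated time scales, can be sketched as follows. This is only a minimal illustration under assumed choices (a node-perturbation-style reinforcement rule, tanh units, and arbitrary layer sizes and time constants), not the exact update rule of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out = 10, 10               # layer sizes (illustrative)
    tau_fast, tau_slow = 10.0, 1000.0  # two synaptic time scales, tau_slow >> tau_fast
    eta = 0.5                          # learning rate of the reward-gated update
    baseline = 0.0                     # running average of the reward

    w_fast = np.zeros((n_out, n_in))   # rapidly changing synaptic component
    w_slow = np.zeros((n_out, n_in))   # slowly consolidating synaptic component

    def learn_step(x, target):
        """One reward-modulated update acting on both synaptic time scales."""
        global baseline
        w = w_fast + w_slow                                   # effective connection weights
        noise = 0.1 * rng.standard_normal(n_out)              # exploratory output perturbation
        y = np.tanh(w @ x) + noise
        reward = -np.mean((y - target) ** 2)                  # scalar reinforcement signal
        dw = eta * (reward - baseline) * np.outer(noise, x)   # node-perturbation-style update
        baseline += 0.05 * (reward - baseline)                # slowly adapting reward baseline
        w_fast[:] += (dw - w_fast) / tau_fast                 # fast: changes and decays quickly
        w_slow[:] += (w_fast - w_slow) / tau_slow             # slow: consolidates the fast trace
        return y, reward
    ```

    In a sketch like this, successive input-output pairs are presented repeatedly; the fast component tracks the current pattern while the slow component accumulates all previously learned ones, which is the kind of separation the abstract attributes to the two synaptic time scales.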

    Computational study of resting state network dynamics

    The aim of this thesis is to show, through a simulation with The Virtual Brain software, the most important properties of brain dynamics during the resting state, that is, when one is not engaged in any specific task and is not subject to any particular stimulus. We begin by explaining what the resting state is through a brief historical review of its discovery, then survey some experimental methods used to analyse brain activity, and then highlight the difference between structural and functional connectivity. Next, we briefly summarize the concepts of dynamical systems theory, which is indispensable for understanding a complex system such as the brain. In the following chapter, through a 'bottom-up' approach, the main structures of the nervous system are described from a biological point of view, from the neuron to the cerebral cortex. All of this is also explained from the point of view of dynamical systems, presenting the pioneering Hodgkin-Huxley model and then the concept of population dynamics. After this preliminary part, the simulation is described in detail. First, more information is given about The Virtual Brain software, the resting-state network model used in the simulation is defined, and the 'connectome' employed is described. The results of the analysis performed on the simulated data are then presented, showing how criticality and noise play a key role in the emergence of this background activity of the brain. These results are compared with the most important recent research in this field, which confirms the findings of our work. Finally, we briefly discuss the consequences that a full understanding of the resting-state phenomenon, and the possibility of virtualizing brain activity, would have for medicine and clinical practice.
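
    The analysis pipeline described here, noisy node dynamics coupled through a structural connectome from which a functional connectivity matrix is estimated, can be illustrated with a minimal stand-alone sketch (plain NumPy rather than The Virtual Brain's own API; the network size, node model, coupling strength, and noise level are all arbitrary illustrative choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_nodes = 76                               # number of cortical regions (illustrative)
    sc = rng.random((n_nodes, n_nodes))        # stand-in for a structural connectivity matrix
    sc = (sc + sc.T) / 2                       # symmetrize
    np.fill_diagonal(sc, 0.0)

    dt, steps = 0.1, 20000                     # integration step and length (arbitrary units)
    coupling, noise_sigma = 0.05, 0.3          # global coupling strength and noise amplitude

    x = rng.standard_normal(n_nodes)           # node activity
    trace = np.empty((steps, n_nodes))

    for t in range(steps):
        # each node relaxes toward zero, receives input through the structural links,
        # and is driven by independent noise (a crude stand-in for a neural mass model)
        drift = -x + coupling * sc @ np.tanh(x)
        x = x + dt * drift + np.sqrt(dt) * noise_sigma * rng.standard_normal(n_nodes)
        trace[t] = x

    # functional connectivity: correlations of the simulated activity,
    # to be compared with the structural connectivity that generated it
    fc = np.corrcoef(trace.T)
    ```

    Comparing the resulting fc matrix with sc while sweeping the coupling and noise parameters is the kind of analysis used to argue that noise and proximity to criticality shape resting-state activity.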

    How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. A model-based argumentation

    This paper addresses two questions in the context of neuronal network dynamics, using methods from dynamical systems theory and statistical physics: (i) how to characterize the statistical properties of the sequences of action potentials ("spike trains") produced by neuronal networks; and (ii) what are the effects of synaptic plasticity on these statistics? We introduce a framework in which spike trains are associated with a coding of membrane potential trajectories and, in important explicit examples (the so-called gIF models), actually constitute a symbolic coding. On this basis, we use the thermodynamic formalism from ergodic theory to show how Gibbs distributions are natural probability measures for describing the statistics of spike trains, given the empirical averages of prescribed quantities. As a second result, we show that Gibbs distributions naturally arise when considering "slow" synaptic plasticity rules, in which the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics.
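
    The role played by Gibbs distributions, the probability measure that matches the empirical averages of prescribed quantities, can be illustrated with a toy maximum-entropy (Ising-style) fit over binary spike words. This is only a caricature of the general idea, not the paper's gIF models or its thermodynamic-formalism construction, and all sizes and rates below are assumptions:

    ```python
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(2)
    n = 5                                          # tiny network: all 2^n spike words can be enumerated

    # fake spike raster standing in for recorded data (1000 time bins, n neurons)
    spikes = (rng.random((1000, n)) < 0.3).astype(float)
    target_rate = spikes.mean(axis=0)                              # empirical firing rates
    target_pair = np.triu((spikes.T @ spikes) / len(spikes), k=1)  # empirical pairwise averages

    words = np.array(list(product([0, 1], repeat=n)), dtype=float) # all possible spike words

    h = np.zeros(n)          # fields conjugate to the firing rates
    J = np.zeros((n, n))     # couplings conjugate to the pairwise averages

    for _ in range(2000):    # gradient ascent on the likelihood of the Gibbs distribution
        energy = words @ h + np.einsum('ki,ij,kj->k', words, np.triu(J, 1), words)
        p = np.exp(energy)
        p /= p.sum()                                               # Gibbs distribution over words
        model_rate = p @ words
        model_pair = np.triu((words * p[:, None]).T @ words, k=1)
        h += 0.1 * (target_rate - model_rate)                      # push model averages toward data
        J += 0.1 * (target_pair - model_pair)
    ```

    The fitted distribution is the maximum-entropy measure consistent with the chosen empirical averages, which is the elementary finite-size analogue of the Gibbs measures the paper constructs for spike-train statistics.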

    Synchronization and locking in oscillators with flexible periods

    Upon interaction with a stimulus sequence, an oscillator may assume the stimulus's period via a process called entrainment. Standard models of entrainment assume that the oscillator has a fixed natural period and, thus, a limited range of periods to which it can entrain. However, experiments have shown that some oscillating systems have flexible periods; that is, the period of the oscillator can be changed by external stimuli, and this changed period persists when the stimulus is discontinued. Studying this type of coordination, Loehr et al. (2011) showed that the synchronization of pianists with a metronome can be captured by a nonlinear oscillator model formulated as a circle map of phase and period with sinusoidal coupling terms. Here we introduce two variants, termed the multiplicative and additive forced oscillator models, named for how they update the period. Unlike the Loehr et al. model, these models include a preferred period, since most biological oscillating systems settle to a fixed natural period when not experiencing driving or damping forces. This study focuses on the stability of points of N:M locking, a complex type of entrainment in which the phase of the model rotates N times in response to M stimuli. The locking types investigated here are 1:1, 1:2, and 2:3, along with their reciprocals. We identify numerous parameter regimes of multi-stability and show how such regions evolve with changes in the elasticity of the preferred period. Such multi-stability is generally not possible without a malleable period. The basins of attraction of the various types of N:M locking are investigated, with observations of fractal behavior and remarks on how the domains of attraction depend on the coupling and elasticity parameters. Finally, we compare and contrast the multiplicative and additive models with other models of synchronization and beat-keeping.
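
    A circle map of phase and period with sinusoidal coupling and an elastically preferred period can be sketched as follows. The specific functional forms (an additive-style period update), the parameter values, and the stimulus period are all assumptions for illustration, not the exact equations of the paper's multiplicative or additive models:

    ```python
    import numpy as np

    def simulate(phi0=0.1, p0=1.0, p_pref=1.0, T=0.75,
                 a=0.6, b=0.3, elasticity=0.2, n_steps=500):
        """Iterate a phase/period circle map driven by stimuli of period T.

        phi: phase of the oscillator at each stimulus onset (mod 1)
        p:   current period of the oscillator, pulled back toward p_pref
        """
        phi, p = phi0, p0
        history = []
        for _ in range(n_steps):
            # phase advances by the ratio of stimulus period to oscillator period,
            # corrected by a sinusoidal coupling term
            phi_next = (phi + T / p - (a / (2 * np.pi)) * np.sin(2 * np.pi * phi)) % 1.0
            # the period adapts in response to the phase error, but is elastically
            # pulled toward the preferred period; elasticity = 0 gives a fully flexible period
            p_next = p - (b / (2 * np.pi)) * np.sin(2 * np.pi * phi) * p \
                     + elasticity * (p_pref - p)
            phi, p = phi_next, p_next
            history.append((phi, p))
        return np.array(history)

    # example: inspect whether the trajectory settles into a locked state
    traj = simulate()
    ```

    Sweeping the stimulus period, the coupling strengths, and the elasticity while recording the asymptotic rotation number is how N:M locking regions and their basins of attraction would be mapped out in a study of this kind.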

    Nonlinear dynamics of pattern recognition and optimization

    We associate learning in living systems with the shaping of the velocity vector field of a dynamical system in response to external, generally random, stimuli. We consider various approaches to implementing a system that is able to adapt the whole vector field, rather than just parts of it, which is a drawback of the most common current learning systems, artificial neural networks. This leads us to propose the mathematical concept of self-shaping dynamical systems. At the start there is an empty phase space with no attractors and thus a zero velocity vector field. Upon receiving the random stimulus, the vector field deforms and eventually becomes smooth and deterministic, despite the random nature of the applied force, while the phase space develops various geometrical objects. We consider the simplest of these, gradient self-shaping systems, whose vector field is the gradient of some energy function that under certain conditions develops into the multi-dimensional probability density distribution of the input. We explain how self-shaping systems are relevant to artificial neural networks. Firstly, we show that they can potentially perform pattern recognition tasks typically implemented by Hopfield neural networks, but without any supervision, online, and without developing spurious minima in the phase space. Secondly, they can reconstruct the probability density distribution of input signals, like probabilistic neural networks, but without requiring new training patterns to enter the network as new hardware units. We therefore regard self-shaping systems as a generalisation of the neural network concept, achieved by abandoning the "rigid units - flexible couplings" paradigm and making the vector field fully flexible and amenable to external force. It is not clear how such systems could be implemented in hardware, and so this new concept presents an engineering challenge. It could also become an alternative paradigm for the modelling of both living and learning systems. Mathematically, it is interesting to ask how a self-shaping system could develop non-trivial objects in the phase space, such as periodic orbits or chaotic attractors. We investigate how a delayed vector field could form such objects. We show that this method produces chaos in a class of systems that have very simple dynamics in the non-delayed case. We also demonstrate the coexistence of bounded and unbounded solutions depending on the initial conditions and the value of the delay. Finally, we speculate about how such a method could be used in global optimization.
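
    The gradient self-shaping idea, an energy landscape carved online by incoming stimuli so that its minima sit at the observed patterns, can be caricatured with a simple toy. This is only a kernel-density-style sketch under assumed Gaussian bumps and Euler integration, not the thesis's actual construction:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    dim = 2                 # dimension of the phase space (illustrative)
    sigma = 0.5             # width of the bump carved around each stimulus (assumption)
    stimuli = []            # patterns received so far; initially the landscape is flat

    def receive_stimulus(x):
        """Each random stimulus deepens the energy landscape around itself."""
        stimuli.append(np.asarray(x, dtype=float))

    def grad_energy(x):
        """Gradient of E(x) = -sum_k exp(-|x - s_k|^2 / (2 sigma^2)).

        With no stimuli the field is zero everywhere (empty phase space);
        each stimulus adds a basin of attraction centred on itself.
        """
        g = np.zeros(dim)
        for s in stimuli:
            d = x - s
            g += d / sigma**2 * np.exp(-d @ d / (2 * sigma**2))
        return g

    def recall(x0, dt=0.05, steps=400):
        """Follow the negative-gradient flow; the state settles into a nearby memorized pattern."""
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            x = x - dt * grad_energy(x)
        return x

    # shape the field with a few random stimuli, then recall from a noisy cue
    for _ in range(3):
        receive_stimulus(rng.uniform(-2, 2, size=dim))
    print(recall(stimuli[0] + 0.3 * rng.standard_normal(dim)))
    ```

    Each new stimulus reshapes the whole field at once, which is the sense in which such a system is "self-shaping" rather than trained through a fixed set of couplings between rigid units.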

    Deformability-induced effects of red blood cells in flow

    To maintain a proper state of health in the human body, a steady transport of blood is necessary. As the main cellular constituent of the blood suspension, red blood cells (RBCs) govern the physical properties of the entire blood flow. Remarkably, RBCs can adapt their shape to the prevailing surrounding flow conditions, ultimately allowing them to pass through narrow capillaries smaller than their equilibrium diameter. However, several diseases, such as diabetes mellitus or malaria, are linked to an alteration of this deformability. In this work, we investigate the shapes of RBCs in microcapillary flow in vitro, culminating in a shape phase diagram of two distinct, hydrodynamically induced shapes, the croissant and the slipper. Due to the simplicity of the RBC structure, the obtained phase diagram leads to further insights into the complex interaction between deformable objects in general, such as vesicles, and the surrounding fluid. Furthermore, the phase diagram is highly correlated with the deformability of the RBCs and thus represents a cornerstone of a potential diagnostic tool for detecting pathological blood parameters. To further promote this idea, we train a convolutional neural network (CNN) to classify the distinct RBC shapes. The CNN is benchmarked against manual classification of the cellular shapes and yields very good performance. In the second part, we investigate an effect that is associated with the deformability of RBCs, the lingering phenomenon. Lingering events may occur at bifurcation apices and are characterized by a straddling of RBCs at an apex, which has been shown in silico to cause a piling up of subsequent RBCs. Here, we provide insight into the dynamics of such lingering events in vivo, which we relate to the partitioning of RBCs at bifurcating vessels in the microvasculature. Specifically, the lingering of RBCs causes an increased intercellular distance to RBCs further downstream and thus a reduced hematocrit.
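
    The shape-classification step, a convolutional network that labels single-cell images as croissants or slippers, could look roughly like the following. This is a minimal PyTorch sketch; the architecture, crop size, and class setup are assumptions, not the network actually used in the thesis:

    ```python
    import torch
    import torch.nn as nn

    class RBCShapeCNN(nn.Module):
        """Tiny CNN mapping a grayscale cell image to shape-class logits."""
        def __init__(self, n_classes=3):   # e.g. croissant, slipper, other (assumption)
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):              # x: (batch, 1, H, W) grayscale crops
            return self.classifier(self.features(x).flatten(1))

    model = RBCShapeCNN()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # one illustrative training step on a dummy batch of 64x64 crops
    images = torch.randn(8, 1, 64, 64)
    labels = torch.randint(0, 3, (8,))
    loss = criterion(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ```

    Manually labelled cell images on a held-out set would then provide the benchmark against which such a network's predictions are compared, as described in the abstract.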

    Behavioural robustness and the distributed mechanisms hypothesis

    A current challenge in neuroscience and systems biology is to better understand the properties that allow organisms to exhibit and sustain appropriate behaviours despite the effects of perturbations (behavioural robustness). There are still significant theoretical difficulties in this endeavour, mainly due to the context-dependent nature of the problem. Biological robustness is generally treated in the literature as a property that emerges from the internal structure of organisms, rather than as a dynamical phenomenon involving agent-internal controls, the organism's body, and the environment. Our hypothesis is that the capacity for behavioural robustness is rooted in dynamical processes that are distributed between agent 'brain', body, and environment, rather than warranted exclusively by organisms' internal mechanisms. Distribution is operationally defined here on the basis of perturbation analyses. Evolutionary Robotics (ER) techniques are used to construct four computational models for studying behavioural robustness from a systemic perspective. Dynamical systems theory provides the conceptual framework for these investigations. The first model evolves situated agents in a goal-seeking scenario in the presence of neural noise perturbations. Results suggest that evolution implicitly selects neural systems that are noise-resistant during coupled behaviour by concentrating search in regions of the fitness landscape that retain functionality for goal approaching. The second model evolves situated, dynamically limited agents exhibiting minimal cognitive behaviour (a categorization task). Results indicate a small but significant tendency toward better performance under most types of perturbations in agents showing greater cognitive-behavioural dependency on their environments. The third model evolves experience-dependent robust behaviour in embodied, one-legged walking agents. Evidence suggests that robustness is rooted in both internal and external dynamics, but robust motion always emerges from the system-in-coupling. The fourth model implements a historically dependent, mobile-object tracking task under sensorimotor perturbations. Results indicate two different modes of distribution: one in which the inner controls necessarily depend on a set of specific environmental factors to exhibit the behaviour, and are therefore more vulnerable to perturbations of that set, and another in which these factors are equally sufficient for the behaviour; vulnerability to perturbations depends on the particular mode of distribution. In contrast to most existing approaches to the study of robustness, this thesis argues that behavioural robustness is better understood in the context of agent-environment dynamical couplings than in terms of internal mechanisms alone. Such couplings, however, are not always the full determinants of robustness. Challenges and limitations of our approach are also identified for future studies.
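
    The methodology behind the first model, evolving a small continuous-time neural controller for a goal-seeking agent while neural noise is injected during evaluation, can be sketched roughly as follows. The toy one-dimensional scenario, the CTRNN size, and the evolutionary settings are arbitrary illustrative choices, not taken from the thesis:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    GENOME = 3 * 3 + 3 + 3                          # weights, biases, time constants of a tiny CTRNN

    def decode(genome):
        w = genome[:9].reshape(3, 3) * 5.0          # connection weights, roughly in [-5, 5]
        b = genome[9:12] * 5.0                      # biases
        tau = 0.5 + np.abs(genome[12:]) * 2.0       # positive time constants
        return w, b, tau

    def evaluate(genome, noise_sigma=0.05, dt=0.05, steps=400):
        """Fitness of a CTRNN-driven agent approaching a goal, with neural noise injected."""
        w, b, tau = decode(genome)
        y = np.zeros(3)                              # neuron states
        pos, goal = 0.0, 1.0                         # 1-D agent position and goal (toy scenario)
        for _ in range(steps):
            inputs = np.array([goal - pos, 0.0, 0.0])        # sensed distance fed into neuron 0
            act = 1.0 / (1.0 + np.exp(-(y + b)))             # sigmoid activations
            dy = (-y + w @ act + inputs) / tau
            y = y + dt * dy + noise_sigma * np.sqrt(dt) * rng.standard_normal(3)  # perturbation
            pos += dt * (2.0 * act[-1] - 1.0)                # last neuron drives the motor
        return -abs(goal - pos)                              # closer to the goal = higher fitness

    # minimal generational loop: keep the best controllers, mutate them, repeat
    pop = rng.standard_normal((20, GENOME))
    for gen in range(50):
        fitness = np.array([evaluate(g) for g in pop])
        parents = pop[np.argsort(fitness)[-5:]]
        pop = np.repeat(parents, 4, axis=0) + 0.1 * rng.standard_normal((20, GENOME))
    ```

    Because the noise is present during every evaluation, selection can only reward genotypes whose coupled agent-environment dynamics keep reaching the goal despite the perturbation, which is the sense in which robustness is probed through perturbation analysis here.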