
    Cortical free association dynamics: distinct phases of a latching network

    A Potts associative memory network has been proposed as a simplified model of macroscopic cortical dynamics, in which each Potts unit stands for a patch of cortex that can be activated in one of S local attractor states. The internal neuronal dynamics of the patch is not described by the model; rather, it is subsumed into an effective description in terms of graded Potts units, with adaptation effects both specific to each attractor state and generic to the patch. If each unit, or patch, receives effective (tensor) connections from C other units, the network has been shown to be able to store a large number p of global patterns, or network attractors, each with a fraction a of the units active, where the critical load p_c scales roughly like p_c ~ (C S^2)/(a ln(1/a)) (if the patterns are randomly correlated). Interestingly, after retrieving an externally cued attractor, the network can continue jumping, or latching, from attractor to attractor, driven by adaptation effects. The occurrence and duration of latching dynamics are found through simulations to depend critically on the strength of the local attractor states, expressed in the Potts model by a parameter w. Here we describe, first with simulations and then analytically, the boundaries between the distinct phases of no latching, transient latching, and sustained latching, deriving a phase diagram in the (w, T) plane, where T parametrizes thermal noise effects. Implications for real cortical dynamics are briefly reviewed in the conclusions.
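
    As a quick illustration of the scaling quoted above, here is a minimal sketch in Python that simply evaluates p_c ~ (C S^2)/(a ln(1/a)) for a few example parameter settings; the parameter values are arbitrary choices for illustration, not taken from the paper, and the proportionality constant is omitted.

        import numpy as np

        def potts_capacity_estimate(C, S, a):
            """Rough critical load p_c ~ C * S^2 / (a * ln(1/a)), as quoted in
            the abstract; the proportionality constant is omitted."""
            return C * S**2 / (a * np.log(1.0 / a))

        # Example parameters (illustrative only):
        for C, S, a in [(150, 7, 0.25), (300, 7, 0.25), (150, 11, 0.25)]:
            p_c = potts_capacity_estimate(C, S, a)
            print(f"C={C:4d}  S={S:2d}  a={a:.2f}  ->  p_c ~ {p_c:,.0f}")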

    Exploring Language Mechanisms: The Mass-Count Distinction and The Potts Neural Network

    The aim of this thesis is to explore language mechanisms in two respects: first, the statistical properties of syntax and semantics, and second, the neural mechanisms that could be of possible use in trying to understand how the brain learns those particular statistical properties. In the first part of the thesis (part A) we focus our attention on a detailed statistical study of the syntax and semantics of the mass-count distinction in nouns. We collected a database of how 1,434 nouns are used with respect to the mass-count distinction in six languages; additional informants characterised the semantics of the underlying concepts. Results indicate only weak correlations between semantics and syntactic usage. Rather than being bimodal, the classification is a graded distribution, and it is similar across languages; however, syntactic classes do not map onto each other, nor do they reflect, beyond weak correlations, semantic attributes of the concepts. These findings are in line with the hypothesis that much of the mass/count syntax emerges from language- and even speaker-specific grammaticalisation. Further, in chapter 3 we test the ability of a simple neural network to learn the syntactic and semantic relations of nouns, in the hope that it may throw some light on the challenges in modelling the acquisition of the mass-count syntax. It is shown that even though a simple self-organising neural network is insufficient to learn a mapping implementing a syntactic-semantic link, the network is nevertheless able to extract the concept of 'count', and to some extent that of 'mass' as well, without any explicit definition, from both the syntactic and the semantic data. The second part of the thesis (part B) is dedicated to studying the properties of the Potts neural network. The Potts neural network, with its adaptive dynamics, represents a simplified model of cortical mechanisms. Among other cognitive phenomena, it is intended to model language production by utilising the latching behaviour seen in the network. We expect that a model of language processing should robustly handle various syntactic-semantic correlations amongst the words of a language. With this aim, we test the effect on the storage capacity of the Potts network when the memories stored in it share non-trivial correlations. The increase in interference between stored memories due to correlations is studied, along with modifications of the learning rule that reduce the interference. We find that when strongly correlated memories are incorporated in the storage capacity definition, the network is able to regain its storage capacity for low sparsity. Strong correlations also affect the latching behaviour of the Potts network, leaving the network unable to latch from one memory to another. However, latching is shown to be restored by modifying the learning rule. Lastly, we look at another feature of the Potts neural network: the indication that it may exhibit spin-glass characteristics. The network is consistently shown to exhibit multiple stable degenerate energy states other than those of the pure memories. This is tested for different degrees of correlation in the patterns, low and high connectivity, and different levels of global and local noise. We state some of the implications that the spin-glass nature of the Potts neural network may have on language processing.
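
    The abstract does not specify the network architecture used in chapter 3; as a purely illustrative sketch of the kind of simple self-organising network mentioned, the following trains a tiny one-dimensional Kohonen self-organising map on made-up binary syntactic-usage vectors (the nouns, features, and parameter values are all invented for the example).

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy data: each noun as a binary vector of hypothetical syntactic-usage
        # features, e.g. [takes plural, takes "much", takes numerals, takes "a/an"].
        nouns = {
            "dog":       [1, 0, 1, 1],
            "chair":     [1, 0, 1, 1],
            "water":     [0, 1, 0, 0],
            "sand":      [0, 1, 0, 0],
            "furniture": [0, 1, 0, 0],
            "idea":      [1, 0, 1, 1],
        }
        X = np.array(list(nouns.values()), dtype=float)

        n_units, n_feat = 6, X.shape[1]     # a small 1-D map of 6 units
        W = rng.random((n_units, n_feat))

        def train_som(W, X, epochs=200, lr0=0.5, sigma0=2.0):
            for t in range(epochs):
                lr = lr0 * (1 - t / epochs)                  # decaying learning rate
                sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighbourhood
                for x in X[rng.permutation(len(X))]:
                    bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
                    d = np.abs(np.arange(n_units) - bmu)            # distance on the map
                    h = np.exp(-d**2 / (2 * sigma**2))              # neighbourhood kernel
                    W += lr * h[:, None] * (x - W)                  # pull weights toward x
            return W

        W = train_som(W, X)
        for name, x in nouns.items():
            bmu = np.argmin(np.linalg.norm(W - np.array(x, float), axis=1))
            print(f"{name:10s} -> map unit {bmu}")

    In this toy setting the count-like and mass-like nouns end up on different map units without any labels, a crude analogue of a self-organising network extracting the 'count'/'mass' contrast implicitly.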

    Dynamic Control of Network Level Information Processing through Cholinergic Modulation

    Acetylcholine (ACh) release is a prominent neurochemical marker of arousal state within the brain. Changes in ACh are associated with changes in neural activity and information processing, though its exact role and the mechanisms through which it acts are unknown. Here I show that the dynamic changes in ACh levels that are associated with arousal state control the information processing functions of networks through its effects on the degree of Spike-Frequency Adaptation (SFA), an activity-dependent decrease in excitability, synchronizability, and neuronal resonance displayed by single cells. Using numerical modeling, I develop mechanistic explanations for how control of these properties shifts network activity from a stable high frequency spiking pattern to a traveling wave of activity. This transition mimics the change in brain dynamics seen between high ACh states, such as waking and Rapid Eye Movement (REM) sleep, and low ACh states such as Non-REM (NREM) sleep. A corresponding, and related, transition in network-level memory recall also occurs as ACh modulates neuronal SFA. When ACh is at its highest levels (waking), all memories are stably recalled; as ACh is decreased (REM), weakly encoded memories in the model destabilize while strong memories remain stable. At levels of ACh that match Slow Wave Sleep (SWS), no encoded memories are stably recalled. This results from a competition between SFA and excitatory input strength and provides a mechanism for neural networks to control the representation of underlying synaptic information. Finally, I show that during low ACh conditions, oscillatory dynamics allow external inputs to be properly stored in and recalled from synaptic weights. Taken together, this work demonstrates that dynamic neuromodulation is critical for the regulation of information processing tasks in neural networks. These results suggest that ACh is capable of switching networks between two distinct information processing modes: rate coding of information is facilitated during high ACh conditions, and phase coding of information is facilitated during low ACh conditions. Finally, I propose that ACh levels control whether a network is in one of three functional states: (High ACh; active waking) optimized for encoding of new information or the stable representation of relevant memories; (Mid ACh; resting state or REM) optimized for encoding connections between currently stored memories or searching the catalog of stored memories; and (Low ACh; NREM) optimized for renormalization of synaptic strength and memory consolidation. This work provides a mechanistic insight into the role of dynamic changes in ACh levels for the encoding, consolidation, and maintenance of memories within the brain.
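
    As a generic, purely illustrative sketch of how an ACh-like parameter could control spike-frequency adaptation (this is a textbook adaptive leaky integrate-and-fire neuron, not the dissertation's model, and all parameter values are assumptions):

        import numpy as np

        def adaptive_lif(I_ext, ach=1.0, T=1.0, dt=1e-4):
            """Leaky integrate-and-fire neuron with a spike-triggered adaptation
            current; higher `ach` (0..1) suppresses adaptation, i.e. less SFA."""
            tau_m, v_rest, v_th, v_reset = 20e-3, -70e-3, -50e-3, -65e-3
            tau_w, b = 200e-3, 10e-3            # adaptation time constant / increment
            g_adapt = 1.0 - ach                 # ACh scales adaptation strength
            v, w, spikes = v_rest, 0.0, []
            for step in range(int(T / dt)):
                v += dt * (-(v - v_rest) - g_adapt * w + I_ext) / tau_m
                w += dt * (-w / tau_w)
                if v >= v_th:                   # spike: reset and bump adaptation
                    spikes.append(step * dt)
                    v = v_reset
                    w += b
            return np.array(spikes)

        for ach in (1.0, 0.5, 0.0):             # high ACh (wake) ... low ACh (NREM)
            n_spikes = len(adaptive_lif(I_ext=25e-3, ach=ach))
            print(f"ACh = {ach:.1f}  ->  {n_spikes} spikes/s")

    With adaptation effectively switched off at high ACh the cell fires tonically, while at low ACh the accumulating adaptation current strongly curbs the firing rate, matching the direction of the effect described above.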

    Platonic model of mind as an approximation to neurodynamics

    A hierarchy of approximations involved in the simplification of microscopic theories, from the sub-cellular to the whole-brain level, is presented. A new approximation to neural dynamics is described, leading to a Platonic-like model of mind based on psychological spaces. Objects and events in these spaces correspond to quasi-stable states of brain dynamics and may be interpreted from a psychological point of view. The Platonic model bridges the gap between the neurosciences and the psychological sciences. Static and dynamic versions of this model are outlined, and Feature Space Mapping, a neurofuzzy realization of the static version of the Platonic model, is described. Categorization experiments with human subjects are analyzed from the neurodynamical and Platonic-model points of view.

    PROCESSING INFORMATION ON INTERMEDIATE TIMESCALES WITHIN RECURRENT NEURAL NETWORKS

    The cerebral cortex has remarkable computational abilities; it is able to solve problems which remain beyond the most advanced man-made systems. The complexity arises due to the structure of the neural network which controls how the neurons interact. One surprising fact about this network is the dominance of ‘recurrent’ and ‘feedback’ connections. For example, only 5-10% of connections into the earliest stage of visual processing are ‘feedforward’, in that they carry information from the eyes (via the Lateral Geniculate Nucleus). One possible reason for these connections is that they allow for information to be preserved within the network; the underlying ‘causes’ of sensory stimuli usually persist for much longer than the time scales of neural processing, and so understanding them requires continued aggregation of information within the sensory cortices. In this dissertation, I investigate several models of such sensory processing via recurrent connections. I introduce the transient attractor network, which depends on recurrent plastic connectivity, and demonstrate in simulations how it might be involved in the processes of short term memory, signal de-noising, and temporal coherence analysis. I then show how a certain recurrent network structure might allow for transient associative learning to occur on the timescales of seconds using presynaptic facilitation. Finally, I consider how auditory scene analysis might occur through ‘gamma partitioning’. This process uses recurrent excitatory and inhibitory connections to preserve information within the neural network about its recent state, allowing for the separation of auditory sources into different perceptual cycles.
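
    The presynaptic-facilitation mechanism is only summarized here; as a generic illustration (not the dissertation's model), a Tsodyks-Markram-style facilitation variable u jumps with each presynaptic spike and decays over seconds, transiently boosting the effective synaptic weight w_eff = w * u on roughly the timescale mentioned. The parameter values below are assumptions.

        import numpy as np

        def facilitation_trace(spike_times, T=10.0, dt=1e-3, U=0.1, tau_f=2.0):
            """Facilitation variable u(t): each presynaptic spike increments u
            toward 1; u relaxes back to the baseline U with time constant tau_f."""
            n = int(T / dt)
            u = np.full(n, U)
            spike_idx = {int(t / dt) for t in spike_times}
            for k in range(1, n):
                u[k] = u[k-1] + dt * (U - u[k-1]) / tau_f   # decay toward baseline
                if k in spike_idx:
                    u[k] += U * (1.0 - u[k])                # spike-triggered facilitation
            return u

        # A brief presynaptic burst at t = 1 s leaves the synapse facilitated for
        # several seconds, a candidate substrate for associations on that timescale.
        burst = [1.0 + 0.01 * i for i in range(20)]
        u = facilitation_trace(burst)
        for t in (0.5, 1.2, 3.0, 6.0, 9.0):
            print(f"t = {t:4.1f} s   u = {u[int(t / 1e-3)]:.3f}")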

    Über die Selbstorganisation einer hierarchischen Gedächtnisstruktur für kompositionelle Objektrepräsentation im visuellen Kortex

    At present, there is a huge lag between artificial and biological information processing systems in terms of their capability to learn. This lag could certainly be reduced by gaining more insight into the higher functions of the brain, such as learning and memory. For instance, the primate visual cortex is thought to provide the long-term memory for visual objects acquired through experience. The visual cortex effortlessly handles arbitrarily complex objects by rapidly decomposing them into constituent components of much lower complexity along hierarchically organized visual pathways. How this processing architecture self-organizes into a memory domain that employs such compositional object representation by learning from experience remains to a large extent a riddle. The study presented here approaches this question by proposing a functional model of a self-organizing hierarchical memory network. The model is based on hypothetical neuronal mechanisms involved in cortical processing and adaptation. The network architecture comprises two consecutive layers of distributed, recurrently interconnected modules. Each module is identified with a localized cortical cluster of fine-scale excitatory subnetworks. A single module performs competitive unsupervised learning on the incoming afferent signals to form a suitable representation of the locally accessible input space. The network employs an operating scheme where ongoing processing is made up of discrete successive fragments termed decision cycles, presumably identifiable with the fast gamma rhythms observed in the cortex. The cycles are synchronized across the distributed modules, which produce highly sparse activity within each cycle by instantiating a local winner-take-all-like operation. Equipped with adaptive mechanisms of bidirectional synaptic plasticity and homeostatic activity regulation, the network is exposed to natural face images of different persons. The images are presented incrementally, one per cycle, to the lower network layer as sets of Gabor filter responses extracted from local facial landmarks, without any person identity labels. In the course of unsupervised learning, the network simultaneously creates vocabularies of reusable local face appearance elements, captures relations between the elements by associatively linking those parts that encode the same face identity, develops higher-order identity symbols for the memorized compositions, and projects this information back onto the vocabularies in a generative manner. This learning corresponds to the simultaneous formation of bottom-up, lateral and top-down synaptic connectivity within and between the network layers. In the mature connectivity state, the network thus holds a full compositional description of the experienced faces in the form of sparse memory traces residing in the feed-forward and recurrent connectivity. Due to the generative nature of the established representation, the network is able to recreate the full compositional description of a memorized face in terms of all its constituent parts given only its higher-order identity symbol or a subset of its parts. In the test phase, the network successfully proves its ability to recognize the identity and gender of the persons from alternative face views not shown before. An intriguing feature of the emerging memory network is its ability to self-generate activity spontaneously in the absence of external stimuli.
In this sleep-like off-line mode, the network shows a self-sustaining replay of the memory content formed during the previous learning. Remarkably, the recognition performance is tremendously boosted after this off-line memory reprocessing. The performance boost is more pronounced for those face views that deviate more from the original view shown during learning. This indicates that the off-line memory reprocessing during the sleep-like state specifically improves the generalization capability of the memory network. The positive effect turns out to be surprisingly independent of synapse-specific plasticity, relying completely on the synapse-unspecific, homeostatic activity regulation across the memory network. The developed network thus demonstrates functionality not shown by any previous neuronal modeling approach. It forms and maintains a memory domain for compositional, generative object representation in an unsupervised manner through experience with natural visual images, using both on-line ("wake") and off-line ("sleep") learning regimes. This functionality offers a promising departure point for further studies, aiming for deeper insight into the learning mechanisms employed by the brain and their consequent implementation in artificial adaptive systems for solving complex tasks not tractable so far.
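
    The module-level operation described above (competitive unsupervised learning with a winner-take-all-like decision per cycle) can be illustrated with a generic sketch. This is standard competitive learning on toy input vectors, not the thesis's actual implementation; the data and parameters are invented for the example.

        import numpy as np

        rng = np.random.default_rng(1)

        def competitive_module(inputs, n_units=4, epochs=50, lr=0.1):
            """One module: units compete for each input (winner-take-all) and only
            the winner's weight vector moves toward that input, so units become
            prototypes of recurring input fragments."""
            W = rng.random((n_units, inputs.shape[1]))
            W /= np.linalg.norm(W, axis=1, keepdims=True)
            for _ in range(epochs):
                for x in inputs[rng.permutation(len(inputs))]:
                    winner = np.argmax(W @ x)               # one decision per cycle
                    W[winner] += lr * (x - W[winner])       # move winner toward input
                    W[winner] /= np.linalg.norm(W[winner])  # keep weights normalised
            return W

        # Toy afferent signals: two noisy clusters standing in for two distinct local
        # appearance elements (e.g. two different shapes of the same facial landmark).
        proto = np.array([[1.0, 0.0, 0.0, 1.0], [0.0, 1.0, 1.0, 0.0]])
        X = np.vstack([p + 0.1 * rng.standard_normal((20, 4)) for p in proto])
        X /= np.linalg.norm(X, axis=1, keepdims=True)

        W = competitive_module(X)
        print("winning unit per sample:", np.argmax(W @ X.T, axis=0))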

    Spatial Representations in the Entorhino-Hippocampal Circuit

    After a general introduction and a brief review of the available experimental data on spatial representations (chapter 2), this thesis is divided into two main parts. The first part, comprising chapters 3 to 6, is dedicated to grid cells. In chapter 3 we present and discuss the various models proposed for explaining grid cell formation. In chapters 4 and 5 we study our adaptation-based model of grid cell generation in non-planar environments, namely a spherical environment and three-dimensional space. In chapter 6 we propose a variant of the model where the alignment of the grid axes is induced through reciprocal inhibition, and we suggest that the inhibitory connections obtained during this learning process can be used to implement a continuous attractor in mEC. The second part, comprising chapters 7 to 10, is instead focused on place cell representations. In chapter 7 we analyze the differences between place cells and grid cells in terms of information content; in chapter 8 we describe the properties of attractor dynamics in our model of the CA3 network; and in the following chapter we study the effects of theta oscillations on network dynamics. Finally, in chapter 10 we analyze to what extent the learning of a new representation can preserve the topology and the exact metric of physical space.
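
    As a generic illustration of comparing spatial representations by their information content (the thesis's own analysis may use a different measure), the standard Skaggs spatial information, I = sum_i p_i (r_i / r_mean) log2(r_i / r_mean), can be computed from a firing-rate map; the toy rate maps below are invented for the example.

        import numpy as np

        def spatial_information(rate_map, occupancy=None):
            """Skaggs spatial information (bits per spike) of a firing-rate map,
            with p_i the occupancy probability of bin i (uniform by default)."""
            r = np.asarray(rate_map, dtype=float).ravel()
            p = np.full_like(r, 1.0 / r.size) if occupancy is None else occupancy.ravel()
            r_mean = np.sum(p * r)
            nz = r > 0
            return np.sum(p[nz] * (r[nz] / r_mean) * np.log2(r[nz] / r_mean))

        # Toy rate maps on a 20 x 20 arena: one compact place-like field versus a
        # periodic, multi-peaked grid-like field (square rather than hexagonal).
        x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
        place_like = 10.0 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / (2 * 0.05**2))
        grid_like = 5.0 * (1 + np.cos(2 * np.pi * 4 * x)) * (1 + np.cos(2 * np.pi * 4 * y)) / 4

        print(f"place-like field: {spatial_information(place_like):.2f} bits/spike")
        print(f"grid-like field : {spatial_information(grid_like):.2f} bits/spike")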

    Structured information in small-world neural networks

    The retrieval abilities of spatially uniform attractor networks can be measured by the global overlap between patterns and neural states. However, we found that nonuniform networks, for instance small-world networks, can retrieve fragments of patterns, or blocks, without performing global retrieval. We propose a way to measure this local retrieval using a parameter related to the fluctuation of the block overlaps. Simulation of the neural dynamics shows a competition between local and global retrieval. The phase diagram shows a transition from local retrieval to global retrieval when the storage ratio increases and the topology becomes more random. A theoretical approach confirms the simulation results and predicts that the stability of blocks can be improved by dilution. This work was supported by the MEC Grants No. TIN-2004-04363-CO03-03 and No. TIN-2007-65989, and by the CAM Grant No. S-SEM-0255-2006. E.S. was partially supported by the MEC Grant No. PR2007-0080. We thank K. Koroutchev and R. Levi for useful discussions.
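
    A minimal, illustrative sketch of the kind of measurement described (not the paper's exact model, learning rule, or parameter choices): a Hopfield-style network on a Watts-Strogatz small-world graph, reporting the global overlap and the per-block overlaps of the retrieved state against a stored pattern, with the spread of the block overlaps as one possible local-retrieval measure.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)
        N, K, p_rewire, P, n_blocks = 400, 20, 0.1, 5, 10

        # Small-world topology: ring lattice with K neighbours, rewiring prob p_rewire.
        A = nx.to_numpy_array(nx.watts_strogatz_graph(N, K, p_rewire, seed=2))

        # Store P random binary patterns with a Hebbian rule restricted to existing links.
        xi = rng.choice([-1, 1], size=(P, N))
        J = A * (xi.T @ xi) / K
        np.fill_diagonal(J, 0.0)

        # Cue with a corrupted copy of pattern 0 and run asynchronous updates.
        s = xi[0].copy()
        s[rng.random(N) < 0.2] *= -1                 # flip 20% of the units
        for _ in range(20):
            for i in rng.permutation(N):
                h = J[i] @ s
                if h != 0:
                    s[i] = np.sign(h)

        # Global overlap and block overlaps (contiguous blocks along the ring).
        m_global = np.mean(xi[0] * s)
        blocks = np.array_split(np.arange(N), n_blocks)
        m_blocks = np.array([np.mean(xi[0][b] * s[b]) for b in blocks])

        print(f"global overlap m = {m_global:.3f}")
        print("block overlaps  :", np.round(m_blocks, 2))
        print(f"block-overlap fluctuation: {m_blocks.std():.3f}")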