
    Integer Echo State Networks: Hyperdimensional Reservoir Computing

    We propose an approximation of Echo State Networks (ESN) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed Integer Echo State Network (intESN) is a vector containing only n-bit integers (where n < 8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The intESN architecture is verified on typical reservoir computing tasks: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. Such an architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. Comment: 10 pages, 10 figures, 1 table
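    As a rough illustration of these mechanics, the sketch below implements the core intESN state update in NumPy: inputs are encoded as random bipolar vectors, the recurrence is a cyclic shift (np.roll) rather than a matrix multiplication, and a clipping nonlinearity keeps every reservoir element in a small integer range. The dimensionality, clipping threshold, and codebook scheme here are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

class IntESN:
    """Minimal integer echo state network sketch: the dense recurrent
    weight matrix of a classical ESN is replaced by a cyclic shift of
    an integer state vector (illustrative parameters only)."""

    def __init__(self, dim=1000, kappa=3, seed=0):
        self.dim = dim                 # reservoir dimensionality
        self.kappa = kappa             # clipping threshold: elements
                                       # stay in [-kappa, kappa]
        self.state = np.zeros(dim, dtype=np.int32)
        self.rng = np.random.default_rng(seed)
        self.codebook = {}             # one random bipolar vector per
                                       # distinct input symbol

    def encode(self, symbol):
        if symbol not in self.codebook:
            self.codebook[symbol] = self.rng.choice(
                np.array([-1, 1], dtype=np.int32), size=self.dim)
        return self.codebook[symbol]

    def step(self, symbol):
        # recurrence by cyclic shift instead of matrix multiplication,
        # then add the input code and clip back to the integer range
        shifted = np.roll(self.state, 1)
        self.state = np.clip(shifted + self.encode(symbol),
                             -self.kappa, self.kappa)
        return self.state

# feed a short symbol sequence; the final state is a compact integer
# trace of the whole input history
esn = IntESN()
for s in "hello":
    x = esn.step(s)
```

A readout (e.g. a linear classifier over the integer state) would be trained on top of such traces, just as in a conventional ESN.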

    Learning of chunking sequences in cognition and behavior

    We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks, but the dynamical principles behind it remain unknown. Here, we study the temporal dynamics of learning cognitive sequences in a chunking representation, using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as trajectories along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables such trajectories to be learned in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes with each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order as long as the sequence is not too long. The resulting patterns of activity share several features observed in behavioral experiments, such as the pauses at chunk boundaries and the size and duration of the chunks. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson's disease and schizophrenia.
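    Although the paper's model is hierarchical, the flavor of winnerless competition can be conveyed with the standard generalized Lotka-Volterra formulation commonly used for WLC dynamics: asymmetric inhibition turns every momentary winner into a saddle point, so activity traverses the modes in a reproducible order. The parameter values below are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def wlc_step(a, sigma, rho, dt=0.01, noise=1e-6):
    """One Euler step of generalized Lotka-Volterra dynamics:
        da_i/dt = a_i * (sigma_i - sum_j rho_ij * a_j).
    Asymmetric inhibition rho makes each winner transient, so the
    state moves along a chain of metastable saddle points."""
    da = a * (sigma - rho @ a)
    # a little positive noise keeps the trajectory off the invariant axes
    return np.clip(a + dt * da + noise * rng.random(a.size), 0.0, None)

sigma = np.ones(3)                    # growth rates of the three modes
rho = np.array([[1.0, 0.5, 2.0],      # asymmetric, rock-paper-scissors
                [2.0, 1.0, 0.5],      # style inhibition matrix
                [0.5, 2.0, 1.0]])

a = np.array([1e-2, 2e-2, 1e-2])
dominant = []
for _ in range(50_000):
    a = wlc_step(a, sigma, rho)
    dominant.append(int(np.argmax(a)))  # index of the winning mode

# collapsing consecutive repeats in `dominant` exposes the cyclic
# switching order of the metastable modes
```

In the hierarchical version studied in the paper, a slower instance of such dynamics at the chunk level presumably gates faster within-chunk sequences; the pauses at chunk boundaries would then reflect the slow level's switching times.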

    Hierarchical Associative Memory Based on Oscillatory Neural Network

    In this thesis we explore algorithms and develop architectures, based on emerging nano-device technologies, for cognitive computing tasks such as recognition, classification, and vision. In particular, we focus on pattern matching in high-dimensional vector spaces to address the nearest neighbor search problem. Recent progress in nanotechnology provides us with novel nano-devices whose special nonlinear response characteristics fit cognitive tasks better than general-purpose computing. We build an associative memory (AM) by weakly coupling nano-oscillators into an oscillatory neural network and design a hierarchical tree structure to organize groups of AM units. For hierarchical recognition, we first examine an architecture where image patterns are partitioned into different receptive fields and processed by individual AM units at the lower levels, then abstracted using sparse coding techniques for recognition at the higher levels. A second, tree-structured model is developed as a more scalable AM architecture for large data sets. In this model, patterns are classified by hierarchical k-means clustering and organized into hierarchical clusters. Recognition then proceeds by comparing the input patterns against the centroids identified in the clustering process; the tree is explored in a "depth-only" manner until the closest image pattern is output. We also extend this search technique with a branch-and-bound algorithm. The models and corresponding algorithms are tested on two standard face recognition data sets. We show that the depth-only hierarchical model is highly data-set dependent, attaining 97% or 67% of the recognition performance of a single large associative memory depending on the data set, while the branch-and-bound search increases search time by only a factor of two compared to the depth-only search.
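    Setting the oscillator hardware aside, the tree search itself can be summarized in a few lines of Python: recursive k-means builds the hierarchy, and a depth-only query greedily follows the closest centroid at each level. The distance metric, branching factor, and leaf size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm; returns (centroids, labels)."""
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C, labels

def build_tree(X, k=4, leaf_size=8):
    """Hierarchical k-means: internal nodes hold centroids, leaves
    hold the raw stored patterns."""
    if len(X) <= leaf_size:
        return {"patterns": X}
    C, labels = kmeans(X, min(k, len(X)))
    kept = [j for j in range(len(C)) if np.any(labels == j)]
    if len(kept) < 2:                  # degenerate split, stop here
        return {"patterns": X}
    return {"centroids": C[kept],
            "children": [build_tree(X[labels == j], k, leaf_size)
                         for j in kept]}

def depth_only_search(node, q):
    """Greedy descent: follow only the closest centroid at each level,
    so query cost scales with tree depth rather than data set size."""
    while "centroids" in node:
        j = int(np.argmin(((node["centroids"] - q) ** 2).sum(-1)))
        node = node["children"][j]
    P = node["patterns"]
    return P[np.argmin(((P - q) ** 2).sum(-1))]
```

The branch-and-bound extension mentioned above would additionally track the best distance found so far and revisit sibling branches that it cannot rule out, trading the reported factor-of-two slowdown for accuracy that no longer depends on a single greedy path.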

    Recurrent Neural Networks in Computer-Based Clinical Decision Support for Laryngopathies: An Experimental Study

    The main goal of this paper is to lay the groundwork for a computer-based clinical decision support (CDS) system for laryngopathies. One approach that can be used in the proposed CDS is speech signal analysis using recurrent neural networks (RNNs). RNNs can be used for pattern recognition in time series data thanks to their ability to memorize information from the past. Elman networks (ENs) are a classical representative of RNNs. To improve the learning ability of ENs, we modify them and combine them with another kind of RNN, the Jordan network. The resulting modified Elman-Jordan networks (EJNs) reach the target pattern faster and more accurately. Validation experiments were carried out on speech signals from a control group and from patients with two kinds of laryngopathies.
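    The abstract does not spell out the exact EJN equations, so the following NumPy sketch shows only one plausible combination: a hidden layer that receives the current input, its own previous activation (the Elman context), and the previous network output (the Jordan context). The layer sizes and tanh nonlinearity are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanJordanCell:
    """Hybrid recurrent cell: the hidden layer is driven by the input,
    its own previous state (Elman feedback) and the previous output
    (Jordan feedback)."""

    def __init__(self, n_in, n_hidden, n_out, scale=0.1):
        self.W_x = scale * rng.standard_normal((n_hidden, n_in))
        self.W_h = scale * rng.standard_normal((n_hidden, n_hidden))  # Elman
        self.W_y = scale * rng.standard_normal((n_hidden, n_out))     # Jordan
        self.W_o = scale * rng.standard_normal((n_out, n_hidden))
        self.h = np.zeros(n_hidden)
        self.y = np.zeros(n_out)

    def step(self, x):
        # both context paths enter the hidden layer before the nonlinearity
        self.h = np.tanh(self.W_x @ x + self.W_h @ self.h + self.W_y @ self.y)
        self.y = np.tanh(self.W_o @ self.h)
        return self.y

# run a sequence of (hypothetical) speech feature frames through the cell
cell = ElmanJordanCell(n_in=13, n_hidden=32, n_out=3)
frames = rng.standard_normal((100, 13))   # e.g. 100 MFCC-like frames
for f in frames:
    out = cell.step(f)
# `out` after the last frame could feed a three-way decision, matching
# the control group and the two laryngopathy classes in the study
```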

    On the Self-Organization of a Hierarchical Memory Structure for Compositional Object Representation in the Visual Cortex

    At present, there is a huge lag between artificial and biological information processing systems in terms of their capability to learn. This lag could certainly be reduced by gaining more insight into the higher functions of the brain like learning and memory. For instance, the primate visual cortex is thought to provide the long-term memory for visual objects acquired through experience. The visual cortex effortlessly handles arbitrarily complex objects by rapidly decomposing them into constituent components of much lower complexity along hierarchically organized visual pathways. How this processing architecture self-organizes into a memory domain that employs such compositional object representation by learning from experience remains to a large extent a riddle. The study presented here approaches this question by proposing a functional model of a self-organizing hierarchical memory network. The model is based on hypothetical neuronal mechanisms involved in cortical processing and adaptation. The network architecture comprises two consecutive layers of distributed, recurrently interconnected modules. Each module is identified with a localized cortical cluster of fine-scale excitatory subnetworks. A single module performs competitive unsupervised learning on the incoming afferent signals to form a suitable representation of the locally accessible input space. The network employs an operating scheme in which ongoing processing consists of discrete successive fragments termed decision cycles, presumably identifiable with the fast gamma rhythms observed in the cortex. The cycles are synchronized across the distributed modules, which produce highly sparse activity within each cycle by instantiating a local winner-take-all-like operation. Equipped with adaptive mechanisms of bidirectional synaptic plasticity and homeostatic activity regulation, the network is exposed to natural face images of different persons. The images are presented incrementally, one per cycle, to the lower network layer as a set of Gabor filter responses extracted from local facial landmarks, without any person identity labels. In the course of unsupervised learning, the network simultaneously creates vocabularies of reusable local face appearance elements, captures relations between the elements by associatively linking those parts that encode the same face identity, develops higher-order identity symbols for the memorized compositions, and projects this information back onto the vocabularies in a generative manner. This learning corresponds to the simultaneous formation of bottom-up, lateral, and top-down synaptic connectivity within and between the network layers. In the mature connectivity state, the network thus holds a full compositional description of the experienced faces in the form of sparse memory traces residing in the feed-forward and recurrent connectivity. Owing to the generative nature of the established representation, the network is able to recreate the full compositional description of a memorized face in terms of all its constituent parts given only its higher-order identity symbol or a subset of its parts. In the test phase, the network successfully proves its ability to recognize the identity and gender of persons from alternative face views not shown before. An intriguing feature of the emerging memory network is its ability to self-generate activity spontaneously in the absence of external stimuli.
In this sleep-like off-line mode, the network shows a self-sustaining replay of the memory content formed during the previous learning. Remarkably, recognition performance is tremendously boosted after this off-line memory reprocessing. The performance boost is more pronounced for face views that deviate further from the original view shown during learning, indicating that off-line memory reprocessing during the sleep-like state specifically improves the generalization capability of the memory network. This positive effect turns out to be surprisingly independent of synapse-specific plasticity, relying entirely on synapse-unspecific, homeostatic activity regulation across the memory network. The developed network thus demonstrates functionality not shown by any previous neuronal modeling approach: it forms and maintains a memory domain for compositional, generative object representation in an unsupervised manner through experience with natural visual images, using both on-line ("wake") and off-line ("sleep") learning regimes. This functionality offers a promising departure point for further studies aiming at deeper insight into the learning mechanisms employed by the brain and their consequent implementation in artificial adaptive systems for solving complex tasks not tractable so far.
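    Two ingredients of this abstract, the per-cycle winner-take-all operation and the synapse-unspecific homeostatic regulation, can be caricatured in a few lines. The sketch below is a generic competitive-learning module with a "conscience"-style excitability bias, not the thesis model itself; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class WTAModule:
    """Toy module: each decision cycle picks one winner, Hebbian
    plasticity pulls the winner's afferent weights toward the input,
    and a homeostatic bias nudges all units toward an equal long-run
    win rate (synapse-unspecific regulation)."""

    def __init__(self, n_units, n_in, lr=0.05, h_rate=0.01):
        self.W = rng.random((n_units, n_in))
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)
        self.bias = np.zeros(n_units)        # homeostatic excitability
        self.lr, self.h_rate = lr, h_rate
        self.target = 1.0 / n_units          # desired win frequency

    def cycle(self, x):
        x = x / (np.linalg.norm(x) + 1e-12)
        winner = int(np.argmax(self.W @ x + self.bias))
        # Hebbian step: move the winner's weights toward the input
        self.W[winner] += self.lr * (x - self.W[winner])
        self.W[winner] /= np.linalg.norm(self.W[winner])
        # homeostasis: the winner loses excitability, losers gain some
        won = np.zeros_like(self.bias)
        won[winner] = 1.0
        self.bias += self.h_rate * (self.target - won)
        return winner
```

In the full model such modules are stacked in two layers, synchronized per cycle, and linked by learned lateral and top-down connections; the abstract's sleep-like replay corresponds to running the cycles without external input.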