16 research outputs found

    Neuron-like Signal Propagation for OWC Nanonetworks

    Neuron-inspired signal propagation is proposed for communication in networks of nanodevices (NDs). Nanodevices must be able to interpret and forward signals inside the network in order to transport information between two endpoints. Applications at the nano scale demand processing systems that are simple and highly power efficient. To achieve this, a brain-inspired spiking neural network with pattern recognition and relaying capabilities is presented. The network learns the desired features using spike-timing-dependent plasticity (STDP), a power-efficient and biologically plausible learning method. Finally, several nanonetworks communicating via optical wireless communication (OWC) are simulated. The results show that the similarity between the emitted and received signals depends strongly on the design space of the neurons, and that it is possible to create networks of NDs capable of transporting information between two endpoints.
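
    To make the learning method concrete, the following is a minimal sketch of a pair-based STDP update in Python. It is a generic textbook form of the rule, not code from this work; the time constants and learning rates are illustrative assumptions.

```python
import numpy as np

# Minimal pair-based STDP sketch: potentiate when the presynaptic spike
# precedes the postsynaptic spike, depress when the order is reversed.
# All constants are illustrative, not taken from the paper.
TAU_PLUS = 20.0   # ms, potentiation time constant
TAU_MINUS = 20.0  # ms, depression time constant
A_PLUS = 0.01     # potentiation step size
A_MINUS = 0.012   # depression step size

def stdp_dw(dt_ms):
    """Weight change for one spike pair; dt_ms = t_post - t_pre."""
    if dt_ms > 0:   # pre before post -> long-term potentiation
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    else:           # post before pre -> long-term depression
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

# Apply the rule to one synaptic weight for a few spike pairings.
w = 0.5
for dt in (5.0, 12.0, -8.0):                 # ms offsets between spikes
    w = float(np.clip(w + stdp_dw(dt), 0.0, 1.0))
print(f"weight after pairings: {w:.4f}")
```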

    Semiconductor Memory Devices for Hardware-Driven Neuromorphic Systems

    This book conveys the most recent progress in hardware-driven neuromorphic systems based on semiconductor memory technologies. Machine learning systems, and the various types of artificial neural networks that realize the learning process, have mainly focused on software technologies. Tremendous advances have been made, particularly in data inference and recognition, areas in which humans hold great superiority over conventional computers. To mimic our way of thinking more effectively in hardware, components that behave more like synapses are needed, in terms of integration density, completeness in reproducing biological synaptic behaviors and, most importantly, energy-efficient operation. For closer resemblance to the biological nervous system, future developments must take power consumption into account and foster revolutions at the device level, which can be realized by memory technologies. The book consists of seven articles reporting the most recent research findings on neuromorphic systems, highlighting various memory devices and architectures. Synaptic devices and their behaviors, many-core neuromorphic platforms in close relation with memory, and novel materials enabling low-power synaptic operation based on memory devices are studied, along with evaluations and applications. Some of these can be realized in practice owing to their high compatibility, in Si processing and structure, with semiconductor memory technologies already in production, offering the prospect of neuromorphic chips for mass production.

    Development of cue integration with reward-mediated learning

    This thesis will first introduce in more detail the Bayesian theory and its use in integrating multiple information sources. I will briefly talk about models and their relation to the dynamics of an environment, and how to combine multiple alternative models. Following that, I will discuss the experimental findings on multisensory integration in humans and animals. I start with psychophysical results from various tasks and setups showing that the brain uses and combines information from multiple cues. Specifically, the discussion will focus on the finding that humans integrate this information in a way that is close to the theoretically optimal performance. Special emphasis will be put on results about the developmental aspects of cue integration, highlighting experiments showing that children do not perform in line with the Bayesian predictions. This section also includes a short summary of experiments on how subjects handle multiple alternative environmental dynamics. I will also talk about neurobiological findings of cells receiving input from multiple receptors, both in dedicated brain areas and in primary sensory areas. I will proceed with an overview of existing theories and computational models of multisensory integration.

    This will be followed by a discussion of reinforcement learning (RL). First I will talk about the original theory, including the two main approaches, model-free and model-based reinforcement learning. The important variables will be introduced, as well as different algorithmic implementations. Secondly, a short review of the mapping of those theories onto brain and behaviour will be given. I mention the most influential papers that showed correlations between activity in certain brain regions and RL variables, most prominently between dopaminergic neurons and temporal difference errors. I will try to motivate why I think that this theory can help to explain the development of near-optimal cue integration in humans.

    The next main chapter will introduce our model that learns to solve the task of audio-visual orienting. Many of the results in this section have been published in [Weisswange et al. 2009b, Weisswange et al. 2011]. The model agent starts without any knowledge of the environment and acts based on predictions of rewards, which are adapted according to the reward signaling the quality of the performed action. I will show that after training this model performs similarly to the prediction of a Bayesian observer. The model can also deal with more complex environments in which it has to handle multiple possible underlying generating models (perform causal inference). In these experiments I use different formulations of Bayesian observers for comparison with our model, and find that it is most similar to the fully optimal observer doing model averaging. Additional experiments using various alterations of the environment show the ability of the model to react to changes in the input statistics without explicitly representing probability distributions. I will close the chapter with a discussion of the benefits and shortcomings of the model.

    The thesis continues with a report on applying the learning algorithm introduced above to two real-world cue integration tasks on a robotic head. For these tasks our system outperforms a commonly used approximation to Bayesian inference, reliability-weighted averaging. The approximation is attractive because of its computational simplicity, but it relies on certain assumptions that are usually controlled for in a laboratory setting and often do not hold for real-world data. This chapter is based on the paper [Karaoguz et al. 2011].

    Our second modeling approach addresses the neuronal substrates of the learning process for cue integration. I again use a reward-based training scheme, but this time implemented as a modulation of synaptic plasticity mechanisms in a recurrent network of binary threshold neurons. I start the chapter with an additional introduction section discussing recurrent networks and especially the various forms of neuronal plasticity that I will use in the model. The performance on a task similar to that of chapter 3 will be presented, together with an analysis of the influence of the different plasticity mechanisms on it. Again, benefits and shortcomings and the general potential of the method will be discussed. I will close the thesis with a general conclusion and some ideas about possible future work.
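
    Since the abstract contrasts the learned policy with reliability-weighted averaging, a minimal sketch of that baseline may be useful. For independent Gaussian cues, weighting each estimate by its inverse variance is the Bayes-optimal fusion rule; all numbers below are invented for illustration.

```python
import numpy as np

def reliability_weighted_average(estimates, variances):
    """Fuse independent Gaussian cue estimates by inverse-variance weighting.

    Bayes-optimal rule for independent Gaussian cues:
    weights w_i = (1/var_i) / sum_j (1/var_j); fused variance = 1/sum_j (1/var_j).
    """
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_mean = np.sum(precisions * estimates) / np.sum(precisions)
    fused_var = 1.0 / np.sum(precisions)
    return fused_mean, fused_var

# Illustrative audio-visual example: vision is more reliable than audition.
visual, auditory = 10.0, 14.0        # position estimates in degrees
var_visual, var_auditory = 1.0, 4.0  # cue variances
mean, var = reliability_weighted_average([visual, auditory],
                                         [var_visual, var_auditory])
print(f"fused estimate: {mean:.2f} deg, variance: {var:.2f}")
# -> 10.80 deg: the fused estimate lies closer to the more reliable cue.
```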

    A unifying functional approach towards synaptic long-term plasticity

    The brain is perhaps the most complex structure on earth that humans study. It consists of a huge network of nerve cells that is able to process incoming sensory information and construct from it a meaningful representation of the environment. It also coordinates the actions of the organism to interact with the environment. The brain has the remarkable ability both to store information and to continuously adapt to changing conditions, throughout its entire lifetime. This is essential for humans and animals to develop and learn. The basis for this lifelong learning process is the plasticity of the brain, which constantly adapts and rewires the huge network of neurons. The changes to the synaptic connections and to the intrinsic excitability of each neuron take place through self-organized mechanisms and optimize the behavior of the organism as a whole. The phenomenon of neuronal plasticity has occupied neuroscience and other disciplines for several decades. Intrinsic plasticity describes the continual adaptation of a neuron's excitability to maintain a balanced, homeostatic operating range. But synaptic plasticity in particular, which denotes the changes in the strength of existing connections, has been studied under many different conditions and has proven to be ever more complex with each new study. It is induced by a complex interplay of biophysical mechanisms and depends on several factors such as the frequency of action potentials, their timing, and the membrane potential, and it furthermore shows a metaplastic dependence on past events. Ultimately, synaptic plasticity influences the signal processing and computation of individual neurons and of neuronal networks. The focus of this thesis is to advance the understanding of the biological mechanisms and their consequences that lead to the observed plasticity phenomena, through a more unified theory. To this end, I postulate two functional goals for neuronal plasticity, derive learning rules from them, and analyze their consequences and predictions.

    Chapter 3 investigates the discriminability of population activity in networks as a functional goal for neuronal plasticity. The hypothesis is that, particularly in recurrent but also in feed-forward networks, the population activity can be optimized as a representation of the input signals if similar inputs receive representations that are as distinct as possible, making them easier to discriminate in subsequent processing. The functional goal is therefore to maximize this discriminability through changes in connection strengths and neuronal excitability by means of local, self-organized learning rules. From this functional goal, a number of standard learning rules for artificial neural networks can be jointly derived.

    Chapter 4 applies a similar functional approach to a more complex, biophysical neuron model. The goal is to maximize a sparse, strongly asymmetric distribution of synaptic strengths, as has repeatedly been found experimentally, through local synaptic learning rules. From this functional approach, all the key phenomena of synaptic plasticity can be explained. Simulations of the learning rule in a realistic neuron model with full morphology reproduce the data from timing-, rate-, and voltage-dependent plasticity protocols. The learning rule also has an intrinsic dependence on the position of the synapse, which agrees with the experimental results. Moreover, the learning rule can explain metaplastic phenomena without additional assumptions, and the approach predicts a new form of metaplasticity that influences timing-dependent plasticity. The formulated learning rule leads to two novel unifications for synaptic plasticity: first, it shows that the various phenomena of synaptic plasticity can be understood as consequences of a single functional goal; second, the approach bridges the gap between the functional and the mechanistic levels of description. The proposed functional goal leads to a learning rule with a biophysical formulation that can be related to established theories of the biological mechanisms. Furthermore, the goal of a sparse distribution of synaptic strengths can be interpreted as contributing to energy-efficient synaptic signal transmission and optimized coding.
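
    As a toy illustration of the homeostatic intrinsic plasticity mentioned above (a generic textbook mechanism, not the thesis's derived learning rule), the sketch below adapts a neuron's threshold so that its long-run firing rate approaches a target value.

```python
import numpy as np

# Toy homeostatic intrinsic plasticity: the neuron slowly adjusts its
# threshold so that its long-term firing rate approaches a target value.
# Generic illustration only; all constants are arbitrary assumptions.
rng = np.random.default_rng(0)

TARGET_RATE = 0.1   # desired fraction of time steps with a spike
ETA = 0.01          # adaptation rate of the threshold

threshold, rate_estimate = 0.5, 0.0
for _ in range(5000):
    drive = rng.random()                    # random input drive in [0, 1)
    spike = 1.0 if drive > threshold else 0.0
    rate_estimate += 0.01 * (spike - rate_estimate)   # running rate estimate
    threshold += ETA * (spike - TARGET_RATE)          # too active -> raise threshold

print(f"threshold: {threshold:.2f}, firing rate: {rate_estimate:.2f}")
# The threshold settles near 0.9, so the neuron fires on ~10% of steps.
```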

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms by RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution. (Comment: 51 pages, 19 figures, IEEE Access)
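
    The fixed random reservoir plus trained linear readout described above corresponds to the classic echo state network. Below is a minimal sketch in Python; the spectral-radius scaling, leaky-tanh update, and ridge-regression readout are standard choices, and every size and constant is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res = 1, 200          # input and reservoir sizes (illustrative)

# Fixed random weights: only the linear readout is ever trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(inputs, leak=0.3):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t)[:-1, None], np.sin(t)[1:]
X = run_reservoir(u)

# Ridge-regression readout (closed form), discarding a warm-up period.
ridge, warm = 1e-6, 100
Xw, yw = X[warm:], y[warm:]
W_out = np.linalg.solve(Xw.T @ Xw + ridge * np.eye(n_res), Xw.T @ yw)
print("train MSE:", np.mean((Xw @ W_out - yw) ** 2))
```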

    Contributions of synaptic filters to models of synaptically stored memory

    The question of how neural systems encode memories in one shot without immediately disrupting previously stored information has puzzled theoretical neuroscientists for years, and it is the central topic of this thesis. Previous attempts on this topic have proposed that synapses update probabilistically in response to plasticity-inducing stimuli, effectively delaying the degradation of old memories in the face of ongoing memory storage. Indeed, experiments have shown that synapses do not immediately respond to plasticity-inducing stimuli, since these must be presented many times before synaptic plasticity is expressed. Such a delay could be due to the stochastic nature of synaptic plasticity, or perhaps because induction signals are integrated before overt strength changes occur. The latter approach has previously been applied to control fluctuations in neural development by low-pass filtering induction signals before plasticity is expressed. In this thesis we consider memory dynamics in a mathematical model with synapses that integrate plasticity induction signals to a threshold before expressing plasticity. We report novel recall dynamics and considerable improvements in memory lifetimes against a prominent model of synaptically stored memory. With integrating synapses the memory trace initially rises before reaching a maximum and then falls. The memory signal dissociates into separate oblivescence and reminiscence components, with reminiscence initially dominating recall. Furthermore, we find that integrating synapses possess natural timescales that can be used to consider the transition to late-phase plasticity under spaced repetition patterns known to lead to optimal storage conditions. We find that threshold-crossing statistics differentiate between massed and spaced memory repetition patterns. However, isolated integrative synapses obtain an insufficient statistical sample to detect the stimulation pattern within a few memory repetitions. We extend the model to consider the cooperation of well-known intracellular signalling pathways in detecting storage conditions by utilizing the profile of postsynaptic depolarization. We find that neuron-wide signalling and local synaptic signals can be combined to detect optimal storage conditions that lead to stable forms of plasticity in a synapse-specific manner. These models can be further extended to consider heterosynaptic and neuromodulatory interactions for late-phase plasticity.
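
    A toy sketch of the integrate-to-threshold idea described above may help; it is a generic caricature, not the thesis's actual model or parameters. Induction signals accumulate in a hidden integrator, and the expressed weight changes only when the integrator crosses a threshold.

```python
import numpy as np

# Toy synaptic filter: plasticity induction events accumulate in a hidden
# integrator; the expressed weight changes only when the integrator crosses
# a threshold, after which it resets. Generic illustration only.
rng = np.random.default_rng(1)

THETA = 5          # induction events needed before plasticity is expressed
STEP = 0.1         # expressed weight change per threshold crossing

def run_synapse(inductions):
    """inductions: sequence of +1 (potentiating) / -1 (depressing) events."""
    weight, integrator = 0.5, 0
    for s in inductions:
        integrator += s
        if integrator >= THETA:        # enough potentiating evidence
            weight, integrator = min(weight + STEP, 1.0), 0
        elif integrator <= -THETA:     # enough depressing evidence
            weight, integrator = max(weight - STEP, 0.0), 0
    return weight

# A noisy stream with a potentiating bias must cross the threshold many
# times before the weight saturates, delaying (filtering) its expression.
stream = rng.choice([1, -1], size=500, p=[0.7, 0.3])
print("final weight:", run_synapse(stream))
```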

    Artificial intelligence and bio-inspired robotics: modeling learning functions with spiking neural networks

    This thesis aims to enable an original advance in the field of cognitive computing, more precisely in bio-inspired robotics. The hypothesis defended is that it is possible to integrate different learning functions, developed and embodied for virtual and physical robots, into a single paradigm of spiking neural networks acting as brain-controllers. The design of learning rules and the validation of the research hypothesis rest on the simulation of cellular mechanisms based on synaptic plasticity and on the reproduction of adaptive robot behaviors. This thesis by articles targets three types of learning of incremental complexity: habituation as a form of non-associative learning, and classical and operant conditioning as forms of associative learning. The detailed analysis, from synapse to behavior, is validated by experimental studies of invertebrates such as the nematode worm Caenorhabditis elegans. For each of these rules, a novel algorithm was proposed, leading to the publication of a scientific article. These learning rules were modeled by developing specific temporal parameters and precise neural circuits. Incidentally, the temporal granularity of spiking neural networks (SNNs) is set at the level of the single action potential rather than at the level of the mean firing rate per unit of time, as is the case for traditional artificial neural networks. This property of SNNs proved to be a sufficient asset to prefer their use for robots operating in the real world. Developing the computational model of learning for robots first required testing the hypotheses in virtual simulations. Since no existing simulator had sufficient capabilities to test our hypothesis, namely integrating SNNs, robot structures, and interfaces for exporting SNNs to physical platforms and sufficiently complex 3D virtual environments, it was necessary to develop, in parallel with the thesis, a novel software tool (SIMCOG) enabling an analytical study through the dynamic tracking of variables, from the synapses of the SNNs to the behaviors of one or more virtual or physical robots. Finally, besides the integration of several different learning functions into SNNs, another conclusion of this work suggests that virtual and physical robots can learn and adapt at the behavioral level in a way similar to natural agents. These behavioral observations are based on the simulation of synaptic plasticity mechanisms modulated by temporal variables relating to physical stimuli and neuronal cell activities.

    AUTHOR KEYWORDS: Artificial intelligence, Cognition, Simulator, Bio-inspired robotics, Spiking artificial neural networks, Learning, Habituation, Classical conditioning, Operant conditioning, Synaptic plasticity.
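
    As a toy illustration of the simplest of the three learning forms targeted above, non-associative habituation, here is a generic sketch (not the thesis's published algorithm): repeated stimulation depresses a pathway's efficacy, so the response wanes, and a silent period lets it recover.

```python
# Toy habituation sketch: repeated stimulation depresses a synapse's
# efficacy; during silence it slowly recovers toward baseline. Generic
# illustration only, not the thesis's published algorithm or parameters.
DEPRESS = 0.15    # fractional efficacy loss per stimulus
RECOVER = 0.02    # recovery toward baseline per silent time step
BASELINE = 1.0

def habituation(stimuli):
    """stimuli: sequence of 1 (stimulus present) / 0 (silence).
    Returns the response amplitude at each time step."""
    efficacy, responses = BASELINE, []
    for s in stimuli:
        if s:
            responses.append(efficacy)            # response to this stimulus
            efficacy *= (1.0 - DEPRESS)           # use-dependent depression
        else:
            responses.append(0.0)
            efficacy += RECOVER * (BASELINE - efficacy)  # slow recovery
    return responses

# Ten stimuli, a silent rest period, then ten more: the response wanes,
# partially recovers during rest, then wanes again.
train = [1] * 10 + [0] * 50 + [1] * 10
r = habituation(train)
print([round(x, 2) for x in (r[:10] + r[60:])])
```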

    Dissecting amygdala cell types in fear and extinction

    The mammalian brain consists of billions of neurons; individual neurons serve as its building blocks (Cajal 1911, translation Swanson and Swanson 1995). However, studying individual neurons alone is insufficient to understand how the brain works. A promising alternative is the cell-type-specific approach, an effort to classify neurons that perform the same function as a single cell type (functional definition, see Luo et al., 2008) and to understand their roles in information processing and behavioral outputs. Nevertheless, the limitation of this definition is that we barely know the precise functions or roles of neurons, and even very well-characterized neurons such as retinal ganglion cells likely retain unknown functions. Thus, as an operational definition to drive neuroscience forward, defining cell types using genetic tools that allow access to specific subsets of neurons was suggested and widely accepted in an almost implicit manner. This consensus is based on an optimistic view that, at some point, the operational genetic definition and the ultimate functional definition will converge. In this thesis, with this philosophy in mind, I try to match several operationally defined amygdala cell types with their distinct functions and roles in the context of fear and extinction learning. In Project 1, I demonstrate that a cell type in the amygdala complex defined by molecular marker expression exerts essential functions in fear and extinction by forming a unique mutual inhibition circuit motif. In Project 2, I find that a cell type in the basolateral amygdala defined by its di-synaptic downstream target shows unprecedented functional specificity in fear learning. Finally, in Project 3, I characterize the functions and roles of cell types in the basolateral amygdala defined by dynamic, neuronal-activity-dependent gene expression upon learning. Collectively, this thesis serves as an important stepping stone toward achieving the convergence between definitions of a cell type.

    Calcium dynamics in dendrites and spines of spiny neurons in the somatosensory ‘barrel’ cortex of the rat

    Get PDF
    Two-photon excitation fluorescence microscopy was combined with the patch-clamp technique to study Ca2+ dynamics in dendrites and spines of spiny neurons of layer 4 of the somatosensory cortex in acute thalamocortical brain slices of young (P13-P15) rats. Back-propagating action potentials (bAPs) resulted in a transient rise in Ca2+ in all dendrites and spines tested, representing a global intracellular chemical signal of the activity of the cell. In contrast, synaptically evoked excitatory postsynaptic potentials (EPSPs) resulted in a synapse-specific, local increase in Ca2+. Pairing both stimuli at different inter-stimulus intervals revealed a precisely tuned coincidence detection mechanism for pre- and postsynaptic activity, coded in the peak Ca2+ transient amplitude. Linear, sub- and supralinear summation of the Ca2+ transients was found, depending on the time interval and the order of bAP and EPSP. Ca2+ influx was maximal when the action potential followed synaptic stimulation within less than 20 ms. The mechanism of maximal Ca2+ influx could be explained by the properties of the NMDA receptor channel, which was activated by binding glutamate during synaptic stimulation and the subsequent relief of the Mg2+ block by the bAP. Coincidence detection was restricted to the synaptic contact and did not depend on the distance of the contact from the soma. This temporally and spatially highly restricted coincidence detection mechanism, which employed the Ca2+ transient amplitude as a readout signal, might serve as an input-specific trigger for spike-timing-dependent plasticity. Indeed, potentiation of EPSPs to 150% of the baseline amplitude could be induced by pairing extracellular stimulation with bAPs within the coincidence detection interval. Reversing the order of the stimuli resulted in depression of the EPSP amplitude to 70%. Thus it was concluded that spiny neurons in layer 4 of the juvenile rat barrel cortex exhibit spike-timing-dependent plasticity, which corresponded well to the Ca2+ code used by their spines for coincidence detection.
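
    The pairing outcome reported above can be caricatured as a simple decision rule. The sketch below uses only the quantities stated in the abstract (potentiation to 150% of baseline when the bAP follows the EPSP within roughly 20 ms; depression to 70% for the reverse order); real Ca2+-dependent plasticity is graded rather than all-or-none.

```python
# Caricature of the pairing outcome reported in the abstract. The only
# numbers used are those stated there: potentiation to 150% of baseline
# when the bAP follows the EPSP within ~20 ms, depression to 70% when the
# order is reversed. Real Ca2+-dependent plasticity is graded.
WINDOW_MS = 20.0

def paired_epsp_amplitude(dt_ms, baseline=1.0):
    """dt_ms = t_bAP - t_EPSP. Returns the EPSP amplitude after pairing."""
    if 0.0 < dt_ms < WINDOW_MS:      # EPSP before bAP, inside the window
        return 1.5 * baseline        # potentiation to 150%
    elif -WINDOW_MS < dt_ms < 0.0:   # bAP before EPSP
        return 0.7 * baseline        # depression to 70%
    return baseline                  # outside the coincidence window

for dt in (10.0, -10.0, 50.0):
    print(f"dt = {dt:+5.1f} ms -> amplitude {paired_epsp_amplitude(dt):.2f}")
```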