94 research outputs found

    Emulating short-term synaptic dynamics with memristive devices

    Neuromorphic architectures offer great promise for achieving computation capacities beyond conventional Von Neumann machines. The essential elements for achieving this vision are highly scalable synaptic mimics that do not undermine biological fidelity. Here we demonstrate that single solid-state TiO2 memristors can exhibit non-associative plasticity phenomena observed in biological synapses, supported by their metastable memory state transition properties. We show that, contrary to conventional uses of solid-state memory, the existence of rate-limiting volatility is a key feature for capturing short-term synaptic dynamics. We also show how the temporal dynamics of our prototypes can be exploited to implement spatio-temporal computation, demonstrating the memristors' full potential for building biophysically realistic neural processing systems.
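The abstract's key idea, that volatility (a decaying, metastable state) is what lets a memristor mimic short-term plasticity, can be illustrated with a toy model. This sketch is not from the paper: the state variable, pulse-driven potentiation, and exponential relaxation are illustrative assumptions, not the device physics of the TiO2 prototypes.

```python
import math

def simulate_volatile_memristor(pulse_times, total_time=1.0, dt=1e-3,
                                dw=0.2, tau=0.05):
    """Toy model of a volatile memristive synapse: each presynaptic
    pulse potentiates the conductance-like state w, which then relaxes
    back toward rest -- the 'rate-limiting volatility' that yields
    short-term facilitation only at sufficiently high pulse rates."""
    w = 0.0
    trace = []
    pulse_steps = {round(t / dt) for t in pulse_times}
    for step in range(int(total_time / dt)):
        if step in pulse_steps:
            w += dw * (1.0 - w)       # bounded potentiation per pulse
        w *= math.exp(-dt / tau)      # metastable state decays (volatility)
        trace.append(w)
    return trace
```

Closely spaced pulses accumulate before the state can relax, so the peak response grows with pulse rate, while widely spaced pulses each see a fully decayed state, which is the signature of short-term facilitation.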

    A combined experimental and computational approach to investigate emergent network dynamics based on large-scale neuronal recordings

    Development of an integrated computational-experimental approach for the study of neuronal networks by means of electrophysiological recordings.

    Harnessing function from form: towards bio-inspired artificial intelligence in neuronal substrates

    Despite the recent success of deep learning, the mammalian brain is still unrivaled when it comes to interpreting complex, high-dimensional data streams like visual, auditory and somatosensory stimuli. However, the underlying computational principles allowing the brain to deal with unreliable, high-dimensional and often incomplete data while having a power consumption on the order of a few watts are still mostly unknown. In this work, we investigate how specific functionalities emerge from simple structures observed in the mammalian cortex, and how these might be utilized in non-von Neumann devices like “neuromorphic hardware”. Firstly, we show that an ensemble of deterministic, spiking neural networks can be shaped by a simple, local learning rule to perform sampling-based Bayesian inference. This suggests a coding scheme where spikes (or “action potentials”) represent samples of a posterior distribution, constrained by sensory input, without the need for any source of stochasticity. Secondly, we introduce a top-down framework where neuronal and synaptic dynamics are derived using a least action principle and gradient-based minimization. Combined, neurosynaptic dynamics approximate real-time error backpropagation, mappable to mechanistic components of cortical networks, whose dynamics can again be described within the proposed framework. The presented models narrow the gap between well-defined, functional algorithms and their biophysical implementation, improving our understanding of the computational principles the brain might employ. Furthermore, such models are naturally translated to hardware mimicking the vastly parallel neural structure of the brain, promising a strongly accelerated and energy-efficient implementation of powerful learning and inference algorithms, which we demonstrate for the physical model system “BrainScaleS–1”.
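The statistical target of spike-based sampling, network states drawn from a posterior over binary variables, can be sketched with an ordinary Gibbs sampler over a Boltzmann distribution. This is only the reference computation, not the thesis's deterministic spiking implementation; the weight matrix, biases, and logistic update are the standard textbook formulation.

```python
import math
import random

def gibbs_sample(W, b, n_samples=5000, burn_in=500, seed=0):
    """Gibbs sampling from a Boltzmann distribution over binary units
    z_k, p(z) ~ exp(z'Wz/2 + b'z). In the spike-based sampling picture,
    a neuron being 'on' (refractory after a spike) corresponds to
    z_k = 1, so the network's spike pattern over time plays the role
    of these samples."""
    rng = random.Random(seed)
    n = len(b)
    z = [0] * n
    samples = []
    for t in range(n_samples + burn_in):
        for k in range(n):
            # membrane-potential analogue: bias plus recurrent input
            u = b[k] + sum(W[k][j] * z[j] for j in range(n) if j != k)
            p_on = 1.0 / (1.0 + math.exp(-u))   # logistic activation
            z[k] = 1 if rng.random() < p_on else 0
        if t >= burn_in:
            samples.append(tuple(z))
    return samples
```

The empirical mean of a unit's samples then approximates its posterior marginal; clamping some units to observed values turns the same dynamics into inference constrained by sensory input.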

    Robust learning algorithms for spiking and rate-based neural networks

    Inspired by the remarkable properties of the human brain, the fields of machine learning, computational neuroscience and neuromorphic engineering have achieved significant synergistic progress in the last decade. Powerful neural network models rooted in machine learning have been proposed as models for neuroscience and for applications in neuromorphic engineering. However, the aspect of robustness is often neglected in these models. Both biological and engineered substrates show diverse imperfections that deteriorate the performance of computation models or even prohibit their implementation. This thesis describes three projects aiming at implementing robust learning with local plasticity rules in neural networks. First, we demonstrate the advantages of neuromorphic computation in a pilot study on a prototype chip. We quantify the speed and energy consumption of the system relative to a software simulation and show how on-chip learning contributes to the robustness of learning. Second, we present an implementation of spike-based Bayesian inference on accelerated neuromorphic hardware. The model copes, via learning, with the disruptive effects of the imperfect substrate and benefits from the acceleration. Finally, we present a robust model of deep reinforcement learning using local learning rules. It shows how backpropagation combined with neuromodulation could be implemented in a biologically plausible framework. The results contribute to the pursuit of robust and powerful learning networks for biological and neuromorphic substrates.

    Closed-loop approaches for innovative neuroprostheses

    The goal of this thesis is to study new ways to interact with the nervous system in cases of damage or pathology. In particular, I focused my effort on the development of innovative, closed-loop stimulation protocols in various scenarios: in vitro, ex vivo, and in vivo.

    Nonlinear Dynamics of Neural Circuits


    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at establishing this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
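One flavor of step in the PyNN-to-hardware translation the abstract describes is mapping continuous biological parameters onto discrete on-chip settings. The sketch below is hypothetical: the function name, DAC width, and voltage range are illustrative assumptions, not the actual BrainScaleS mapping toolchain.

```python
def map_to_hardware(neuron_params, dac_bits=10, v_range=(-100.0, 0.0)):
    """Hypothetical sketch of one biology-to-hardware mapping step:
    quantizing membrane-voltage parameters (in mV) onto fixed-range
    integer DAC codes, clipping values that fall outside the
    hardware's representable range."""
    lo, hi = v_range
    levels = (1 << dac_bits) - 1          # e.g. 1023 codes for 10 bits
    config = {}
    for name, value in neuron_params.items():
        clipped = min(max(value, lo), hi)  # hardware range is finite
        config[name] = round((clipped - lo) / (hi - lo) * levels)
    return config
```

A real mapping layer performs many such steps (placement, routing, calibration) behind a PyNN front end, which is what lets the same model description drive either a software simulator or the chip.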

    Microcircuit structures of inhibitory connectivity in the rat parahippocampal gyrus

    Local microcircuits in the brain mediate complex computations through the interplay of excitatory and inhibitory neurons. It is generally assumed that fast-spiking parvalbumin-positive basket cells mediate a non-selective “blanket of inhibition”. This view has recently been challenged by reports of structured inhibitory connectivity, but its precise organization and relevance remain unresolved. In this thesis, I present the results of our studies examining the properties of fast-spiking parvalbumin-positive basket cells in layers II/III of the medial entorhinal cortex and presubiculum of the rat. Characterizing these interneurons in the dorsal and ventral medial entorhinal cortex, we found that, although morphologically and physiologically similar, basket cells are more likely to be connected to principal cells in the dorsal than in the ventral region. This difference correlates with changes in grid-cell physiology. Our findings further indicated that inhibitory connectivity is essential for local computation in the presubiculum. Interestingly, local inhibition in this region is markedly sparser than in the medial entorhinal cortex, suggesting a different principle of microcircuit organization. To study this difference, we analyzed the morphology and network properties of fast-spiking basket cells in the presubiculum and found that principal cells are inhibited through a predominant reciprocal motif, facilitated by the polarized axons of presubicular basket cells. Our network simulations showed that such polarized inhibition can improve the head-direction tuning of principal cells. Overall, our results show that inhibitory connectivity is organized differently in the medial entorhinal cortex and the presubiculum, likely reflecting the functional requirements of the local microcircuit.
    As a conclusion to the studies presented in this thesis, I hypothesize that a deviation from the “blanket of inhibition” towards region-specific, tailored inhibition can provide solutions to distinct computational problems.

    Functional anatomy of a visuomotor transformation in the optic tectum of zebrafish

