Resistive communications based on neuristors
Memristors are passive elements that allow us to store information using a
single element per bit. However, this is not the only utility of the memristor.
Depending on the physicochemical structure of the element used, the memristor
can function simultaneously as a memory and as a communication unit. This paper
presents a new approach to the use of the memristor and develops the concept of
resistive communication.
Astrocyte control bursting mode of spiking neuron network with memristor-implemented plasticity
A mathematical model of a spiking neuron network accompanied by astrocytes is
considered. The network is composed of excitatory and inhibitory neurons with
synaptic connections supplied by a memristor-based model of plasticity. Another
mechanism for changing the synaptic connections involves astrocytic regulations
using the concept of tripartite synapses. In the absence of memristor-based
plasticity, the connections between these neurons drive the network dynamics
into a burst mode, as observed in many experimental neurobiological studies
when investigating living networks in neuronal cultures. The memristive
plasticity implemented in inhibitory synapses results in a shift of the network
dynamics towards an asynchronous mode. Next, it is found that accounting for
astrocytic regulation of glutamatergic excitatory synapses enables the
restoration of 'normal' burst dynamics. The conditions and parameters under
which such astrocytic regulation affects burst dynamics are established.
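The memristor-based plasticity the abstract refers to can be pictured as a bounded, history-dependent weight update. The sketch below uses a generic pair-based STDP rule with memristance-like saturation; all constants and the function name are illustrative assumptions, not taken from the paper's model.

```python
import numpy as np

def memristive_stdp(dt_spike, w, a_plus=0.01, a_minus=0.012, tau=20.0,
                    w_min=0.0, w_max=1.0):
    """Generic pair-based STDP as a stand-in for memristive plasticity:
    pre-before-post (dt_spike > 0) potentiates, post-before-pre depresses,
    and the weight saturates like a bounded memristance.
    All constants here are illustrative, not from the paper."""
    if dt_spike > 0:   # post fires after pre: potentiation
        dw = a_plus * np.exp(-dt_spike / tau)
    else:              # post fires before pre: depression
        dw = -a_minus * np.exp(dt_spike / tau)
    return float(np.clip(w + dw, w_min, w_max))

w = 0.5
w_up = memristive_stdp(+5.0, w)    # causal pairing -> weight grows
w_down = memristive_stdp(-5.0, w)  # anti-causal pairing -> weight shrinks
```

The clipping to `[w_min, w_max]` mirrors the physical bound that a memristor's conductance cannot grow or shrink without limit.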
Neuro-memristive Circuits for Edge Computing: A review
The volume, veracity, variability, and velocity of data produced from the
ever-increasing network of sensors connected to the Internet pose challenges for
power management, scalability, and sustainability of cloud computing
infrastructure. Increasing the data processing capability of edge computing
devices at lower power requirements can reduce several overheads for cloud
computing solutions. This paper provides a review of neuromorphic
CMOS-memristive architectures that can be integrated into edge computing
devices. We discuss why neuromorphic architectures are useful for edge
devices and outline the advantages, drawbacks, and open problems in the field
of neuro-memristive circuits for edge computing.
Emulation of Neural Dynamics in Neuromorphic Circuits Based on Memristive Devices
Perception and consciousness are widely acknowledged as the most impressive properties of the human brain. While the underlying mechanisms are not yet understood, it is very likely that neural dynamics, in connection with the topology of neural networks, play a decisive role. Neuromorphic systems offer an interesting approach to emulating and modelling these processes, as they allow the complexity of neural networks to be mapped onto energy-efficient, real-time-capable systems. For this purpose, analogue electrical circuits that are oriented as closely as possible to biological networks are investigated. Electronic devices are particularly important here, as they make it possible to emulate the learning and memory mechanisms that occur at the connections between neurons, the synapses. In this context, it has been shown that nano-ionic mechanisms in so-called memristive devices allow the emulation of synaptic plasticity, on a descriptive level, within a single device. Memristive devices are passive, non-volatile components whose resistance depends on the history of applied electrical potentials. In recent years, the important plasticity mechanisms of synaptic information processing have been emulated using memristive devices. The importance of memristive devices for emulating dynamic processes within novel bio-inspired computing schemes attracts worldwide interest and is the subject of this thesis.
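The defining behaviour described above, a passive device whose resistance depends on the history of applied potentials, can be sketched with the classic linear ion-drift memristor model. The parameter values below (on/off resistance, mobility, film thickness) are illustrative textbook-scale numbers, not measurements from the thesis.

```python
import numpy as np

def simulate_memristor(v, dt, r_on=100.0, r_off=16e3,
                       mu_v=1e-14, d=10e-9, w0=0.5):
    """Linear ion-drift memristor model: the memristance interpolates
    between r_on and r_off as the normalized state w in [0, 1] drifts
    in proportion to the charge that has flowed through the device."""
    k = mu_v * r_on / d**2          # drift coefficient
    w = w0
    memristance = []
    for volt in v:
        m = r_on * w + r_off * (1.0 - w)   # current resistance
        i = volt / m                        # Ohm's law at this instant
        w = min(max(w + k * i * dt, 0.0), 1.0)  # state follows charge
        memristance.append(m)
    return np.array(memristance)

# A 0.3 s, 1 V write pulse drives the device from ~8 kOhm toward its
# low-resistance state; with the voltage removed, the state persists,
# which is the non-volatility the abstract highlights.
t = np.arange(0.0, 0.5, 1e-4)
v = np.where(t < 0.3, 1.0, 0.0)
m = simulate_memristor(v, dt=1e-4)
```

Because the state variable only changes while current flows, the final resistance is retained for free once the drive is removed, which is what makes such devices attractive as single-element synapses.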
Information Transfer in Neuronal Circuits: From Biological Neurons to Neuromorphic Electronics
The advent of neuromorphic electronics is increasingly revolutionizing the concept of computation. In the last decade, several studies have shown how materials, architectures, and neuromorphic devices can be leveraged to achieve brain-like computation with limited power consumption and high energy efficiency. Neuromorphic systems have been mainly conceived to support spiking neural networks that embed bioinspired plasticity rules such as spike-timing-dependent plasticity to potentially support both unsupervised and supervised learning. Despite substantial progress in the field, the information transfer capabilities of biological circuits have not yet been achieved. More importantly, demonstrations of the actual performance of neuromorphic systems in this context have never been presented. In this paper, we report similarities between biological, simulated, and artificially reconstructed microcircuits in terms of information transfer from a computational perspective. Specifically, we extensively analyzed the mutual information transfer at the synapse between mossy fibers and granule cells by measuring the relationship between pre- and post-synaptic variability. We extended this analysis to memristor synapses that embed rate-based learning rules, thus providing quantitative validation for neuromorphic hardware and demonstrating the reliability of brain-inspired applications.
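The core measurement in that analysis, mutual information between pre- and post-synaptic activity, can be sketched with a simple plug-in (histogram) estimator over paired spike counts. This is a generic estimator on synthetic Poisson counts, not the paper's actual recording pipeline or bias-correction method.

```python
import numpy as np

def mutual_information(pre, post, bins=8):
    """Plug-in estimate of I(pre; post) in bits from paired samples,
    using a 2-D histogram of the joint distribution."""
    joint, _, _ = np.histogram2d(pre, post, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal over pre
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal over post
    nz = p_xy > 0                           # avoid log(0) terms
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

rng = np.random.default_rng(0)
pre = rng.poisson(20, size=5000)            # presynaptic spike counts
post_coupled = pre + rng.poisson(2, 5000)   # post tracks pre: high MI
post_indep = rng.poisson(20, size=5000)     # unrelated activity: MI near 0

mi_coupled = mutual_information(pre, post_coupled)
mi_indep = mutual_information(pre, post_indep)
```

A reliable synapse makes postsynaptic variability predictable from presynaptic variability, which this estimator reports as a large gap between `mi_coupled` and `mi_indep`; real analyses additionally correct the estimator's small positive sampling bias.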
Racing to Learn: Statistical Inference and Learning in a Single Spiking Neuron with Adaptive Kernels
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a
simple spiking neuron model that performs statistical inference and
unsupervised learning of spatiotemporal spike patterns. SKAN is the first
proposed neuron model to investigate the effects of dynamic synapto-dendritic
kernels and demonstrate their computational power even at the single neuron
scale. The rule-set defining the neuron is simple there are no complex
mathematical operations such as normalization, exponentiation or even
multiplication. The functionalities of SKAN emerge from the real-time
interaction of simple additive and binary processes. Like a biological neuron,
SKAN is robust to signal and parameter noise, and can utilize both in its
operations. At the network scale, neurons are locked in a race with each other,
with the fastest neuron to spike effectively hiding its learnt pattern from its
neighbors. The robustness to noise, high speed, and simple building blocks not
only make SKAN an interesting neuron model in computational neuroscience, but
also make SKAN ideal for implementation in digital and analog neuromorphic
systems, which is demonstrated through an implementation on a Field Programmable
Gate Array (FPGA).
Comment: In submission to Frontiers in Neuroscience.
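The race dynamic described above, additive-only updates with the first neuron to cross threshold winning, can be illustrated with a toy winner-take-all sketch. This is only in the spirit of SKAN; the actual model's adaptive synapto-dendritic kernels and binary flag logic are not reproduced here, and all rates and thresholds are invented for illustration.

```python
import numpy as np

def race_to_spike(rise_rates, threshold=100.0, noise=0.5, rng=None):
    """Toy neuron race: each membrane ramps additively (no multiplies
    in the per-step update beyond noise scaling), and the first neuron
    to cross threshold wins, suppressing the rest.
    Returns (winner index, time step of the winning spike)."""
    if rng is None:
        rng = np.random.default_rng(0)
    rise_rates = np.asarray(rise_rates, dtype=float)
    v = np.zeros(len(rise_rates))
    for t in range(1, 100_000):
        v += rise_rates + noise * rng.standard_normal(len(rise_rates))
        crossed = np.flatnonzero(v >= threshold)
        if crossed.size:
            # ties broken in favor of the largest overshoot
            return int(crossed[v[crossed].argmax()]), t
    raise RuntimeError("no neuron reached threshold")

# The neuron whose ramp best matches the input (largest rise rate)
# reliably wins despite additive noise, as in the abstract's race.
winner, t_spike = race_to_spike([1.0, 2.0, 1.5])
```

The winner's spike time also carries information: a better-matched pattern crosses threshold sooner, which is what lets the fastest neuron pre-empt its neighbors' learning.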