
    A memristive non-smooth dynamical system with coexistence of bimodule periodic oscillation

    © 2022 Elsevier GmbH. All rights reserved. This is the accepted manuscript version of an article published in final form at https://doi.org/10.1016/j.aeue.2022.154279
    In order to explore the bursting oscillations and formation mechanisms of memristive non-smooth systems, a third-order memristor model and an external periodic excitation are introduced into a non-smooth dynamical system, and a novel 4D memristive non-smooth system with two timescales is established. The system is divided into two different subsystems by a non-smooth interface, which can be used to simulate the scenario in which a memristor encounters a non-smooth circuit in practical application circuits. Three different bursting patterns and their bifurcation mechanisms are analyzed using time series, the corresponding phase portraits, equilibrium bifurcation diagrams, and transformed phase portraits. It is pointed out that not only the stability of the equilibrium trajectory but also the non-smooth interface may influence the bursting phenomenon, resulting in sudden jumping of the trajectory and non-smooth bifurcation at the non-smooth interface. In particular, the coexistence of bimodule periodic oscillations at the non-smooth interface can be observed in this system. Finally, the correctness of the theoretical analysis is verified by numerical simulation and Multisim circuit simulation. This paper is of great significance for future analysis and engineering application of memristors in non-smooth circuits. Peer reviewed

    A Hybrid CMOS-Memristor Spiking Neural Network Supporting Multiple Learning Rules

    Artificial intelligence (AI) is changing the way computing is performed to cope with real-world, ill-defined tasks for which traditional algorithms fail. AI requires significant memory access, thus running into the von Neumann bottleneck when implemented on standard computing platforms. In this respect, low-latency, energy-efficient in-memory computing can be achieved by exploiting emerging memristive devices, given their ability to emulate synaptic plasticity, which provides a path to designing large-scale brain-inspired spiking neural networks (SNNs). Several plasticity rules have been described in the brain, and their coexistence in the same network greatly expands the computational capabilities of a given circuit. In this work, starting from the electrical characterization and modeling of the memristor device, we propose a neuro-synaptic architecture that co-integrates, on a single platform, one type of synaptic device implementing two distinct learning rules, namely spike-timing-dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM) rule. By exploiting these learning rules, the architecture successfully addressed two different unsupervised learning tasks.
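The two learning rules named in the abstract have standard textbook forms, sketched below. All parameter values are illustrative defaults, not the paper's memristive implementation.

```python
import math

# Pair-based STDP: the weight change depends on the pre/post spike time difference.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """dt_ms = t_post - t_pre (ms). Pre-before-post (dt_ms > 0) potentiates,
    post-before-pre depresses; both windows decay exponentially with |dt_ms|."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau)
    return -a_minus * math.exp(dt_ms / tau)

# BCM: a rate-based rule with a sliding threshold on postsynaptic activity.
def bcm_dw(x_pre, y_post, theta, eta=1e-3):
    """dw = eta * x * y * (y - theta): postsynaptic activity above the
    sliding threshold theta potentiates the synapse, below it depresses."""
    return eta * x_pre * y_post * (y_post - theta)
```

In BCM, theta itself is typically a running average of the squared postsynaptic rate, which is what makes the rule stable.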

    Racing to Learn: Statistical Inference and Learning in a Single Spiking Neuron with Adaptive Kernels

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single-neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation, or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale, neurons are locked in a race with each other, with the fastest neuron to spike effectively hiding its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems, which is demonstrated through an implementation on a Field Programmable Gate Array (FPGA).Comment: In submission to Frontiers in Neuroscience
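The race described above can be illustrated with a toy first-to-spike winner-take-all built from accumulators and binary weights only (additions and AND operations, no multiplication). This is a loose illustration of the idea, not the published SKAN model; all values are hypothetical.

```python
def winner_take_all(spike_trains, weights, thresholds):
    """Toy accumulator race: at each time step every neuron adds its
    binary-weighted inputs to an accumulator; the first neuron whose
    accumulator crosses its threshold wins, hiding the pattern from
    the others. Returns (winner_index, spike_time) or (None, None)."""
    n_neurons = len(weights)
    acc = [0] * n_neurons
    for t, spikes in enumerate(spike_trains):
        for i in range(n_neurons):
            # Binary weights act as flags gating each input channel.
            acc[i] += sum(w & s for w, s in zip(weights[i], spikes))
            if acc[i] >= thresholds[i]:
                return i, t
    return None, None
```

A neuron whose weight flags match the incoming pattern accumulates fastest and therefore spikes first, which is the core of the inference mechanism the abstract describes.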

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution.Comment: 51 pages, 19 figures, IEEE Access
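The defining property above — fixed random recurrent weights, with only a linear readout trained — can be sketched as a minimal echo state network. Sizes, scalings, and the ridge parameter are illustrative choices, not from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random input and recurrent weights: initialized once, never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 for fading memory

def run_reservoir(u):
    """Drive the reservoir with a scalar input sequence; collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained, here by ridge regression,
# on a one-step-ahead prediction task for a sine wave.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
X = run_reservoir(u[:-1])
y = u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
```

Because training reduces to a single linear solve, the same recipe transfers to any substrate — electronic, photonic, or biological — that provides rich fixed dynamics, which is why RC maps so naturally onto physical hardware.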

    Imperfect chimera and synchronization in a hybrid adaptive conductance based exponential integrate and fire neuron model

    In this study, the hybrid conductance-based adaptive exponential integrate-and-fire (CadEx) neuron model is proposed to determine the effect of magnetic flux on conductance-based neurons. To begin with, bifurcation analysis is carried out with respect to the input current, the resetting parameter, and the adaptation time constant in order to understand the dynamical transitions. We show that the existence of period-1, period-2, and period-4 cycles depends on the magnitude of the input current via period-doubling and period-halving bifurcations. Furthermore, chaotic behavior is discovered by varying the adaptation time constant via the period-doubling route. Following that, we examine the network behavior of CadEx neurons and discover a variety of dynamical behaviors such as desynchronization, traveling chimera, traveling wave, imperfect chimera, and synchronization. The appearance of synchronization is especially noticeable when the magnitude of the magnetic flux coefficient or the coupling strength is increased. As a result, achieving synchronization in CadEx is essential for neuron activity, which can aid in the realization of such behavior during many cognitive processes.
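The CadEx model extends the standard adaptive exponential integrate-and-fire (AdEx) neuron. For orientation, here is a forward-Euler sketch of the plain AdEx dynamics (without the paper's conductance and magnetic-flux extensions); the parameter values are common textbook defaults, not the paper's.

```python
import math

def simulate_adex(I, T=300.0, dt=0.1):
    """Euler simulation of the AdEx neuron; units: pF, nS, mV, ms, pA.
    Returns the list of spike times."""
    C, g_L, E_L = 200.0, 10.0, -70.0      # capacitance, leak, rest
    V_T, Delta_T = -50.0, 2.0             # exponential threshold, slope
    a, tau_w, b = 2.0, 100.0, 50.0        # adaptation parameters
    V_r, V_peak = -58.0, 0.0              # reset and spike cutoff
    V, w, t, spikes = E_L, 0.0, 0.0, []
    while t < T:
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * math.exp((V - V_T) / Delta_T)  # spike upstroke
              - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:            # spike: reset voltage, bump adaptation
            spikes.append(t)
            V, w = V_r, w + b
        t += dt
    return spikes
```

Varying I, the reset V_r, and tau_w in such a model is exactly the kind of parameter sweep the bifurcation analysis above performs, with period-doubling cascades appearing in the interspike-interval structure.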

    Investigation of Synapto-dendritic Kernel Adapting Neuron models and their use in spiking neuromorphic architectures

    The motivation for this thesis is the idea that abstract, adaptive, hardware-efficient inter-neuronal transfer functions (or kernels), which carry information in the form of postsynaptic membrane potentials, are the most important (and erstwhile missing) element in neuromorphic implementations of Spiking Neural Networks (SNN). In the absence of such abstract kernels, spiking neuromorphic systems must realize very large numbers of synapses and their associated connectivity. The resulting hardware and bandwidth limitations create difficult tradeoffs which diminish the usefulness of such systems. In this thesis a novel model of spiking neurons is proposed. The proposed Synapto-dendritic Kernel Adapting Neuron (SKAN) uses the adaptation of its synapto-dendritic kernels in conjunction with an adaptive threshold to perform unsupervised learning and inference on spatio-temporal spike patterns. The hardware and connectivity requirements of the neuron model are minimized through the use of simple accumulator-based kernels as well as through the use of timing information to perform a winner-take-all operation between the neurons. The learning and inference operations of SKAN are characterized and shown to be robust across a range of noise environments. Next, the SKAN model is augmented with a simplified hardware-efficient model of Spike Timing Dependent Plasticity (STDP). In biology, STDP is the mechanism that allows neurons to learn spatio-temporal spike patterns. When the proposed SKAN model is augmented with a simplified STDP rule, where the synaptic kernel is used as a binary flag that enables synaptic potentiation, the result is a synaptic encoding of afferent Signal to Noise Ratio (SNR). In this combined model the neuron not only learns the target spatio-temporal spike patterns but also weighs each channel independently according to its signal to noise ratio.
    Additionally, a novel approach to achieving homeostatic plasticity in digital hardware is presented, which reduces hardware cost by eliminating the need for multipliers. Finally, the behavior and potential utility of this combined model are investigated in a range of noise conditions, and the digital hardware resource utilization of SKAN and SKAN + STDP is detailed using Field Programmable Gate Arrays (FPGA).
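Multiplier-free homeostasis of the kind mentioned above can be illustrated with a threshold that moves only by fixed integer increments: the equilibrium firing rate is set by the ratio of the up and down steps, with no multiplications anywhere. This is a generic sketch of the principle, not the thesis's circuit; all constants are illustrative.

```python
import random

def firing_rate(step_up=9, step_down=1, steps=20000):
    """Toy multiplier-free homeostatic threshold. Firing raises the
    threshold by step_up; silence lowers it by step_down. The rate
    settles near step_down / (step_up + step_down) regardless of the
    input statistics (here ~0.1)."""
    random.seed(1)
    theta, fires = 500, 0
    for _ in range(steps):
        drive = random.randrange(1000)   # stand-in for synaptic drive
        if drive > theta:
            fires += 1
            theta += step_up             # firing raises the threshold
        else:
            theta -= step_down           # silence lowers it
    return fires / steps
```

In hardware, such add/subtract-only adaptation costs a single accumulator per neuron, which is precisely why it removes the need for multipliers.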

    In-memory computing with emerging memory devices: Status and outlook

    Supporting data for "In-memory computing with emerging memory devices: status and outlook", submitted to APL Machine Learning