17 research outputs found

    Neuromorphic Computing Applications in Robotics

    Deep learning achieves remarkable success by training on massively labeled datasets. However, this heavy dependence on labeled data limits the feasibility of deep learning in edge-computing scenarios, which suffer from data scarcity. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationships between events and objects, a learning paradigm referred to as associative learning. Successfully implementing associative learning imitates the self-learning schemes of animals and thereby sidesteps these challenges of deep learning. Current state-of-the-art implementations of associative memory are limited to small-scale, offline simulations. This work therefore implements associative memory on an Unmanned Ground Vehicle (UGV) with neuromorphic hardware, specifically Intel’s Loihi, in an online learning scenario. The system emulates classic associative learning in rats, with the UGV taking the place of the rat. Specifically, it reproduces fear conditioning with no pretraining procedure or labeled datasets. The UGV autonomously learns the cause-and-effect relationship between a light stimulus and a vibration stimulus and exhibits a movement response to demonstrate the memorization. Hebbian learning dynamics update the synaptic weights during the associative learning process. The Intel Loihi chip is integrated into this online learning system to process visual signals with a specialized neural assembly. While processing, Loihi’s average power usage is 30 mW for computing logic and 29 mW for memory.
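
    The core of this scheme is a Hebbian association: when activity in the neutral-stimulus (light) pathway coincides with activity in the response pathway driven by the vibration, their connecting synapse is strengthened until the light alone can trigger the response. The sketch below is a minimal, rate-based illustration of that idea in Python; the learning rate, threshold, trial structure, and variable names are assumptions made for illustration, not the paper’s Loihi implementation.

```python
# Rate-based Hebbian association between a light (neutral) stimulus and a response
# (illustrative sketch; learning rate, threshold, and trial structure are assumptions).
import numpy as np

def hebbian_update(w, pre, post, lr=0.1, w_max=1.0):
    """Strengthen the synapse when pre- and post-synaptic activity coincide."""
    return float(np.clip(w + lr * pre * post, 0.0, w_max))

# Synapse from the light (neutral-stimulus) neuron to the response neuron, initially silent.
w_light_response = 0.0

# Conditioning: light and vibration are presented together; the vibration alone is
# assumed to drive the response neuron through the unconditioned pathway.
for trial in range(10):
    light, vibration = 1.0, 1.0
    response = 1.0 if (vibration > 0.5 or w_light_response * light > 0.5) else 0.0
    w_light_response = hebbian_update(w_light_response, light, response)

# Test: after conditioning, the light alone is enough to evoke the response.
print(w_light_response, w_light_response * 1.0 > 0.5)
```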

    Memristor-Based HTM Spatial Pooler with On-Device Learning for Pattern Recognition

    This article investigates the hardware implementation of hierarchical temporal memory (HTM), a brain-inspired machine learning algorithm that mimics key functions of the neocortex and is applicable to many machine learning tasks. The spatial pooler (SP) is one of the main parts of HTM, designed to learn spatial information and obtain sparse distributed representations (SDRs) of input patterns; the other part, temporal memory (TM), aims to learn the temporal information of inputs. The memristor, an appropriate synapse emulator for neuromorphic systems, can be used as the synapse in SP and TM circuits. In this article, a memristor-based SP (MSP) circuit structure is designed to accelerate the execution of the SP algorithm. The presented MSP models both the synaptic permanence and the synaptic connection state within a single synapse and supports on-device, parallel learning. Simulation results on statistical metrics and classification tasks over several real-world datasets substantiate the validity of the MSP.
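
    For readers unfamiliar with the SP algorithm that the MSP circuit accelerates, the sketch below shows one software step of an HTM-style spatial pooler: synapses whose permanence exceeds a threshold count as connected, columns compete through k-winners-take-all to form the SDR, and only the winning columns update their permanences in a Hebbian fashion. The sizes, thresholds, and increments are illustrative assumptions, not the article’s circuit parameters.

```python
# One HTM-style spatial pooler step (illustrative sketch; column counts,
# thresholds, and increments are assumptions, not the MSP circuit values).
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns, k_active = 64, 32, 4
perm = rng.uniform(0.0, 1.0, size=(n_columns, n_inputs))   # synaptic permanences
CONNECT_THRESH, P_INC, P_DEC = 0.5, 0.05, 0.03

def spatial_pooler_step(x):
    """Return the SDR (active columns) for binary input x and learn in place."""
    connected = (perm >= CONNECT_THRESH).astype(int)   # permanence encodes connection state
    overlap = connected @ x                            # connected synapses seeing an active input
    active = np.argsort(overlap)[-k_active:]           # k-winners-take-all -> sparse output
    # Hebbian-style permanence update, applied to the winning columns only.
    perm[active] += np.where(x > 0, P_INC, -P_DEC)
    np.clip(perm, 0.0, 1.0, out=perm)
    sdr = np.zeros(n_columns, dtype=int)
    sdr[active] = 1
    return sdr

print(spatial_pooler_step(rng.integers(0, 2, size=n_inputs)))
```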

    Analog Spiking Neuromorphic Circuits and Systems for Brain- and Nanotechnology-Inspired Cognitive Computing

    Human society now faces the grand challenge of satisfying the growing demand for computing power while keeping energy consumption sustainable. With CMOS technology scaling coming to an end, innovations are required to tackle this challenge in a radically different way. Inspired by the emerging understanding of computation in the brain and by nanotechnology-enabled, biologically plausible synaptic plasticity, neuromorphic computing architectures are being investigated. A neuromorphic chip that combines CMOS analog spiking neurons with nanoscale resistive random-access memory (RRAM) used as electronic synapses can provide massive neural-network parallelism, high density, and online learning capability, and hence paves the path towards energy-efficient real-time computing systems. However, existing silicon neuron approaches either are designed to faithfully reproduce biological neuron dynamics, which makes them incompatible with RRAM synapses, or require extensive peripheral circuitry to modulate each synapse, which leaves them deficient in learning capability. As a result, they forfeit most of the density advantage gained by adopting nanoscale devices and fail to realize a functional computing system. This dissertation describes novel hardware architectures and neuron circuit designs that synergistically assemble the fundamental elements of brain-inspired computing. Versatile CMOS spiking neurons are presented that combine integrate-and-fire behavior, the drive capability for dense passive RRAM synapses, dynamic biasing for adaptive power consumption, in situ spike-timing-dependent plasticity (STDP), and competitive learning in compact integrated-circuit modules. Real-world pattern learning and recognition tasks using the proposed architecture were demonstrated with circuit-level simulations. A test chip was implemented and fabricated to verify the proposed CMOS neuron and hardware architecture, and the subsequent chip measurements successfully proved the idea. The work described in this dissertation realizes a key building block for large-scale integration of spiking neural network hardware and thus serves as a stepping stone towards next-generation energy-efficient brain-inspired cognitive computing systems.
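
    As a software-level reference for the two mechanisms combined in the proposed neuron circuits, the sketch below pairs a leaky integrate-and-fire neuron with trace-based, pair-wise STDP on a single synapse. All constants and the single-synapse setup are illustrative assumptions, not parameters of the fabricated chip.

```python
# Leaky integrate-and-fire neuron with pair-based STDP on one synapse
# (illustrative sketch; constants are assumptions, not the chip's values).
import numpy as np

dt, tau_m, v_th, v_reset = 1.0, 20.0, 1.0, 0.0          # time step and membrane constants (ms)
tau_pre, tau_post, a_plus, a_minus = 20.0, 20.0, 0.01, 0.012

def lif_with_stdp(pre_spikes, w=0.5):
    """Drive one LIF neuron through a single plastic synapse and return the final weight."""
    v, x_pre, x_post = 0.0, 0.0, 0.0                    # membrane potential and STDP traces
    for pre in pre_spikes:
        v += dt / tau_m * (-v) + w * pre                 # leaky integration of the synaptic input
        post = 1.0 if v >= v_th else 0.0                 # fire on threshold crossing
        if post:
            v = v_reset
        # Exponentially decaying traces of recent pre- and post-synaptic spikes.
        x_pre += -dt / tau_pre * x_pre + pre
        x_post += -dt / tau_post * x_post + post
        # Pair-based STDP: pre-before-post potentiates, post-before-pre depresses.
        w = min(max(w + a_plus * x_pre * post - a_minus * x_post * pre, 0.0), 1.0)
    return w

rng = np.random.default_rng(1)
print(lif_with_stdp(rng.random(1000) < 0.2))             # ~200 Hz Poisson-like input for 1 s
```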

    Implementation of Associative Memory Learning in Mobile Robots Using Neuromorphic Computing

    Fear conditioning is a behavioral paradigm of learning to predict aversive events. It is a form of associative learning that links an undesirable stimulus (e.g., an electrical shock) with a neutral stimulus (e.g., a tone), resulting in a fear response (such as running away) to the originally neutral stimulus. The association of concurrent events is implemented by strengthening the synaptic connection between the corresponding neurons. In this paper, with an analogous methodology, we reproduce the classic fear conditioning experiment in rats using mobile robots and a neuromorphic system. In our design, the acceleration from a vibration platform substitutes for the undesirable stimulus applied to rats, while the brightness of light (dark vs. light) serves as the neutral stimulus, analogous to the neutral tone in rat experiments. The brightness of the light is processed with sparse coding on the Intel Loihi chip. The simulation and experimental results demonstrate that our neuromorphic robot, for the first time, reproduces the rat fear conditioning experiment with a mobile robot. The work exhibits a potential online learning paradigm that requires no labeled data: the mobile robot memorizes events directly by interacting with its surroundings, which is essentially different from data-driven methods.
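
    To make the sparse-coding step concrete, the sketch below encodes a scalar brightness value into a small set of active neurons using Gaussian tuning curves and k-winners-take-all selection, so that dark and light conditions map to distinct, sparse codes. This particular encoding scheme, the population size, and the tuning width are assumptions made for illustration, not the Loihi implementation used in the paper.

```python
# Sparse population coding of a scalar brightness value (illustrative sketch;
# the tuning-curve scheme and sizes are assumptions, not the Loihi design).
import numpy as np

n_neurons, k_active, sigma = 32, 3, 0.05
preferred = np.linspace(0.0, 1.0, n_neurons)     # each neuron prefers one brightness level

def encode_brightness(brightness):
    """Return a binary sparse code in which only the k best-tuned neurons fire."""
    response = np.exp(-((brightness - preferred) ** 2) / (2 * sigma ** 2))
    winners = np.argsort(response)[-k_active:]
    code = np.zeros(n_neurons, dtype=int)
    code[winners] = 1
    return code

dark, light = encode_brightness(0.1), encode_brightness(0.9)
print(dark.sum(), light.sum(), int((dark & light).sum()))   # sparse codes with no overlap
```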

    The Integration of Neuromorphic Computing in Autonomous Robotic Systems

    Deep Neural Networks (DNNs) have achieved strong performance on many cognitive tasks by training on large, labeled datasets. However, this approach struggles where data and energy are limited, such as in planetary robotics or edge computing [1]. In contrast to this data-heavy approach, animals demonstrate an innate ability to learn by interacting with their environment and forming associative memories among events and entities, a process known as associative learning [2-4]. For instance, rats in a T-maze learn to associate different stimuli with outcomes through exploration, without needing labeled data [5]. This learning paradigm is crucial to overcoming the challenges of deep learning in environments where data and energy are limited. Taking inspiration from this natural learning process, recent advancements [6, 7] have implemented associative learning in artificial systems. This work introduces a pioneering approach that integrates associative learning on an Unmanned Ground Vehicle (UGV) with neuromorphic hardware, specifically the XyloA2TestBoard from SynSense, to facilitate online learning. The system emulates standard associative learning, such as the spatial and memory learning observed in rats in a T-maze, without any pretraining or labeled datasets. The UGV, akin to a rat in a T-maze, autonomously learns the cause-and-effect relationships between stimuli, such as visual cues and vibration or audio and visual cues, and demonstrates the learned responses through movement. The neuromorphic robot, equipped with SynSense’s neuromorphic chip, processes audio signals with a specialized Spiking Neural Network (SNN) and neural assembly, employing the Hebbian learning rule to adjust synaptic weights throughout the learning period. The XyloA2TestBoard consumes little power (17.96 µW on average for the Analog Front End (AFE) logic and 213.94 µW for the IO circuitry), showing that neuromorphic chips can operate under tight energy budgets and offering a promising direction for advancing associative learning in artificial systems.
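
    One common way an analog front end turns an audio waveform into the spike events an SNN consumes is delta modulation, sketched below: an up or down spike is emitted whenever the signal moves by a fixed threshold. This scheme and its threshold are assumptions chosen for illustration; they do not describe SynSense’s actual AFE or the XyloA2TestBoard’s encoding.

```python
# Delta-modulation encoding of an audio waveform into up/down spike events
# (illustrative sketch; not SynSense's AFE design).
import numpy as np

def delta_encode(signal, threshold=0.05):
    """Emit a +1 (up) or -1 (down) spike whenever the signal moves by `threshold`."""
    spikes = np.zeros(len(signal), dtype=int)
    ref = signal[0]                                   # last level at which a spike was emitted
    for i, s in enumerate(signal[1:], start=1):
        if s - ref >= threshold:
            spikes[i], ref = 1, s
        elif ref - s >= threshold:
            spikes[i], ref = -1, s
    return spikes

t = np.linspace(0.0, 1.0, 1000)
audio = 0.5 * np.sin(2 * np.pi * 5 * t)               # toy 5 Hz tone standing in for audio
events = delta_encode(audio)
print(int(np.count_nonzero(events)))                  # number of spike events produced
```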

    Homeostatic Fault Tolerance in Spiking Neural Networks : A Dynamic Hardware Perspective

    Fault tolerance is a remarkable feature of biological systems, and their self-repair capability has influenced modern electronic systems. In this paper, we propose a novel plastic neural network model that establishes homeostasis in a spiking neural network. Combining this plasticity with inspiration from inhibitory interneurons, we develop a fault-resilient robotic controller, implemented on an FPGA, that performs an obstacle avoidance task. We demonstrate the proposed methodology on a spiking neural network implemented on a Xilinx Artix-7 FPGA. The system is able to maintain stable firing (within a ±10% tolerance) after losing up to 75% of the original synaptic inputs to a neuron. Our repair mechanism has minimal hardware overhead: the tuning circuit (repair unit) consumes only three slices per neuron to implement a threshold-voltage-based homeostatic fault-tolerant unit. The overall architecture has a minimal impact on power consumption and therefore supports scalable implementations. This paper opens a novel way of implementing the behavior of naturally fault-tolerant systems in hardware by establishing homeostatic self-repair behavior.
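
    The homeostatic idea can be illustrated with a simple rate model, sketched below: a neuron’s firing threshold is continually nudged so that its firing rate tracks a target, which lets the rate recover even after 75% of its synaptic inputs are removed. The rate model, constants, and random drive are illustrative assumptions, not the paper’s FPGA circuit.

```python
# Threshold-based firing-rate homeostasis under synapse loss
# (illustrative rate-model sketch; constants are assumptions, not the FPGA design).
import numpy as np

rng = np.random.default_rng(2)
n_syn, target_rate, eta = 16, 0.2, 0.05

def run(steps=2000, fault_at=1000, fraction_lost=0.75):
    w = np.ones(n_syn)                                 # synaptic weights, all initially intact
    threshold = 4.0                                    # firing threshold tuned by the homeostatic unit
    rates = []
    for t in range(steps):
        if t == fault_at:
            w[: int(fraction_lost * n_syn)] = 0.0      # fault: lose 75% of the synaptic inputs
        drive = w @ (rng.random(n_syn) < 0.4)          # random presynaptic activity this step
        fired = 1.0 if drive >= threshold else 0.0
        threshold += eta * (fired - target_rate)       # nudge the threshold toward the target rate
        rates.append(fired)
    # Mean firing rate just before the fault vs. after homeostasis re-stabilizes it.
    return np.mean(rates[fault_at - 200:fault_at]), np.mean(rates[-200:])

print(run())
```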

    Low Power Memory/Memristor Devices and Systems

    This reprint focuses on achieving low-power computation using memristive devices. It was designed as a convenient reference point: it contains a mix of techniques ranging from the fundamental manufacturing of memristive devices all the way to applications such as physically unclonable functions, and it also covers perspectives on, e.g., in-memory computing, which is inextricably linked with emerging memory devices such as memristors. Finally, the reprint contains a few articles representing how other communities (from conventional CMOS design to photonics) are fighting on their own fronts in the quest towards low-power computation, as a comparison with the memristor literature. We hope that readers will enjoy discovering the articles within.

    Advanced CMOS Integrated Circuit Design and Application

    The recent development of various application systems and platforms, such as 5G, B5G, 6G, and IoT, builds on advances in CMOS integrated circuit (IC) technology that enable the implementation of high-performance chipsets. In addition to developments in the traditional fields of analog and digital integrated circuits, CMOS IC design and its application to high-power and high-frequency operation, previously thought possible only with compound semiconductor technology, is a core technology driving rapid industrial development. This book aims to highlight advances in all aspects of CMOS integrated circuit design and applications without discriminating between operating frequencies, output powers, or the analog/digital domains. Specific topics in the book include: next-generation CMOS circuit design and application; CMOS RF/microwave/millimeter-wave/terahertz-wave integrated circuits and systems; CMOS integrated circuits for wireless or wired systems and applications, such as converters, sensors, interfaces, and frequency synthesizers/generators/rectifiers; and algorithms and signal-processing methods to improve the performance of CMOS circuits and systems.