    Neuromorphic Computing Applications in Robotics

    Deep learning achieves remarkable success by training on massive labeled datasets. However, this dependence on large datasets limits the feasibility of deep learning in edge computing scenarios, where data are scarce. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationships between events and objects, a paradigm referred to as associative learning. Successfully implementing associative learning gives artificial systems a self-learning scheme analogous to that of animals, addressing these challenges of deep learning. Current state-of-the-art implementations of associative memory are limited to small-scale, offline simulations. This work therefore implements associative memory on an Unmanned Ground Vehicle (UGV) with neuromorphic hardware, specifically Intel’s Loihi, in an online learning scenario. The system emulates classic associative learning in rats, with the UGV taking the place of the rat. Specifically, it reproduces fear conditioning with no pretraining procedure or labeled datasets: the UGV autonomously learns the cause-and-effect relationship between a light stimulus and a vibration stimulus and exhibits a movement response that demonstrates the memorization. Hebbian learning dynamics update the synaptic weights during the associative learning process. The Intel Loihi chip is integrated into this online learning system to process visual signals with a specialized neural assembly. While processing, Loihi’s average power usage is 30 mW for computing logic and 29 mW for memory.
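The Hebbian conditioning described above can be illustrated with a minimal sketch. This is not the paper's implementation; the learning rate, threshold, and function names are hypothetical. A fixed unconditioned-stimulus (US, e.g. vibration) weight always triggers the response, while the conditioned-stimulus (CS, e.g. light) weight starts near zero and is strengthened whenever CS activity coincides with the response, so the CS alone eventually evokes the response.

```python
# Illustrative Hebbian conditioning sketch (hypothetical constants,
# not the authors' Loihi implementation).
ETA = 0.2          # learning rate
W_US = 1.0         # fixed US -> response weight (strong enough to fire)
THRESHOLD = 0.5    # response fires when total drive exceeds this

w_cs = 0.0         # plastic CS -> response weight, learned online

def trial(cs: float, us: float) -> bool:
    """Run one conditioning trial; return True if the response fires."""
    global w_cs
    drive = w_cs * cs + W_US * us
    fired = drive > THRESHOLD
    # Hebbian update: strengthen w_cs when CS and response co-occur.
    if fired:
        w_cs += ETA * cs
    return fired

# Pairing phase: light (CS) and vibration (US) presented together.
for _ in range(5):
    trial(cs=1.0, us=1.0)

# Test phase: the light alone now evokes the conditioned response.
print(trial(cs=1.0, us=0.0))  # True after enough pairings
```

After five pairings the CS weight alone exceeds the firing threshold, which is the cause-and-effect memorization the UGV demonstrates with its movement response.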

    The Integration of Neuromorphic Computing in Autonomous Robotic Systems

    Deep Neural Networks (DNNs) have achieved remarkable progress on many cognitive tasks by training on large, labeled datasets. However, this approach struggles where data and energy are limited, such as in planetary robotics or edge computing [1]. In contrast to this data-heavy approach, animals demonstrate an innate ability to learn by interacting with their environment and forming associative memories among events and entities, a process known as associative learning [2-4]. For instance, rats in a T-maze learn to associate different stimuli with outcomes through exploration, without needing labeled data [5]. This learning paradigm is crucial to overcoming the challenges of deep learning in environments where data and energy are limited. Taking inspiration from this natural learning process, recent advances [6, 7] have implemented associative learning in artificial systems. This work introduces a pioneering approach that integrates associative learning on an Unmanned Ground Vehicle (UGV) with neuromorphic hardware, specifically the XyloA2TestBoard from SynSense, to enable online learning. The system reproduces standard associative learning, such as the spatial and memory learning observed in rats in a T-maze, without any pretraining or labeled datasets. The UGV, akin to a rat in a T-maze, autonomously learns the cause-and-effect relationships between stimuli, such as visual cues paired with vibration or audio paired with visual cues, and demonstrates the learned responses through movement. The neuromorphic robot, equipped with SynSense’s neuromorphic chip, processes audio signals with a specialized Spiking Neural Network (SNN) and neural assembly, employing the Hebbian learning rule to adjust synaptic weights throughout the learning period. The XyloA2TestBoard consumes little power (on average, 17.96 µW for the logic Analog Front End (AFE) and 213.94 µW for the IO circuitry), showing that neuromorphic chips can operate well in energy-constrained settings and offering a promising direction for advancing associative learning in artificial systems.
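The Spiking Neural Network mentioned above is built from spiking units such as leaky integrate-and-fire (LIF) neurons. The sketch below is a generic LIF simulation for illustration only, not SynSense's Xylo neuron model; the time constant, threshold, and drive values are hypothetical.

```python
import numpy as np

# Generic leaky integrate-and-fire (LIF) neuron sketch
# (hypothetical parameters, not the Xylo hardware model).
TAU = 20.0      # membrane time constant (in simulation steps)
DT = 1.0        # simulation time step
V_TH = 1.0      # spike threshold
V_RESET = 0.0   # membrane potential after a spike

def lif_run(input_current: np.ndarray) -> np.ndarray:
    """Simulate one LIF neuron; return a 0/1 spike train."""
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward the input drive.
        v += (DT / TAU) * (-v + i_in)
        if v >= V_TH:
            spikes[t] = 1.0
            v = V_RESET
    return spikes

# A constant supra-threshold drive produces a regular spike train.
spk = lif_run(np.full(100, 1.5))
print(int(spk.sum()), "spikes in 100 steps")
```

Because such a neuron only performs work when spikes occur, event-driven hardware implementing it can idle between spikes, which is the basis for the microwatt-level power figures reported above.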