
    Neuromorphic Computing Applications in Robotics

    Deep learning achieves remarkable success through training on massive labeled datasets. However, this heavy dependence on data impedes the feasibility of deep learning in edge-computing scenarios, which suffer from data scarcity. Rather than relying on labeled data, animals learn by interacting with their surroundings and memorizing the relationships between events and objects, a paradigm referred to as associative learning. A successful implementation of associative learning imitates the self-learning schemes of animals and thereby resolves these challenges of deep learning. Current state-of-the-art implementations of associative memory are limited to small-scale, offline simulations. This work therefore implements associative memory on an Unmanned Ground Vehicle (UGV) with neuromorphic hardware, specifically Intel’s Loihi, in an online learning scenario. The system emulates classic associative learning in rats, with the UGV taking the place of the rat; specifically, it reproduces fear conditioning with no pretraining procedure or labeled datasets. The UGV autonomously learns the cause-and-effect relationship between a light stimulus and a vibration stimulus and exhibits a movement response to demonstrate the memorization. Hebbian learning dynamics update the synaptic weights during the associative learning process. The Intel Loihi chip is integrated into this online learning system to process visual signals with a specialized neural assembly. While processing, the Loihi’s average power usage is 30 mW for computing logic and 29 mW for memory.
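    The Hebbian update behind this kind of fear conditioning can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: all names, rates, and thresholds are assumed for the example.

```python
# Hypothetical sketch of Hebbian associative learning for a
# light (CS) / vibration (US) pairing; parameters are illustrative.
eta = 0.1          # learning rate (assumed)
threshold = 0.5    # response-neuron firing threshold (assumed)
w_cs = 0.0         # synaptic weight from the light input to the response neuron

def trial(light, vibration, w):
    """One conditioning trial; the US (vibration) always drives the response."""
    pre = 1.0 if light else 0.0
    post = 1.0 if (vibration or pre * w > threshold) else 0.0
    # Hebbian rule: strengthen the synapse when pre and post are co-active
    return w + eta * pre * post, post

# Pairing phase: light and vibration are presented together
for _ in range(10):
    w_cs, _ = trial(light=True, vibration=True, w=w_cs)

# Test phase: light alone now evokes the conditioned (movement) response
w_cs, response = trial(light=True, vibration=False, w=w_cs)
```

    After repeated pairings the CS weight exceeds the firing threshold, so the light alone triggers the response, which is the essence of the conditioning described above.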

    Short-term plasticity as cause-effect hypothesis testing in distal reward learning

    Asynchrony, overlaps, and delays in sensory-motor signals introduce ambiguity as to which stimuli, actions, and rewards are causally related. Only the repetition of reward episodes helps distinguish true cause-effect relationships from coincidental occurrences. In the model proposed here, a novel plasticity rule employs short- and long-term changes to evaluate hypotheses about cause-effect relationships. Transient weights represent hypotheses that are consolidated into long-term memory only when they consistently predict or cause future rewards. The main objective of the model is to preserve existing network topologies when learning from ambiguous information flows. Learning is further improved by biasing the exploration of the stimulus-response space toward actions that in the past occurred before rewards. The model indicates under which conditions beliefs can be consolidated in long-term memory, suggests a solution to the plasticity-stability dilemma, and proposes an interpretation of the role of short-term plasticity. Comment: Biological Cybernetics, September 201
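    The transient-versus-consolidated mechanism can be illustrated with a toy rule. This is not the paper's exact plasticity rule; the decay, boost, and consolidation threshold below are assumptions chosen only to show the hypothesis-testing idea.

```python
# Illustrative sketch: a transient weight tracks a cause-effect hypothesis
# and is consolidated into long-term memory only when it repeatedly
# precedes reward. All parameters are assumed, not from the paper.
decay = 0.8            # transient-weight decay per step (assumed)
boost = 0.5            # increment when the synapse precedes a reward
consolidate_at = 1.5   # evidence threshold for consolidation (assumed)

w_short = 0.0          # transient (hypothesis) component
w_long = 0.0           # consolidated long-term component

def step(preceded_reward):
    global w_short, w_long
    w_short *= decay                 # hypotheses fade unless confirmed
    if preceded_reward:
        w_short += boost             # evidence for the hypothesis
    if w_short > consolidate_at:     # consistent prediction:
        w_long += w_short            # consolidate into long-term memory
        w_short = 0.0

# A single coincidental reward episode decays away
step(True); step(False); step(False)
coincidence = w_long                 # remains 0.0

# Repeated reward episodes accumulate evidence and are consolidated
for _ in range(10):
    step(True)
consolidated = w_long                # now positive
```

    Because consolidation requires sustained evidence, one-off coincidences never reach long-term memory, which is how the model preserves existing network topology under ambiguous input.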

    Olfactory learning in Drosophila

    Animals are able to form associative memories and benefit from past experience. In classical conditioning, an animal is trained to associate an initially neutral stimulus with a stimulus that triggers an innate response by pairing the two. The neutral stimulus is commonly referred to as the conditioned stimulus (CS) and the reinforcing stimulus as the unconditioned stimulus (US). The underlying neuronal mechanisms and structures are an intensely investigated topic, and the fruit fly Drosophila melanogaster is a prime model animal for investigating the mechanisms of learning. In this thesis we propose fundamental circuit motifs that explain aspects of aversive olfactory learning as observed in the fruit fly. Changing parameters of the learning paradigm affects the behavioral outcome in different ways. The relative timing between CS and US affects the hedonic value of the CS: reversing the order changes the behavioral response from conditioned avoidance to conditioned approach. We propose a timing-dependent biochemical reaction cascade that can account for this phenomenon. In addition to forming odor-specific memories, flies are able to associate a specific odor intensity; in aversive olfactory conditioning they show less avoidance to lower and higher intensities of the same odor. However, the layout of the first two olfactory processing layers does not support this kind of learning, owing to a nested representation of odor intensity. We propose a basic circuit motif that transforms the nested monotonic intensity representation into a non-monotonic representation that supports intensity-specific learning. Flies are also able to bridge a stimulus-free interval between CS and US to form an association, but it is so far unclear where the stimulus trace of the CS is represented in the fly's nervous system. We analyze recordings from the first three layers of olfactory processing with an advanced machine learning approach and argue that third-order neurons are likely to harbor the stimulus trace.
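    One way a circuit can turn a nested monotonic intensity code into a non-monotonic, intensity-tuned one is by subtracting a high-threshold monotonic unit from a low-threshold one. The sketch below is a generic illustration of that motif, not the thesis's specific model; thresholds and gains are assumed.

```python
import math

# Hypothetical motif: feedforward inhibition from a high-threshold unit
# converts two nested monotonic responses into a bell-shaped tuning curve.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def low_threshold(intensity):    # responds from low intensities onward
    return sigmoid(4 * (intensity - 0.3))

def high_threshold(intensity):   # recruited only at high intensities
    return sigmoid(4 * (intensity - 0.7))

def tuned(intensity):
    # excitation minus inhibition, rectified at zero
    return max(0.0, low_threshold(intensity) - high_threshold(intensity))

# The tuned unit now peaks at intermediate intensity and falls off
# on both sides, supporting intensity-specific learning.
curve = {i / 10: round(tuned(i / 10), 3) for i in range(11)}
```

    A downstream synapse onto such a unit can then be strengthened for one intensity without generalizing to stronger or weaker presentations of the same odor.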

    From Biological Synapses to "Intelligent" Robots

    This selective review explores biologically inspired learning as a model for intelligent robot control and sensing technology, on the basis of specific examples. Hebbian synaptic learning is discussed as a functionally relevant model for machine learning and intelligence, illustrated with examples from the highly plastic biological neural networks of invertebrates and vertebrates. We highlight its potential for adaptive learning and control without supervision, for generating functional complexity, and for control architectures based on self-organization. Learning without prior knowledge, based on excitatory and inhibitory neural mechanisms, accounts for the process through which survival-relevant or task-relevant representations are either reinforced or suppressed. These basic mechanisms of unsupervised biological learning drive synaptic plasticity and adaptation for behavioral success in living brains with different levels of complexity. The insights collected here point toward the Hebbian model as a choice solution for “intelligent” robotics and sensor systems.
    Keywords: Hebbian learning; synaptic plasticity; neural networks; self-organization; brain; reinforcement; sensory processing; robot control
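    Plain Hebbian growth is unbounded, so self-organizing variants add normalization. As one concrete, well-known instance of unsupervised Hebbian self-organization (Oja's rule, offered here as an illustration rather than anything specific to this review), a single unit's weights converge onto the dominant direction of its input statistics; all parameters below are assumed.

```python
import numpy as np

# Oja's rule: Hebbian term y*x minus an activity-dependent decay y^2*w,
# which keeps the weight vector bounded while it self-organizes onto
# the input's principal direction. Parameters are illustrative.
rng = np.random.default_rng(0)
eta = 0.02
w = rng.normal(size=2)

# Input stream with most variance along the direction (1, 1)
for _ in range(2000):
    x = rng.normal(size=2) + rng.normal() * np.array([1.0, 1.0])
    y = w @ x                    # postsynaptic activity
    w += eta * y * (x - y * w)   # Hebbian growth with built-in normalization

# Alignment of the learned weight vector with the dominant input direction
alignment = abs(w @ np.array([1.0, 1.0])) / (np.linalg.norm(w) * np.sqrt(2))
```

    The decay term plays the suppressive role discussed above: components of the representation that do not contribute to the dominant (task-relevant) structure are driven down rather than reinforced.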

    From memory to processing : a reaction-diffusion approach to neuromorphic computing

    The goal of this research is to bridge the gap between the physiological brain and mathematically based neuromorphic computing models. The reaction-diffusion method was chosen because it can naturally exhibit properties, such as propagation of excitation, that are seen in the brain but not in current neuromorphic computing models. A reaction-diffusion memory unit was created to demonstrate the key memory functions of sensitization, habituation, and dishabituation, while a reaction-diffusion brain module was established to perform the specific processing task of single-digit binary addition. The results from both approaches were consistent with existing literature detailing physiological memory and processing in the human brain.
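    The "propagation of excitation" property that motivates the reaction-diffusion choice can be seen in a minimal example. The following Fisher-KPP-style sketch is illustrative only (it is not the thesis's model): diffusion plus a logistic reaction term produces a traveling wave of excitation; grid size, rates, and step counts are assumed.

```python
import numpy as np

# Minimal 1D reaction-diffusion front: u_t = D * u_xx + k * u * (1 - u).
# Excitation seeded at the left edge propagates rightward as a wave.
N, D, k, steps = 200, 0.2, 0.5, 200   # illustrative parameters
u = np.zeros(N)
u[:5] = 1.0                            # excite the left edge

for _ in range(steps):
    lap = np.empty_like(u)
    lap[1:-1] = u[:-2] + u[2:] - 2 * u[1:-1]   # discrete Laplacian (diffusion)
    lap[0] = u[1] - u[0]                       # no-flux boundaries
    lap[-1] = u[-2] - u[-1]
    u = u + D * lap + k * u * (1 - u)          # diffusion + logistic reaction
    u = np.clip(u, 0.0, 1.0)

front = int(np.argmax(u < 0.5))   # position of the traveling excitation front
```

    Behind the front the medium saturates at full excitation while the front itself advances at a roughly constant speed, the qualitative brain-like behavior that standard neuromorphic models lack.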