    Deep Learning in Neuronal and Neuromorphic Systems

    The ever-increasing compute and energy requirements in the field of deep learning have caused a rising interest in the development of novel, more energy-efficient computing paradigms to support the advancement of artificial intelligence systems. Neuromorphic architectures are promising candidates, as they aim to mimic the functional mechanisms, and thereby inherit the efficiency, of their archetype: the brain. However, even though neuromorphics and deep learning are, at their roots, inspired by the brain, they are not directly compatible with each other. In this thesis, we aim to bridge this gap by realizing error backpropagation, the central algorithm behind deep learning, on neuromorphic platforms. We start by introducing the Yin-Yang classification dataset, a tool for neuromorphic and algorithmic prototyping, as a prerequisite for the other work presented. This novel dataset is designed not to require excessive hardware or computing resources to be solved. At the same time, it is challenging enough to be useful for debugging and testing by revealing potential algorithmic or implementation flaws. We then explore two different approaches to implementing error backpropagation on neuromorphic systems. Our first solution provides an exact algorithm for error backpropagation on the first spike times of leaky integrate-and-fire neurons, one of the most common neuron models implemented in neuromorphic chips. Its neuromorphic feasibility is demonstrated by deployment on the BrainScaleS-2 chip, yielding competitive results with respect to both task performance and efficiency. The second approach is based on a biologically plausible variant of error backpropagation realized by a dendritic microcircuit model. We assess this model with respect to its practical feasibility, extend it to improve learning performance, and address the obstacles to neuromorphic implementation: we introduce the Latent Equilibrium mechanism to solve the relaxation problem introduced by slow neuron dynamics; our Phaseless Alignment Learning method allows us to learn feedback weights in the network and thus avoid the weight transport problem; and finally, we explore two methods to port the rate-based model onto an event-based neuromorphic system. The presented work showcases two ways of uniting the powerful and flexible learning mechanisms of deep learning with energy-efficient neuromorphic systems, thus illustrating the potential of a convergence of artificial intelligence and neuromorphic engineering research.
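
    The Yin-Yang dataset is only described at this high level here. As a rough, hedged illustration of the kind of task it poses, the sketch below samples labelled 2D points using the geometry of a standard yin-yang figure (three classes: yin, yang, and the dots). Function names, radii, and class conventions are assumptions made for illustration and do not reproduce the reference implementation published with this work.

        import numpy as np

        def yinyang_class(x, y, r_big=1.0, r_dot=0.15):
            """Label a point as 0 (yin), 1 (yang) or 2 (dot) using standard
            yin-yang geometry: a big disc of radius r_big split into two lobes
            of radius r_big/2, each containing a small dot of radius r_dot.
            Illustrative sketch only, not the published dataset definition."""
            top, bot = (0.0, r_big / 2), (0.0, -r_big / 2)  # lobe centers
            d_top = np.hypot(x - top[0], y - top[1])
            d_bot = np.hypot(x - bot[0], y - bot[1])
            if d_top <= r_dot or d_bot <= r_dot:            # inside a small dot
                return 2
            in_top_lobe = d_top <= r_big / 2
            in_bot_lobe = d_bot <= r_big / 2
            # the right half is yang except where the bottom lobe bulges in;
            # the top lobe bulges yang into the left half
            is_yang = (x > 0 and not in_bot_lobe) or in_top_lobe
            return int(is_yang)

        def sample_yinyang(n, seed=0, r_big=1.0):
            """Rejection-sample n labelled points from the disc of radius r_big."""
            rng = np.random.default_rng(seed)
            pts, labels = [], []
            while len(pts) < n:
                x, y = rng.uniform(-r_big, r_big, size=2)
                if x * x + y * y <= r_big * r_big:
                    pts.append((x, y))
                    labels.append(yinyang_class(x, y, r_big))
            return np.array(pts), np.array(labels)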
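
    For context on the first approach, the following is a minimal sketch of time-to-first-spike coding with a current-based leaky integrate-and-fire neuron: the membrane potential integrates exponentially decaying synaptic currents, and the neuron's output is the time of its first threshold crossing. The model, parameter names, and values are illustrative assumptions and do not correspond to the analytical formulation or the BrainScaleS-2 configuration used in the thesis.

        def first_spike_time(weights, input_times, tau_mem=10.0, tau_syn=5.0,
                             v_thresh=1.0, t_max=50.0, dt=0.01):
            """Euler-integrate a toy current-based LIF neuron driven by input
            spikes and return the time of its first output spike
            (None if it stays silent)."""
            v, i_syn = 0.0, 0.0
            for step in range(int(t_max / dt)):
                t = step * dt
                # each presynaptic spike adds its weight to the synaptic current
                for w, t_in in zip(weights, input_times):
                    if abs(t - t_in) < dt / 2:
                        i_syn += w
                i_syn += dt * (-i_syn / tau_syn)      # exponential synaptic decay
                v += dt * (-v / tau_mem + i_syn)      # leaky membrane integration
                if v >= v_thresh:
                    return t                          # first threshold crossing
            return None

        # example: the earlier/stronger the inputs, the earlier the output spike
        print(first_spike_time(weights=[0.8, 0.5], input_times=[1.0, 3.0]))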