2 research outputs found

    The importance of space and time in neuromorphic cognitive agents

    Artificial neural networks and computational neuroscience models have made tremendous progress, allowing computers to achieve impressive results in artificial intelligence (AI) applications such as image recognition, natural language processing, and autonomous driving. Despite this remarkable progress, biological neural systems consume orders of magnitude less energy than today's artificial neural networks and are far more agile and adaptive. This efficiency and adaptivity gap is partially explained by the computing substrate of biological neural processing systems, which is fundamentally different from the way today's computers are built. Biological systems use in-memory computing elements operating in a massively parallel fashion rather than time-multiplexed computing units that are reused sequentially. Moreover, the activity of biological neurons follows continuous-time dynamics in real, physical time, instead of operating on discrete temporal cycles abstracted away from real time. Here, we present neuromorphic processing devices that emulate the biological style of processing by using parallel instances of mixed-signal analog/digital circuits that operate in real time. We argue that this approach brings significant advantages in computational efficiency. We show examples of embodied neuromorphic agents that use such devices to interact with the environment and exhibit autonomous learning.
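    The abstract contrasts continuous-time, physically instantiated neural dynamics with clocked, time-multiplexed digital computation. The sketch below is a minimal, hypothetical illustration of that idea using a standard leaky integrate-and-fire neuron integrated in small time steps; the model, parameter values, and function names are illustrative assumptions and do not describe the circuits presented in the paper.

```python
# Hypothetical illustration: a leaky integrate-and-fire (LIF) neuron whose
# membrane potential follows the continuous-time equation
#   tau_m * dV/dt = -(V - V_rest) + R * I(t),
# integrated here with a small Euler step to approximate "real, physical time".
# Parameter values are illustrative, not taken from the paper.

tau_m = 20e-3      # membrane time constant (s)
v_rest = -70e-3    # resting potential (V)
v_thresh = -50e-3  # spike threshold (V)
v_reset = -65e-3   # reset potential (V)
r_m = 10e6         # membrane resistance (ohm)
dt = 1e-4          # integration step (s)

def simulate_lif(input_current, duration=0.5):
    """Integrate the LIF equation over `duration` seconds and return spike times."""
    n_steps = int(duration / dt)
    v = v_rest
    spikes = []
    for step in range(n_steps):
        t = step * dt
        i_in = input_current(t)
        # Euler update of the continuous-time membrane equation.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau_m)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant 2.5 nA input drives regular spiking.
spike_times = simulate_lif(lambda t: 2.5e-9, duration=0.2)
print(f"{len(spike_times)} spikes, first at {spike_times[0] * 1e3:.1f} ms")
```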

    Efficient Synapse Memory Structure for Reconfigurable Digital Neuromorphic Hardware

    Spiking Neural Networks (SNNs) have high potential to process information efficiently using binary spikes and time-delay information. Recently, dedicated SNN hardware accelerators with on-chip synapse memory arrays have been gaining interest as a way to overcome the limitations of running software-based SNNs on conventional von Neumann machines. In this paper, we propose an efficient synapse memory structure that reduces hardware resource usage while maintaining performance and network size. In the proposed design, the synapse memory size is reduced by applying presynaptic weight scaling. In addition, axonal/neuronal offsets are applied to implement multiple layers on a single memory array. Finally, a transposable memory addressing scheme is presented for faster spike-timing-dependent plasticity (STDP) learning. We implemented an SNN ASIC chip based on the proposed scheme in 65 nm CMOS technology. Chip measurements showed that the proposed design provides up to a 200× speedup over a CPU while consuming 53 mW at 100 MHz, with an energy efficiency of 15.2 pJ/SOP.
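    The abstract names three mechanisms: presynaptic weight scaling, axonal/neuronal offsets that map multiple layers onto one memory array, and transposable addressing for fast STDP access. Below is a minimal Python sketch of one plausible reading of those ideas; the `SynapseMemory` class, the row/column layout, and all parameters are hypothetical illustrations and not the chip's actual memory organization.

```python
import numpy as np

# Hypothetical sketch: several SNN layers share one synapse memory array via
# per-layer axonal (row) and neuronal (column) offsets; weights are stored at
# low precision and rescaled on readout (presynaptic weight scaling); column
# reads mimic the "transposed" access pattern STDP needs on postsynaptic spikes.

class SynapseMemory:
    def __init__(self, layer_sizes, weight_bits=4):
        # layer_sizes: list of (num_pre, num_post) pairs, one per layer.
        self.layer_sizes = layer_sizes
        self.axon_offsets = []    # row offset of each layer in the shared array
        self.neuron_offsets = []  # column offset of each layer
        rows = cols = 0
        for n_pre, n_post in layer_sizes:
            self.axon_offsets.append(rows)
            self.neuron_offsets.append(cols)
            rows += n_pre
            cols += n_post
        # One flat array holds every layer's synapses as low-precision weights.
        self.mem = np.zeros((rows, cols), dtype=np.int8)
        self.max_weight = 2 ** (weight_bits - 1) - 1
        # Per-presynaptic-neuron scale factors (presynaptic weight scaling).
        self.pre_scale = np.ones(rows, dtype=np.float32)

    def write(self, layer, pre, post, weight):
        """Store a quantized weight at the layer's offset within the shared array."""
        r = self.axon_offsets[layer] + pre
        c = self.neuron_offsets[layer] + post
        q = int(round(weight / float(self.pre_scale[r])))
        self.mem[r, c] = max(-self.max_weight, min(self.max_weight, q))

    def read_row(self, layer, pre):
        """Forward access: all outgoing weights of one presynaptic neuron."""
        r = self.axon_offsets[layer] + pre
        c0 = self.neuron_offsets[layer]
        n_post = self.layer_sizes[layer][1]
        return self.mem[r, c0:c0 + n_post].astype(np.float32) * self.pre_scale[r]

    def read_col(self, layer, post):
        """'Transposed' access: all incoming weights of one postsynaptic neuron,
        the pattern needed when STDP is triggered by a postsynaptic spike."""
        c = self.neuron_offsets[layer] + post
        r0 = self.axon_offsets[layer]
        rows = slice(r0, r0 + self.layer_sizes[layer][0])
        return self.mem[rows, c].astype(np.float32) * self.pre_scale[rows]

# Two layers (4 -> 3 and 3 -> 2 neurons) share a single synapse memory array.
mem = SynapseMemory([(4, 3), (3, 2)])
mem.write(layer=0, pre=1, post=2, weight=5)
print(mem.read_row(layer=0, pre=1))   # outgoing weights of pre-neuron 1
print(mem.read_col(layer=0, post=2))  # incoming weights of post-neuron 2
```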