A 128-channel real-time VPDNN stimulation system for a visual cortical neuroprosthesis

Abstract

With recent progress in the development of large-scale microelectrode arrays, cortical neuroprostheses supporting hundreds of electrodes will be viable in the near future. We describe work on building a visual stimulation system that receives camera input images and produces stimulation patterns for driving a large set of electrodes. The system, aimed at restoring visual perception in visually impaired subjects, consists of a convolutional neural network (CNN) FPGA accelerator and a recording and stimulation application-specific integrated circuit (ASIC) that produces the stimulation patterns. The FPGA accelerator, VPDNN, runs a visual prosthesis network whose output is used to create stimulation patterns, which the ASIC then converts into current pulses to drive a multi-electrode array. The accelerator exploits spatial sparsity and reduced-bit-precision parameters to lower computation, memory, and power requirements for portability. Experimental results show that the 94.5K-parameter, 14-layer CNN, receiving a 128 × 128 input, achieves an inference rate of 83 frames per second (FPS) while drawing only 0.1 W of incremental power, at least 10× lower than that measured on a Jetson Nano. The ASIC adds a maximum delay of 2 ms; however, this does not reduce the FPS thanks to double-buffered memory.

Index Terms—Visual prosthesis, convolutional neural network, FPGA accelerator, stimulation and recording ASIC
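As a quick sanity check on the throughput claim, the sketch below uses only the figures reported in the abstract (83 FPS inference, 2 ms maximum ASIC delay) to show why double buffering keeps the ASIC off the critical path: with the two stages overlapped, throughput is set by the slower stage, not by the sum of the two latencies. The variable names are illustrative and not taken from the system's implementation.

```python
# Back-of-the-envelope check (illustrative only): with double-buffered memory,
# the ASIC converts frame N while VPDNN infers frame N+1, so the pipeline's
# period is the longest single stage rather than the sum of both stages.

FPS = 83                        # reported VPDNN inference rate
frame_period_ms = 1000 / FPS    # ~12.0 ms per inferred frame
asic_delay_ms = 2.0             # reported maximum ASIC conversion delay

# Pipelined (double-buffered) period is bounded by the slower stage.
pipelined_period_ms = max(frame_period_ms, asic_delay_ms)

print(f"frame period: {frame_period_ms:.1f} ms, ASIC delay: {asic_delay_ms:.1f} ms")
print(f"pipelined rate: {1000 / pipelined_period_ms:.0f} FPS")  # still ~83 FPS
```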
