3 research outputs found

    Channel response-aware photonic neural network accelerators for high-speed inference through bandwidth-limited optics

    No full text
    Photonic neural network accelerators (PNNAs) have lately been brought into the spotlight as a new class of custom hardware that can leverage the maturity of photonic integration to address the low-energy and computational power requirements of deep learning (DL) workloads. Transferring the high-speed credentials of photonic circuitry into analogue neuromorphic computing, however, necessitates a new set of DL training methods aligned with the characteristics of the analogue photonic hardware. Herein, we present a novel channel response-aware (CRA) DL architecture that addresses the challenge of implementing high compute rates on bandwidth-limited photonic devices by incorporating their frequency response into the training procedure. The proposed architecture was validated both in software and experimentally by implementing the output layer of a neural network (NN) that classifies images of the MNIST dataset on an integrated SiPho coherent linear neuron (COLN) with a 3 dB channel bandwidth of 7 GHz. A comparative analysis between the baseline and CRA models at 20, 25 and 32 GMAC/sec/axon revealed respective experimental accuracies of 98.5%, 97.3% and 92.1% for the CRA model, outperforming the baseline model by 7.9%, 12.3% and 15.6%, respectively.

    25GMAC/sec/axon photonic neural networks with 7GHz bandwidth optics through channel response-aware training

    No full text
    We present a channel response-aware Photonic Neural Network (PNN) and experimentally demonstrate its resilience to Inter-Symbol Interference (ISI) when implemented on an integrated neuron. The trained PNN model performs at 25 GMAC/sec/axon using only 7 GHz-bandwidth photonic axons, with 97.37% accuracy on the MNIST dataset.

    Neuromorphic silicon photonics and hardware-aware deep learning for high-speed inference

    No full text
    The relentless growth of Artificial Intelligence (AI) workloads has fueled the drive towards non-von Neumann architectures and custom computing hardware. Neuromorphic photonic engines aspire to synergize the low-power and high-bandwidth credentials of light-based deployments with novel architectures, towards surpassing the computing performance of their electronic counterparts. In this paper, we review recent progress in integrated photonic neuromorphic architectures and analyze the architectural and photonic hardware-based factors that limit their performance. Subsequently, we present our approach towards transforming silicon coherent neuromorphic layouts into high-speed and high-accuracy Deep Learning (DL) engines by combining robust architectures with hardware-aware DL training. Circuit robustness is ensured through a crossbar layout that circumvents the insertion loss and fidelity constraints of state-of-the-art linear optical designs. Concurrently, we employ DL training models adapted to the underlying photonic hardware, incorporating noise and bandwidth limitations together with the supported activation function directly into Neural Network (NN) training. We experimentally validate the high-speed and high-accuracy advantages of hardware-aware DL models combined with robust architectures through a SiPho prototype implementing a single column of a 4:4 photonic crossbar. This was utilized as the penultimate hidden layer of an NN, revealing up to 5.93% accuracy improvement at 5 GMAC/sec/axon when noise-aware training is enforced, and allowing accuracies of 99.15% and 79.8% for the MNIST and CIFAR-10 classification tasks, respectively. Channel-aware training was then demonstrated by integrating the frequency response of the photonic hardware into NN training; its experimental validation on the MNIST dataset revealed an accuracy increase of 12.93% at a record-high rate of 25 GMAC/sec/axon.
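The noise-aware training idea described above can be illustrated with a toy sketch (assumptions, not the paper's code): additive Gaussian noise, standing in for the analogue noise of the photonic circuit, is injected into the layer's forward pass during training, so the learned weights remain accurate when deployed on the noisy hardware. The noise level `sigma` and the layer shapes are illustrative choices, not measured hardware figures.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_linear(x, w, b, sigma=0.05):
    """Photonic linear layer as seen during noise-aware training:
    additive Gaussian noise models the analogue hardware noise.
    sigma is an illustrative value, not a measured figure."""
    return x @ w + b + rng.normal(0.0, sigma, size=b.shape)

# Training would call noisy_linear in every forward pass; at inference
# the same weights are deployed on the (noisy) photonic hardware.
x = np.array([0.5, -1.0, 0.25, 0.8])   # toy 4-axon input
w = rng.normal(size=(4, 3)) * 0.1      # toy weight matrix
b = np.zeros(3)
y1 = noisy_linear(x, w, b)
y2 = noisy_linear(x, w, b)             # differs from y1: noise is stochastic
```

Because the loss is averaged over many noisy forward passes, the optimizer is pushed towards weight settings whose outputs are insensitive to the injected perturbation, which is the robustness the 5.93% accuracy improvement quantifies.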