Control-free and efficient integrated photonic neural networks via hardware-aware training and pruning
Integrated photonic neural networks (PNNs) are at the forefront of AI
computing, leveraging light's unique properties, such as large bandwidth,
low latency, and potentially low power consumption. Nevertheless, the
integrated optical components within PNNs are inherently sensitive to external
disturbances and thermal interference, which can detrimentally affect computing
accuracy and reliability. Current solutions often rely on complicated control
methods, resulting in hardware complexity that is impractical for large-scale
PNNs. In response, we propose a novel hardware-aware training and pruning
approach. The core idea is to train the parameters of a physical neural network
towards its noise-robust and energy-efficient region. This innovation enables
control-free and energy-efficient photonic computing. Our method is validated
across diverse integrated PNN architectures. Experimentally, our approach
significantly enhances the computing precision of microring resonator
(MRR)-based PNNs,
achieving a notable 4-bit improvement without the need for complex device
control mechanisms or energy-intensive temperature stabilization circuits.
Specifically, it improves the accuracy of experimental handwritten digit
classification from 67.0% to 95.0%, approaching the theoretical limit, without
any thermoelectric controller. Additionally, this approach reduces energy
consumption tenfold. We further extend the validation to other architectures,
such as phase-change material (PCM)-based PNNs, demonstrating the broad
applicability of our approach
across different platforms. This advancement represents a significant step
towards the practical, energy-efficient, and noise-resilient implementation of
large-scale integrated PNNs.

Comment: 21 pages, 6 figures
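The core idea, training a physical network toward a noise-robust and
energy-efficient region, can be illustrated with a minimal toy sketch in
Python: Gaussian noise is injected into the weights during training
(standing in for thermal drift), and the smallest-magnitude weights are
pruned so the corresponding elements need no active tuning. This is an
illustrative assumption of the general technique, not the paper's
implementation; the data, model, and constants (NOISE_STD, PRUNE_FRACTION,
LR) are made up for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic two-class data standing in for a PNN workload.
    X = rng.normal(size=(512, 16))
    y = (X[:, :8].sum(axis=1) > 0).astype(float)

    W = rng.normal(scale=0.1, size=16)
    b = 0.0
    NOISE_STD = 0.05      # stand-in for thermal / fabrication-induced weight drift
    PRUNE_FRACTION = 0.5  # fraction of weights forced to zero to save tuning power
    LR = 0.1

    def forward(X, W, b, noise_std=0.0):
        """Linear 'photonic' layer whose weights are perturbed at run time."""
        W_noisy = W + rng.normal(scale=noise_std, size=W.shape)
        return 1.0 / (1.0 + np.exp(-(X @ W_noisy + b)))

    for step in range(300):
        # Hardware-aware step: inject weight noise in the forward pass so
        # training settles in a region that tolerates the perturbation.
        p = forward(X, W, b, noise_std=NOISE_STD)
        grad_logits = (p - y) / len(y)  # gradient of binary cross-entropy w.r.t. logits
        W -= LR * (X.T @ grad_logits)
        b -= LR * grad_logits.sum()

        # Pruning step: zero the smallest-magnitude weights so the corresponding
        # photonic elements need no active control, reducing tuning energy.
        k = int(PRUNE_FRACTION * W.size)
        W[np.argsort(np.abs(W))[:k]] = 0.0

    # Evaluate under simulated hardware noise, with no calibration or control loop.
    acc = ((forward(X, W, b, noise_std=NOISE_STD) > 0.5) == y).mean()
    print(f"accuracy under weight noise: {acc:.3f}")

In this sketch the noise-injected forward pass plays the role of
hardware-aware training, and the recurring magnitude pruning plays the role
of removing actively controlled elements; a real PNN would replace the toy
linear layer with the device-level model of the MRR or PCM hardware.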