1-bit Quantized On-chip Hybrid Diffraction Neural Network Enabled by Authentic All-optical Fully-connected Architecture
Optical Diffraction Neural Networks (DNNs), a subset of Optical Neural
Networks (ONNs), show promise in matching the capabilities of electronic neural networks.
This study introduces the Hybrid Diffraction Neural Network (HDNN), a novel
architecture that incorporates matrix multiplication into DNNs, synergizing the
benefits of conventional ONNs with those of DNNs to surmount the modulation
limitations inherent in optical diffraction neural networks. Using a single
phase modulation layer and an amplitude modulation layer, the trained network
achieves digit-recognition accuracies of 96.39% in simulation and 89% in
experiment. Additionally, we
develop the Binning Design (BD) method, which effectively mitigates the
constraints imposed by sampling intervals on diffraction units, substantially
streamlining experimental procedures. Furthermore, we propose an on-chip HDNN
that not only employs a beam-splitting phase modulation layer for enhanced
integration level but also significantly relaxes device fabrication
requirements, replacing metasurfaces with relief surfaces designed by 1-bit
quantization. In addition, we conceptualize an all-optical HDNN-assisted lesion
detection network whose detection outcomes were 100% consistent with
simulation predictions. This work not only advances the performance of DNNs but
also streamlines the path toward industrial optical neural network production.
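As a rough illustration of the 1-bit quantization mentioned above, the sketch below (my own construction, not the authors' code) binarizes a continuous phase profile to the two levels {0, π} that a 1-bit relief surface can implement:

```python
import numpy as np

def quantize_phase_1bit(phase):
    """Binarize a continuous phase profile (radians) to two levels {0, pi}.
    Sketch of 1-bit quantization for relief-surface fabrication."""
    phase = np.mod(phase, 2 * np.pi)   # wrap into [0, 2*pi)
    return np.where(phase < np.pi, 0.0, np.pi)

# Hypothetical trained phase layer, just for demonstration
rng = np.random.default_rng(0)
layer = rng.uniform(0, 2 * np.pi, size=(4, 4))
binary = quantize_phase_1bit(layer)
assert set(np.unique(binary)) <= {0.0, np.pi}
```

Training would normally fold this quantization into the optimization loop (e.g., with a straight-through estimator) rather than applying it once after the fact.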
Large-Scale Optical Neural Networks based on Photoelectric Multiplication
Recent success in deep neural networks has generated strong interest in
hardware accelerators that improve speed and reduce energy consumption. This
paper presents a new type of photonic accelerator based on coherent detection
that is scalable to large networks, can be operated at high (GHz) speeds, and
achieves very low (sub-aJ) energies per multiply-and-accumulate (MAC), using
the massive spatial multiplexing enabled by standard free-space optical
components. In contrast to previous approaches, both weights and inputs are
optically encoded so that the network can be reprogrammed and trained on the
fly. Simulations of the network using models for digit- and
image-classification reveal a "standard quantum limit" for optical neural
networks, set by photodetector shot noise. This bound, which can be as low as
50 zJ/MAC, suggests performance below the thermodynamic (Landauer) limit for
digital irreversible computation is theoretically possible in this device. The
proposed accelerator can implement both fully-connected and convolutional
networks. We also present a scheme for back-propagation and training that can
be performed in the same hardware. This architecture will enable a new class of
ultra-low-energy processors for deep learning.Comment: Text: 10 pages, 5 figures, 1 table. Supplementary: 8 pages, 5,
figures, 2 table
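The shot-noise bound can be illustrated with a toy numerical model (a sketch under simplifying assumptions, not the paper's simulation): each optical matrix-vector product is corrupted by noise whose standard deviation shrinks as the square root of the photon budget per MAC, so the result converges to the exact product only as more optical energy is spent:

```python
import numpy as np

def noisy_mac(W, x, photons_per_mac, rng):
    """Optical matrix-vector product with a Gaussian approximation of
    photodetector shot noise: SNR grows like sqrt(photons_per_mac).
    Illustrative scaling only; real detector models are more detailed."""
    y = W @ x
    sigma = np.linalg.norm(x) / np.sqrt(photons_per_mac)
    return y + rng.normal(0.0, sigma, size=y.shape)

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
# With a generous photon budget the output converges to the exact product.
assert np.allclose(noisy_mac(W, x, 1e12, rng), W @ x, atol=1e-3)
```

Sweeping `photons_per_mac` downward in such a model reproduces the qualitative trade-off the paper quantifies: classification accuracy degrades once the noise floor becomes comparable to the signal.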
Spectrally-Encoded Single-Pixel Machine Vision Using Diffractive Networks
3D engineering of matter has opened up new avenues for designing systems that
can perform various computational tasks through light-matter interaction. Here,
we demonstrate the design of optical networks in the form of multiple
diffractive layers that are trained using deep learning to transform and encode
the spatial information of objects into the power spectrum of the diffracted
light, which is used to perform optical classification of objects with a
single-pixel spectroscopic detector. Using a time-domain spectroscopy setup
with a plasmonic nanoantenna-based detector, we experimentally validated this
machine vision framework at terahertz wavelengths to optically classify the images
of handwritten digits by detecting the spectral power of the diffracted light
at ten distinct wavelengths, each representing one class/digit. We also report
the coupling of this spectral encoding achieved through a diffractive optical
network with a shallow electronic neural network, separately trained to
reconstruct the images of handwritten digits based on solely the spectral
information encoded in these ten distinct wavelengths within the diffracted
light. These reconstructed images demonstrate task-specific image decompression
and can also be cycled back as new inputs to the same diffractive network to
improve its optical object classification. This unique machine vision framework
merges the power of deep learning with the spatial and spectral processing
capabilities of diffractive networks, and can also be extended to other
spectral-domain measurement systems to enable new 3D imaging and sensing
modalities integrated with spectrally encoded classification tasks performed
through diffractive optical networks.
Comment: 21 pages, 5 figures, 1 table
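Schematically, the single-pixel readout described above reduces to an argmax over the ten encoded wavelength bins. The sketch below (hypothetical helper, not the authors' code) shows just that decision rule:

```python
import numpy as np

def classify_from_spectrum(power_at_wavelengths):
    """Single-pixel spectral classification: each of ten wavelength bins is
    assigned to one digit class; predict the class whose bin carries the
    most diffracted power."""
    return int(np.argmax(power_at_wavelengths))

# Toy spectrum: the diffractive network routes most power into bin 7,
# so the detector's spectral readout is decoded as digit 7.
spectrum = np.full(10, 0.05)
spectrum[7] = 1.0
assert classify_from_spectrum(spectrum) == 7
```

The shallow electronic network mentioned in the abstract would consume the same ten-element vector as its input to reconstruct the digit image, rather than only taking the argmax.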
Ensemble learning of diffractive optical networks
A plethora of research advances have emerged in the fields of optics and
photonics that benefit from harnessing the power of machine learning.
Specifically, there has been a revival of interest in optical computing
hardware, due to its potential advantages for machine learning tasks in terms
of parallelization, power efficiency and computation speed. Diffractive Deep
Neural Networks (D2NNs) form such an optical computing framework, which
benefits from deep learning-based design of successive diffractive layers to
all-optically process information as the input light diffracts through these
passive layers. D2NNs have demonstrated success in various tasks, including
e.g., object classification, spectral-encoding of information, optical pulse
shaping and imaging, among others. Here, we significantly improve the inference
performance of diffractive optical networks using feature engineering and
ensemble learning. After independently training a total of 1252 D2NNs that were
diversely engineered with a variety of passive input filters, we applied a
pruning algorithm to select an optimized ensemble of D2NNs that collectively
improve their image classification accuracy. Through this pruning, we
numerically demonstrated that ensembles of N=14 and N=30 D2NNs achieve blind
testing accuracies of 61.14% and 62.13%, respectively, on the classification of
CIFAR-10 test images, providing an inference improvement of >16% compared to
the average performance of the individual D2NNs within each ensemble. These
results constitute the highest inference accuracies achieved to date by any
diffractive optical neural network design on the same dataset and may provide
a significant leap forward in extending the application space of diffractive optical
image classification and machine vision systems.
Comment: 22 pages, 4 figures, 1 table
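One simple way to realize such ensemble pruning is greedy forward selection over the individual networks' class scores; the sketch below is an illustrative stand-in (the paper's actual selection algorithm may differ):

```python
import numpy as np

def greedy_ensemble(all_scores, labels, max_size):
    """Greedy forward selection over candidate models.
    all_scores: array of shape (n_models, n_samples, n_classes) holding each
    model's class scores on a validation set. Repeatedly add the model that
    most improves the accuracy of the averaged scores; stop when no model
    helps. Returns (chosen indices, best accuracy)."""
    chosen, best_acc = [], -1.0
    for _ in range(max_size):
        best_i = None
        for i in range(len(all_scores)):
            if i in chosen:
                continue
            avg = np.mean(all_scores[chosen + [i]], axis=0)
            acc = np.mean(np.argmax(avg, axis=1) == labels)
            if acc > best_acc:
                best_acc, best_i = acc, i
        if best_i is None:   # no strict improvement: stop growing the ensemble
            break
        chosen.append(best_i)
    return chosen, best_acc
```

For 1252 candidate D2NNs this quadratic search is still cheap, since the per-model scores only need to be computed once and the inner loop is a single averaged argmax.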