Volume holograms with linear diffraction efficiency relation by (3+1)D printing
We demonstrate the fabrication of volume holograms using 2-photon
polymerization with dynamic control of light exposure. We refer to our method
as (3+1)D printing. Volume holograms that are recorded by interfering reference
and signal beams have a diffraction efficiency that is inversely proportional
to the square of the number of superimposed holograms. By using
(3+1)D printing for fabrication, the refractive index of each voxel is set
independently; thus, by digitally filtering out the undesired interference
terms, the diffraction efficiency becomes inversely proportional to the number
of multiplexed gratings. We experimentally demonstrated this linear dependence
by recording M=50 volume gratings. To the best of our knowledge, this is the
first experimental demonstration of distributed volume holograms that overcome
the 1/M^2 limit.
Comment: 8 pages, 9 figures
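A hypothetical numerical illustration of the two scaling laws in the abstract
above: interferometrically recorded holograms share a fixed index-modulation
budget, so each of M superimposed gratings diffracts with efficiency scaling
as 1/M^2, whereas independently written voxels remove the cross terms and
recover a 1/M scaling. The single-grating efficiency `eta1` is a placeholder,
not a measured value.

```python
def efficiency_interferometric(M, eta1=1.0):
    """Per-grating diffraction efficiency under the 1/M^2 limit."""
    return eta1 / M**2

def efficiency_printed(M, eta1=1.0):
    """Per-grating efficiency when each grating is written independently."""
    return eta1 / M

M = 50  # number of multiplexed gratings, as in the reported experiment
print(efficiency_interferometric(M))  # 0.0004
print(efficiency_printed(M))          # 0.02
```

At M=50 the independently written gratings are 50 times more efficient per
grating, which is the linear advantage the abstract reports.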
Nonlinear Processing with Linear Optics
Deep neural networks have achieved remarkable breakthroughs by leveraging
multiple layers of data processing to extract hidden representations, albeit at
the cost of large electronic computing power. To enhance energy efficiency and
speed, the optical implementation of neural networks aims to harness the
advantages of optical bandwidth and the energy efficiency of optical
interconnections. In the absence of low-power optical nonlinearities, the
challenge in the implementation of multilayer optical networks lies in
realizing multiple optical layers without resorting to electronic components.
In this study, we present a novel framework that uses multiple scattering to
synthesize programmable linear and nonlinear transformations concurrently at
low optical power, by leveraging the nonlinear relationship between the
scattering potential, which represents the data, and the scattered field.
Theoretical and experimental investigations show that repeating the data via
multiple scattering enables nonlinear optical computing with low-power
continuous-wave light.
Comment: 20 pages, 9 figures and 1 table
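A toy model (our own illustration, not the paper's exact setup) of how
repeating data in a linear optical system yields a nonlinear computation: if
the data value x is imprinted as a phase shift by each of N identical thin
layers, the transmitted field is exp(i*N*alpha*x), and interfering it with a
reference makes the detected intensity a nonlinear (cosine) function of x,
even though every optical element acts linearly on the field. `alpha` and
`n_layers` are arbitrary placeholders.

```python
import numpy as np

def detected_intensity(x, n_layers=3, alpha=0.5):
    reference = 1.0                              # unit-amplitude reference
    signal = np.exp(1j * n_layers * alpha * x)   # field after N data layers
    return np.abs(reference + signal) ** 2       # interference at detector

xs = np.linspace(-np.pi, np.pi, 5)
print(detected_intensity(xs))  # varies nonlinearly with x
```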
Forward-Forward Training of an Optical Neural Network
Neural networks (NN) have demonstrated remarkable capabilities in various
tasks, but their computation-intensive nature demands faster and more
energy-efficient hardware implementations. Optics-based platforms, using
technologies such as silicon photonics and spatial light modulators, offer
promising avenues for achieving this goal. However, training multiple trainable
layers in tandem with these physical systems poses challenges, as they are
difficult to fully characterize and describe with differentiable functions,
hindering the use of the error backpropagation algorithm. The recently introduced
Forward-Forward Algorithm (FFA) eliminates the need for perfect
characterization of the learning system and shows promise for efficient
training with large numbers of programmable parameters. The FFA does not
require backpropagating an error signal to update the weights; instead, the
weights are updated by sending information in one direction only. The local
loss function for each set of trainable weights enables low-power analog
hardware implementations without resorting to metaheuristic algorithms or
reinforcement learning. In this paper, we present an experiment utilizing
multimode nonlinear wave propagation in an optical fiber, demonstrating the
feasibility of the FFA approach with an optical system. The results show that
incorporating optical transforms in multilayer NN architectures trained with
the FFA can lead to performance improvements, even with a relatively small
number of trainable weights. The proposed method offers a new path to the
challenge of training optical NNs and provides insights into leveraging
physical transformations for enhancing NN performance.
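A minimal numpy sketch of the Forward-Forward idea described above, with
hypothetical shapes, data, and hyperparameters: each layer is trained locally
to give high "goodness" (sum of squared activations) for positive data and
low goodness for negative data, so no error signal is ever backpropagated
through the (possibly uncharacterized) physical transform.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))   # one trainable layer (placeholder)

def goodness(x, W):
    h = np.maximum(x @ W, 0.0)           # forward pass only (ReLU layer)
    return np.sum(h**2, axis=-1)

def local_update(x_pos, x_neg, W, lr=0.01, theta=2.0):
    # Logistic local loss: push positive goodness above the threshold
    # theta and negative goodness below it, using only local quantities.
    for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
        h = np.maximum(x @ W, 0.0)
        g = np.sum(h**2, axis=-1, keepdims=True)
        p = 1.0 / (1.0 + np.exp(sign * (g - theta)))  # sigmoid of margin
        W = W + lr * (x.T @ (p * sign * 2.0 * h))     # local gradient step
    return W

x_pos = rng.normal(size=(16, 8)) + 1.0   # synthetic "positive" data
x_neg = rng.normal(size=(16, 8)) - 1.0   # synthetic "negative" data
for _ in range(100):
    W = local_update(x_pos, x_neg, W)
print(goodness(x_pos, W).mean() > goodness(x_neg, W).mean())
```

The update for each layer depends only on that layer's own input and
activations, which is what makes the scheme compatible with hard-to-model
analog hardware in the middle of the network.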
Optical neural networks: The 3D connection
We motivate a canonical strategy for integrating photonic neural networks (NNs) by leveraging 3D printing. We believe that an NN's parallel and dense connectivity is not scalable without 3D integration. 3D additive fabrication, complemented with photonic signal transduction, can dramatically augment the current capabilities of 2D CMOS and integrated photonics. Here we review some of our recent advances towards such an architecture.
Computer generated optical volume elements by additive manufacturing
Computer-generated optical volume elements have been investigated for information storage, spectral filtering, and imaging applications. Advances in additive manufacturing (3D printing) allow the fabrication of multilayered diffractive volume elements at the micro-scale. For a micro-scale multilayer design, an optimization scheme is needed to calculate the layers. The conventional approach is to optimize a stack of 2D phase distributions and implement them by translating phase into thickness variation. Optimizing directly in 3D can improve field-reconstruction accuracy. Here we propose an optimization method that inverts the intended use of Learning Tomography, a method for reconstructing 3D phase objects from experimental recordings of 2D projections of the object. The forward model in the optimization is the beam propagation method (BPM). The iterative error-reduction scheme and the multilayer structure of the BPM are similar to those of neural networks, which is why the method is referred to as Learning Tomography. Here, instead of imaging an object, we reconstruct the 3D structure that performs a desired task defined by its input-output functionality. We present the optimization methodology, a comparison by simulation, and experimental verification of the approach. We demonstrate an optical volume element that performs angular multiplexing of two plane waves to yield two linearly polarized fiber modes in a total volume of 128 μm by 128 μm by 170 μm.
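A sketch of the BPM forward model named in the abstract above: the volume is
treated as a stack of thin phase screens, and the field is alternately
multiplied by each screen and diffracted to the next one with an
angular-spectrum step. The grid size, step sizes, and the (here empty) phase
screens are all hypothetical placeholders, not the paper's design.

```python
import numpy as np

def bpm(field, phase_screens, dz, wavelength, dx):
    """Split-step beam propagation through a stack of thin phase screens."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # Evanescent components are clamped to zero axial frequency here.
    kz = 2*np.pi*np.sqrt(np.maximum(1/wavelength**2 - FX**2 - FY**2, 0.0))
    H = np.exp(1j * kz * dz)                     # free-space propagator
    for screen in phase_screens:
        field = field * np.exp(1j * screen)           # thin phase screen
        field = np.fft.ifft2(np.fft.fft2(field) * H)  # diffract by dz
    return field

n = 64
screens = [np.zeros((n, n)) for _ in range(5)]   # empty volume: no change
out = bpm(np.ones((n, n), complex), screens, dz=2e-6,
          wavelength=1e-6, dx=0.5e-6)
print(np.allclose(np.abs(out), 1.0))  # a plane wave stays a plane wave
```

In the inverse (design) direction, the screens become the optimization
variables and gradients of an output-field error are pushed back through
this same stack, which is the neural-network-like structure the abstract
points out.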
Multicasting Optical Reconfigurable Switch
Artificial Intelligence (AI) demands large data flows within datacenters, relying heavily on multicast data transfers. As AI models scale, the requirements for high-bandwidth, low-latency networking compound. The commonly used electrical packet switching is limited by its optical-electrical-optical conversion bottleneck. Optical switches, while bandwidth-agnostic and low-latency, offer only unicast or non-scalable multicast capability. This paper introduces an optical switching technique that addresses the scalable multicasting challenge. Our approach enables arbitrarily programmable simultaneous unicast and multicast connectivity, eliminating the need for the optical splitters that hinder scalability through optical power loss. We use phase modulation in multiple planes, tailored to implement any multicast connectivity map. Phase modulation enables wavelength selectivity on top of spatial selectivity, resulting in an optical switch that implements space-wavelength routing. We conducted simulations and experiments to validate our approach. The results affirm the concept's feasibility and effectiveness as a multicasting switch.
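A hypothetical illustration of the connectivity maps discussed above: a
unicast switch realizes a permutation matrix (each input reaches exactly one
output), while a multicast map may route one input to several outputs. With
passive splitters the power per copy drops with the fan-out; the
phase-modulation approach in the abstract is aimed precisely at avoiding
that loss. The matrices below are made-up examples.

```python
import numpy as np

unicast = np.array([[1, 0, 0],
                    [0, 0, 1],
                    [0, 1, 0]])      # each input j drives exactly one output
multicast = np.array([[1, 0, 0],
                      [1, 0, 1],
                      [0, 1, 0]])    # input 0 fans out to outputs 0 and 1

def is_unicast(C):
    """A connectivity map is unicast iff it is a permutation matrix."""
    C = np.asarray(C)
    return bool(np.all(C.sum(axis=0) == 1) and np.all(C.sum(axis=1) == 1))

print(is_unicast(unicast), is_unicast(multicast))  # True False
```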