32 research outputs found
Analysis of Diffractive Neural Networks for Seeing Through Random Diffusers
Imaging through diffusive media is a challenging problem, where the existing
solutions heavily rely on digital computers to reconstruct distorted images. We
provide a detailed analysis of a computer-free, all-optical imaging method for
seeing through random, unknown phase diffusers using diffractive neural
networks, covering different deep learning-based training strategies. By
analyzing various diffractive networks designed to image through random
diffusers with different correlation lengths, we observed a trade-off between
the image reconstruction fidelity and the distortion reduction capability of
the diffractive network. During training, random diffusers with a range of
correlation lengths were used to improve the diffractive network's
generalization performance. Increasing the number of random diffusers used in
each epoch reduced the overfitting of the diffractive network's imaging
performance to known diffusers. We also demonstrated that the use of additional
diffractive layers improved the generalization capability to see through new,
random diffusers. Finally, we introduced deliberate misalignments in training
to 'vaccinate' the network against random layer-to-layer shifts that might
arise due to the imperfect assembly of the diffractive networks. These analyses
provide a comprehensive guide in designing diffractive networks to see through
random diffusers, which might profoundly impact many fields, such as biomedical
imaging, atmospheric physics, and autonomous driving.
Comment: 42 pages, 9 figures
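To make the training strategies above concrete, here is a minimal, illustrative PyTorch sketch (not the authors' code; all names, sizes, and physical values are assumptions): phase-only diffractive layers are trained against a freshly drawn random diffuser each iteration, with random lateral layer shifts implementing the misalignment 'vaccination'.

```python
import torch
import torch.nn.functional as F

def angular_spectrum(field, dz, wavelength, dx):
    """Free-space propagation of a complex field over distance dz."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    transfer = torch.exp(1j * kz * dz) * (arg > 0)  # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)

def random_diffuser(n, corr_px):
    """Random phase screen whose correlation length is set by a blur kernel."""
    noise = torch.randn(1, 1, n, n)
    kernel = torch.ones(1, 1, corr_px, corr_px) / corr_px**2
    smooth = F.conv2d(noise, kernel, padding="same")[0, 0]
    return torch.exp(1j * 2 * torch.pi * smooth)

class DiffractiveStack(torch.nn.Module):
    """Trainable phase-only diffractive layers separated by free space."""
    def __init__(self, n_layers=4, n=128):
        super().__init__()
        self.phi = torch.nn.Parameter(2 * torch.pi * torch.rand(n_layers, n, n))

    def forward(self, field, dz, wavelength, dx, max_shift=0):
        for phi in self.phi:
            if max_shift > 0:  # "vaccination": random lateral layer misalignment
                sy, sx = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
                phi = torch.roll(phi, (sy, sx), dims=(0, 1))
            field = angular_spectrum(field, dz, wavelength, dx)
            field = field * torch.exp(1j * phi)
        return angular_spectrum(field, dz, wavelength, dx)

# One training step: a *new* random diffuser (with a randomly drawn correlation
# length) per iteration, so the network cannot memorize any single diffuser.
n, wl, dx, dz = 128, 1.0, 0.5, 40.0     # lengths in units of wavelength; illustrative
net = DiffractiveStack(n_layers=4, n=n)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

target = torch.rand(n, n)                       # stand-in for a training image
corr = int(torch.randint(2, 9, (1,)))           # random correlation length (pixels)
distorted = target.to(torch.cfloat) * random_diffuser(n, corr)
out = net(distorted, dz, wl, dx, max_shift=2)   # train with misalignments
loss = F.mse_loss(out.abs() ** 2, target)       # compare output intensity to target
loss.backward(); opt.step(); opt.zero_grad()
```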
Quantitative phase imaging (QPI) through random diffusers using a diffractive optical network
Quantitative phase imaging (QPI) is a label-free computational imaging
technique used in various fields, including biology and medical research.
Modern QPI systems typically rely on digital processing using iterative
algorithms for phase retrieval and image reconstruction. Here, we report a
diffractive optical network trained to convert the phase information of input
objects positioned behind random diffusers into intensity variations at the
output plane, all-optically performing phase recovery and quantitative imaging
of phase objects completely hidden by unknown, random phase diffusers. This QPI
diffractive network is composed of successive diffractive layers, axially
spanning in total ~70 wavelengths; unlike existing digital image reconstruction
and phase retrieval methods, it forms an all-optical processor that does not
require external power beyond the illumination beam to complete its QPI
reconstruction at the speed of light propagation. This all-optical diffractive
processor can provide a low-power, high frame rate and compact alternative for
quantitative imaging of phase objects through random, unknown diffusers and can
operate at different parts of the electromagnetic spectrum for various
applications in biomedical imaging and sensing. The presented QPI diffractive
designs can be integrated onto the active area of standard CCD/CMOS-based image
sensors to convert an existing optical microscope into a diffractive QPI
microscope, performing phase recovery and image reconstruction on a chip
through light diffraction within passive structured layers.
Comment: 27 pages, 7 figures
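A brief reminder of why this task is non-trivial: a pure phase object has uniform intensity, so a conventional camera records nothing, and the diffractive processor must convert phase into intensity. A tiny illustrative snippet (assumed sizes):

```python
import torch

phase = torch.rand(64, 64) * 2 * torch.pi    # a pure phase object
field = torch.exp(1j * phase)                # unit amplitude everywhere
print(torch.allclose(field.abs() ** 2, torch.ones(64, 64)))  # True: invisible in intensity
# The QPI diffractive network is trained so that, after propagation through its
# passive layers (behind a random diffuser), the output intensity approximates
# the hidden input phase.
```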
Data class-specific all-optical transformations and encryption
Diffractive optical networks provide rich opportunities for visual computing
tasks since the spatial information of a scene can be directly accessed by a
diffractive processor without requiring any digital pre-processing steps. Here
we present data class-specific transformations all-optically performed between
the input and output fields-of-view (FOVs) of a diffractive network. The visual
information of the objects is encoded into the amplitude (A), phase (P), or
intensity (I) of the optical field at the input, which is all-optically
processed by a data class-specific diffractive network. At the output, an image
sensor-array directly measures the transformed patterns, all-optically
encrypted using the transformation matrices pre-assigned to different data
classes, i.e., a separate matrix for each data class. The original input images
can be recovered by applying the correct decryption key (the inverse
transformation) corresponding to the matching data class, while applying any
other key will lead to loss of information. The class-specificity of these
all-optical diffractive transformations creates opportunities where different
keys can be distributed to different users; each user can decode the acquired
images of only one data class, serving multiple users in an
all-optically encrypted manner. We numerically demonstrated all-optical
class-specific transformations covering A-->A, I-->I, and P-->I transformations
using various image datasets. We also experimentally validated the feasibility
of this framework by fabricating a class-specific I-->I transformation
diffractive network using two-photon polymerization and successfully tested it
at 1550 nm wavelength. Data class-specific all-optical transformations provide
a fast and energy-efficient method for image and data encryption, enhancing
data security and privacy.
Comment: 27 pages, 9 figures, 1 table
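The encryption scheme reduces to linear algebra, which a short sketch can make explicit. A minimal numpy illustration (stand-in matrices, not the trained optical transforms): each class gets its own matrix T_k, and only the matching inverse recovers the input.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8 * 8                                              # flattened image size
T = [rng.standard_normal((n, n)) for _ in range(2)]    # one matrix per data class
keys = [np.linalg.inv(Tk) for Tk in T]                 # class-specific decryption keys

x = rng.random(n)              # an input image belonging to class 0
y = T[0] @ x                   # "encrypted" pattern measured at the output FOV

x_ok  = keys[0] @ y            # correct key recovers the image
x_bad = keys[1] @ y            # wrong key destroys the information
print(np.allclose(x_ok, x))                 # True
print(np.linalg.norm(x_bad - x) > 1.0)      # True: large reconstruction error
```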
Universal Linear Intensity Transformations Using Spatially-Incoherent Diffractive Processors
Under spatially-coherent light, a diffractive optical network composed of
structured surfaces can be designed to perform any arbitrary complex-valued
linear transformation between its input and output fields-of-view (FOVs) if the
total number (N) of optimizable phase-only diffractive features is greater than
or equal to ~2 Ni x No, where Ni and No refer to the number of useful pixels at
the input and the output FOVs, respectively. Here we report the design of a
spatially-incoherent diffractive optical processor that can approximate any
arbitrary linear transformation in time-averaged intensity between its input
and output FOVs. Under spatially-incoherent monochromatic light, the
spatially-varying intensity point spread function (H) of a diffractive network,
corresponding to a given, arbitrarily-selected linear intensity transformation,
can be written as H(m,n;m',n')=|h(m,n;m',n')|^2, where h is the
spatially-coherent point-spread function of the same diffractive network, and
(m,n) and (m',n') define the coordinates of the output and input FOVs,
respectively. Using deep learning, supervised through examples of input-output
profiles, we numerically demonstrate that a spatially-incoherent diffractive
network can be trained to all-optically perform any arbitrary linear intensity
transformation between its input and output if N is greater than or equal to ~2
Ni x No. These results constitute the first demonstration of universal linear
intensity transformations performed on an input FOV under spatially-incoherent
illumination and will be useful for designing all-optical visual processors
that can work with incoherent, natural light.
Comment: 29 pages, 10 figures
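The quoted relation H = |h|^2 is easy to verify numerically. In this minimal numpy sketch, a random complex matrix stands in for the coherent point-spread function of a trained diffractive network; intensities add under incoherent light, whereas fields would add under coherent light:

```python
import numpy as np

rng = np.random.default_rng(1)
Ni, No = 16, 16                          # pixels in the input / output FOVs
h = rng.standard_normal((No, Ni)) + 1j * rng.standard_normal((No, Ni))
H = np.abs(h) ** 2                       # intensity transformation, H = |h|^2

I_in = rng.random(Ni)                    # time-averaged input intensity pattern
I_out = H @ I_in                         # incoherent: intensities superpose linearly

# Under coherent illumination the fields would superpose instead, giving a
# different, interference-dependent result:
I_coh = np.abs(h @ np.sqrt(I_in)) ** 2
print(np.allclose(I_out, I_coh))         # False in general
```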
Pyramid diffractive optical networks for unidirectional magnification and demagnification
Diffractive deep neural networks (D2NNs) are composed of successive
transmissive layers optimized using supervised deep learning to all-optically
implement various computational tasks between an input and output field-of-view
(FOV). Here, we present a pyramid-structured diffractive optical network design
(which we term P-D2NN), optimized specifically for unidirectional image
magnification and demagnification. In this P-D2NN design, the diffractive
layers are pyramidally scaled in alignment with the direction of the image
magnification or demagnification. Our analyses revealed the efficacy of this
P-D2NN design in unidirectional image magnification and demagnification tasks,
producing high-fidelity magnified or demagnified images in only one direction,
while inhibiting image formation in the opposite direction, confirming the
desired unidirectional imaging operation. Compared to conventional D2NN
designs with uniform-sized successive diffractive layers, the P-D2NN design
achieves similar performance in unidirectional magnification tasks using only
half of the diffractive degrees of freedom within the optical processor volume.
Furthermore, it maintains its unidirectional image
magnification/demagnification functionality across a large band of illumination
wavelengths despite being trained with a single illumination wavelength. With
this pyramidal architecture, we also designed a wavelength-multiplexed
diffractive network, where a unidirectional magnifier and a unidirectional
demagnifier operate simultaneously in opposite directions, at two distinct
illumination wavelengths. The efficacy of the P-D2NN architecture was also
validated experimentally using monochromatic terahertz illumination,
successfully matching our numerical simulations. P-D2NN offers a
physics-inspired strategy for designing task-specific visual processors.
Comment: 26 pages, 7 figures
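The pyramidal scaling itself is simple to state: layer apertures grow (or shrink) geometrically from the input FOV toward the output FOV. An illustrative sizing rule with assumed numbers (not the authors' exact design):

```python
# Aperture widths (in pixels) for a hypothetical 5-layer P-D2NN magnifier with
# 2x magnification: each successive layer is scaled toward the output image size.
n_in, mag, n_layers = 64, 2.0, 5
sizes = [round(n_in * mag ** (k / (n_layers - 1))) for k in range(n_layers)]
print(sizes)  # [64, 76, 91, 108, 128]
```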
Deep learning-based holographic polarization microscopy
Polarized light microscopy provides high contrast for birefringent specimens
and is widely used as a diagnostic tool in pathology. However, polarization
microscopy systems typically operate by analyzing images collected from two or
more light paths in different states of polarization, which leads to relatively
complex optical designs, high system costs, or the need for experienced
technicians. Here, we present a deep learning-based holographic polarization
microscope that is capable of obtaining quantitative birefringence retardance
and orientation information of a specimen from a phase-recovered hologram, while
only requiring the addition of one polarizer/analyzer pair to an existing
holographic imaging system. Using a deep neural network, the reconstructed
holographic images from a single state of polarization can be transformed into
images equivalent to those captured using a single-shot computational polarized
light microscope (SCPLM). Our analysis shows that a trained deep neural network
can extract the birefringence information using both the sample specific
morphological features as well as the holographic amplitude and phase
distribution. To demonstrate the efficacy of this method, we tested it by
imaging various birefringent samples, including monosodium urate (MSU) and
triamcinolone acetonide (TCA) crystals. Our method achieves similar results to
SCPLM both qualitatively and quantitatively, and due to its simpler optical
design and significantly larger field-of-view, this method has the potential to
expand the access to polarization microscopy and its use for medical diagnosis
in resource-limited settings.
Comment: 20 pages, 8 figures
Lab-in-a-Tube: A portable imaging spectrophotometer for cost-effective, high-throughput, and label-free analysis of centrifugation processes
Centrifuges are essential instruments in modern experimental sciences,
facilitating a wide range of routine sample processing tasks that require
material sedimentation. However, real-time observation of the dynamic
processes occurring during centrifugation has remained elusive. In this study,
we developed a Lab-in-a-Tube (LIAT) imaging spectrophotometer that
incorporates real-time image analysis and programmable interruption. This
portable LIAT device costs less than 30 US dollars and, to our knowledge, is
the first Wi-Fi camera built into common lab centrifuges with active
closed-loop control. We tested our LIAT imaging spectrophotometer by
investigating solute-solvent interactions in lab centrifuges, with
quantitative data plotted in real time. A single recirculating flow was
observed in real time, forming a ring-shaped pattern during centrifugation; to
the best of our knowledge, this is the first observation of such a phenomenon.
We developed theoretical simulations of a single particle in a rotating
reference frame, which correlated well with the experimental results. We also
demonstrated, for the first time, visualization of the blood sedimentation
process in clinical lab centrifuges. This remarkable cost-effectiveness opens
up exciting opportunities for centrifugation research in microbiology and
paves the way for a network of affordable computational imaging spectrometers
for large-scale, continuous monitoring of centrifugal processes.
Comment: 21 pages, 6 figures
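The rotating-reference-frame simulation mentioned above can be sketched with assumed equations and parameter values (an illustrative model, not the authors' code): in the co-rotating frame, the particle experiences centrifugal and Coriolis accelerations plus viscous drag from the surrounding fluid.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi * 50    # rotor speed in rad/s (~3000 rpm); illustrative
gamma = 1.0e3             # drag coefficient per unit mass in 1/s; illustrative

def rhs(t, s):
    x, y, vx, vy = s
    ax = omega**2 * x + 2 * omega * vy - gamma * vx   # centrifugal + Coriolis + drag
    ay = omega**2 * y - 2 * omega * vx - gamma * vy
    return [vx, vy, ax, ay]

# Start 1 cm off-axis, at rest in the rotating frame.
sol = solve_ivp(rhs, (0.0, 1.0), [0.01, 0.0, 0.0, 0.0], max_step=1e-4)
r = np.hypot(sol.y[0], sol.y[1])   # radial position drifts outward (sedimentation)
```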
Early-detection and classification of live bacteria using time-lapse coherent imaging and deep learning
We present a computational live bacteria detection system that periodically
captures coherent microscopy images of bacterial growth inside a
60-mm-diameter agar plate and analyzes these time-lapsed holograms using deep neural networks
for rapid detection of bacterial growth and classification of the corresponding
species. The performance of our system was demonstrated by rapid detection of
Escherichia coli and total coliform bacteria (i.e., Klebsiella aerogenes and
Klebsiella pneumoniae subsp. pneumoniae) in water samples. These results were
confirmed against gold-standard culture-based results, shortening the detection
time of bacterial growth by >12 h as compared to the Environmental Protection
Agency (EPA)-approved analytical methods. Our experiments further confirmed
that this method successfully detects 90% of bacterial colonies within 7-10 h
(and >95% within 12 h) with a precision of 99.2-100%, and correctly identifies
their species in 7.6-12 h with 80% accuracy. Using pre-incubation of samples in
growth media, our system achieved a limit of detection (LOD) of ~1 colony
forming unit (CFU)/L within 9 h of total test time. This computational bacteria
detection and classification platform is highly cost-effective (~$0.6 per test)
and high-throughput with a scanning speed of 24 cm^2/min over the entire plate
surface, making it highly suitable for integration with the existing analytical
methods currently used for bacteria detection on agar plates. Powered by deep
learning, this automated and cost-effective live bacteria detection platform
can be transformative for a wide range of applications in microbiology by
significantly reducing the detection time, also automating the identification
of colonies, without labeling or the need for an expert.
Comment: 24 pages, 6 figures
Computational cytometer based on magnetically modulated coherent imaging and deep learning.
Detecting rare cells within blood has numerous applications in disease diagnostics. Existing rare cell detection techniques are typically hindered by their high cost and low throughput. Here, we present a computational cytometer based on magnetically modulated lensless speckle imaging, which introduces oscillatory motion to the magnetic-bead-conjugated rare cells of interest through a periodic magnetic force and uses lensless time-resolved holographic speckle imaging to rapidly detect the target cells in three dimensions (3D). In addition to using cell-specific antibodies to magnetically label target cells, detection specificity is further enhanced through a deep-learning-based classifier built on a densely connected pseudo-3D convolutional neural network (P3D CNN), which automatically detects rare cells of interest based on their spatio-temporal features under a controlled magnetic force. To demonstrate the performance of this technique, we built a high-throughput, compact and cost-effective prototype for detecting MCF7 cancer cells spiked in whole blood samples. Through serial dilution experiments, we quantified the limit of detection (LoD) as 10 cells per millilitre of whole blood, which could be further improved through multiplexing parallel imaging channels within the same instrument. This compact, cost-effective and high-throughput computational cytometer can potentially be used for rare cell detection and quantification in bodily fluids for a variety of biomedical applications.
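A toy numerical illustration (not the authors' P3D CNN pipeline) of why the periodic magnetic modulation helps: a bead-conjugated cell oscillates at the known drive frequency, so the temporal spectrum of its image trace shows a peak there that stationary debris lacks. Names and values below are assumptions.

```python
import numpy as np

fs, f_drive = 30.0, 2.0                        # 30 fps camera, 2 Hz magnetic drive
t = np.arange(0, 10, 1 / fs)                   # 10 s of frames
cell   = np.sin(2 * np.pi * f_drive * t) + 0.3 * np.random.randn(t.size)
debris = 0.3 * np.random.randn(t.size)         # no coherent oscillation

def power_at_drive(x):
    """Spectral magnitude of a pixel's time trace at the drive frequency."""
    spec = np.abs(np.fft.rfft(x)) / x.size
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - f_drive))]

print(power_at_drive(cell) > power_at_drive(debris))   # True with high probability
```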