DeepSTORM3D: dense three dimensional localization microscopy and point spread function design by deep learning
Localization microscopy is an imaging technique in which the positions of
individual nanoscale point emitters (e.g. fluorescent molecules) are determined
at high precision from their images. This is the key ingredient in
single/multiple-particle-tracking and several super-resolution microscopy
approaches. Localization in three-dimensions (3D) can be performed by modifying
the image that a point-source creates on the camera, namely, the point-spread
function (PSF). The PSF is engineered using additional optical elements to vary
distinctively with the depth of the point-source. However, localizing multiple
adjacent emitters in 3D poses a significant algorithmic challenge, due to the
lateral overlap of their PSFs. Here, we train a neural network to receive an
image containing densely overlapping PSFs of multiple emitters over a large
axial range and output a list of their 3D positions. Furthermore, we use
the network to design the optimal PSF for the multi-emitter case. We
demonstrate our approach numerically as well as experimentally by 3D STORM
imaging of mitochondria, and volumetric imaging of dozens of
fluorescently-labeled telomeres occupying a mammalian nucleus in a single
snapshot.
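A minimal sketch of the recovery step described above, assuming the problem is cast as mapping a single camera frame to a voxelized grid of emitter probabilities: the small PyTorch network below outputs one channel per axial bin. The layer sizes, grid resolution, and names are illustrative assumptions, not the authors' DeepSTORM3D architecture.

    # Sketch: map a 2D frame of overlapping PSFs to a voxelized grid of
    # emitter probabilities (one output channel per axial bin). Layer sizes
    # and the voxel grid are illustrative assumptions. Requires PyTorch.
    import torch
    import torch.nn as nn

    class Dense3DLocalizer(nn.Module):
        def __init__(self, depth_bins: int = 32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            )
            # One output channel per axial (depth) bin.
            self.head = nn.Conv2d(64, depth_bins, kernel_size=1)

        def forward(self, frame: torch.Tensor) -> torch.Tensor:
            # frame: (batch, 1, H, W) -> per-voxel emitter probability
            return torch.sigmoid(self.head(self.features(frame)))

    net = Dense3DLocalizer(depth_bins=32)
    volume = net(torch.randn(1, 1, 64, 64))   # shape (1, 32, 64, 64)
    # Thresholding and local-maximum extraction over `volume` would then
    # yield the list of 3D emitter positions.
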
Joint Image and Depth Estimation With Mask-Based Lensless Cameras
Mask-based lensless cameras replace the lens of a conventional camera with a
custom mask. These cameras can potentially be very thin and even flexible.
Recently, it has been demonstrated that such mask-based cameras can recover
light intensity and depth information of a scene. Existing depth recovery
algorithms either assume that the scene consists of a small number of depth
planes or solve a sparse recovery problem over a large 3D volume. Both of these
approaches fail to recover scenes with large depth variations. In this
paper, we propose a new approach for depth estimation based on an alternating
gradient descent algorithm that jointly estimates a continuous depth map and
light distribution of the unknown scene from its lensless measurements. We
present simulation results on image and depth reconstruction for a variety of
3D test scenes. A comparison between the proposed algorithm and other methods
shows that our algorithm is more robust for natural scenes with a large range
of depths. We built a prototype lensless camera and present experimental
results for reconstruction of intensity and depth maps of different real
objects.
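A minimal sketch of the alternating scheme, under a toy depth-dependent forward model: each iteration takes one gradient step on the intensity map with the depth map held fixed, then one on the depth map with the intensity held fixed. The forward model, mask, and step sizes are illustrative assumptions, not the paper's measurement operator. Requires NumPy.

    # Sketch: alternating gradient descent jointly fitting an intensity map x
    # and a continuous depth map d to lensless measurements y, under a toy
    # model where a random mask mixes a depth-attenuated scene.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(x, d, masks):
        atten = 1.0 / (1.0 + d**2)        # toy depth-dependent attenuation
        return masks @ (atten * x)         # measurement vector

    n = 256                                # scene pixels
    masks = rng.standard_normal((128, n))  # toy random mask rows
    x_true = rng.uniform(0.0, 1.0, n)
    d_true = rng.uniform(0.5, 2.0, n)
    y = forward(x_true, d_true, masks)

    x, d = np.full(n, 0.5), np.ones(n)     # initial estimates
    lr_x, lr_d = 1e-3, 1e-3
    for _ in range(1000):
        # Step 1: gradient step on the intensity map, depth held fixed.
        atten = 1.0 / (1.0 + d**2)
        r = masks @ (atten * x) - y
        x -= lr_x * atten * (masks.T @ r)
        # Step 2: gradient step on the depth map, intensity held fixed.
        r = masks @ (atten * x) - y
        d -= lr_d * x * (masks.T @ r) * (-2.0 * d / (1.0 + d**2) ** 2)

    print("residual norm:", np.linalg.norm(forward(x, d, masks) - y))
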
Thin On-Sensor Nanophotonic Array Cameras
Today's commodity camera systems rely on compound optics to map light
originating from the scene to positions on the sensor where it gets recorded as
an image. To record images without optical aberrations, i.e., deviations from
Gauss' linear model of optics, typical lens systems introduce increasingly
complex stacks of optical elements which are responsible for the height of
existing commodity cameras. In this work, we investigate flat nanophotonic
computational cameras as an alternative that employs an array of skewed
lenslets and a learned reconstruction approach. The optical array is embedded
on a metasurface that, at 700 nm height, is flat and sits on the sensor cover
glass at 2.5 mm focal distance from the sensor. To tackle the highly chromatic
response of a metasurface and design the array over the entire sensor, we
propose a differentiable optimization method that continuously samples over the
visible spectrum and factorizes the optical modulation for different incident
fields into individual lenses. We reconstruct a megapixel image from our flat
imager with a learned probabilistic reconstruction method that employs a
generative diffusion model to sample an implicit prior. To tackle
scene-dependent aberrations in broadband, we propose a method for acquiring
paired captured training data in varying illumination conditions. We assess the
proposed flat camera design in simulation and with an experimental prototype,
validating that the method is capable of recovering images from diverse scenes
in broadband with a single nanophotonic layer.
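A minimal sketch of the continuous spectral sampling, assuming an idealized single-lens phase-matching objective: each optimization step draws a wavelength uniformly from the visible band and nudges a learnable height map toward the ideal lens phase at that wavelength. The geometry, refractive index, and loss are illustrative assumptions, not the paper's metasurface model. Requires PyTorch.

    # Sketch: differentiable optimization of a nanostructure height map with
    # wavelengths sampled continuously over the visible spectrum.
    import torch

    n_pix, pitch, focal = 128, 1e-6, 2.5e-3        # pixels, pitch (m), focal distance (m)
    x = (torch.arange(n_pix) - n_pix / 2) * pitch  # 1D lens coordinate
    n_index = 1.5                                  # toy refractive index

    height = torch.zeros(n_pix, requires_grad=True)  # learnable height map (m)
    opt = torch.optim.Adam([height], lr=1e-8)

    def target_phase(lam):
        # Ideal hyperbolic lens phase for wavelength lam.
        return -2 * torch.pi / lam * (torch.sqrt(x**2 + focal**2) - focal)

    for step in range(2000):
        lam = torch.empty(1).uniform_(400e-9, 700e-9)   # sample a visible wavelength
        phase = 2 * torch.pi * (n_index - 1) * height / lam
        # Complex mismatch between realized and target wavefronts.
        loss = (torch.exp(1j * phase) - torch.exp(1j * target_phase(lam))).abs().pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("final loss:", float(loss))
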
Analysis of Diffractive Neural Networks for Seeing Through Random Diffusers
Imaging through diffusive media is a challenging problem, where the existing
solutions heavily rely on digital computers to reconstruct distorted images. We
provide a detailed analysis of a computer-free, all-optical imaging method for
seeing through random, unknown phase diffusers using diffractive neural
networks, covering different deep learning-based training strategies. By
analyzing various diffractive networks designed to image through random
diffusers with different correlation lengths, we observed a trade-off between
the image reconstruction fidelity and the distortion-reduction capability of the
diffractive network. During training, random diffusers with a range of
correlation lengths were used to improve the diffractive network's
generalization performance. Increasing the number of random diffusers used in
each epoch reduced the overfitting of the diffractive network's imaging
performance to known diffusers. We also demonstrated that the use of additional
diffractive layers improved the generalization capability to see through new,
random diffusers. Finally, we introduced deliberate misalignments in training
to 'vaccinate' the network against random layer-to-layer shifts that might
arise due to the imperfect assembly of the diffractive networks. These analyses
provide a comprehensive guide in designing diffractive networks to see through
random diffusers, which might profoundly impact many fields, such as biomedical
imaging, atmospheric physics, and autonomous driving.
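A minimal sketch of the training-time augmentation described above: each batch draws a fresh random phase diffuser with a randomly chosen correlation length and applies a random layer shift, so the trained system cannot overfit to any single diffuser or to a perfectly aligned assembly. The diffuser statistics, the toy propagation step, and the omitted training update are illustrative assumptions. Requires NumPy and SciPy.

    # Sketch: per-batch sampling of random diffusers and misalignments
    # ("vaccination"), with a toy stand-in for the optical forward model.
    import numpy as np
    from scipy.ndimage import gaussian_filter, shift

    rng = np.random.default_rng(0)

    def random_diffuser(size, corr_px):
        # Random phase screen whose lateral correlation length is set by the
        # Gaussian smoothing width (in pixels).
        phase = gaussian_filter(rng.standard_normal((size, size)), corr_px)
        phase *= 2 * np.pi / phase.std()
        return np.exp(1j * phase)

    def distort(obj, diffuser):
        # Toy distortion: multiply the field by the diffuser phase and take a
        # Fourier transform as a stand-in for free-space propagation.
        return np.abs(np.fft.fft2(obj * diffuser))

    size = 64
    for epoch in range(3):
        for batch in range(8):
            obj = rng.uniform(0.0, 1.0, (size, size))   # toy input image
            corr = rng.uniform(2.0, 8.0)                 # vary correlation length
            diffuser = random_diffuser(size, corr)
            # "Vaccination": jitter alignment by a random sub-pixel shift.
            dy, dx = rng.uniform(-1.5, 1.5, 2)
            measurement = shift(distort(obj, diffuser), (dy, dx), order=1)
            # ...feed `measurement` to the trainable diffractive/reconstruction
            # model and take a gradient step (omitted in this sketch).
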