Photon-Efficient Computational 3D and Reflectivity Imaging with Single-Photon Detectors
Capturing depth and reflectivity images at low light levels from active
illumination of a scene has wide-ranging applications. Conventionally, even
with single-photon detectors, hundreds of photon detections are needed at each
pixel to mitigate Poisson noise. We develop a robust method for estimating
depth and reflectivity using on the order of 1 detected photon per pixel
averaged over the scene. Our computational imager combines physically accurate
single-photon counting statistics with exploitation of the spatial correlations
present in real-world reflectivity and 3D structure. Experiments conducted in
the presence of strong background light demonstrate that our computational
imager accurately recovers scene depth and reflectivity, while traditional
maximum-likelihood-based imaging methods yield highly noisy estimates. Our
framework increases photon efficiency 100-fold over traditional processing and
also modestly improves upon first-photon imaging under a total
acquisition-time constraint in raster-scanned operation. Thus our
new imager will be useful for rapid, low-power, and noise-tolerant active
optical imaging, and its fixed dwell time will facilitate parallelization
through use of a detector array.
Comment: 11 pages, 8 figures
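To illustrate why pixelwise processing struggles at roughly one detected photon per pixel, here is a minimal Poisson-noise sketch. The grid, flux scale, and pulse count are hypothetical, and the pixelwise maximum-likelihood estimator shown is the conventional baseline, not the paper's regularized method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: true reflectivity in [0.1, 0.9] on an 8x8 grid.
true_refl = np.tile(np.linspace(0.1, 0.9, 8), (8, 1))

# Each pixel is probed with n_pulses pulses; the detection rate is
# proportional to reflectivity, with the flux scale chosen so the
# scene averages about 1 detected photon per pixel.
n_pulses = 100
flux = 1.0 / (n_pulses * true_refl.mean())
counts = rng.poisson(n_pulses * flux * true_refl)

# Pixelwise maximum-likelihood reflectivity estimate for Poisson
# counts: detected counts divided by the expected counts per unit
# reflectivity.
ml_est = counts / (n_pulses * flux)

# At ~1 photon/pixel the RMS error of the ML estimate is large,
# which is why the paper exploits spatial correlations instead of
# estimating each pixel independently.
rmse = np.sqrt(np.mean((ml_est - true_refl) ** 2))
print(counts.mean(), rmse)
```

The printed RMS error is comparable to the reflectivity values themselves, consistent with the abstract's point that hundreds of detections per pixel are conventionally needed to suppress Poisson noise.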
Depth Superresolution using Motion Adaptive Regularization
The spatial resolution of depth sensors is often significantly lower than that
of conventional optical cameras. Recent work has explored improving depth
resolution using higher-resolution intensity images as side
information. In this paper, we demonstrate that further incorporating temporal
information in videos can significantly improve the results. In particular, we
propose a novel approach that improves depth resolution, exploiting the
space-time redundancy in the depth and intensity using motion-adaptive low-rank
regularization. Experiments confirm that the proposed approach substantially
improves the quality of the estimated high-resolution depth. Our approach can
serve as a first component in vision systems that rely on high-resolution
depth information.
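As a rough illustration of low-rank regularization on motion-aligned patches, the sketch below applies singular value thresholding (the proximal operator of the nuclear norm, a standard low-rank regularizer) to a synthetic near-low-rank patch matrix. The matrix, noise level, and threshold are invented for illustration; the paper's motion-adaptive formulation is more involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack of 10 vectorized 8x8 depth patches tracked along
# a motion trajectory: similar patches make the 64x10 matrix
# approximately rank-1.
base = rng.standard_normal((64, 1)) @ rng.standard_normal((1, 10))
noisy = base + 0.3 * rng.standard_normal(base.shape)

def svt(M, tau):
    """Singular value thresholding: shrink all singular values by
    tau, zeroing the small ones, which promotes a low-rank result."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

denoised = svt(noisy, tau=3.0)
err_before = np.linalg.norm(noisy - base)
err_after = np.linalg.norm(denoised - base)
print(err_before, err_after)
```

Because the noise energy spreads across all singular directions while the signal concentrates in a few, thresholding removes most of the noise, which is the intuition behind using space-time redundancy as a low-rank prior.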
Learning a Dilated Residual Network for SAR Image Despeckling
In this paper, to overcome the limitations of traditional linear models for
synthetic aperture radar (SAR) image despeckling, we propose a deep learning
approach that learns a non-linear end-to-end mapping between noisy and clean
SAR images with a dilated residual network (SAR-DRN). SAR-DRN is
based on dilated convolutions, which can both enlarge the receptive field and
maintain the filter size and layer depth with a lightweight structure. In
addition, skip connections and a residual learning strategy are added to the
despeckling model to preserve image details and alleviate the
vanishing-gradient problem. The proposed method outperforms both traditional
and state-of-the-art despeckling methods in quantitative and visual
assessments, especially under strong speckle noise.
Comment: 18 pages, 13 figures, 7 tables
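The receptive-field benefit of dilated convolutions can be sketched with simple arithmetic: a k x k convolution with dilation d adds d*(k-1) pixels of context per layer, so a stack of dilated layers covers a large neighborhood with no extra parameters. The dilation schedule below is hypothetical and not necessarily SAR-DRN's actual configuration:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of dilated convolutions: each
    layer with dilation d widens the field by d * (kernel_size - 1)."""
    rf = 1
    for d in dilations:
        rf += d * (kernel_size - 1)
    return rf

# Four plain 3x3 layers vs. four dilated 3x3 layers with an
# illustrative dilation schedule (1, 2, 3, 2).
plain = receptive_field(3, [1, 1, 1, 1])    # 9 pixels of context
dilated = receptive_field(3, [1, 2, 3, 2])  # 17 pixels of context
print(plain, dilated)
```

With the same number of 3x3 filters, the dilated stack nearly doubles the receptive field, which is the lightweight-structure advantage the abstract refers to.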