A Compressive Multi-Mode Superresolution Display
Compressive displays are an emerging technology exploring the co-design of
new optical device configurations and compressive computation. Previously,
research has shown how to improve the dynamic range of displays and facilitate
high-quality light field or glasses-free 3D image synthesis. In this paper, we
introduce a new multi-mode compressive display architecture that supports
switching between 3D and high dynamic range (HDR) modes as well as a new
super-resolution mode. The proposed hardware consists of readily-available
components and is driven by a novel splitting algorithm that computes the pixel
states from a target high-resolution image. In effect, the display pixels
present a compressed representation of the target image that is perceived as a
single, high-resolution image. Comment: Technical report
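A hedged illustration of the compressive-decomposition idea behind such displays: a target image is factored into two low-rank multiplicative layers whose product approximates it, analogous to splitting pixel states across display layers. This is a generic sketch using nonnegative matrix factorization on a random stand-in image, not the paper's splitting algorithm; all sizes and ranks are illustrative assumptions.

```python
# Generic sketch of compressive decomposition, not the paper's algorithm.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
target = rng.random((32, 32))            # stand-in for the target image

model = NMF(n_components=8, init="random", random_state=0, max_iter=500)
front = model.fit_transform(target)      # front-layer pixel states (32 x 8)
rear = model.components_                 # rear-layer pixel states (8 x 32)

reconstruction = front @ rear            # what the viewer would perceive
error = np.linalg.norm(target - reconstruction) / np.linalg.norm(target)
```

The key design point is that the display layers store far fewer values than the target image (here 32×8 + 8×32 versus 32×32), which is what makes the representation "compressive".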
Accurate and robust image superresolution by neural processing of local image representations
Image superresolution involves processing an image sequence to generate a still image with higher resolution. Classical approaches, such as Bayesian MAP methods, require iterative minimization procedures with high computational costs. Recently, the authors proposed a method to tackle this problem based on a hybrid MLP-PNN architecture. In this paper, we present a novel superresolution method, based on an evolution of this concept, that incorporates local image models. A neural processing stage receives as input the values of model coefficients on local windows. The data dimensionality is first reduced by applying PCA. An MLP, trained on synthetic sequences with varying amounts of noise, estimates the high-resolution image data. The effect of varying the dimension of the network input space is examined, showing a complex, structured behavior. Quantitative results are presented showing the accuracy and robustness of the proposed method.
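The PCA-then-MLP pipeline described in this abstract can be sketched with standard tools: local-window coefficients are projected with PCA, then an MLP regresses the high-resolution value. Everything below is a minimal sketch on synthetic data; the window size, component count, network shape, and target function are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a PCA -> MLP regression pipeline on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training set: flattened 5x5 local-window coefficients (X) and a
# stand-in high-resolution target value for each window (y).
X = rng.normal(size=(2000, 25))
y = X.mean(axis=1) + 0.05 * rng.normal(size=2000)

pca = PCA(n_components=8)                # reduce the input dimensionality
X_reduced = pca.fit_transform(X)

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_reduced, y)

# Estimate the high-resolution value for a new (noisy) window.
patch = rng.normal(size=(1, 25))
estimate = mlp.predict(pca.transform(patch))
```

Training on sequences with various noise levels, as the abstract describes, would correspond to adding noise of different amplitudes to `X` before fitting, so the network learns a noise-robust mapping.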
Depth Superresolution using Motion Adaptive Regularization
Spatial resolution of depth sensors is often significantly lower compared to
that of conventional optical cameras. Recent work has explored the idea of
improving the resolution of depth using higher resolution intensity as a side
information. In this paper, we demonstrate that further incorporating temporal
information in videos can significantly improve the results. In particular, we
propose a novel approach that improves depth resolution, exploiting the
space-time redundancy in the depth and intensity using motion-adaptive low-rank
regularization. Experiments confirm that the proposed approach substantially
improves the quality of the estimated high-resolution depth. Our approach can
be a first component in systems using vision techniques that rely on high
resolution depth information.
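The core operation in low-rank regularization of the kind this abstract invokes is singular-value soft-thresholding, which shrinks the singular values of a stack of (motion-aligned) patches. The sketch below is a generic illustration of that operator on synthetic data, not the paper's motion-adaptive algorithm; the patch sizes, noise level, and threshold `tau` are assumptions.

```python
# Generic singular-value soft-thresholding, the proximal operator of the
# nuclear norm, which promotes a low-rank estimate of its input.
import numpy as np

def svt(M, tau):
    """Shrink the singular values of M by tau, clipping at zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
# Stack of 10 vectorized depth patches that are nearly rank-1 plus noise,
# mimicking the space-time redundancy the method exploits.
clean = rng.normal(size=(64, 1)) @ rng.normal(size=(1, 10))
noisy = clean + 0.1 * rng.normal(size=(64, 10))

denoised = svt(noisy, tau=2.0)
```

Because aligned patches across frames are highly redundant, the stack is close to low rank, and thresholding suppresses the noise-dominated singular directions while retaining the shared depth structure.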
Quantum-limited estimation of the axial separation of two incoherent point sources
Improving axial resolution is crucial for three-dimensional optical imaging
systems. Here we present a scheme of axial superresolution for two incoherent
point sources based on spatial mode demultiplexing. A radial mode sorter is
used to losslessly decompose the optical fields into a radial mode basis set to
extract the phase information associated with the axial positions of the point
sources. We show theoretically and experimentally that, in the limit of a zero
axial separation, our scheme allows for reaching the quantum Cram\'er-Rao lower
bound and thus can be considered as one of the optimal measurement methods.
Unlike other superresolution schemes, this one requires neither activation of
fluorophores nor sophisticated stabilization control. Moreover,
it is applicable to the localization of a single point source in the axial
direction. Our demonstration can be useful to a variety of applications such as
far-field fluorescence microscopy. Comment: Comments are welcome
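The quantum Cramér-Rao bound referenced in this abstract has a standard textbook form; the notation below (s for the axial separation, N for the detected photon number, F_Q for the quantum Fisher information) is assumed rather than taken from the paper.

```latex
% Standard form of the quantum Cram\'er-Rao bound (textbook notation):
\operatorname{Var}(\hat{s}) \;\geq\; \frac{1}{N\,\mathcal{F}_Q(s)}
```

A measurement that saturates this inequality, as the radial-mode-sorter scheme is claimed to do in the zero-separation limit, is optimal in the sense that no physically allowed measurement yields a lower estimator variance.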