Light Field Super-Resolution Via Graph-Based Regularization
Light field cameras capture the 3D information in a scene with a single
exposure. This special feature makes light field cameras very appealing for a
variety of applications: from post-capture refocus, to depth estimation and
image-based rendering. However, light field cameras suffer by design from
strong limitations in their spatial resolution, which must therefore be
enhanced by computational methods. On the one hand, off-the-shelf single-frame
and multi-frame super-resolution algorithms are not ideal for light field data,
as they do not consider its particular structure. On the other hand, the few
super-resolution algorithms explicitly tailored for light field data exhibit
significant limitations, such as the need to estimate an explicit disparity map
at each view. In this work we propose a new light field super-resolution
algorithm meant to address these limitations. We adopt a multi-frame-style
super-resolution approach, where the complementary information in the different
light field views is used to augment the spatial resolution of the whole light
field. We show that coupling the multi-frame approach with a graph regularizer
that enforces the light field structure via nonlocal self-similarities makes it
possible to avoid the costly and challenging disparity estimation step for all the
views. Extensive experiments show that the new algorithm compares favorably to
other state-of-the-art methods for light field super-resolution, both in
terms of PSNR and visual quality.

Comment: This new version includes more material. In particular, we added: a
new section on the computational complexity of the proposed algorithm,
experimental comparisons with a CNN-based super-resolution algorithm, and new
experiments on a third dataset.
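The graph-regularized super-resolution idea above can be illustrated with a minimal sketch: a quadratic data-fidelity term (blur plus downsampling) combined with a graph-Laplacian penalty. The averaging operator, the path-graph Laplacian standing in for the nonlocal similarity graph, and the weight `lam` are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.sparse import diags, csr_matrix
from scipy.sparse.linalg import cg

def chain_laplacian(n):
    # Path-graph Laplacian, used here as a simple stand-in for a graph
    # built from nonlocal patch self-similarities.
    main = np.full(n, 2.0)
    main[0] = main[-1] = 1.0
    off = -np.ones(n - 1)
    return diags([main, off, off], [0, -1, 1]).toarray()

# Toy problem: recover a 1-D "view" x from a 2x-downsampled observation y.
n = 16
A = np.zeros((n // 2, n))
for i in range(n // 2):
    A[i, 2 * i] = A[i, 2 * i + 1] = 0.5   # average pairs: blur + downsample
x_true = np.sin(np.linspace(0, np.pi, n))
y = A @ x_true

# Solve min_x ||A x - y||^2 + lam * x^T L x via its normal equations.
lam = 0.1
L = chain_laplacian(n)
x_hat, info = cg(csr_matrix(A.T @ A + lam * L), A.T @ y)
assert info == 0   # conjugate gradient converged
```

The graph term pulls connected pixels toward similar values, which is what lets the regularizer encode the light field structure without an explicit disparity map.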
Graph-Based Light Field Super-Resolution
Light field cameras can capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications: from post-capture refocus, to depth estimation and image-based rendering. However, light field cameras exhibit a very limited spatial resolution, which must therefore be increased by computational methods. Off-the-shelf single-frame and multi-frame super-resolution algorithms are not ideal for light field data, as they ignore its particular structure. A few super-resolution algorithms explicitly devised for light field data exist, but they exhibit significant limitations, such as the need to carry out an explicit disparity estimation step for one or several light field views. In this work we present a new light field super-resolution algorithm meant to address these limitations. We adopt a multi-frame-style super-resolution approach, where the information in the different light field views is used to augment the spatial resolution of the whole light field. In particular, we show that coupling the multi-frame paradigm with a graph regularizer that enforces the light field structure makes it possible to avoid the costly and challenging disparity estimation step. Our experiments show that the proposed method compares favorably to state-of-the-art light field super-resolution algorithms, both in terms of PSNR and visual quality.
A Nonsmooth Graph-Based Approach to Light Field Super-Resolution
In this article we propose a new super-resolution algorithm tailored for light field cameras, which suffer by design from a limited spatial resolution. To do so, we cast the light field super-resolution problem into an optimization problem, where the particular structure of the light field data is captured by a nonsmooth graph-based regularizer, and all the light field views are super-resolved jointly. In our experiments, we show that the proposed method compares favorably to the state-of-the-art light field super-resolution algorithms in terms of PSNR and visual quality. In particular, the nonsmooth graph-based regularizer leads to sharper images while preserving fine details.
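A nonsmooth graph-based formulation of this kind typically takes the following generic form (the symbols are illustrative, not necessarily the authors' exact notation):

```latex
\min_{x} \; \| S H x - y \|_2^2 \; + \; \lambda \, \| \Gamma x \|_1
```

where $x$ stacks all super-resolved views, $y$ the low-resolution observations, $S$ and $H$ model downsampling and blur, and $\Gamma$ is a graph difference operator. Replacing the quadratic penalty $x^\top L x$ with the $\ell_1$ norm makes the regularizer nonsmooth, which is what favors sharp edges over the oversmoothing typical of quadratic graph penalties.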
A Compressive Multi-Mode Superresolution Display
Compressive displays are an emerging technology exploring the co-design of
new optical device configurations and compressive computation. Previously,
research has shown how to improve the dynamic range of displays and facilitate
high-quality light field or glasses-free 3D image synthesis. In this paper, we
introduce a new multi-mode compressive display architecture that supports
switching between 3D and high dynamic range (HDR) modes as well as a new
super-resolution mode. The proposed hardware consists of readily-available
components and is driven by a novel splitting algorithm that computes the pixel
states from a target high-resolution image. In effect, the display pixels
present a compressed representation of the target image that is perceived as a
single, high-resolution image.

Comment: Technical report
A novel disparity-assisted block matching-based approach for super-resolution of light field images
Currently available plenoptic imaging technology has limited resolution. This makes the technology challenging to use in applications where sharpness is essential, such as the film industry. Previous attempts to enhance the spatial resolution of plenoptic light field (LF) images were based on block and patch matching inherited from classical image super-resolution, where multiple views were treated as separate frames. In contrast to these approaches, this paper proposes a novel super-resolution technique focused on exploiting estimated disparity information to reduce the matching area in the super-resolution process. We estimate the disparity information from the interpolated low-resolution (LR) viewpoint images (VPs). We denote our method light field block matching super-resolution. We additionally combine our novel super-resolution method with the directionally adaptive image interpolation from [1] to preserve the sharpness of the high-resolution images. We demonstrate a steady gain in the PSNR and SSIM quality of the super-resolved images for a resolution enhancement factor of 8x8, compared both to recent approaches and to our previous work [2].