An Efficient Refocusing Scheme for Camera-Array Captured Light Field Video for Improved Visual Immersiveness
Light field video technology attempts to acquire human-like visual data, offering unprecedented immersiveness and a viable path for producing high-quality VR content. Refocusing, one of the key properties of light fields and a must for mixed reality applications, has been shown to work well for microlens-based cameras; however, light field videos acquired by camera arrays have low angular resolution, so refocusing quality suffers. In this paper, we present an approach to improve the visual quality of refocused content captured by a camera-array-based setup. Increasing the angular resolution with an existing deep learning-based view synthesis method and refocusing the video with the shift-and-sum refocusing algorithm over-blurs the in-focus region. Our enhancement method targets these blurry pixels and improves their quality through similarity detection and blending. Experimental results show that the proposed approach achieves better refocusing quality than traditional methods.
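The shift-and-sum refocusing mentioned above can be sketched in a few lines: each sub-aperture view is shifted in proportion to its angular offset from the central view, then all views are averaged. The snippet below is a minimal illustration, not the paper's implementation; the `(U, V, H, W)` view-grid layout, the `alpha` refocus parameter, and the integer-pixel `np.roll` shifts are simplifying assumptions (real pipelines typically use sub-pixel interpolation).

```python
import numpy as np

def shift_and_sum_refocus(lf, alpha):
    """Refocus a light field with the classic shift-and-sum algorithm.

    lf    : array of shape (U, V, H, W) -- a U x V grid of sub-aperture
            views (hypothetical layout; camera-array data may differ).
    alpha : refocus parameter controlling the synthetic focal plane.
    """
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Integer-pixel shift proportional to the view's angular
            # distance from the central view (sub-pixel shifts omitted).
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)  # average of all shifted views
```

With a low-angular-resolution array (small `U`, `V`), this average is built from few views, which is why out-of-focus regions alias and in-focus regions blur after view synthesis, the artifact the paper's blending step targets.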
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
Light field imaging extends traditional photography by capturing both the
spatial and angular distribution of light, which enables new capabilities,
including post-capture refocusing, post-capture aperture control, and depth
estimation from a single shot. Micro-lens array (MLA) based light field cameras
offer a cost-effective approach to capturing light fields. A major drawback of
MLA-based light field cameras is low spatial resolution, because a single
image sensor is shared to capture both spatial and angular information. In this
paper, we present a learning-based light field enhancement approach. Both the
spatial and angular resolution of the captured light field are enhanced using
convolutional neural networks. The proposed method is tested with real light
field data captured with a Lytro light field camera, clearly demonstrating
spatial and angular resolution improvement.
Light Field Denoising via Anisotropic Parallax Analysis in a CNN Framework
Light field (LF) cameras provide perspective information of scenes by taking
directional measurements of the focusing light rays. The raw outputs are
usually dark with additive camera noise, which impedes subsequent processing
and applications. We propose a novel LF denoising framework based on
anisotropic parallax analysis (APA). Two convolutional neural networks are
jointly designed for the task: first, the structural parallax synthesis network
predicts the parallax details for the entire LF from a set of anisotropic
parallax features. These novel features efficiently capture the high-frequency
perspective components of an LF from noisy observations. Second, the
view-dependent detail compensation network restores non-Lambertian variation to
each LF view using view-specific spatial energies. Extensive experiments show
that the proposed APA LF denoiser outperforms state-of-the-art methods both in
visual quality and in preservation of parallax details.