
    A Pipeline for Lenslet Light Field Quality Enhancement

    In recent years, light fields have become a major research topic, and their applications span the entire spectrum of classical image processing. Among the methods used to capture a light field are lenslet cameras, such as those developed by Lytro. While these cameras give the user a great deal of freedom, the light field views they produce suffer from a number of artefacts. As a result, a significant subset of these views is commonly ignored in high-level light field processing. We propose a pipeline to process light field views: first, an enhanced processing of RAW images to extract sub-aperture images; then a colour correction stage using a recent colour transfer algorithm; and finally a denoising stage using a state-of-the-art light field denoising approach. We show that our method improves light field quality on many levels, reducing ghosting artefacts and noise as well as retrieving more accurate and homogeneous colours across the sub-aperture images.
    Comment: IEEE International Conference on Image Processing 2018, 5 pages, 7 figures
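    The abstract describes its colour correction stage only at a high level. As a rough illustration of the idea, the sketch below matches each colour channel of a sub-aperture view to a reference view via classic histogram (CDF) matching; this is a simpler stand-in, not the specific colour transfer algorithm the paper uses, and the function names are illustrative.

    ```python
    import numpy as np

    def match_channel(source, reference):
        """Map the source channel's intensity distribution onto the
        reference channel's via CDF matching (histogram matching)."""
        s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
        r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_counts).astype(np.float64) / source.size
        r_cdf = np.cumsum(r_counts).astype(np.float64) / reference.size
        # For each source intensity, find the reference intensity with
        # the closest cumulative probability.
        mapped = np.interp(s_cdf, r_cdf, r_vals)
        return mapped[np.searchsorted(s_vals, source)]

    def transfer_colours(view, reference):
        """Apply per-channel histogram matching to an H x W x 3 view."""
        out = np.empty_like(view, dtype=np.float64)
        for c in range(view.shape[-1]):
            out[..., c] = match_channel(view[..., c], reference[..., c])
        return out
    ```

    Matching every sub-aperture view against one well-exposed central view in this way would homogenise colours across the light field, which is the effect the abstract reports.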

    PlenoptiCam v1.0: A light-field imaging framework

    This is an accepted manuscript of an article published by IEEE in IEEE Transactions on Image Processing on 19/07/2021. Available online: https://doi.org/10.1109/TIP.2021.3095671 The accepted version of the publication may differ from the final published version.
    Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth sensing applications. The key obstacle in composing light fields from exposures taken by a plenoptic camera is to computationally calibrate, re-align and rearrange the four-dimensional image data. Several pipelines have been proposed to enhance the overall image quality, tailored to particular plenoptic cameras and improving the color consistency across viewpoints at the expense of high computational loads. The framework presented herein advances prior work thanks to its cost-effective color equalization, based on parallax-invariant probability distribution transfers, and a novel micro image scale-space analysis for generic camera calibration independent of the lens specifications. Our framework compensates for artifacts from the sensor and micro lens grid in an innovative way, enabling superior quality in sub-aperture image extraction, computational refocusing and Scheimpflug rendering with sub-sampling capabilities. Benchmark comparisons using established image metrics suggest that our proposed pipeline outperforms state-of-the-art tool chains in the majority of cases. The algorithms described in this paper are released under an open-source license, offer cross-platform compatibility with few dependencies, and include a graphical user interface. This makes the reproduction of results and experimentation with plenoptic camera technology convenient for peer researchers, developers, photographers, data scientists and others working in this field.
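    The micro image scale-space analysis itself is not spelled out in the abstract. As a minimal toy sketch of the calibration idea, the code below locates micro-image centres on a white (diffuser) calibration shot as bright local maxima; the actual PlenoptiCam analysis is considerably more involved, and the function and parameter names here are assumptions for illustration only.

    ```python
    import numpy as np

    def microlens_centres(white, radius=5, rel_thresh=0.5):
        """Find micro-image centres on a white calibration image as
        pixels that dominate their local window and are bright
        relative to the global peak.  `radius` should be just under
        half the micro-image pitch in pixels."""
        pad = np.pad(white.astype(float), radius, constant_values=-np.inf)
        h, w = white.shape
        # Stack every shifted copy of the image within the search window.
        win = np.stack([pad[radius + dy:radius + dy + h,
                            radius + dx:radius + dx + w]
                        for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)])
        peaks = (white >= win.max(axis=0)) & (white > rel_thresh * white.max())
        return np.argwhere(peaks)  # (row, col) coordinates
    ```

    Once the centre grid is known, the four-dimensional light field can be rearranged by sampling the same offset within every micro image to form each sub-aperture view, which is the "re-align and rearrange" step the abstract refers to.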

    Lossy Light Field Compression Using Modern Deep Learning and Domain Randomization Techniques

    Lossy data compression is a type of encoding that uses approximations to trade off accuracy in favour of smaller file sizes. The transmission and storage of images is a typical example in the modern digital world. However, the reconstructed images often suffer from degradation and display observable visual artifacts. Convolutional Neural Networks have garnered much attention in all corners of Computer Vision, including the tasks of image compression and artifact reduction. We study how lossy compression can be extended to higher-dimensional images with varying viewpoints, known as light fields. Domain Randomization is explored in detail and used to generate the largest light field dataset we are aware of, which serves as training data. We formulate the compression task within a neural-network framework and calculate a quantization tensor for the 4-D Discrete Cosine Transform coefficients of the light fields. To train the network accurately, a high-degree approximation to the rounding operation is introduced. In addition, we present a multi-resolution convolutional light field enhancer, producing average gains of 0.854 dB in Peak Signal-to-Noise Ratio and 0.0338 in Structural Similarity Index Measure over the base model, across a wide range of bitrates.
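    The abstract does not specify which rounding approximation is used. One common family of smooth surrogates is a truncated Fourier series of the sawtooth x - round(x), where adding terms gives a higher-degree fit while keeping the function differentiable for training; the sketch below assumes this flavour and is not the paper's exact formulation.

    ```python
    import numpy as np

    def soft_round(x, terms=64):
        """Differentiable approximation of rounding via the truncated
        Fourier series of the sawtooth x - round(x):
            round(x) ~ x - sum_{k=1}^{K} (-1)^(k+1) sin(2*pi*k*x) / (pi*k)
        More terms tighten the fit away from half-integer
        discontinuities while the function stays smooth for autodiff."""
        x = np.asarray(x, dtype=float)
        k = np.arange(1, terms + 1)
        series = ((-1.0) ** (k + 1) / (np.pi * k)) * np.sin(
            2.0 * np.pi * k * x[..., None])
        return x - series.sum(axis=-1)

    def quantise(coeffs, q_tensor, terms=64):
        """Quantise 4-D DCT coefficients with a quantisation tensor,
        using soft_round so gradients can flow during training."""
        return soft_round(coeffs / q_tensor, terms) * q_tensor
    ```

    At inference time the smooth surrogate would be swapped for true rounding; the surrogate only matters while gradients must propagate through the quantiser.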
