
    Digital Holography Data Compression

    Digital holography processing is a research topic related to the development of novel immersive visual applications. The huge amount of information conveyed by a digital hologram, and the different properties of holographic data with respect to conventional photographic data, require an understanding of the performance and limitations of current standard image and video techniques. This paper proposes an architecture for objective evaluation of the performance of state-of-the-art compression techniques applied to digital holographic data.

    Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image

    Image metrics predict the perceived per-pixel difference between a reference image and its degraded (e.g., re-rendered) version. In several important applications, the reference image is not available and image metrics cannot be applied. We devise a neural network architecture and training procedure that allow predicting the MSE, SSIM, or VGG16 image difference from the distorted image alone, without observing the reference. This is enabled by two insights. The first is to inject sufficiently many undistorted natural image patches, which can be found in arbitrary amounts and are known to have no perceivable difference from themselves; this avoids false positives. The second is to balance the learning, carefully ensuring that all image errors are equally likely, which avoids false negatives. Surprisingly, we observe that the resulting no-reference metric can subjectively perform even better than the reference-based one, as it had to become robust against misalignments. We evaluate the effectiveness of our approach in an image-based rendering context, both quantitatively and qualitatively. Finally, we demonstrate two applications that reduce light field capture time and provide guidance for interactive depth adjustment. (Comment: 13 pages, 11 figures.)
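
The two training insights described above can be sketched in a few lines: distorted patches are labeled with their reference-based error during training only, and pristine patches are injected with a zero label. This is a minimal sketch of the data-preparation idea, not the paper's code; all function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(ref, img):
    # Reference-based error, available only at training time.
    return float(np.mean((ref - img) ** 2))

def make_training_batch(references, distorted, n_pristine):
    """Pair each distorted patch with its reference-based MSE label,
    then inject pristine patches labeled as zero error: identical to
    themselves, they have no perceivable difference (first insight)."""
    xs, ys = [], []
    for ref, dist in zip(references, distorted):
        xs.append(dist)
        ys.append(mse(ref, dist))
    for _ in range(n_pristine):
        clean = rng.random((8, 8))   # stands in for a natural image patch
        xs.append(clean)
        ys.append(0.0)               # zero-error label, no reference needed
    return np.stack(xs), np.array(ys)

refs = [rng.random((8, 8)) for _ in range(4)]
dists = [r + 0.1 * rng.standard_normal((8, 8)) for r in refs]
X, y = make_training_batch(refs, dists, n_pristine=4)
```

At inference time the trained network sees only `X`; the references never leave the training stage.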

    RLFC: Random Access Light Field Compression using Key Views and Bounded Integer Encoding

    We present a new hierarchical compression scheme for encoding light field images (LFI) that is suitable for interactive rendering. Our method (RLFC) exploits redundancies in the light field images by constructing a tree structure. The top level (root) of the tree captures the common high-level details across the LFI, and the other levels (children) capture specific low-level details. Our decompression algorithm corresponds to tree traversal operations that gather the values stored at different levels of the tree. Furthermore, we compress the blocks of the tree's children with bounded integer sequence encoding, which provides random access and fast hardware decoding. We have evaluated our method on 4D two-plane parameterized light fields. The compression rates vary from 0.08 to 2.5 bits per pixel (bpp), resulting in compression ratios of around 200:1 to 20:1 for a PSNR quality of 40 to 50 dB. The decompression times for decoding the blocks of the LFI are 1-3 microseconds per channel on an NVIDIA GTX 960, and we can render new views at a resolution of 512×512 at 200 fps. Our overall scheme is simple to implement and involves only bit manipulations and integer arithmetic operations. (Comment: Accepted for publication at the Symposium on Interactive 3D Graphics and Games, I3D '19.)
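
The random-access property of bounded integer sequence encoding comes from storing every value in a block at the same fixed bit width, so any element can be reached by index arithmetic alone, with no sequential entropy decoding. The following is our own simplified sketch of that idea, not the paper's codec:

```python
def pack_block(values, width):
    """Pack a block of non-negative integers at a fixed bit width.
    Value i occupies bits [i*width, (i+1)*width), so its position is
    known in advance: this is what enables random access."""
    assert all(0 <= v < (1 << width) for v in values)
    bits = 0
    for i, v in enumerate(values):
        bits |= v << (i * width)
    return bits

def unpack_value(bits, index, width):
    # Pure shift/mask arithmetic, cheap to implement in hardware.
    return (bits >> (index * width)) & ((1 << width) - 1)

block = [3, 0, 7, 5, 1]
packed = pack_block(block, width=3)
```

Decoding a single view then only touches the packed words of the blocks it needs, which is why the scheme stays in the microsecond range per channel.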

    Towards a quality metric for dense light fields

    Light fields have become a popular representation of three-dimensional scenes, and there is growing interest in their processing, resampling, and compression. As those operations often result in loss of quality, there is a need to quantify it. In this work, we collect a new dataset of dense reference and distorted light fields, together with corresponding quality scores scaled in perceptual units. The scores were acquired in a subjective experiment using an interactive light-field viewing setup. The dataset contains typical artifacts that occur in the light-field processing chain due to light-field reconstruction, multi-view compression, and the limitations of automultiscopic displays. We test a number of existing objective quality metrics to determine how well they can predict the quality of light fields. We find that existing image quality metrics provide good measures of light-field quality, but require dense reference light fields for optimal performance. For the more complex task of comparing two distorted light fields, their performance drops significantly, which reveals the need for new, light-field-specific metrics.

    Light field image compression

    Light field imaging based on a single-tier camera equipped with a micro-lens array has recently emerged as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require identifying adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, this chapter presents some of the most recent light field image coding solutions that have been investigated. After a brief review of the current state of the art in image coding formats for light field photography, an experimental study of the rate-distortion performance of different coding formats and architectures is presented. Then, aiming at enabling faster deployment of light field applications and services in the consumer market, a scalable light field coding solution that provides backward compatibility with legacy display devices (e.g., 2D, 3D stereo, and 3D multiview) is also presented. Furthermore, a light field coding scheme based on a sparse set of micro-images and the associated blockwise disparity is also presented. This coding scheme is scalable with three layers, such that rendering can be performed from the sparse micro-image set, the reconstructed light field image, or the decoded light field image.

    Holographic representation: Hologram plane vs. object plane

    Digital holography allows the recording, storage, and subsequent reconstruction of both the amplitude and phase of the light field scattered by an object. This is accomplished by recording interference patterns, the so-called holograms, that preserve the properties of the original object field essential for 3D visualization. Digital holography refers to the acquisition of holograms with a digital sensor, typically a CCD or CMOS camera, and to the reconstruction of the 3D object field using numerical methods. In the current work, the different representations of digital holographic information in the hologram plane and in the object plane are studied. The coding performance of the different complex field representations, notably Amplitude-Phase and Real-Imaginary, in both the hologram plane and the object plane, is assessed using both computer-generated and experimental holograms. The HEVC intra main coding profile is used to compress the different representations in both planes. HEVC intra compression in the object plane outperforms encoding in the hologram plane. Furthermore, encoding computer-generated holograms in the object plane yields a larger benefit than the same encoding applied to experimental holograms. This difference was expected, since experimental holograms are affected by speckle noise, resulting in a loss of compression efficiency. This work emphasizes the possibility of holographic coding in the object plane, instead of the common approach of encoding in the hologram plane. Moreover, this possibility allows direct visualization of the object-plane amplitude on a regular 2D display without any transformation. The complementary phase information can easily be used to render 3D features such as a depth map, multiple views, or even holographic interference patterns for further 3D visualization, depending on the display technology.
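
The two complex-field representations compared above are lossless re-encodings of the same field: the choice only matters once each real-valued channel is fed separately to a 2D codec. A small sketch of the conversions, assuming the hologram is held as a NumPy complex array:

```python
import numpy as np

def to_amplitude_phase(field):
    """Split a complex field into the two real-valued images that would
    be handed to a 2D codec such as HEVC intra."""
    return np.abs(field), np.angle(field)

def to_real_imaginary(field):
    return field.real, field.imag

def from_amplitude_phase(amp, phase):
    # Exact inverse of to_amplitude_phase (up to floating-point error).
    return amp * np.exp(1j * phase)

rng = np.random.default_rng(1)
field = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
amp, ph = to_amplitude_phase(field)
recovered = from_amplitude_phase(amp, ph)
```

The amplitude channel of the object-plane representation is exactly what the text notes can be shown directly on a 2D display.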

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows capturing richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We cover all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.

    Comparing numerical error and visual quality in reconstructions from compressed digital holograms

    Digital holography is a well-known technique for both sensing and displaying real-world three-dimensional objects. Compression of digital holograms has been studied extensively, and the errors introduced by lossy compression are routinely evaluated in a reconstruction domain. Mean-square error predominates in the evaluation of reconstruction quality. However, it is not known how well this metric corresponds to what a viewer would regard as perceived error, nor how consistently it functions across different holograms and different viewers. In this study, we evaluate how each of seventeen viewers compared the visual quality of reconstructions from compressed and uncompressed holograms. Holograms of five different three-dimensional objects, captured using a phase-shift digital holography setup, were used in the study. We applied two different lossy compression techniques to the complex-valued hologram pixels: uniform quantization, and removal and quantization of Fourier coefficients, with seven different compression levels for each.
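
The first of the two techniques, uniform quantization of the complex-valued hologram pixels, can be sketched as below; this is a simplified stand-in for the study's setup, with illustrative parameters and data, showing how the numerical (mean-square) error shrinks as the bit depth grows:

```python
import numpy as np

def uniform_quantize(x, bits):
    """Uniformly quantize real values in [-1, 1] to 2**bits levels."""
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

def quantize_hologram(field, bits):
    # Real and imaginary parts are quantized independently.
    return (uniform_quantize(field.real, bits)
            + 1j * uniform_quantize(field.imag, bits))

def mse(a, b):
    # Numerical error between the original and degraded complex fields.
    return float(np.mean(np.abs(a - b) ** 2))

rng = np.random.default_rng(2)
holo = rng.uniform(-1, 1, (16, 16)) + 1j * rng.uniform(-1, 1, (16, 16))
errors = [mse(holo, quantize_hologram(holo, b)) for b in (2, 4, 6)]
```

The study's point is precisely that this monotone numerical error need not track how seventeen human viewers rank the corresponding reconstructions.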

    Investigating Epipolar Plane Image Representations for Objective Quality Evaluation of Light Field Images

    With the ongoing advances in Light Field (LF) technology, research in LF acquisition, compression, and processing has gained momentum. This has increased the need for objective quality evaluation of LF content. Many processing algorithms are still optimized against peak signal-to-noise ratio (PSNR). Lately, several attempts have been made to improve objective quality evaluation, such as extending 2D metrics to the 4D LF domain. However, there is still great room for improvement. In this paper, we experiment with existing 2D image quality metrics on Epipolar Plane Image (EPI) representations of LF content to reveal the characteristics of LF-related distortions. We discuss the challenges and suggest possible directions towards LF image quality evaluation on EPI representations.
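
Applying a 2D metric such as PSNR to an EPI amounts to slicing the 4D light field along one angular and one spatial dimension and scoring the resulting 2D image. A minimal sketch under our own assumptions (the `lf[u, v, y, x]` array layout and all names are illustrative):

```python
import numpy as np

def epi(lf, row, v):
    """Extract a horizontal epipolar plane image from a 4D light field
    lf[u, v, y, x]: fix the vertical view index v and image row y=row,
    stacking that row across all horizontal views u."""
    return lf[:, v, row, :]

def psnr(ref, dist, peak=1.0):
    # Standard peak signal-to-noise ratio in decibels.
    err = np.mean((ref - dist) ** 2)
    return float(10 * np.log10(peak ** 2 / err))

rng = np.random.default_rng(3)
ref_lf = rng.random((5, 5, 8, 8))                                  # u, v, y, x
dist_lf = np.clip(ref_lf + 0.01 * rng.standard_normal(ref_lf.shape), 0, 1)
score = psnr(epi(ref_lf, row=4, v=2), epi(dist_lf, row=4, v=2))
```

Angular distortions that are invisible in any single view show up as disturbed line structures in such EPI slices, which is what makes them a useful domain for 2D metrics.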