
    Hot spots in density fingering of exothermic autocatalytic chemical fronts

    Coding of Focused Plenoptic Contents by Displacement Intra Prediction

    A light field is commonly described by a four-dimensional two-plane representation. Refocused three-dimensional content can be rendered from light field images, which can be captured, for example, by cameras equipped with microlens arrays. However, dense sampling of the light field produces large amounts of redundant data, so efficient compression is vital for practical use. In this paper, we propose a displacement intra prediction scheme with a maximum of two hypotheses for the compression of plenoptic content from focused plenoptic cameras. The proposed scheme is implemented in HEVC and aims to code plenoptic captured content efficiently without knowledge of the underlying camera geometry. In addition, a theoretical analysis of displacement intra prediction for plenoptic images is presented, and the relationship between the compressed captured images and their rendered quality is analyzed. Evaluation results show that plenoptic content can be compressed efficiently by the proposed scheme: bit rate reductions of up to 60 percent over HEVC are obtained for plenoptic images, and more than 30 percent for the tested video sequences.
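
    As an illustration of the two-hypothesis displacement intra prediction idea described above, the sketch below predicts a block by averaging up to two blocks found in the already-decoded (causal) area of the same image. The block size, search range, causality check, and greedy choice of the second hypothesis are illustrative assumptions, not the scheme's actual HEVC integration.

```python
# Rough sketch of displacement intra prediction with up to two hypotheses
# (illustrative only, not the paper's HEVC integration): the current block is
# predicted from one or two blocks found in the already-decoded (causal) area
# of the same plenoptic image, and two hypotheses are averaged.
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).sum()

def best_causal_match(img, target, y, x, h, w, search=32):
    """Best-matching h x w block inside the causal (already decoded) area."""
    best_cost, best_pred = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + h > img.shape[0] or rx + w > img.shape[1]:
                continue
            # Simplified causality check: the candidate lies entirely above the
            # current block, or entirely to its left within the same block row.
            if not (ry + h <= y or (rx + w <= x and ry + h <= y + h)):
                continue
            cand = img[ry:ry + h, rx:rx + w]
            cost = sad(target, cand)
            if best_cost is None or cost < best_cost:
                best_cost, best_pred = cost, cand.astype(np.float64)
    return best_pred

def displacement_intra_predict(img, y, x, h=16, w=16, search=32):
    """Predict the h x w block at (y, x) from up to two displaced causal blocks."""
    block = img[y:y + h, x:x + w].astype(np.float64)
    p1 = best_causal_match(img, block, y, x, h, w, search)
    if p1 is None:
        return None                       # no causal candidate available
    # Second hypothesis, chosen greedily so the average of both predictors
    # approximates the current block as closely as possible.
    p2 = best_causal_match(img, 2.0 * block - p1, y, x, h, w, search)
    return p1 if p2 is None else 0.5 * (p1 + p2)
```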

    MCPNS: A Macropixel Collocated Position and Its Neighbors Search for Plenoptic 2.0 Video Coding

    Recently, it was demonstrated that the newer focused plenoptic 2.0 camera can capture much higher spatial resolution than the traditional unfocused plenoptic 1.0 camera, owing to its more effective light field sampling. However, because of the fundamental difference in optical structure between the plenoptic 1.0 and 2.0 cameras, the existing fast motion estimation (ME) method for plenoptic 1.0 videos is expected to be sub-optimal for encoding plenoptic 2.0 videos. In this paper, we point out the main differences in motion characteristics between plenoptic 1.0 and 2.0 videos and then propose a new fast ME method, called macropixel collocated position and its neighbors search (MCPNS), for plenoptic 2.0 videos. Specifically, we reduce the number of macropixel collocated position (MCP) search candidates based on the new observation that motion vectors are center-biased at macropixel resolution. Then, because of the large motion deviation around each MCP location in plenoptic 2.0 videos, we select a small number of key MCP locations with the lowest matching cost and perform a neighbor MCP search around them to improve motion search accuracy. Unlike existing methods, our method achieves better performance without requiring prior knowledge of the microlens array orientation. Simulation results confirm the effectiveness of the proposed algorithm in terms of both bitrate savings and computational cost compared to existing methods.
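
    The two-stage candidate selection described above can be pictured with the following sketch: a small, center-biased set of macropixel collocated positions (MCPs) is scored first, and only the lowest-cost key MCPs are refined by checking their macropixel neighbors. The macropixel pitch, the diamond-shaped candidate pattern, and the number of key MCPs kept are illustrative assumptions rather than the authors' actual algorithm or parameters.

```python
# Schematic of the MCPNS two-stage search (illustrative, not the authors' code).
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).sum()

def mcpns_search(ref, cur_block, y, x, pitch=32, radius=2, keep=3):
    """Return a motion vector (dy, dx) for cur_block located at (y, x) in the
    current frame. `pitch` is the assumed macropixel size in pixels, `radius`
    the MCP search radius in macropixel units, `keep` the number of key MCPs
    refined in the second stage."""
    h, w = cur_block.shape

    def cost(dy, dx):
        ry, rx = y + dy, x + dx
        if ry < 0 or rx < 0 or ry + h > ref.shape[0] or rx + w > ref.shape[1]:
            return np.inf
        return sad(cur_block, ref[ry:ry + h, rx:rx + w])

    # Stage 1: MCP candidates at macropixel resolution. Motion vectors are
    # center-biased at this resolution, so only a small diamond of positions
    # around the collocated one is evaluated.
    cands = [(i * pitch, j * pitch)
             for i in range(-radius, radius + 1)
             for j in range(-radius, radius + 1)
             if abs(i) + abs(j) <= radius]
    key_mcps = sorted((cost(dy, dx), dy, dx) for dy, dx in cands)[:keep]

    # Stage 2: motion around each key MCP deviates strongly, so its eight
    # macropixel neighbors are also checked.
    best = min(key_mcps)
    for _, dy0, dx0 in key_mcps:
        for i in (-1, 0, 1):
            for j in (-1, 0, 1):
                dy, dx = dy0 + i * pitch, dx0 + j * pitch
                best = min(best, (cost(dy, dx), dy, dx))
    return best[1], best[2]
```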

    Light field image compression

    Light field imaging based on a single-tier camera equipped with a micro-lens array has recently emerged as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require identifying adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, this chapter presents some of the most recent light field image coding solutions that have been investigated. After a brief review of the current state of the art in image coding formats for light field photography, an experimental study of the rate-distortion performance of different coding formats and architectures is presented. Then, aiming to enable faster deployment of light field applications and services in the consumer market, a scalable light field coding solution that provides backward compatibility with legacy display devices (e.g., 2D, 3D stereo, and 3D multiview) is presented. Furthermore, a light field coding scheme based on a sparse set of micro-images and the associated blockwise disparity is also presented. This coding scheme is scalable with three layers, such that rendering can be performed from the sparse micro-image set, the reconstructed light field image, or the decoded light field image.

    Light field image coding with jointly estimated self-similarity bi-prediction

    This paper proposes an efficient light field image coding (LFC) solution based on High Efficiency Video Coding (HEVC) and a novel Bi-prediction Self-Similarity (Bi-SS) estimation and compensation approach to efficiently exploit the inherent non-local spatial correlation of this type of content, where two predictor blocks are jointly estimated from the same search window by using a locally optimal rate-constrained algorithm. Moreover, a theoretical analysis of the proposed Bi-SS prediction is also presented, which shows that other non-local spatial prediction schemes proposed in the literature are suboptimal in terms of Rate-Distortion (RD) performance and, for this reason, can be considered restricted cases of the jointly estimated Bi-SS solution proposed here. These theoretical insights are shown to be consistent with the presented experimental results, which demonstrate that the proposed LFC scheme outperforms the benchmark solutions with significant gains with respect to HEVC (up to 61.1% bit savings) and other state-of-the-art LFC solutions in the literature (up to 16.9% bit savings).
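
    The joint estimation idea can be sketched as an alternating, rate-constrained search: one displacement vector is held fixed while the other is re-estimated so that the averaged predictor minimizes a Lagrangian cost J = D + λR. This is only a hedged approximation of the locally optimal algorithm mentioned above; the rate model, candidate list, and stopping rule below are assumptions for illustration.

```python
# Hedged sketch of jointly estimating two self-similarity predictors with a
# rate-constrained cost; the paper's locally optimal algorithm and its HEVC
# integration are more elaborate than this alternating refinement.
import numpy as np

def vector_bits(dy, dx):
    """Crude estimate of the bits needed to signal one displacement vector."""
    return sum(2 * int(np.ceil(np.log2(abs(v) + 1))) + 1 for v in (dy, dx))

def bi_ss_estimate(img, y, x, h, w, candidates, lam=10.0, max_iters=4):
    """`candidates` is a list of (dy, dx) displacements assumed to point to
    causal, in-bounds blocks; returns the vector pair whose averaged predictor
    minimizes J = SAD + lam * rate."""
    block = img[y:y + h, x:x + w].astype(np.float64)

    def patch(dy, dx):
        return img[y + dy:y + dy + h, x + dx:x + dx + w].astype(np.float64)

    def cost(v0, v1):
        pred = 0.5 * (patch(*v0) + patch(*v1))
        distortion = np.abs(block - pred).sum()
        rate = vector_bits(*v0) + vector_bits(*v1)
        return distortion + lam * rate

    v0 = v1 = candidates[0]
    best = cost(v0, v1)
    for _ in range(max_iters):                 # alternating (locally optimal) refinement
        improved = False
        for which in range(2):                 # re-estimate one vector at a time
            for cand in candidates:
                trial = (cand, v1) if which == 0 else (v0, cand)
                c = cost(*trial)
                if c < best:
                    best, (v0, v1), improved = c, trial, True
        if not improved:
            break
    return v0, v1, best
```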

    Improved inter-layer prediction for Light field content coding with display scalability

    Light field imaging based on microlens arrays - also known as plenoptic, holoscopic, and integral imaging - has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field adjustment. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
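
    One plausible reading of the direct prediction mode in (ii) is sketched below: the enhancement-layer block is predicted from the lower-layer reconstruction at the position whose surrounding template best matches the causal template of the current block, so no displacement needs to be transmitted. This is an interpretation for illustration only; the template shape, search range, and matching cost are assumptions and may differ from the actual codec.

```python
# Illustrative "direct" exemplar-based inter-layer mode (an interpretation, not
# the authors' implementation): decoder-side template matching against the
# lower-layer reconstruction selects the exemplar used as the predictor.
import numpy as np

def template(img, y, x, h, w, t=4):
    """L-shaped causal template of thickness t above and to the left of a block."""
    top = img[max(y - t, 0):y, max(x - t, 0):x + w]
    left = img[y:y + h, max(x - t, 0):x]
    return np.concatenate([top.ravel(), left.ravel()]).astype(np.float64)

def direct_exemplar_predict(enh, base, y, x, h=16, w=16, search=24, t=4):
    """Predict the block at (y, x) of the enhancement picture `enh` from the
    reconstructed lower-layer picture `base` without transmitting a vector."""
    cur_tpl = template(enh, y, x, h, w, t)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < t or rx < t or ry + h > base.shape[0] or rx + w > base.shape[1]:
                continue
            cand_tpl = template(base, ry, rx, h, w, t)
            if cand_tpl.size != cur_tpl.size:   # skip incomplete templates
                continue
            cost = np.abs(cur_tpl - cand_tpl).sum()
            if best is None or cost < best[0]:
                best = (cost, base[ry:ry + h, rx:rx + w])
    return None if best is None else best[1]
```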

    Light field image coding using high order prediction training

    This paper proposes a new method for light field image coding relying on a high order prediction mode based on a training algorithm. The proposed approach is applied as an intra prediction method based on a two-stage block-wise high order prediction model that supports geometric transformations with up to eight degrees of freedom. Light field images comprise an array of micro-images that are related by complex perspective deformations which cannot be efficiently compensated by state-of-the-art image coding techniques, usually based on low order translational prediction models. The proposed prediction mode exploits the non-local spatial redundancy introduced by the light field image structure, and a training algorithm is applied to different micro-images available in the reference region to reduce the amount of signaling data sent to the receiver. The training direction that generates the most efficient geometric transformation for the current block is determined at the encoder side and signaled to the decoder using an index, so the decoder can repeat the high order prediction training and generate the desired geometric transformation. Experimental results show bitrate savings of up to 12.57% and 50.03% relative to a light field image coding solution based on low order prediction without training and to HEVC, respectively.
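
    The compensation step behind such a high order model can be sketched with a projective (eight degree-of-freedom) warp: each pixel of the block to be predicted is mapped through a 3x3 homography into the reference area and sampled bilinearly. The estimation of the homography from neighboring micro-images (the training stage) is omitted here; the matrix H is assumed to be given.

```python
# Minimal sketch of the compensation step behind a high order (up to eight
# degree-of-freedom) prediction model: the predictor block is formed by warping
# already-reconstructed reference samples through a 3x3 homography H with
# bilinear interpolation. Estimating H (the training stage) is not shown.
import numpy as np

def bilinear(img, yf, xf):
    """Bilinear sample of img at float coordinates (yf, xf), clamped to borders."""
    h, w = img.shape
    yf, xf = float(np.clip(yf, 0, h - 1)), float(np.clip(xf, 0, w - 1))
    y0, x0 = int(np.floor(yf)), int(np.floor(xf))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    ay, ax = yf - y0, xf - x0
    return ((1 - ay) * (1 - ax) * img[y0, x0] + (1 - ay) * ax * img[y0, x1] +
            ay * (1 - ax) * img[y1, x0] + ay * ax * img[y1, x1])

def high_order_predict(ref, H, y, x, h=16, w=16):
    """Predict an h x w block at (y, x) by mapping each of its pixel positions
    through the homography H into the reference picture `ref`."""
    pred = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            src = H @ np.array([x + j, y + i, 1.0])   # projective mapping (8 DOF)
            sx, sy = src[0] / src[2], src[1] / src[2]
            pred[i, j] = bilinear(ref, sy, sx)
    return pred
```

    A purely translational (low order) model corresponds to the special case H = [[1, 0, tx], [0, 1, ty], [0, 0, 1]], which illustrates why translational prediction alone cannot compensate the perspective deformations between micro-images.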

    Dense light field coding: a survey

    Light Field (LF) imaging is a promising solution for providing more immersive, closer-to-reality multimedia experiences to end-users, with unprecedented creative freedom and flexibility for applications in different areas, such as virtual and augmented reality. Due to the recent technological advances in optics, sensor manufacturing, and available transmission bandwidth, as well as the investment of many tech giants in this area, it is expected that many LF transmission systems will soon be available to both consumers and professionals. Recognizing this, novel standardization initiatives have recently emerged in both the Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG), triggering the discussion on the deployment of LF coding solutions to efficiently handle the massive amount of data involved in such systems. Since then, the topic of LF content coding has become a booming research area, attracting the attention of many researchers worldwide. In this context, this paper provides a comprehensive survey of the most relevant LF coding solutions proposed in the literature, focusing on angularly dense LFs. Special attention is placed on a thorough description of the different LF coding methods and on the main concepts related to this relevant area. Moreover, comprehensive insights are presented into open research challenges and future research directions for LF coding.

    Weighted bi-prediction for light field image coding

    Light field imaging based on a single-tier camera equipped with a microlens array – also known as integral, holoscopic, and plenoptic imaging – has recently emerged as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion compensated bi-prediction have suggested that further rate-distortion performance improvements are still possible by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that the previous theoretical conclusions extend to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
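
    The adaptive weighting discussed above can be illustrated with a minimal sketch: given the two jointly estimated predictor blocks, the encoder simply tests a small set of weight pairs and keeps the one that minimizes the prediction error (in practice a rate-distortion cost). The weight set below is an assumption for illustration, not the set evaluated in the paper.

```python
# Minimal sketch of weighted self-similarity bi-prediction: given two predictor
# blocks already found by the bi-prediction search, test a small candidate set
# of weight pairs and keep the one with the lowest prediction error.
import numpy as np

WEIGHT_PAIRS = [(0.5, 0.5), (0.25, 0.75), (0.75, 0.25), (0.375, 0.625), (0.625, 0.375)]

def weighted_bi_prediction(block, p0, p1, weight_pairs=WEIGHT_PAIRS):
    """Return (w0, w1, prediction) minimizing the SAD against `block`."""
    block = block.astype(np.float64)
    best = None
    for w0, w1 in weight_pairs:
        pred = w0 * p0.astype(np.float64) + w1 * p1.astype(np.float64)
        cost = np.abs(block - pred).sum()
        if best is None or cost < best[0]:
            best = (cost, w0, w1, pred)
    return best[1], best[2], best[3]
```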