4 research outputs found

    On the Information Rates of the Plenoptic Function

    The plenoptic function (Adelson and Bergen, 1991) describes the visual information available to an observer at any point in space and time. Samples of the plenoptic function (POF) are seen in video and in general visual content, and represent large amounts of information. In this paper we propose a stochastic model to study the compression limits of the plenoptic function. In the proposed framework, we isolate the two fundamental sources of information in the POF: one representing the camera motion and the other representing the information complexity of the "reality" being acquired and transmitted. The two sources are combined, generating a stochastic process that we study in detail. We first propose a model for ensembles of realities that do not change over time. The proposed model is simple enough to let us derive precise coding bounds in the information-theoretic sense, bounds that are sharp in a number of cases of practical interest. For this simple case of static realities and camera motion, our results indicate that coding practice is in accordance with optimal coding from an information-theoretic standpoint. The model is then extended to account for visual realities that change over time. We derive bounds on the lossless and lossy information rates for this dynamic reality model, stating conditions under which the bounds are tight. Examples with synthetic sources suggest that, in the presence of scene dynamics, simple hybrid coding using motion/displacement estimation with DPCM is considerably suboptimal relative to the true rate-distortion bound. (Comment: submitted to IEEE Transactions on Information Theory.)
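
    The two-source construction described in this abstract can be illustrated with a small experiment. Below is a minimal sketch (not the authors' code; all names and parameters are illustrative assumptions): a static 1-D "reality" is observed by a camera following a bounded random walk, and the observed frames are coded by exhaustive displacement estimation followed by DPCM on the prediction residual, whose empirical entropy is compared against that of the raw frames.

    import numpy as np

    rng = np.random.default_rng(0)

    REALITY_LEN = 4096   # length of the static, circular 1-D "reality" (assumed)
    FRAME_LEN = 64       # samples visible to the camera per time step (assumed)
    N_FRAMES = 500       # number of observed frames (assumed)
    MAX_SHIFT = 3        # camera moves by at most this many samples per step (assumed)

    # Source 1: the static reality -- an i.i.d. discrete texture.
    reality = rng.integers(0, 8, size=REALITY_LEN)

    # Source 2: camera motion -- a bounded random walk on the circle.
    steps = rng.integers(-MAX_SHIFT, MAX_SHIFT + 1, size=N_FRAMES)
    positions = np.cumsum(steps) % REALITY_LEN

    # Observed process: each frame is the window of the reality seen at the camera position.
    frames = np.stack([reality[(p + np.arange(FRAME_LEN)) % REALITY_LEN]
                       for p in positions])

    def best_shift(prev, cur, max_shift):
        # Exhaustive displacement estimation: circular shift of prev that best predicts cur.
        errors = [np.abs(cur - np.roll(prev, s)).sum()
                  for s in range(-max_shift, max_shift + 1)]
        return int(np.argmin(errors)) - max_shift

    def empirical_entropy(x):
        # Empirical (zeroth-order) entropy in bits per sample of a discrete sequence.
        _, counts = np.unique(x, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # Hybrid coding: displacement-compensated prediction followed by DPCM of the residual.
    residuals = []
    for t in range(1, N_FRAMES):
        s = best_shift(frames[t - 1], frames[t], MAX_SHIFT)
        residuals.append(frames[t] - np.roll(frames[t - 1], s))
    residuals = np.concatenate(residuals)

    print("raw frame entropy     ~", round(empirical_entropy(frames.ravel()), 3), "bits/sample")
    print("DPCM residual entropy ~", round(empirical_entropy(residuals), 3), "bits/sample")

    In this static-reality setting the displacement-compensated residual is mostly zero (only the samples entering the window are new), so its empirical entropy drops well below that of the raw frames, which is the intuition behind the coding bounds discussed above.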

    Capturing the plenoptic function in a swipe


    A Stochastic Model for Video and its Information Rates

    We propose a stochastic model for video and compute its information rates. The model has two sources of information, representing ensembles of camera motion and visual scene data (i.e., "realities"). The two sources are combined, generating a vector process that we study in detail. Both lossless and lossy information rates are derived. The model is further extended to account for realities that change over time. We derive bounds on the lossless and lossy information rates for this dynamic reality model, stating conditions under which the bounds are tight. Experiments with synthetic sources suggest that, in the presence of scene motion, simple hybrid coding using motion estimation with DPCM can be suboptimal relative to the true rate-distortion bound.
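
    As a point of reference for the lossy rates mentioned above, the short sketch below (a generic illustration, not the paper's model) evaluates the classical rate-distortion function of a memoryless Gaussian source, R(D) = max(0, (1/2) log2(sigma^2 / D)), the kind of benchmark a hybrid motion-estimation/DPCM coder would be measured against; the variance and distortion values are arbitrary assumptions.

    import numpy as np

    def gaussian_rate_distortion(variance, distortion):
        # Rate in bits per sample needed to reach mean-squared distortion D
        # for a memoryless Gaussian source of the given variance.
        d = np.asarray(distortion, dtype=float)
        return np.maximum(0.0, 0.5 * np.log2(variance / d))

    sigma2 = 1.0  # assumed source variance
    for d in (0.5, 0.1, 0.01):
        print(f"D = {d}: R(D) = {gaussian_rate_distortion(sigma2, d):.2f} bits/sample")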

    On the information rates of the plenoptic function

    We study the problem of compressing visual scenes acquired with a camera for transmission or storage. Our proposed model is general and includes two well-known cases: video coding and the compression of lightfield rendering data. The two are related in that both are characterized by two sources of complexity: the camera motion and the scene being acquired. The main difference between them lies in how the complexity of the camera motion is coded. We propose a simplified model that retains the main characteristics of the general problem. Based on this model, we carry out a theoretical analysis, develop simple codes, and present experimental results.