
    Capturing the plenoptic function in a swipe


    Sampling and Reconstruction of Spatial Fields using Mobile Sensors

    Spatial sampling is traditionally studied in a static setting, where sensors scattered around space take measurements of the spatial field at their fixed locations. In this paper we study the emerging paradigm of sampling and reconstructing spatial fields using sensors that move through space. We show that mobile sensing offers some unique advantages over static sensing for time-invariant bandlimited spatial fields: since a moving sensor encounters such a field along its path as a time-domain signal, a time-domain anti-aliasing filter can be applied prior to sampling the signal received at the sensor. When this filtering is used by a configuration of sensors moving at constant speeds along equispaced parallel lines, spatial aliasing is completely suppressed in the direction of motion. We analytically quantify the advantage of this sampling scheme over static sampling by computing the reduction in sampling noise due to the filter, and we analyze the effect of non-uniform sensor speeds on reconstruction accuracy. Using simulation examples we demonstrate the advantages of mobile sampling over static sampling in practical problems. We extend our analysis to sampling and reconstruction schemes for monitoring time-varying bandlimited fields with mobile sensors, and show that in some situations a mobile sensing scheme requires a lower sensor density than the conventional static scheme. The exact advantage is quantified for the problem of sampling and reconstructing an audio field.
    Comment: Submitted to IEEE Transactions on Signal Processing, May 2012; revised Oct 201
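
    The core mechanism here is simple enough to simulate. The sketch below (with an illustrative field, sensor speed, and a crude moving-average low-pass rather than the ideal filter analyzed in the paper) shows how filtering the time-domain signal seen by a moving sensor suppresses spatial aliasing that a static point-sampling scheme cannot avoid.

```python
import numpy as np

# A 1-D spatial field: a bandlimited component we want to recover, plus a
# higher-frequency component that aliases at the chosen sample spacing.
def field(x):
    return np.sin(2 * np.pi * 0.4 * x) + 0.5 * np.sin(2 * np.pi * 3.1 * x)

v = 1.0          # sensor speed (space units / second) -- illustrative
T = 0.5          # sampling period, so spatial sample spacing d = v * T
d = v * T        # Nyquist spatial frequency is 1 / (2 * d) = 1.0 cycles/unit

# Static sensing: point samples of the field; aliasing is unavoidable.
x_samples = np.arange(0, 20, d)
static_samples = field(x_samples)

# Mobile sensing: the moving sensor sees f(v*t) as a time signal, so a
# temporal anti-aliasing filter (here a moving average spanning one
# sampling period, a crude low-pass) suppresses spatial content above
# 1/(2d) BEFORE the samples are taken.
t_fine = np.arange(0, 20 / v, T / 100)      # densely simulated sensor signal
sensor_signal = field(v * t_fine)
kernel = np.ones(100) / 100                 # one sampling period long
filtered = np.convolve(sensor_signal, kernel, mode="same")
mobile_samples = filtered[::100]            # sample every T seconds

# Compare both schemes against the bandlimited component alone.
truth = np.sin(2 * np.pi * 0.4 * x_samples)
print("static RMS error:", np.sqrt(np.mean((static_samples - truth) ** 2)))
print("mobile RMS error:", np.sqrt(np.mean((mobile_samples - truth) ** 2)))
```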

    Shape from bandwidth: the 2-D orthogonal projection case

    Could bandwidth, one of the most classic concepts in signal processing, serve a new purpose? In this paper, we investigate the feasibility of using bandwidth to infer shape from a single image. As a first analysis, we limit our attention to orthographic projection and assume a 2-D world. We show that, under certain conditions, a single image of a surface painted with a bandlimited texture is enough to recover the surface up to an equivalence class. This equivalence class is unavoidable, since it stems from surface transformations that are invisible under orthographic projection. A proof-of-concept algorithm is presented and tested in both a simulation and a simple practical experiment.
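
    To make the underlying frequency-stretch intuition concrete, here is a minimal 1-D sketch: a single-tone texture (a special case of a bandlimited one) is painted on a curve in arc length, and the instantaneous frequency of the orthographic image, estimated via the analytic signal, recovers the magnitude of the surface slope. The sign ambiguity is precisely the kind of invisible transformation behind the equivalence class. The surface profile and tone frequency below are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from scipy.signal import hilbert

# 2-D world: a surface profile y(x) "painted" with a single-tone texture
# in arc length s. Orthographic projection shows g(x) = cos(2*pi*f0*s(x)),
# and since s'(x) = sqrt(1 + y'(x)^2), the local frequency of g encodes
# |y'(x)|; the sign of the slope is lost (the equivalence class).
f0 = 5.0
x = np.linspace(0, 4, 4000)
y = 0.3 * np.sin(1.5 * x)                   # unknown surface (illustrative)
dy = np.gradient(y, x)
s = np.concatenate(([0], np.cumsum(np.sqrt(1 + dy[1:] ** 2) * np.diff(x))))
g = np.cos(2 * np.pi * f0 * s)              # the observed image row

# Estimate instantaneous frequency from the analytic signal's phase.
phase = np.unwrap(np.angle(hilbert(g)))
f_inst = np.gradient(phase, x) / (2 * np.pi)

# Recover |y'(x)| from the frequency stretch factor f_inst / f0.
slope_abs = np.sqrt(np.maximum((f_inst / f0) ** 2 - 1, 0))
print("max abs slope error:",
      np.max(np.abs(slope_abs - np.abs(dy))[100:-100]))  # ignore edge effects
```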

    On the Accuracy of Point Localisation in a Circular Camera-Array

    Although many advances have been made in light-field and camera-array image processing, a thorough analysis of the localisation accuracy of different multi-camera systems is still lacking. By considering the problem from a frame-quantisation perspective, we quantify the point-localisation error of circular camera configurations. Specifically, we obtain closed-form expressions bounding the localisation error in terms of the parameters describing the acquisition setup. These theoretical results are independent of the localisation algorithm and thus provide fundamental limits on performance. Furthermore, the new frame-quantisation perspective is general enough to extend to more complex camera configurations.
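
    The closed-form bounds themselves are in the paper, but the setting is easy to probe empirically. Below is a Monte Carlo sketch (hypothetical circle radius and bearing-quantisation step standing in for pixel quantisation, and a standard least-squares ray-intersection estimator, not the paper's derivation) that measures how localisation error shrinks as cameras are added to the circle.

```python
import numpy as np

rng = np.random.default_rng(0)

def localise(point, n_cams, R=10.0, delta=np.deg2rad(0.1)):
    """Triangulate `point` from bearings measured by n_cams cameras on a
    circle of radius R, each bearing quantised to step `delta` (a crude
    stand-in for image-plane pixel quantisation)."""
    angles = 2 * np.pi * np.arange(n_cams) / n_cams
    centres = R * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    bearings = np.arctan2(point[1] - centres[:, 1], point[0] - centres[:, 0])
    bearings = np.round(bearings / delta) * delta       # quantisation
    d = np.stack([np.cos(bearings), np.sin(bearings)], axis=1)
    # Least-squares intersection of the back-projected rays:
    # minimise sum_i || (I - d_i d_i^T)(p - c_i) ||^2 over p.
    P = np.eye(2)[None] - d[:, :, None] * d[:, None, :]
    A = P.sum(axis=0)
    b = np.einsum("nij,nj->i", P, centres)
    return np.linalg.solve(A, b)

for n in (3, 6, 12, 24):
    pts = rng.uniform(-2, 2, size=(500, 2))   # random points near the centre
    err = [np.linalg.norm(localise(p, n) - p) for p in pts]
    rms = np.sqrt(np.mean(np.square(err)))
    print(f"{n:3d} cameras: RMS localisation error = {rms:.2e}")
```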

    Detecting Planar Surface Using a Light-Field Camera with Application to Distinguishing Real Scenes From Printed Photos

    We propose a novel approach for distinguishing printed photos from natural scenes using a light-field camera. Our approach exploits the extra information captured by a light-field camera, namely the multiple views of the scene, to infer a compact feature vector from the variance of the scene's depth distribution. We then use this feature for robust detection of printed photos. Our algorithm can be used in person-based authentication applications to prevent intruders from spoofing the system with a facial photo. Our experiments show that the energy of the gradients of points in the epipolar domain is highly discriminative and can be used to distinguish printed photos from original scenes.
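
    As a rough illustration of why epipolar-domain statistics separate the two cases: in an epipolar-plane image (EPI), each scene point traces a line whose slope encodes its depth, so a printed (planar) photo yields essentially one slope while a real scene yields many. The following simplified sketch of such a variance feature is a toy analogue, not the authors' exact pipeline.

```python
import numpy as np

def epi_depth_variance(epi, grad_thresh=1e-2):
    """Crude spoof-detection feature from one epipolar-plane image.

    `epi` has shape (n_views, width): rows are the same scanline seen
    from views along the camera baseline. Scene points trace lines in
    the EPI whose slope is their disparity (inversely related to depth).
    A printed photo puts everything on one plane, so estimated slopes
    cluster tightly; a real scene spreads them out.
    """
    du = np.gradient(epi, axis=0)    # change across views
    dx = np.gradient(epi, axis=1)    # change along the scanline
    mask = np.abs(dx) > grad_thresh  # only textured pixels carry slope info
    # Local EPI line slope (disparity) from brightness constancy:
    # du + slope * dx = 0  =>  slope = -du / dx.
    slopes = -du[mask] / dx[mask]
    return np.var(slopes)

# Toy check: a single-disparity EPI vs a two-depth scene.
u, x = np.meshgrid(np.arange(9), np.arange(200), indexing="ij")
flat = np.sin(0.3 * (x - 2.0 * u))                        # one slope only
scene = np.sin(0.3 * (x - 2.0 * u)) + np.sin(0.4 * (x - 0.5 * u))
print("printed-photo-like EPI variance:", epi_depth_variance(flat))
print("real-scene-like EPI variance:   ", epi_depth_variance(scene))
```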

    Scale-invariant representation of light field images for object recognition and tracking

    Achieving perfect scale invariance is usually not possible with classical color-image features, mostly because a traditional image is a two-dimensional projection of the real world. In contrast, light-field imaging captures rays from multiple viewpoints and thus encodes the depth and occlusion information that is crucial for true scale invariance. By studying and exploiting the information content of the light-field signal and its very regular structure, we derive a provably efficient method for extracting a scale-invariant feature-vector representation for more efficient light-field matching and retrieval across views. Our approach is based on a novel integral transform that maps the pixel intensities to a new space in which the effect of scaling can be cancelled out by a simple integration. Experiments on various real and synthetic light-field images verify that the proposed approach is promising in terms of both accuracy and time complexity. A promising future improvement is to incorporate invariance to other transforms, such as rotation and translation, which would make the algorithm far more widely applicable.
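
    The general recipe of an integral transform under which scaling can be integrated out has a classic 1-D analogue: resample the signal on a logarithmic grid so that scaling becomes a shift, then take a Fourier magnitude, which is blind to shifts. The sketch below illustrates only that Mellin-style analogue; the paper's transform operates on the full light field, and all signals and grid bounds here are illustrative.

```python
import numpy as np

def scale_invariant_descriptor(f, x_min=0.05, x_max=1.0, n=512):
    """Illustrative Mellin-style descriptor for a 1-D signal f(x).

    Resampling f on a logarithmic grid turns scaling f(x) -> f(a*x)
    into a shift in log-x; the Fourier magnitude then cancels the
    shift, up to windowing and discretisation effects.
    """
    log_x = np.linspace(np.log(x_min), np.log(x_max), n)
    samples = f(np.exp(log_x))
    return np.abs(np.fft.rfft(samples * np.hanning(n)))

f = lambda x: np.exp(-40 * (x - 0.4) ** 2)   # a toy "texture"
g = lambda x: f(1.5 * x)                     # the same texture, rescaled
d1 = scale_invariant_descriptor(f)
d2 = scale_invariant_descriptor(g)
print("relative descriptor distance:",
      np.linalg.norm(d1 - d2) / np.linalg.norm(d1))
```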

    Light field panorama by a plenoptic camera

    The consumer-grade plenoptic camera Lytro has drawn considerable interest from both academia and industry. However, its low resolution in both the spatial and the angular domain prevents it from being used for fine, detailed light-field acquisition. This paper proposes to use a plenoptic camera as an image scanner and to perform light-field stitching to increase the size of the acquired light-field data. We consider a simplified plenoptic camera model comprising a pinhole camera moving behind a thin lens. Based on this model, we describe how to perform light-field acquisition and stitching under two scenarios: camera translation, and camera translation combined with rotation. In both cases, we assume the camera motion to be known. In the translation case, we show how the acquired light fields should be resampled to increase the spatial range and ultimately obtain a wider field of view. In the translation-plus-rotation case, the camera motion is calculated such that the light fields can be directly stitched and extended in the angular domain. Simulation results verify our approach and demonstrate the potential of the motion model for further light-field applications such as registration and super-resolution.
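
    Under pure translation with known motion, stitching in a two-plane parametrisation with an absolute aperture coordinate and a relative angular coordinate reduces to shifting the aperture axis of one light field and averaging the overlap. The toy sketch below makes those assumptions explicit (2-D light-field slices, an integer-sample shift, and a fronto-parallel textured plane as the scene); it is a simplified illustration, not the paper's resampling procedure.

```python
import numpy as np

def stitch_translated(lf_a, lf_b, shift, ds=1.0):
    """Stitch two (S, U) light-field slices captured before and after a
    known pure translation `shift` along the camera plane.

    S indexes aperture samples spaced `ds` apart, U indexes angles.
    With an absolute-position / relative-angle parametrisation, the
    translation only shifts the aperture coordinate, so stitching is
    shift-and-place into a wider aperture axis. `shift` is assumed to
    be a whole number of aperture samples.
    """
    k = int(round(shift / ds))
    S, U = lf_a.shape
    acc = np.zeros((S + k, U))
    cnt = np.zeros((S + k, 1))
    acc[:S] += lf_a
    cnt[:S] += 1
    acc[k:k + S] += lf_b
    cnt[k:k + S] += 1
    return acc / np.maximum(cnt, 1)   # average in the overlap region

# Toy scene: a textured plane at depth z gives L(s, u) = tex(s + z * u).
s = np.arange(16)[:, None]
u = np.linspace(-1, 1, 9)[None, :]
tex = lambda x: np.sin(0.7 * x)
lf1 = tex(s + 3.0 * u)                # camera at aperture position 0
lf2 = tex((s + 8.0) + 3.0 * u)        # camera translated by 8 samples
wide = stitch_translated(lf1, lf2, shift=8.0)
print("stitched aperture size:", wide.shape[0])   # 16 -> 24 samples
```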

    Estimation of signal distortion using effective sampling density for light field-based free viewpoint video

    In a light field-based free viewpoint video (LF-based FVV) system, effective sampling density (ESD) is defined as the number of rays per unit area of the scene that have been acquired and are selected by the rendering process to reconstruct an unknown ray. This paper extends the concept of ESD and shows that it is a tractable metric quantifying the joint impact of imperfections in LF acquisition and rendering. By deriving and analyzing ESD for commonly used LF acquisition and rendering methods, we show that ESD is an effective indicator, determined by system parameters, that can be used to estimate output video distortion directly, without access to the ground truth. This claim is verified by extensive numerical simulations and comparison with PSNR. Furthermore, an empirical relationship between the output distortion (in PSNR) and the calculated ESD is established, allowing direct assessment of overall video distortion without an actual implementation of the system. A small-scale subjective user study indicates a correlation of 0.91 between ESD and perceived quality.
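
    For a flavour of how a metric like ESD is "determined by system parameters", one can count the rays per unit scene area contributed by the cameras a renderer selects. The sketch below uses deliberately simplified geometry, hypothetical default parameters, and placeholder PSNR-fit constants; it is not the paper's derivation or its reported empirical relationship.

```python
import numpy as np

def effective_sampling_density(z, f=0.05, pixel_pitch=1e-5,
                               cam_spacing=0.2, aperture=0.4):
    """Back-of-the-envelope ESD for a planar camera grid (simplified
    geometry; all default values are hypothetical).

    - Each camera contributes roughly (f / (z * pixel_pitch))^2 rays per
      unit scene area at depth z (the inverse of one pixel's footprint).
    - Rendering selects the cameras inside the synthetic aperture,
      roughly (aperture / cam_spacing + 1)^2 of them.
    """
    rays_per_area = (f / (z * pixel_pitch)) ** 2
    n_cams = (aperture / cam_spacing + 1) ** 2
    return n_cams * rays_per_area

# Placeholder linear-in-log fit PSNR ~ a * log10(ESD) + b; the paper
# establishes such an empirical relation, but a and b here are invented.
a, b = 6.0, -20.0
for z in (2.0, 5.0, 10.0):
    esd = effective_sampling_density(z)
    print(f"depth {z:4.1f} m: ESD = {esd:.3g} rays/m^2, "
          f"predicted PSNR ~ {a * np.log10(esd) + b:.1f} dB")
```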