48 research outputs found

    Ascension

    Ascension

    Radiometric calibration using temporal irradiance mixtures

    No full text
    We propose a new method for sampling camera response functions: temporally mixing two uncalibrated irradiances within a single camera exposure. Calibration methods rely on some known relationship between irradiance at the camera image plane and measured pixel intensities. Prior approaches use a color checker chart with known reflectances, registered images with different exposure ratios, or even the irradiance distribution along edges in images. We show that temporally blending irradiances allows us to densely sample the camera response function with known relative irradiances. Our first method computes the camera response curve using temporal mixtures of two pixel intensities on an uncalibrated computer display. The second approach makes use of temporal irradiance mixtures caused by motion blur. Both methods require only one input image, although more images can be used for improved robustness to noise or to cover more of the response curve. We show that our methods compute accurate response functions for a variety of cameras.
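
    To make the key constraint concrete: if a display blends two fixed irradiances with a known duty cycle alpha within one exposure, the sensor integrates E(alpha) = alpha * E1 + (1 - alpha) * E0, so the inverse response g must map each measured intensity m(alpha) to a value linear in alpha. The sketch below is not the authors' implementation; the function name and the polynomial model for g are assumptions for illustration.

    import numpy as np

    def fit_inverse_response(alphas, intensities, degree=5):
        # alphas: known temporal mixing fractions in [0, 1]; intensities:
        # the pixel values measured for each mixture. Model the inverse
        # response g as a polynomial with g(0) = 0 and solve
        # g(m_i) ~ alpha_i in least squares, so g(m) is linear in
        # irradiance up to an unknown global scale.
        m = np.asarray(intensities, float)
        m = (m - m.min()) / (m.max() - m.min())               # normalize to [0, 1]
        A = np.vander(m, degree + 1, increasing=True)[:, 1:]  # drop constant term
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(alphas, float), rcond=None)

        def g(x):
            X = np.vander(np.atleast_1d(np.asarray(x, float)),
                          degree + 1, increasing=True)[:, 1:]
            return X @ coeffs
        return g

    # Toy check: a synthetic gamma-2.2 camera observing mixtures of a
    # dark (E = 0) and a bright (E = 1) display intensity.
    alphas = np.linspace(0.0, 1.0, 40)
    measured = alphas ** (1 / 2.2)
    g = fit_inverse_response(alphas, measured)
    print("max fit error:", np.abs(g(measured) - alphas).max())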

    Hardware-accelerated Dynamic Light Field Rendering

    No full text
    We present a system capable of interactively displaying a dynamic scene from novel viewpoints by warping and blending images recorded from multiple synchronized video cameras. It is tuned for streamed data and achieves 20 frames per second on modern consumer-class hardware when rendering a 3D movie from an arbitrary eye point within the convex hull of the recording cameras' positions. The quality of the prediction largely depends on the accuracy of the disparity maps, which are reconstructed off-line and provided together with the images. We generalize known algorithms for estimating disparities between two images to the case of multiple image streams, aiming to minimize warping artifacts and exploit temporal coherence.
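
    As a rough illustration of the warp-and-blend step (the paper's pipeline is hardware-accelerated and multi-camera; this two-camera CPU sketch, its function names, and its disparity sign convention are assumptions), one can forward-splat each source view along its disparity map and blend the warps by proximity to the novel viewpoint:

    import numpy as np

    def forward_warp(src, disp, shift):
        # src: (H, W, C) float image; disp: (H, W) disparity in pixels;
        # shift: novel-view offset in baseline units (0 = source camera).
        # Each pixel moves horizontally by disp * shift. When several
        # pixels land on the same target, the nearer one (larger
        # disparity) should win, so splat in far-to-near order and let
        # later writes overwrite earlier ones.
        h, w = disp.shape
        ys, xs = np.mgrid[0:h, 0:w]
        xt = np.round(xs + disp * shift).astype(int)
        order = np.argsort(disp, axis=None)          # far first, near last
        keep = ((xt >= 0) & (xt < w)).ravel()[order]
        out = np.zeros_like(src)
        out[ys.ravel()[order][keep], xt.ravel()[order][keep]] = \
            src.reshape(-1, src.shape[-1])[order][keep]
        return out

    def render_novel_view(left, right, disp, s):
        # s in [0, 1] interpolates between the left (0) and right (1)
        # camera; reusing the left-referenced disparity for the right
        # view is a simplification. Occlusion holes stay black here;
        # a full system would fill them from the other cameras.
        a = forward_warp(left, disp, s)
        b = forward_warp(right, -disp, 1.0 - s)
        return (1.0 - s) * a + s * b                 # blend by proximity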

    Stereo Reconstruction with Mixed Pixels Using Adaptive Over-Segmentation

    No full text
    We present an over-segmentation-based dense stereo algorithm that jointly estimates segmentation and depth. For mixed pixels on segment boundaries, the algorithm computes foreground opacity (alpha), as well as color and depth for the foreground and background. We model the scene as a collection of fronto-parallel planar segments in a reference view, and use a generative model for image formation that handles mixed pixels at segment boundaries. Our method iteratively updates the segmentation based on color, depth and shape constraints using MAP estimation. Given a segmentation, the depth estimates are updated using belief propagation. We show that our method is competitive with the state of the art on the new Middlebury stereo evaluation, and that it overcomes limitations of traditional segmentation-based methods while properly handling mixed pixels. Z-keying results show the advantages of combining opacity and depth estimation.
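
    The mixed-pixel model referenced here is the standard matting equation C = alpha * F + (1 - alpha) * B. As a minimal illustration (a closed-form least-squares step, not the paper's MAP/belief-propagation machinery; the function name is an assumption), alpha for a boundary pixel with candidate foreground and background colors can be estimated as:

    import numpy as np

    def estimate_alpha(c, f, b):
        # Solve c ~ alpha * f + (1 - alpha) * b for alpha in least
        # squares over the color channels:
        # alpha = <c - b, f - b> / <f - b, f - b>.
        d = np.asarray(f, float) - np.asarray(b, float)
        denom = float(d @ d)
        if denom < 1e-12:
            return 1.0  # foreground equals background: alpha is ambiguous
        alpha = float((np.asarray(c, float) - np.asarray(b, float)) @ d / denom)
        return min(max(alpha, 0.0), 1.0)

    # A pixel 30% covered by a red foreground over a blue background:
    print(estimate_alpha([0.3, 0.0, 0.7], [1, 0, 0], [0, 0, 1]))  # ~0.3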

    Penrose pixels: Super-resolution in the detector layout domain

    No full text
    We present a novel approach to reconstruction-based super-resolution that explicitly models the detector’s pixel layout. Pixels in our model can vary in shape and size, and there may be gaps between adjacent pixels. Furthermore, their layout can be periodic as well as aperiodic, such as a Penrose tiling or a biological retina. We also present a new variant of the well-known error back-projection super-resolution algorithm that makes use of the exact detector model in its back-projection operator for better accuracy. Our method can be applied equally well to either periodic or aperiodic pixel tilings. Through analysis and extensive testing using synthetic and real images, we show that our approach outperforms existing reconstruction-based algorithms for regular pixel arrays. We obtain significantly better results using aperiodic pixel layouts. As an interesting example, we apply our method to a retina-like pixel structure modeled by a centroidal Voronoi tessellation. We demonstrate that, in principle, this structure is better for super-resolution than the regular pixel array used in today’s sensors.
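
    To sketch how an exact detector model enters error back-projection, the classic Irani-Peleg-style iterative loop can be written against an explicit forward operator A whose rows hold each detector pixel's area weights over the high-resolution grid; a Penrose or Voronoi layout is then just a different A. Using A^T as the back projector and a dense A are simplifications of the paper's model-derived operator, and all names below are illustrative.

    import numpy as np

    def back_projection_sr(y, A, hr_shape, n_iters=50, step=1.0):
        # y: (n_pixels,) values measured by the detector.
        # A: (n_pixels, n_hr) dense forward operator; row i holds the
        #    normalized area weights of detector pixel i over the
        #    high-resolution grid, so pixel shape, size, gaps, and an
        #    aperiodic layout are all encoded exactly in A.
        x = A.T @ y / np.maximum(A.sum(axis=0), 1e-12)  # crude initial guess
        for _ in range(n_iters):
            residual = y - A @ x               # simulate detector, compare
            x = x + step * (A.T @ residual)    # redistribute the error
            np.clip(x, 0.0, None, out=x)       # keep the image non-negative
        return x.reshape(hr_shape)             # shrink step if this diverges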