
    Video and Image Bayesian Demosaicing with a Two Color Image Prior

    Abstract. The demosaicing process converts single-CCD color representations of one color channel per pixel into full per-pixel RGB. We introduce a Bayesian technique for demosaicing Bayer color filter array patterns that is based on a statistically obtained two-color per-pixel image prior. By modeling all local color behavior as a linear combination of two fully specified RGB triples, we avoid color-fringing artifacts while preserving sharp edges. Our gridless, floating-point pixel-location architecture can process both single images and multiple images from video within the same framework, with multiple images providing denser color samples and therefore better color reproduction with reduced aliasing. An initial clustering is performed to determine the underlying local two-color model surrounding each pixel. Using a product-of-Gaussians statistical model, the underlying linear blending ratio of the two representative colors at each pixel is estimated while simultaneously providing noise reduction. Finally, we show that by sampling the image model at a finer resolution than the source images during reconstruction, our continuous demosaicing technique can super-resolve in a single step.
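
    To make the two-color model concrete, here is a minimal sketch (my own simplification, not the paper's implementation) of the central blending step: assuming a local clustering step has already produced two representative RGB triples, a pixel's blend ratio can be recovered by least-squares projection onto the segment between them, which also denoises the sample. The paper's actual estimator is a product of Gaussians operating directly on the Bayer samples; this sketch assumes a full noisy RGB observation for simplicity.

    import numpy as np

    def two_color_blend(sample, c1, c2):
        # Project a noisy RGB sample onto the line spanned by the two
        # representative colors; alpha is the blend ratio, and the
        # projected point is a denoised reconstruction of the pixel.
        d = c2 - c1
        denom = float(np.dot(d, d))
        if denom < 1e-12:                   # degenerate: c1 == c2
            return 0.0, c1.copy()
        alpha = float(np.dot(sample - c1, d)) / denom
        alpha = min(max(alpha, 0.0), 1.0)   # stay on the segment
        return alpha, (1.0 - alpha) * c1 + alpha * c2

    # Hypothetical example: a noisy pixel between a red and a green cluster.
    c1 = np.array([0.90, 0.10, 0.10])       # representative color 1
    c2 = np.array([0.10, 0.80, 0.10])       # representative color 2
    noisy = np.array([0.55, 0.40, 0.15])    # observed sample
    alpha, rgb = two_color_blend(noisy, c1, c2)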

    Computational Video Enhancement

    During a video, each scene element is often imaged many times by the sensor. I propose that by combining information from each captured frame throughout the video, it is possible to enhance the entire video. This concept is the basis of computational video enhancement. In this dissertation, I explore the viability of computational video processing and present applications where this processing method can be leveraged. Spatio-temporal volumes are employed as a framework for efficient computational video processing, and I extend them by introducing sheared volumes. Shearing provides spatial frame warping for alignment between frames, allowing temporally adjacent samples to be processed using traditional editing and filtering approaches. An efficient filter-graph framework is presented to support this processing, along with a prototype video editing and manipulation tool built on that framework. To demonstrate the integration of samples from multiple frames, I introduce methods for enhancing poorly exposed low-light videos. This integration is guided by a tone-mapping process that determines spatially varying optimal exposures and by an adaptive spatio-temporal filter that integrates the samples. Low-light video enhancement is also addressed in the multispectral domain by combining visible and infrared samples, facilitated by a novel multispectral edge-preserving filter that enhances only the visible-spectrum video. Finally, the temporal characteristics of videos are altered by a computational video resampling process. By resampling the video-rate footage, novel time-lapse sequences are found that optimize for user-specified characteristics. Each resulting shorter video is a more faithful summary of the original source than a traditional time-lapse video. Simultaneously, new synthetic exposures are generated to alter the output video's aliasing characteristics.
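
    As a rough illustration of the sheared-volume idea above, the sketch below (my own simplification, not the dissertation's code) shears a spatio-temporal volume by translating each frame with an integer per-frame offset, assumed to come from a separate alignment step, so that a static scene point occupies the same (y, x) location in every frame; a plain temporal median then plays the role of the traditional filtering the text describes.

    import numpy as np

    def shear_volume(frames, offsets):
        # frames: (T, H, W[, C]) video volume; offsets: per-frame (dy, dx)
        # integer shifts from some motion/alignment estimate (hypothetical).
        # Real shearing would use subpixel warps rather than np.roll.
        sheared = np.empty_like(frames)
        for t, (dy, dx) in enumerate(offsets):
            sheared[t] = np.roll(frames[t], shift=(dy, dx), axis=(0, 1))
        return sheared

    def temporal_denoise(frames, offsets):
        # After shearing, temporally adjacent samples of the same scene
        # point line up, so a per-pixel temporal median denoises them.
        return np.median(shear_volume(frames, offsets), axis=0)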