
    Application of Generalized Partial Volume Estimation for Mutual Information based Registration of High Resolution SAR and Optical Imagery

    Mutual information (MI) has proven its effectiveness for automated multimodal image registration in numerous remote sensing applications such as image fusion. We analyze MI performance with respect to joint histogram bin size and the employed joint histogramming technique. The effect of generalized partial volume estimation (GPVE) using B-spline kernels with different histogram bin sizes on MI performance is thoroughly explored for registration of high-resolution SAR (TerraSAR-X) and optical (IKONOS-2) satellite images. Our experiments highlight the possibility of inconsistent MI behavior across joint histogram bin sizes, which is reduced as the order of the B-spline kernel employed in GPVE increases. In general, reducing the bin size and/or increasing the B-spline order has a smoothing effect on the MI surfaces, and even the lowest-order B-spline with a suitable histogram bin size can achieve the same pixel-level accuracy that the higher-order kernels achieve more consistently.
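
    As a rough illustration of the quantity being optimized, the sketch below estimates MI from a joint histogram with a configurable bin count. It is a minimal, hypothetical helper using plain nearest-bin counting; the paper's GPVE approach instead spreads each sample over neighboring bins with a B-spline kernel, which smooths the MI surface.

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            """Estimate MI between two co-registered image patches.

            Simple nearest-bin joint histogramming (illustrative only);
            GPVE would distribute each sample over neighboring bins with
            a B-spline kernel instead.
            """
            hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p_xy = hist_2d / hist_2d.sum()          # joint probability
            p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of img_a
            p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of img_b
            nz = p_xy > 0                           # avoid log(0)
            return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))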

    An Improved Observation Model for Super-Resolution under Affine Motion

    Super-resolution (SR) techniques make use of subpixel shifts between frames in an image sequence to yield higher-resolution images. We propose an original observation model devoted to the case of non-isometric inter-frame motion, as required, for instance, in the context of airborne imaging sensors. First, we describe how the main observation models used in the SR literature deal with motion and explain why they are not suited to non-isometric motion. Then, we propose an extension of the observation model of Elad and Feuer adapted to affine motion. This model is based on a decomposition of affine transforms into successive shear transforms, each one efficiently implemented by row-by-row or column-by-column 1-D affine transforms. We demonstrate on synthetic and real sequences that our observation model, incorporated in an SR reconstruction technique, leads to better results in the case of variable-scale motions and provides equivalent results in the case of isometric motions.
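
    The shear-based implementation can be pictured with the sketch below: a single horizontal shear applied row by row through 1-D linear interpolation. This is an illustrative helper under assumed conventions, not the authors' code; the paper's model chains several such shears (and their column-wise counterparts) to realize a full affine warp.

        import numpy as np

        def shear_x(img, k):
            """Apply a horizontal shear x' = x + k*y, row by row, using 1-D
            linear interpolation.  Each row is only resampled along x, so the
            warp reduces to independent 1-D operations, which is the property
            a shear decomposition of affine motion exploits.
            """
            h, w = img.shape
            out = np.zeros_like(img, dtype=float)
            x = np.arange(w)
            for y in range(h):
                # output pixel (x, y) comes from source position x - k*y
                out[y] = np.interp(x - k * y, x, img[y], left=0.0, right=0.0)
            return out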

    Super-resolution in turbulent videos: making profit from damage

    It is shown that one can make use of local instabilities in turbulent video frames to enhance image resolution beyond the limit defined by the image sampling rate. The paper outlines the processing algorithm, presents its experimental verification on simulated and real-life videos, and discusses its potential and limitations. Comment: 11 pages, 2 figures. Submitted to Optics Letters, 10-07-0

    Mesh-based video coding for low bit-rate communications

    In this paper, a new method for low bit-rate content-adaptive mesh-based video coding is proposed. Intra-frame coding in this method employs feature-map extraction for node distribution at specific threshold levels, achieving denser placement of initial nodes in regions that contain high-frequency features and, conversely, sparse placement of initial nodes in smooth regions. Insignificant nodes are then largely removed by a subsequent node-elimination scheme. The Hilbert scan is applied before quantization and entropy coding to reduce the amount of transmitted information. For moving images, both the node positions and the color parameters of only a subset of nodes may change from frame to frame, so it is sufficient to transmit only these changed parameters. The proposed method is well suited to video coding at very low bit rates: processing results demonstrate that it provides good subjective and objective image quality with a lower number of required bits.
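
    A minimal sketch of the feature-map-driven node placement idea, assuming a gradient-magnitude feature map and illustrative grid spacings and threshold (the helper name and all parameter values are assumptions, not taken from the paper):

        import numpy as np

        def place_nodes(img, threshold=30.0, coarse=16, fine=4):
            """Place initial mesh nodes: dense where the local gradient (a
            simple feature map) is strong, sparse in smooth regions.

            Simplified stand-in for the paper's feature-map/threshold scheme;
            the gradient feature and grid spacings are illustrative choices.
            """
            gy, gx = np.gradient(img.astype(float))
            feature = np.hypot(gx, gy)      # gradient-magnitude feature map
            h, w = img.shape
            nodes = []
            for y in range(0, h, fine):
                for x in range(0, w, fine):
                    on_coarse = (y % coarse == 0) and (x % coarse == 0)
                    if on_coarse or feature[y, x] > threshold:
                        nodes.append((x, y))
            return nodes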

    Light Field Super-Resolution Via Graph-Based Regularization

    Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications, from post-capture refocus to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution, which should therefore be augmented by computational methods. On the one hand, off-the-shelf single-frame and multi-frame super-resolution algorithms are not ideal for light field data, as they do not consider its particular structure. On the other hand, the few super-resolution algorithms explicitly tailored to light field data exhibit significant limitations, such as the need to estimate an explicit disparity map at each view. In this work we propose a new light field super-resolution algorithm meant to address these limitations. We adopt a multi-frame-like super-resolution approach, where the complementary information in the different light field views is used to augment the spatial resolution of the whole light field. We show that coupling the multi-frame approach with a graph regularizer, which enforces the light field structure via nonlocal self-similarities, allows us to avoid the costly and challenging disparity-estimation step for all the views. Extensive experiments show that the new algorithm compares favorably to other state-of-the-art methods for light field super-resolution, both in terms of PSNR and visual quality. Comment: This new version includes more material. In particular, we added: a new section on the computational complexity of the proposed algorithm, experimental comparisons with a CNN-based super-resolution algorithm, and new experiments on a third dataset.
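
    The generic form of such a graph-regularized reconstruction can be sketched as a least-squares problem with a Laplacian penalty. The solver below is an assumed illustration of that general formulation, with hypothetical argument names, and is not the authors' exact algorithm:

        import numpy as np
        from scipy.sparse.linalg import cg, LinearOperator

        def graph_regularized_sr(A, y, L, lam=0.1):
            """Solve min_x ||A x - y||^2 + lam * x^T L x for one view.

            A   : (m, n) combined warp/blur/downsampling operator
            y   : (m,)   stacked low-resolution observations
            L   : (n, n) graph Laplacian built from nonlocal patch similarities
            The normal equations (A^T A + lam * L) x = A^T y are solved with
            conjugate gradients; this mirrors the generic graph-regularized
            least-squares form, not the paper's specific solver.
            """
            n = A.shape[1]
            def matvec(x):
                return A.T @ (A @ x) + lam * (L @ x)
            op = LinearOperator((n, n), matvec=matvec)
            x, _ = cg(op, A.T @ y, maxiter=200)
            return x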