
    Image resolution enhancement using dual-tree complex wavelet transform

    In this letter, a complex wavelet-domain image resolution enhancement algorithm based on the estimation of wavelet coefficients is proposed. The method uses a forward and inverse dual-tree complex wavelet transform (DT-CWT) to construct a high-resolution (HR) image from the given low-resolution (LR) image. The HR image is reconstructed from the LR image, together with a set of wavelet coefficients, using the inverse DT-CWT. The set of wavelet coefficients is estimated from the DT-CWT decomposition of a rough estimate of the HR image. Results on very high-resolution QuickBird data are presented and discussed, with comparisons against state-of-the-art resolution enhancement methods.
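    As a minimal sketch of the wavelet-domain idea behind such methods — assuming a plain 2-D Haar synthesis step as a stand-in for the DT-CWT, and zeroed detail bands in place of the estimated coefficients (the paper estimates them from a rough HR estimate instead):

```python
import numpy as np

def haar_inverse_2d(approx, detail_h, detail_v, detail_d):
    # One inverse 2-D Haar step: four subbands -> image at twice the resolution.
    h, w = approx.shape
    out = np.zeros((2 * h, 2 * w))
    a, dh, dv, dd = approx, detail_h, detail_v, detail_d
    out[0::2, 0::2] = (a + dh + dv + dd) / 2
    out[0::2, 1::2] = (a - dh + dv - dd) / 2
    out[1::2, 0::2] = (a + dh - dv - dd) / 2
    out[1::2, 1::2] = (a - dh - dv + dd) / 2
    return out

def enhance(lr):
    # Treat the LR image as the approximation band; here the detail bands
    # are simply zeros (plain wavelet zero-padding), whereas the paper
    # estimates them from a rough HR estimate of the scene.
    z = np.zeros_like(lr, dtype=float)
    return haar_inverse_2d(lr * 2.0, z, z, z)  # *2 keeps the Haar normalization

lr = np.array([[10., 20.], [30., 40.]])
hr = enhance(lr)
print(hr.shape)  # (4, 4)
```

    With zeroed details this reduces to pixel replication; the paper's contribution is precisely a better estimate of those detail bands.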

    A Novel Metric Approach Evaluation For The Spatial Enhancement Of Pan-Sharpened Images

    Various methods can be used to produce high-resolution multispectral images from a high-resolution panchromatic image (PAN) and low-resolution multispectral images (MS), mostly at the pixel level. The quality of image fusion is an essential determinant of the value of fused images for many applications. Spatial and spectral quality are the two important indexes used to evaluate the quality of any fused image. However, the jury is still out on the benefits of a fused image compared with its original images. In addition, there is a lack of measures for objectively assessing the spatial resolution delivered by fusion methods, so an objective assessment of the spatial resolution of fused images is required. Therefore, this paper describes a new approach proposed to estimate the spatial resolution improvement, the High Pass Division Index (HPDI), computed from the spatial frequency of the edge regions of the image. It also compares various analytical techniques for evaluating spatial quality and estimating the colour distortion added by image fusion, including MG, SG, FCC, SD, En, SNR, CC and NRMSE. In addition, the paper compares various image fusion techniques based on pixel-level and feature-level fusion. Comment: arXiv admin note: substantial text overlap with arXiv:1110.497
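    The HPDI definition is not given in the abstract, but one of the listed spatial metrics, FCC (filtered correlation coefficient), can be sketched as the correlation between high-pass-filtered versions of the fused band and the PAN image. This is a hypothetical minimal implementation, not the paper's code:

```python
import numpy as np

def highpass(img):
    # 3x3 Laplacian high-pass filter, zero-padded borders.
    k = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
    p = np.pad(img, 1)
    out = np.zeros(img.shape, float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def spatial_cc(fused, pan):
    # Correlation of the high-pass details of the fused band against PAN:
    # the idea behind filtered-correlation metrics such as FCC.
    a, b = highpass(fused).ravel(), highpass(pan).ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(0)
pan = rng.random((16, 16))
print(round(spatial_cc(pan, pan), 3))  # 1.0 for a perfect spatial match
```

    A fused band whose edges track the PAN image scores near 1; spatial blurring introduced by a fusion method pulls the score down.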

    An Eigenpoint Based Multiscale Method for Validating Quantitative Remote Sensing Products

    This letter first proposes the eigenpoint concept for quantitative remote sensing products (QRSPs) after discussing the eigenhomogeneity and eigenaccuracy of land surface variables. The eigenpoints are located according to the à trous wavelet planes of the QRSP. Based on these concepts, the letter proposes an eigenpoint-based multiscale method for validating QRSPs. The basic idea is that QRSPs at coarse scales are validated by validating their eigenpoints using the QRSP at fine scale; the QRSP at fine scale is in turn validated using observation data at the ground-based eigenpoints at instrument scale. The ground-based eigenpoints derived from the forecasted QRSP can be used as observation positions when the satellites pass over the studied area. Experimental results demonstrate that the proposed method is manpower- and time-saving compared with the ideal scanning method, and that simultaneous observation at these eigenpoints is satisfactory in terms of efficiency and accuracy.
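    A minimal sketch of the à trous (undecimated) wavelet planes on which the letter locates eigenpoints — using the common B3-spline-like 1-4-6-4-1 kernel as an assumption, since the abstract does not specify the filter:

```python
import numpy as np

def a_trous_planes(img, levels=2):
    # "A trous" (undecimated) decomposition: each plane is the difference
    # of two successive smoothings; no subsampling, so every plane has
    # the same size as the input product.
    h = np.array([1., 4., 6., 4., 1.]) / 16.
    planes, smooth = [], img.astype(float)
    for j in range(levels):
        # Dilate the kernel by inserting 2**j - 1 zeros between taps.
        step = 2 ** j
        taps = np.zeros(4 * step + 1)
        taps[::step] = h
        rows = np.apply_along_axis(lambda r: np.convolve(r, taps, 'same'), 1, smooth)
        nxt = np.apply_along_axis(lambda c: np.convolve(c, taps, 'same'), 0, rows)
        planes.append(smooth - nxt)
        smooth = nxt
    return planes, smooth  # wavelet planes + residual approximation

img = np.outer(np.arange(16.), np.ones(16))
planes, approx = a_trous_planes(img)
print(np.allclose(sum(planes) + approx, img))  # True
```

    Because the decomposition is a telescoping difference of smoothings, the planes plus the residual reconstruct the product exactly, so candidate eigenpoints can be analyzed per scale without losing information.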

    Exploration of Data Fusion between Polarimetric Radar and Multispectral Image Data

    Typically, analysis of remote sensing data is limited to one sensor at a time, usually covering the same general portion of the electromagnetic spectrum. SAR and visible/near-infrared data of Monterey, CA, were analyzed and fused with the goal of achieving improved land classification results. A common SAR decomposition, the Pauli decomposition, was performed and inspected. The SAR Pauli decomposition and the multispectral reflectance data were fused at the pixel level and then analyzed using multispectral classification techniques. The results were compared to the multispectral classifications, using the SAR decomposition results as a basis for interpreting the changes. The combined dataset yielded little to no quantitative improvement in land cover classification capability; however, inspection of the classification maps indicated improved classification ability with the combined data. The most noticeable increases in classification accuracy occurred in spatial regions where the land features were parallel to the SAR flight line. This dependence on orientation makes the fusion process better suited to datasets with more consistent features throughout the scene.
    http://archive.org/details/explorationofdat1094517375
    Civilian, Department of the Navy
    Approved for public release; distribution is unlimited
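    A hypothetical sketch of the two steps described — the Pauli decomposition of a quad-pol scattering matrix and pixel-level stacking with MS bands (channel names and shapes here are assumptions, not the thesis's data layout):

```python
import numpy as np

def pauli(hh, hv, vv):
    # Pauli decomposition of a quad-pol scattering matrix, per pixel:
    # magnitudes associated with surface, double-bounce, and volume scattering.
    a = np.abs(hh + vv) / np.sqrt(2)   # odd-bounce (surface)
    b = np.abs(hh - vv) / np.sqrt(2)   # even-bounce (double)
    c = np.sqrt(2) * np.abs(hv)        # volume (cross-pol)
    return np.stack([a, b, c], axis=-1)

def pixel_fuse(ms, sar_pauli):
    # Pixel-level fusion: stack MS bands and Pauli channels into one
    # feature cube for a conventional per-pixel classifier.
    return np.concatenate([ms, sar_pauli], axis=-1)

hh = np.ones((4, 4), complex)
vv = np.ones((4, 4), complex)
hv = np.zeros((4, 4), complex)
ms = np.zeros((4, 4, 4))           # e.g. four co-registered MS bands
cube = pixel_fuse(ms, pauli(hh, hv, vv))
print(cube.shape)  # (4, 4, 7)
```

    The classifier then sees a seven-channel pixel vector; pixel-level fusion of this kind requires the SAR and MS data to be co-registered to a common grid first.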

    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to make a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted. Advantages of undecimated, or stationary, wavelet transforms over decimated ones are also discussed. Bayesian estimators and probability density function (pdf) models in both spatial and multiresolution domains are reviewed. Scale-space varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, like nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on one side, the cost-performance tradeoff of the different methods and, on the other side, the effectiveness of solutions purposely designed for SAR heterogeneity and not fully developed speckle. Finally, upcoming methods based on new concepts of signal processing, like compressive sensing, are foreseen as a new generation of despeckling, after the spatial-domain and multiresolution-domain generations.
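    As an illustration of the multiplicative model and the spatial-domain estimators the tutorial reviews, here is a minimal Lee-style filter sketch (one of the classical spatial-domain methods; the window size and one-look assumption are illustrative):

```python
import numpy as np

def lee_filter(img, win=3, looks=1):
    # Minimal Lee-style despeckling: local linear MMSE estimate under the
    # multiplicative model y = x * n, with E[n] = 1 and var(n) = 1/looks.
    cu2 = 1.0 / looks                  # speckle variation coefficient^2
    pad = win // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = p[i:i + win, j:j + win]
            m, v = w.mean(), w.var()
            ci2 = v / (m * m + 1e-12)  # local variation coefficient^2
            k = max(0.0, 1.0 - cu2 / max(ci2, 1e-12))
            out[i, j] = m + k * (img[i, j] - m)  # flat areas -> local mean
    return out

rng = np.random.default_rng(1)
clean = np.full((32, 32), 100.0)
speckled = clean * rng.gamma(1.0, 1.0, clean.shape)  # 1-look intensity speckle
print(lee_filter(speckled).std() < speckled.std())   # True
```

    On homogeneous (fully developed speckle) areas the gain k collapses to zero and the filter averages; near strong local structure k approaches one and the pixel is preserved, the trade-off every spatial-domain despeckler negotiates.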

    Landsat ETM+ and SAR Image Fusion Based on Generalized Intensity Modulation

    This paper presents a novel multisensor image fusion algorithm, which extends panchromatic sharpening of multispectral (MS) data through intensity modulation to the integration of MS and synthetic aperture radar (SAR) imagery. The method relies on SAR texture, extracted by ratioing the despeckled SAR image to its low-pass approximation. The SAR texture is used to modulate the generalized intensity (GI) of the MS image, which is given by a linear transform extending the intensity–hue–saturation transform to an arbitrary number of bands. Before modulation, the GI is enhanced by injection of high-pass details extracted from the available panchromatic image by means of the "à trous" wavelet decomposition. The texture-modulated, panchromatic-sharpened GI replaces the GI calculated from the resampled original MS data. Then, the inverse transform is applied to obtain the fusion product. Experimental results are presented on Landsat-7 Enhanced Thematic Mapper Plus and European Remote Sensing 2 satellite images of an urban area. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road network), where the SAR texture information enhances the fusion product, which can be usefully applied for both visual analysis and classification purposes.
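    A rough sketch of the texture-modulation idea, under simplifying assumptions not in the paper (band mean as the generalized intensity, a box filter as the SAR low-pass, and no panchromatic detail injection):

```python
import numpy as np

def box_lowpass(img, win=5):
    # Simple box low-pass as a stand-in for the SAR coarse approximation.
    pad = win // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    return np.mean(
        [p[i:i + h, j:j + w] for i in range(win) for j in range(win)], axis=0)

def gi_modulation_fuse(ms, sar, eps=1e-6):
    # Generalized-intensity modulation sketch: the MS "intensity" (here
    # just the band mean) is multiplied by SAR texture (SAR over its
    # low-pass), and each band is rescaled by the same factor so that
    # spectral ratios between bands are preserved.
    gi = ms.mean(axis=-1)                      # generalized intensity
    texture = sar / (box_lowpass(sar) + eps)   # ~1 on texture-free areas
    gi_mod = gi * texture
    return ms * (gi_mod / (gi + eps))[..., None]

ms = np.full((8, 8, 3), 50.0)
sar = np.ones((8, 8))               # textureless SAR image
fused = gi_modulation_fuse(ms, sar)
print(np.allclose(fused, ms))       # True
```

    On a textureless SAR scene the modulation factor is ~1 and the MS data pass through essentially unchanged, which mirrors the spectral-preservation behavior the paper reports on vegetated and bare-soil regions.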