
    Toward reduction of artifacts in fused images

    Most pixel-level satellite image fusion methodologies introduce false spatial details, i.e. artifacts, in the resulting fused images. In many cases, these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the images increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (panchromatic and each band of the multi-spectral images) with the box-counting algorithm and a windowing process. The average of the source image FDMs, previously indexed between 0 and 1, is used to discriminate the different land covers present in the satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT), using the à trous algorithm (WAT). Two scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach, using the WAT method, allows the fusion process to adapt to the roughness and shape of the regions present in the image to be fused. This improves the quality of the fused images and their classification results when compared with the original WAT method.
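    The abstract describes building a fractal dimension map with box counting over a sliding window. The following is a minimal Python sketch of that kind of computation; the function names, window size, step, binarisation threshold, and [0, 1] rescaling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_counting_dimension(patch, threshold=None):
    """Estimate the fractal dimension of a 2-D patch with a simple
    box-counting scheme (illustrative only)."""
    # Binarise the patch around its mean so boxes can be counted.
    if threshold is None:
        threshold = patch.mean()
    binary = patch > threshold
    sizes, counts = [], []
    s = min(binary.shape) // 2
    while s >= 1:
        # Count boxes of side s containing at least one "on" pixel.
        n = 0
        for i in range(0, binary.shape[0], s):
            for j in range(0, binary.shape[1], s):
                if binary[i:i + s, j:j + s].any():
                    n += 1
        sizes.append(s)
        counts.append(max(n, 1))
        s //= 2
    # Slope of log(count) vs log(1/size) approximates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def fractal_dimension_map(image, window=33, step=8):
    """Slide a window over the image, build a fractal-dimension map,
    and rescale it to [0, 1] before averaging across source bands."""
    fdm = np.zeros_like(image, dtype=float)
    half = window // 2
    padded = np.pad(image, half, mode="reflect")
    for i in range(0, image.shape[0], step):
        for j in range(0, image.shape[1], step):
            patch = padded[i:i + window, j:j + window]
            fdm[i:i + step, j:j + step] = box_counting_dimension(patch)
    return (fdm - fdm.min()) / (fdm.max() - fdm.min() + 1e-12)
```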

    Quality assessment by region in spot images fused by means dual-tree complex wavelet transform

    This work provides and evaluates a fusion algorithm for remotely sensed images, i.e. the fusion of a high spatial resolution panchromatic image with a multi-spectral image (also known as pansharpening), using the dual-tree complex wavelet transform (DT-CWT), an effective approach for conducting an analytic and oversampled wavelet transform that reduces aliasing and, in turn, the shift dependence of the wavelet transform. The proposed scheme includes the definition of a model to establish how information is extracted from the PAN band and how that information is injected into the MS bands of low spatial resolution. The approach was applied to SPOT 5 images, where some bands fall outside the PAN spectrum. We propose an optional step in the quality evaluation protocol: studying the quality of the fusion by regions, where each region represents a specific feature of the image. The results show that the DT-CWT-based approach offers good spatial quality while retaining the spectral information of the original SPOT 5 images. The additional step facilitates the identification of the regions most affected by the fusion process.
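    As a rough illustration of a DT-CWT injection scheme of this kind, the sketch below uses the open-source `dtcwt` Python package. The gain model (standard-deviation ratio), the number of levels, and the assumption that the MS bands are already resampled to the PAN grid are my own simplifications, not the paper's extraction/injection model.

```python
import numpy as np
import dtcwt  # pip install dtcwt

def dtcwt_pansharpen_band(ms_band, pan, nlevels=3):
    """Fuse one (already upsampled, co-registered) MS band with the PAN
    image using the dual-tree complex wavelet transform.

    Illustrative injection model: keep the MS low-pass, take the PAN
    high-pass sub-bands, and scale them by a band/PAN standard-deviation
    ratio."""
    transform = dtcwt.Transform2d()
    ms_pyr = transform.forward(ms_band.astype(float), nlevels=nlevels)
    pan_pyr = transform.forward(pan.astype(float), nlevels=nlevels)

    gain = ms_band.std() / (pan.std() + 1e-12)
    fused_high = tuple(gain * hp for hp in pan_pyr.highpasses)

    fused_pyr = dtcwt.Pyramid(ms_pyr.lowpass, fused_high)
    return transform.inverse(fused_pyr)

def dtcwt_pansharpen(ms_cube, pan, nlevels=3):
    """Apply the band-wise fusion to every band of an (H, W, B) MS cube."""
    return np.dstack([dtcwt_pansharpen_band(ms_cube[..., b], pan, nlevels)
                      for b in range(ms_cube.shape[-1])])
```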

    Influence of source images spatial characteristics on the global quality of fused images

    Many of the techniques used to perform remote-sensed image fusion are based on multiresolution analysis. This kind of image analysis requires the decomposition of the image at different scales or levels, with the fusion results depending on the chosen level. The two main objectives of this work are therefore: to investigate the influence of the spatial characteristics of the source images on the decomposition level at which the fusion process should be performed; and to show how the spatial-spectral quality of the fused images depends on this decomposition level. To carry out this study, the image fusion methodology applied is based on the wavelet transform, calculated by the à trous algorithm. The quality of the fused images has been evaluated by the ERGAS indices, as well as the spectral correlation, the spatial correlation (Zhou’s index) and a global index (Q4). This methodology has been applied to fuse several multispectral and panchromatic images registered by the corresponding sensors on board the Landsat, Ikonos, and Quickbird satellites. It has been demonstrated that, in the majority of cases, a low number of decompositions provides fused images with a good trade-off between spatial and spectral quality. Additionally, the results indicate that the decomposition level that provides the best spatial-spectral quality trade-off depends on the spatial frequency content of the source images.
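    For reference, the ERGAS index mentioned above has the standard closed form ERGAS = 100 · (h/l) · sqrt((1/N) · Σ_k RMSE_k² / μ_k²), where h/l is the PAN-to-MS pixel size ratio and μ_k the mean of reference band k. A small Python sketch is below; the (H, W, B) array layout and the default 1/4 ratio are assumptions for illustration.

```python
import numpy as np

def ergas(fused, reference, ratio=1.0 / 4.0):
    """Global ERGAS index between a fused image and its reference, both
    shaped (H, W, B); lower is better.  `ratio` is the PAN/MS pixel size
    ratio (e.g. 1/4 for Ikonos or QuickBird)."""
    fused = fused.astype(float)
    reference = reference.astype(float)
    bands = reference.shape[-1]
    acc = 0.0
    for k in range(bands):
        rmse_k = np.sqrt(np.mean((fused[..., k] - reference[..., k]) ** 2))
        mean_k = reference[..., k].mean()
        acc += (rmse_k / mean_k) ** 2
    return 100.0 * ratio * np.sqrt(acc / bands)
```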

    Advances in Multi-Sensor Data Fusion: Algorithms and Applications

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are then described, and both the advantages and the limitations of those applications are discussed. Finally, recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of “algorithm fusion” methods; and (3) establishment of an automatic quality assessment scheme.

    A hybrid pan-sharpening approach using maximum local extrema

    Mixing or combining different elements to obtain an enhanced version is practiced across various areas in real life. Pan-sharpening is a similar technique in the digital world: a process that combines two images into a fused image containing more detailed information. The images referred to herein are panchromatic (PAN) and multispectral (MS) images. This paper presents a pan-sharpening algorithm which integrates multispectral and panchromatic images to generate an improved multispectral image. The technique merges the discrete wavelet transform (DWT) and the intensity-hue-saturation (IHS) transform, with separate fusion criteria for choosing the approximation and detail sub-images: the maximum local extrema are used for merging the detail sub-images, and the merged high-resolution image is finally reconstructed through the inverse wavelet and IHS transforms. The superiority of the resultant fused image produced by the proposed approach is demonstrated by quality measures such as CORR, RMSE, PFE, SSIM, SNR and PSNR, using WorldView-II satellite images. The proposed algorithm is compared with other fusion techniques, and the empirical outcomes prove the superiority of the final merged image in terms of resolution.
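    A minimal Python sketch of a hybrid IHS + DWT pansharpening of this general shape is given below, using PyWavelets. It assumes a simplified generalized-IHS intensity (band average) and approximates the "maximum local extrema" rule with a per-coefficient magnitude selection; neither is the paper's exact formulation.

```python
import numpy as np
import pywt  # PyWavelets

def _max_abs(a, b):
    """Keep, per coefficient, whichever detail value has the larger magnitude
    (a simplification of the maximum-local-extrema rule)."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def hybrid_ihs_dwt_pansharpen(ms_rgb, pan, wavelet="db2"):
    """Illustrative hybrid IHS + DWT pansharpening.

    ms_rgb : (H, W, 3) multispectral image already resampled to the PAN grid
    pan    : (H, W) panchromatic image
    """
    ms = ms_rgb.astype(float)
    pan = pan.astype(float)

    # Generalized-IHS intensity: simple average of the MS bands.
    intensity = ms.mean(axis=-1)

    # One-level DWT of the intensity component and of the PAN image.
    cA_i, details_i = pywt.dwt2(intensity, wavelet)
    _, details_p = pywt.dwt2(pan, wavelet)

    # Approximation kept from the MS intensity (preserves spectral content);
    # detail sub-bands fused by magnitude selection (injects PAN detail).
    fused_details = tuple(_max_abs(di, dp)
                          for di, dp in zip(details_i, details_p))
    new_intensity = pywt.idwt2((cA_i, fused_details), wavelet)
    new_intensity = new_intensity[:intensity.shape[0], :intensity.shape[1]]

    # Inverse generalized IHS: add the intensity change back to each band.
    return ms + (new_intensity - intensity)[..., None]
```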