151 research outputs found

    A NOVEL IHS-GA FUSION METHOD BASED ON ENHANCEMENT VEGETATED AREA


    Toward reduction of artifacts in fused images

    Most pixel-level satellite image fusion methodologies introduce false spatial details, i.e. artifacts, in the resulting fused images. In many cases, these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the image increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (panchromatic and each band of the multispectral images) with the box-counting algorithm and by applying a windowing process. The average of the source image FDMs, previously indexed between 0 and 1, has been used to discriminate the different land covers present in satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT), using the à trous algorithm (WAT). Two different scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach, using the WAT method, allows adapting the fusion process to the roughness and shape of the regions present in the image to be fused. This improves the quality of the fused images and their classification results when compared with the original WAT method.
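    The box-counting estimator mentioned in the abstract can be sketched as follows. This is a minimal illustrative version, not the authors' implementation: it assumes a binarised input and a fixed set of box sizes, and omits the windowing process used to build the per-pixel FD maps.

    ```python
    import numpy as np

    def box_counting_dimension(binary, sizes=(2, 4, 8, 16)):
        """Estimate the fractal dimension of a 2-D binary pattern.

        Counts the boxes of each size that contain at least one foreground
        pixel, then fits log(count) against log(1/size); the slope of the
        fit is the box-counting dimension.
        """
        counts = []
        for s in sizes:
            h = (binary.shape[0] // s) * s
            w = (binary.shape[1] // s) * s
            blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
            # A box is "occupied" if any pixel inside it is foreground.
            occupied = blocks.any(axis=(1, 3)).sum()
            counts.append(max(occupied, 1))
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # Sanity check: a completely filled square has dimension 2.
    img = np.ones((64, 64), dtype=bool)
    print(round(box_counting_dimension(img), 2))  # 2.0
    ```

    In the paper's paradigm, this estimate would be computed inside a sliding window around each pixel, producing one FDM per source band.
    
    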

    A high-resolution index for vegetation extraction in IKONOS images

    ISBN: 978-0-8194-8341-6. In monitoring vegetation change and in urban planning, measuring and mapping the green vegetation over the Earth play an important role. The normalized difference vegetation index (NDVI) is the most popular approach to generating vegetation maps from remote sensing imagery. Unfortunately, the NDVI produces low-resolution vegetation maps. High-resolution imagery, such as IKONOS imagery, can be used to overcome this weakness, leading to better classification accuracy. Hence, it is important to derive a vegetation index that exploits the high-resolution data. Various researchers have proposed methods based on high-resolution vegetation indices; these methods use image fusion to generate high-resolution vegetation maps. IKONOS produces high-resolution panchromatic (Pan) images and low-resolution multispectral (MS) images. Generally, for image fusion, the conventional bicubic interpolation scheme is used to resize the low-resolution images. This scheme fails around edges and consequently produces blurred edges and annoying artefacts in the interpolated images. This study presents a new index that provides high-resolution vegetation maps for IKONOS imagery. This vegetation index (HRNDVI: High-Resolution NDVI) is based on a newly derived formula that includes the high-resolution information. We use an artefact-free image interpolation method to upsample the MS images so that they have the same size as the Pan images. The HRNDVI is then computed from the resampled MS and the Pan images. The proposed vegetation index takes advantage of the high spatial resolution of the Pan images to generate artefact-free vegetation maps. Visual analysis demonstrates that this index is promising and performs well in vegetation extraction and visualisation.
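    For reference, the baseline index that HRNDVI extends is the standard NDVI. The sketch below shows only that conventional formula; the HRNDVI formula itself is defined in the paper and is not reproduced here.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """Standard NDVI: (NIR - Red) / (NIR + Red), in [-1, 1].

        eps guards against division by zero over dark pixels.
        """
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        return (nir - red) / (nir + red + eps)

    # Toy 2x2 bands: healthy vegetation reflects strongly in the NIR band,
    # so the top row scores high while the bottom row stays near zero.
    nir = np.array([[0.5, 0.6], [0.1, 0.2]])
    red = np.array([[0.1, 0.1], [0.1, 0.2]])
    print(np.round(ndvi(nir, red), 2))
    ```

    With IKONOS data, `nir` and `red` would come from the 4 m MS bands, which is exactly why the resulting map inherits the MS resolution rather than the 1 m Pan resolution.
    
    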

    Evaluation of Pan-Sharpening Techniques Using Lagrange Optimization

    Earth observation satellites, such as IKONOS, simultaneously provide multispectral and panchromatic images. A multispectral image has a lower spatial but higher spectral resolution, in contrast to a panchromatic image, which usually has a high spatial and a low spectral resolution. Pan-sharpening is the fusion of these two complementary images to produce an output image with both high spatial and high spectral resolution. The objective of this paper is to propose a new pan-sharpening method based on pixel-level image manipulation and to compare it with several state-of-the-art pan-sharpening methods using different evaluation criteria. The paper presents an image fusion method based on pixel-level optimization using the Lagrange multiplier. Two cases are discussed: (a) the maximization of spectral consistency and (b) the minimization of the variance difference between the original data and the computed data. The performance of the pan-sharpening methods is evaluated qualitatively and quantitatively using evaluation criteria such as the Chi-square test, RMSE, SNR, SD, ERGAS, and RASE. Overall, the proposed method is shown to outperform all the existing methods.
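    Of the evaluation criteria listed, ERGAS is the least self-explanatory; a minimal sketch of it (and the RMSE it builds on) is shown below, using the standard published formula rather than anything specific to this paper.

    ```python
    import numpy as np

    def rmse(ref, fused):
        """Root-mean-square error between a reference and a fused band."""
        return np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2))

    def ergas(ref, fused, ratio):
        """ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse).

        ref, fused : arrays of shape (bands, H, W)
        ratio      : pixel-size ratio h/l of the Pan and MS images
                     (e.g. 1/4 for IKONOS: 1 m Pan vs. 4 m MS)
        Lower values indicate better global spectral fidelity; 0 is perfect.
        """
        terms = [(rmse(r, f) / r.mean()) ** 2 for r, f in zip(ref, fused)]
        return 100.0 * ratio * np.sqrt(np.mean(terms))
    ```

    A fused product identical to the reference yields `ergas == 0`; each band's RMSE is normalised by that band's mean, so bright and dark bands contribute comparably.
    
    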

    Model-based satellite image fusion


    Comparison of Pansharpening Algorithms: Outcome of the 2006 GRS-S Data Fusion Contest

    In January 2006, the Data Fusion Committee of the IEEE Geoscience and Remote Sensing Society launched a public contest for pan-sharpening algorithms, which aimed to identify those that perform best. Seven research groups worldwide participated in the contest, testing eight algorithms following different philosophies (component substitution, multiresolution analysis (MRA), detail injection, etc.). Several complete data sets from two different sensors, namely QuickBird and simulated Pléiades, were delivered to all participants. The fusion results were collected and evaluated, both visually and objectively. Quantitative evaluation of pan-sharpening was possible owing to the availability of reference originals, obtained either by simulating the data collected by the satellite sensor from higher-resolution data acquired by an airborne platform (in the case of the Pléiades data), or by first degrading all the available data to a coarser resolution and keeping the originals as the reference (in the case of the QuickBird data). The evaluation results were presented during the special session on Data Fusion at the 2006 International Geoscience and Remote Sensing Symposium in Denver and are discussed in further detail in this paper. Two algorithms outperform all the others, the visual analysis being confirmed by the quantitative evaluation. These two methods share the same philosophy: they basically rely on MRA and employ adaptive models for the injection of high-pass details.
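    The degrade-then-compare protocol used for the QuickBird data can be sketched as follows. This is an illustrative stand-in: block averaging is used here as a simple degradation filter, whereas a real evaluation would model the sensor's modulation transfer function.

    ```python
    import numpy as np

    def degrade(img, factor=4):
        """Spatially degrade a 2-D image by block-averaging.

        A crude stand-in for the sensor's low-pass behaviour: each
        factor x factor block collapses to its mean value.
        """
        h = (img.shape[0] // factor) * factor
        w = (img.shape[1] // factor) * factor
        blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))

    # Protocol: degrade both Pan and MS by the resolution ratio, run the
    # fusion at the reduced scale, then score the fused result against the
    # ORIGINAL (undegraded) MS image, which now serves as the reference.
    ```

    The key point of the protocol is that it manufactures a ground-truth reference without needing any data beyond what the satellite itself provides.
    
    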

    A directed search algorithm for setting the spectral-spatial quality trade-off of fused images by the wavelet à trous method

    This paper proposes a method to determine, in an objective and accurate way, the weighting factor (alfa) to be applied to the detailed panchromatic image information that is integrated with the background multispectral image information, in order to obtain the "best" fused image with the same spatial and spectral quality. The fusion method is a weighted variant of the fusion algorithm based on the wavelet transform, calculated using the à trous (WAT) algorithm. The alfa factor is determined, for each band of the multispectral source images, using the simulated annealing (SA) search algorithm, which optimizes an objective function (OF) associated with both spatial and spectral quality measures of the fused images. The results obtained demonstrate that for each of the spectral bands there is an alfa value that provides fused images with the optimal trade-off between the two qualities for any decomposition level (n) of the wavelet transform.
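    The weighted WAT injection step can be sketched as below. This is a minimal illustration, assuming the standard B3-spline à trous scheme with edge padding; the function names are hypothetical, and the simulated-annealing search for alfa is not shown.

    ```python
    import numpy as np

    def sep_smooth(img, step):
        """One à trous smoothing pass: the B3-spline kernel [1,4,6,4,1]/16,
        dilated by `step` and applied separably along rows and columns."""
        taps = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
        offsets = np.array([-2, -1, 0, 1, 2]) * step
        out = img.astype(float)
        for axis in (0, 1):
            pad = 2 * step
            pad_width = [(pad, pad) if a == axis else (0, 0) for a in (0, 1)]
            padded = np.pad(out, pad_width, mode="edge")
            idx = np.arange(out.shape[axis]) + pad
            acc = np.zeros_like(out)
            for t, o in zip(taps, offsets):
                acc += t * np.take(padded, idx + o, axis=axis)
            out = acc
        return out

    def wat_fuse(ms_band, pan, alfa, levels=2):
        """Weighted WAT fusion: inject alfa-scaled Pan detail planes
        (differences between successive à trous smoothings) into one
        co-registered, resampled MS band."""
        smooth = pan.astype(float)
        detail = np.zeros_like(smooth)
        for j in range(levels):
            nxt = sep_smooth(smooth, 2 ** j)
            detail += smooth - nxt  # wavelet plane at scale j
            smooth = nxt
        return ms_band.astype(float) + alfa * detail
    ```

    With `alfa = 1` this reduces to plain additive WAT fusion; the paper's contribution is choosing a per-band alfa by simulated annealing over a spatial-plus-spectral objective function.
    
    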