
    A Novel Metric Approach Evaluation For The Spatial Enhancement Of Pan-Sharpened Images

    Various methods can be used to produce high-resolution multispectral images from a high-resolution panchromatic (PAN) image and low-resolution multispectral (MS) images, mostly at the pixel level. The quality of image fusion is an essential determinant of the value of fused images for many applications. Spatial and spectral quality are the two main indexes used to evaluate the quality of any fused image. However, the jury is still out on the benefits of a fused image compared with its original images. In addition, there is a lack of measures for objectively assessing the spatial quality of fusion methods, so an objective assessment of the spatial resolution of fused images is required. This paper therefore proposes a new approach to estimate the spatial resolution improvement, the High Pass Division Index (HPDI), based on calculating the spatial frequency of the edge regions of the image. It also compares various analytical techniques for evaluating spatial quality and estimating the colour distortion added by image fusion, including MG, SG, FCC, SD, En, SNR, CC and NRMSE, and compares various image fusion techniques based on pixel- and feature-level fusion.
    Comment: arXiv admin note: substantial text overlap with arXiv:1110.497
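    Two of the quality indexes listed above, the correlation coefficient (CC) and the signal-to-noise ratio (SNR), can be sketched as follows. These are standard textbook definitions against a reference band; the abstract does not give the exact formulas used in the paper, so treat this as an illustrative sketch only.

```python
import numpy as np

def correlation_coefficient(reference, fused):
    """Pearson correlation (CC) between a reference band and a fused band."""
    r = reference.astype(float).ravel() - reference.mean()
    f = fused.astype(float).ravel() - fused.mean()
    return float((r * f).sum() / np.sqrt((r * r).sum() * (f * f).sum()))

def snr_db(reference, fused):
    """Signal-to-noise ratio in dB, treating (fused - reference) as noise."""
    signal = np.sum(reference.astype(float) ** 2)
    noise = np.sum((fused.astype(float) - reference.astype(float)) ** 2)
    return float(10.0 * np.log10(signal / noise))
```

    A CC close to 1 and a high SNR indicate that the fused band tracks the reference band closely.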

    Edge Preservation in Ikonos Multispectral and Panchromatic Imagery Pan-sharpening

    In Ikonos imagery, both multispectral (MS) and panchromatic (PAN) images are provided, with different spatial and spectral resolutions. Multispectral classification detects object classes only according to the spectral property of the pixel. Panchromatic image segmentation enables the extraction of detailed objects, like road networks, that are useful in map updating in Geographical Information Systems (GIS), environmental inspection, transportation and urban planning, etc. Therefore, the fusion of a PAN image with MS images is a key issue in applications that require both high spatial and high spectral resolutions. The fused image provides higher classification accuracy. To extract, for example, urban road networks in pan-sharpened images, edge information from the PAN image is used to eliminate the misclassified objects. If the PAN image is not available, an edge map is extracted from the pan-sharpened images, and the quality of this map therefore depends on the fusion process of the PAN and MS images. In a pan-sharpening process, the MS images are resampled to the same pixel size as the PAN image before fusing, and this upsampling impacts subsequent processing. In this work, we demonstrate that the interpolation method used to resample the MS images is very important in preserving the edges in the pan-sharpened images.
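    The resampling step described above can be illustrated with two common interpolation choices, nearest-neighbour and bilinear, plus a simple gradient-based edge-strength measure. The specific functions and the edge proxy are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def upsample_nearest(img, factor):
    """Nearest-neighbour upsampling: each MS pixel becomes a factor x factor block."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def upsample_bilinear(img, factor):
    """Separable bilinear upsampling to factor x the original size."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    rows = np.array([np.interp(xs, np.arange(w), row) for row in img.astype(float)])
    return np.array([np.interp(ys, np.arange(h), col) for col in rows.T]).T

def gradient_energy(img):
    """Edge-strength proxy: mean magnitude of finite-difference gradients."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))
```

    Comparing `gradient_energy` of the differently resampled MS bands against the PAN edge map gives a rough sense of how much edge structure each interpolation preserves.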

    Object-Based Greenhouse Mapping Using Very High Resolution Satellite Data and Landsat 8 Time Series

    Greenhouse mapping through remote sensing has received extensive attention over the last decades. In this article, the goal is to map greenhouses through the combined use of very high resolution satellite data (WorldView-2) and Landsat 8 Operational Land Imager (OLI) time series within the context of object-based image analysis (OBIA) and decision tree classification. Thus, WorldView-2 was mainly used to segment the study area, focusing on individual greenhouses. Basic spectral information, spectral and vegetation indices, textural features, seasonal statistics and a spectral metric (Moment Distance Index, MDI) derived from the Landsat 8 time series and/or WorldView-2 imagery were computed on the previously segmented image objects. To test its temporal stability, the same approach was applied to two different years, 2014 and 2015. In both years, MDI stood out as the most important feature for detecting greenhouses. Moreover, the threshold value of this spectral metric turned out to be extremely stable for both Landsat 8 and WorldView-2 imagery. A simple decision tree, always using the same threshold values for features from the Landsat 8 time series and WorldView-2, was finally proposed. Overall accuracies of 93.0% and 93.3% and kappa coefficients of 0.856 and 0.861 were attained for the 2014 and 2015 datasets, respectively.
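    The core object-based decision step, aggregating a per-pixel spectral metric over segmented objects and applying a fixed threshold, can be sketched as below. The metric values and the threshold here are purely illustrative; the article's actual MDI computation and tree structure are more elaborate.

```python
import numpy as np

def classify_objects(metric_map, labels, threshold):
    """For each segmented object (an integer id in `labels`), mark it True
    (e.g. greenhouse) if the mean of the spectral metric over its pixels
    exceeds the threshold."""
    return {int(obj): bool(metric_map[labels == obj].mean() > threshold)
            for obj in np.unique(labels)}
```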

    Remote sensing image fusion via compressive sensing

    In this paper, we propose a compressive sensing-based method to pan-sharpen low-resolution multispectral (LRM) data with the help of high-resolution panchromatic (HRP) data. To successfully apply compressive sensing theory to pan-sharpening, two requirements must be satisfied: (i) a comprehensive dictionary in which the estimated coefficient vectors are sparse; and (ii) no correlation between the constructed dictionary and the measurement matrix. To fulfil these, we propose two novel strategies. The first is to construct a dictionary trained with patches across different image scales. Patches at different scales, i.e. multiscale patches, provide texture atoms without requiring any external database or prior atoms. The redundancy of the dictionary is removed through K-singular value decomposition (K-SVD). Second, we design an iterative l1-l2 minimization algorithm based on the alternating direction method of multipliers (ADMM) to seek the sparse coefficient vectors. The proposed algorithm stacks the missing high-resolution multispectral (HRM) data with the captured LRM data, so that the latter serves as a constraint on the estimation of the former while the representation coefficients are sought. Three datasets are used to test the performance of the proposed method. A comparative study between the proposed method and several state-of-the-art ones shows its effectiveness in dealing with the complex structures of remote sensing imagery.
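    The generic ADMM step for an l1-regularized sparse-coding problem of the kind referred to above can be sketched as follows. This solves the textbook lasso problem min_x 0.5||Dx - y||^2 + lam*||x||_1; the paper's stacked HRM/LRM formulation and its specific l1-l2 objective are more involved, so this is only a minimal sketch of the underlying machinery.

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise soft-thresholding, the proximal operator of k*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(D, y, lam=0.1, rho=1.0, n_iter=200):
    """ADMM iterations for min_x 0.5||Dx - y||^2 + lam*||x||_1."""
    n = D.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = np.linalg.inv(D.T @ D + rho * np.eye(n))  # cached x-update system
    Dty = D.T @ y
    for _ in range(n_iter):
        x = M @ (Dty + rho * (z - u))   # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)  # sparsity-inducing step
        u = u + x - z                   # dual (scaled multiplier) update
    return z
```

    For a well-conditioned dictionary the iterates converge to the usual sparse solution; in the pan-sharpening setting `D` would be the trained multiscale-patch dictionary.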

    Toward reduction of artifacts in fused images

    Most satellite image fusion methodologies at the pixel level introduce false spatial details, i.e. artifacts, in the resulting fused images. In many cases, these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the image increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (the panchromatic image and each band of the multispectral image) with the box-counting algorithm and a windowing process. The average of the source-image FDMs, previously indexed between 0 and 1, is used to discriminate the different land covers present in the satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT), using the à trous algorithm (WAT). Two scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach with the WAT method allows the fusion process to adapt to the roughness and shape of the regions present in the image to be fused. This improves the quality of the fused images and their classification results when compared with the original WAT method.
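    The box-counting estimate of the fractal dimension mentioned above works by counting, for several box sizes s, how many s x s boxes contain foreground pixels, then fitting a line to log N(s) versus log(1/s). A minimal sketch for a square binary mask (the paper applies this in a sliding window to grey-level data, which this sketch does not reproduce):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Box-counting fractal dimension of a square binary mask:
    slope of log N(s) vs log(1/s), where N(s) is the number of
    occupied s x s boxes."""
    counts = []
    n = mask.shape[0]  # assumes a square mask
    for s in sizes:
        # tile the (cropped) mask into s x s boxes and count occupied ones
        boxes = mask[:n - n % s, :n - n % s].reshape(n // s, s, -1, s).any(axis=(1, 3))
        counts.append(max(int(boxes.sum()), 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

    A filled region yields a dimension near 2, a thin line near 1; intermediate values capture the roughness that the FDM-based paradigm uses to separate land covers.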

    THEOS Geometric Image Quality Testing - Initial Findings

    This report summarizes the initial outcome of the geometric quality testing of the panchromatic, pan-sharpened and multispectral THEOS images (levels 1A and 2A) acquired over the JRC Maussane Test Site for the Common Agriculture Policy (CAP) Control with Remote Sensing (CwRS) Programme. Based on the limited K2, THEOS and DMCII sample images that were made available to us, the THEOS PAN orthoimage can reach 2 m accuracy provided that a dedicated rigorous model based on at least 9 well-defined, well-distributed ground control points (GCPs) of high accuracy (i.e. RMSEx,y < 0.90 m) is applied, while the orthorectified THEOS 1B MS product accuracy can reach 6.8 m, provided that a dedicated rigorous model based on at least 9 well-distributed GCPs of appropriate accuracy is applied.
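    The planimetric accuracy figure quoted above (RMSEx,y) is computed from checkpoint residuals. One common definition, the root-mean-square of the 2-D positional errors, can be sketched as below; the report may use a variant (e.g. combining RMSEx and RMSEy separately), so this is an illustrative formula only.

```python
import numpy as np

def gcp_rmse_xy(predicted, surveyed):
    """Planimetric RMSE over checkpoints: sqrt of the mean squared
    2-D error between orthoimage-derived and surveyed coordinates."""
    d = np.asarray(predicted, float) - np.asarray(surveyed, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))
```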

    Target-adaptive CNN-based pansharpening

    We recently proposed a convolutional neural network (CNN) for remote sensing image pansharpening, obtaining a significant performance gain over the state of the art. In this paper, we explore a number of architectural and training variations to this baseline, achieving further performance gains with a lightweight network that trains very fast. Leveraging this property, we propose a target-adaptive usage modality which ensures very good performance even in the presence of a mismatch w.r.t. the training set, and even across different sensors. The proposed method, published online as an off-the-shelf software tool, allows users to perform fast, high-quality CNN-based pansharpening of their own target images on general-purpose hardware.