386 research outputs found

    An image fusion algorithm for spatially enhancing spectral mixture maps

    An image fusion algorithm, based upon spectral mixture analysis, is presented. The algorithm combines low spatial resolution multi/hyperspectral data with high spatial resolution sharpening image(s) to create high resolution material maps. Spectral unmixing estimates the fraction of each material (called an endmember) within each low resolution pixel. The outputs of unmixing are endmember fraction images (material maps) at the spatial resolution of the multispectral system. This research includes developing an improved unmixing algorithm based upon stepwise regression. In the second stage of the process, the unmixing solution is sharpened with data from another sensor to generate high resolution material maps. Sharpening is implemented as a nonlinear optimization using the same type of model as unmixing. Quantifiable results are obtained through the use of synthetically generated imagery; without synthetic images, a large amount of ground truth would be required to measure the accuracy of the material maps. Multiple band sharpening is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. The analysis includes an examination of the effects of constraints and texture variation on the material maps. The results show that stepwise unmixing is an improvement over traditional unmixing algorithms, and that sharpening further improves the material maps. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors: although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
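    The linear mixing model at the core of this approach can be sketched in a few lines. The sketch below solves the per-pixel fraction estimate by ordinary least squares; the abstract's stepwise-regression refinement and any sum-to-one or nonnegativity constraints are not reproduced, and the endmember spectra are invented for illustration.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral mixture analysis for one low-resolution pixel.
    endmembers: (bands, n_endmembers) matrix of pure-material spectra.
    Returns unconstrained least-squares fractions (the stepwise regression
    and fraction constraints discussed in the abstract are not shown)."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return fractions

# Illustrative endmember matrix: 3 bands, columns = "grass", "soil".
E = np.array([[0.1, 0.4],
              [0.5, 0.3],
              [0.8, 0.2]])
# A noise-free pixel that is 70% grass and 30% soil:
p = 0.7 * E[:, 0] + 0.3 * E[:, 1]
```

    Applied to every pixel, the recovered fractions form the endmember fraction images (material maps) described above.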

    Multispectral and Hyperspectral Image Fusion by MS/HS Fusion Net

    Hyperspectral imaging can help better understand the characteristics of different materials compared with traditional imaging systems. In practice, however, only high-resolution multispectral (HrMS) and low-resolution hyperspectral (LrHS) images can generally be captured at video rate. In this paper, we propose a model-based deep learning approach that merges an HrMS image and an LrHS image to generate a high-resolution hyperspectral (HrHS) image. Specifically, we construct a novel MS/HS fusion model that takes into consideration the observation models of the low-resolution images and the low-rankness of the HrHS image along the spectral mode. We then design an iterative algorithm to solve the model by exploiting the proximal gradient method and, by unfolding the designed algorithm, construct a deep network, called MS/HS Fusion Net, in which the proximal operators and model parameters are learned by convolutional neural networks. Experimental results on simulated and real data substantiate the superiority of our method, both visually and quantitatively, as compared with state-of-the-art methods along this line of research.
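    The classical iteration that this kind of unfolded network is built on can be illustrated on a generic sparse regularized least-squares problem. This is only a sketch of the proximal gradient (ISTA) template, not the paper's fusion model: the actual data term couples the HrMS and LrHS observation models, and in the network the hand-written proximal operator below is replaced by a learned CNN.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (replaced by a learned
    CNN in the unfolded network)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam=0.1, step=None, iters=200):
    """Proximal-gradient iteration for min ||Ax - b||^2 / 2 + lam * ||x||_1.
    Each loop body corresponds to one stage of the unfolded deep network."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L from the data term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                # gradient of the data term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x
```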

    Enhancing spatial resolution of remotely sensed data for mapping freshwater environments

    Freshwater environments are important for ecosystem services and biodiversity. These environments are subject to many natural and anthropogenic changes, which influence their quality; therefore, regular monitoring is required for their effective management. High biotic heterogeneity, elongated land/water interaction zones, and logistic difficulties with access make field-based monitoring on a large scale expensive, inconsistent and often impractical. Remote sensing (RS) is an established mapping tool that overcomes these barriers. However, complex and heterogeneous vegetation and spectral variability due to water make freshwater environments challenging to map using remote sensing technology. Satellite images available for New Zealand were reviewed in terms of cost, and spectral and spatial resolution. Particularly promising image data sets for freshwater mapping include QuickBird and SPOT-5. However, for mapping freshwater environments a combination of images is required to obtain high spatial, spectral, radiometric, and temporal resolution. Data fusion (DF) is a framework of data processing tools and algorithms that combines images to improve spectral and spatial qualities. A range of DF techniques were reviewed and tested for performance using panchromatic and multispectral QuickBird (QB) images of a semi-aquatic environment on the southern shores of Lake Taupo, New Zealand. In order to discuss the mechanics of different DF techniques, a classification consisting of three groups was used: (i) spatially-centric, (ii) spectrally-centric, and (iii) hybrid. Subtract resolution merge (SRM) is a hybrid technique, and this research demonstrated that for a semi-aquatic QB image it outperformed Brovey transformation (BT), principal component substitution (PCS), local mean and variance matching (LMVM), and optimised high pass filter addition (OHPFA).
However, some limitations were identified with SRM, which included the requirement for predetermined band weights, and the over-representation of spatial edges in the NIR bands due to their high spectral variance. This research developed three modifications to the SRM technique that addressed these limitations. These were tested on QuickBird (QB), SPOT-5, and Vexcel aerial digital images, as well as a scanned coloured aerial photograph. A visual qualitative assessment and a range of spectral and spatial quantitative metrics were used to evaluate these modifications, including spectral correlation and root mean squared error (RMSE), Sobel filter based spatial edge RMSE, and unsupervised classification. The first modification addressed the issue of predetermined spectral weights and explored two alternative regression methods (Least Absolute Deviation (LAD) and Ordinary Least Squares (OLS)) to derive image-specific band weights for use in SRM. Both methods were found equally effective; however, OLS was preferred as it was more efficient than LAD in processing band weights. The second modification used a pixel block averaging function on high resolution panchromatic images to derive spatial edges for data fusion. This eliminated the need for spectral band weights, minimised spectral infidelity, and enabled the fusion of multi-platform data. The third modification addressed the issue of over-represented spatial edges by introducing a sophisticated contrast and luminance index to develop a new normalising function. This improved the spatial representation of the NIR band, which is particularly important for mapping vegetation. A combination of the second and third modifications of SRM was effective in simultaneously minimising the overall spectral infidelity and undesired spatial errors for the NIR band of the fused image.
This new method has been labelled Contrast and Luminance Normalised (CLN) data fusion, and has been demonstrated to make a significant contribution in fusing multi-platform, multi-sensor, multi-resolution, and multi-temporal data. This contributes to improvements in the classification and monitoring of freshwater environments using remote sensing.
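    The pixel-block-averaging idea behind the second SRM modification can be sketched as follows, under my reading of the abstract: the spatial detail is the panchromatic image minus its own block-averaged (then replicated) version, and that detail is added to each upsampled multispectral band with no band weights. The function names and the nearest-neighbour upsampling are illustrative choices, not taken from the thesis.

```python
import numpy as np

def block_mean(img, f):
    """Average f x f blocks of a high-resolution image
    (dimensions assumed divisible by f)."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def srm_block_average(ms_band_low, pan, f):
    """Sketch of weight-free SRM fusion for a single MS band:
    detail = pan minus its block-averaged self, added to the
    (nearest-neighbour) upsampled low-resolution MS band."""
    smooth = np.kron(block_mean(pan, f), np.ones((f, f)))  # replicate to full res
    ms_up = np.kron(ms_band_low, np.ones((f, f)))
    return ms_up + (pan - smooth)
```

    Because the injected detail block-averages to zero, the fused band reproduces the low-resolution band exactly when aggregated back to the coarse grid, which is the sense in which this variant minimises spectral infidelity.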

    Evaluation of two applications of spectral mixing models to image fusion

    No abstract provided.

    Advantages of nonlinear intensity components for contrast-based multispectral pansharpening

    In this study, we investigate whether a nonlinear intensity component can be beneficial for multispectral (MS) pansharpening based on component-substitution (CS). In classical CS methods, the intensity component is a linear combination of the spectral components and lies on a hyperplane in the vector space that contains the MS pixel values. Starting from the hyperspherical color space (HCS) fusion technique, we devise a novel method in which the intensity component lies on a hyper-ellipsoidal surface instead of a hyperspherical surface. The proposed method is insensitive to the format of the data, either floating-point spectral radiance values or fixed-point packed digital numbers (DNs), thanks to the use of a multivariate linear regression between the squares of the interpolated MS bands and the squared lowpass-filtered Pan. The regression of squared MS values, instead of the Euclidean radius used by HCS, makes the intensity component lie not on a hypersphere in the vector space of the MS samples, but on a hyperellipsoid. Furthermore, before the fusion is accomplished, the interpolated MS bands are corrected for atmospheric haze, in order to build a multiplicative injection model with approximately de-hazed components. Experiments on GeoEye-1 and WorldView-3 images show consistent advantages over the baseline HCS and a performance slightly superior to those of some of the most advanced methods.
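    The regression step described above can be sketched in a few lines. The sketch assumes the MS bands are already interpolated to the Pan grid and the Pan already lowpass-filtered; the haze correction and the injection model are omitted, and the function name is illustrative.

```python
import numpy as np

def hyperellipsoidal_intensity(ms, pan_low):
    """Regress the squared lowpass Pan onto the squares of the interpolated
    MS bands, so that intensity^2 = sum_k w_k * MS_k^2. The level set of this
    intensity is a hyperellipsoid; HCS is the special case with all w_k = 1.
    ms: (bands, pixels); pan_low: (pixels,) lowpass-filtered Pan."""
    X = (ms ** 2).T                                   # regressors: squared MS bands
    w, *_ = np.linalg.lstsq(X, pan_low ** 2, rcond=None)
    intensity = np.sqrt(np.maximum((w[:, None] * ms ** 2).sum(axis=0), 0.0))
    return w, intensity
```

    Because the weights are re-estimated from the data, a global gain (e.g. radiance vs. packed DN formats) is absorbed into the regression, which is why the method is insensitive to the data format.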

    Assessing Satellite Image Data Fusion with Information Theory Metrics

    A common problem in remote sensing is estimating an image with high spatial and high spectral resolution given separate sources of measurements from satellite instruments, one having each of these desirable properties. This thesis presents a survey of seven families of algorithms which have been developed to provide this common pattern of satellite image data fusion. They are all tested on artificially degraded sets of satellite data from the Moderate Resolution Imaging Spectroradiometer (MODIS) with known ideal results, and evaluated using the commonly accepted data fusion assessment metrics spectral angle mapper (SAM) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS). It is also established that the information theory metric mutual information can predict the performance of certain data fusion algorithms (pan-sharpening, principal component analysis (PCA) based, and high-pass filter (HPF) based) but not others.
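    SAM and ERGAS are standard enough to state concretely. A minimal sketch of both metrics, assuming the reference and estimate are reshaped to (pixels, bands) arrays:

```python
import numpy as np

def sam(ref, est):
    """Spectral Angle Mapper: mean angle (degrees) between per-pixel spectra.
    ref, est: arrays of shape (pixels, bands)."""
    dot = np.sum(ref * est, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(est, axis=1)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.degrees(np.mean(np.arccos(cos)))

def ergas(ref, est, ratio):
    """ERGAS: 100 * (h/l) * sqrt(mean over bands of (RMSE_b / mean_b)^2),
    where ratio = h/l is high-res pixel size over low-res pixel size."""
    rmse = np.sqrt(np.mean((ref - est) ** 2, axis=0))  # per-band RMSE
    means = np.mean(ref, axis=0)                       # per-band reference mean
    return 100.0 * ratio * np.sqrt(np.mean((rmse / means) ** 2))
```

    Note that SAM is invariant to per-pixel scaling of the spectra (it measures shape, not magnitude), while ERGAS penalises radiometric distortion band by band; the two are complementary, which is why they are usually reported together.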

    Water bodies' mapping from Sentinel-2 imagery with Modified Normalized Difference Water Index at 10-m spatial resolution produced by sharpening the SWIR band

    Monitoring open water bodies accurately is an important and basic application in remote sensing. Various water body mapping approaches have been developed to extract water bodies from multispectral images. The method based on the spectral water index, especially the Modified Normalized Difference Water Index (MNDWI) calculated from the green and Shortwave-Infrared (SWIR) bands, is one of the most popular methods. The recently launched Sentinel-2 satellite can provide fine spatial resolution multispectral images. This new dataset is of potential significance for regional water body mapping, due to its free access and frequent revisit capability. It is noted that the green and SWIR bands of Sentinel-2 have different spatial resolutions of 10 m and 20 m, respectively. Straightforwardly, MNDWI can be produced from Sentinel-2 at the spatial resolution of 20 m, by resampling the 10-m green band to 20 m. This scheme, however, wastes the detailed information available at the 10-m resolution. In this paper, to take full advantage of the 10-m information provided by Sentinel-2 images, a novel 10-m spatial resolution MNDWI is produced from Sentinel-2 images by downscaling the 20-m resolution SWIR band to 10 m based on pan-sharpening. Four popular pan-sharpening algorithms, including Principal Component Analysis (PCA), Intensity Hue Saturation (IHS), High Pass Filter (HPF) and à Trous Wavelet Transform (ATWT), were applied in this study. The performance of the proposed method was assessed experimentally using a Sentinel-2 image located at the Venice coastland. In the experiment, six water indices, including the 10-m NDWI, the 20-m MNDWI, and the 10-m MNDWIs produced by the four pan-sharpening algorithms, were compared. Three levels of results, including the sharpened images, the produced MNDWI images and the finally mapped water bodies, were analysed quantitatively.
The results showed that MNDWI can enhance water bodies and suppress built-up features more efficiently than NDWI. Moreover, the 10-m MNDWIs produced by all four pan-sharpening algorithms can represent more detailed spatial information of water bodies than the 20-m MNDWI produced from the original image. Thus, MNDWIs at the 10-m resolution can extract more accurate water body maps than the 10-m NDWI and the 20-m MNDWI. In addition, although HPF can produce more accurate sharpened images and MNDWI images than the other three benchmark pan-sharpening algorithms, the ATWT algorithm leads to the best 10-m water body mapping results. There is, however, no necessary positive connection between the accuracy of the sharpened MNDWI image and the map-level accuracy of the resultant water body maps.
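    The index itself is a one-line computation once the green band and a sharpened SWIR band share the same 10-m grid. A minimal sketch, with a simple thresholded water map (the threshold value is an illustrative default, not the paper's calibrated choice):

```python
import numpy as np

def mndwi(green, swir, eps=1e-12):
    """Modified Normalized Difference Water Index:
    (Green - SWIR) / (Green + SWIR). Inputs must share one grid,
    e.g. the 10-m Sentinel-2 green band and a SWIR band
    pan-sharpened from 20 m to 10 m."""
    green = green.astype(np.float64)
    swir = swir.astype(np.float64)
    return (green - swir) / np.maximum(green + swir, eps)

def water_mask(green, swir, threshold=0.0):
    """Water strongly absorbs SWIR, so water pixels push MNDWI above
    the threshold while built-up and vegetated pixels fall below it."""
    return mndwi(green, swir) > threshold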
    corecore