
    Super-resolving multiresolution images with band-independent geometry of multispectral pixels

    A new resolution enhancement method is presented for multispectral and multi-resolution images, such as those provided by the Sentinel-2 satellites. Starting from the highest-resolution bands, band-dependent information (reflectance) is separated from information that is common to all bands (the geometry of scene elements). This model is then applied to unmix the low-resolution bands, preserving their reflectance while propagating the band-independent information to preserve sub-pixel details. A reference implementation is provided, with an application example for super-resolving Sentinel-2 data. Source code with a ready-to-use script for super-resolving Sentinel-2 data is available at http://nicolas.brodu.net/recherche/superres
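    The band-dependent/band-independent separation above is specific to the paper, but the general idea — injecting high-resolution detail into an upsampled low-resolution band while preserving its per-block radiometry — can be illustrated with a much simpler detail-injection sketch. This is not the paper's actual model; all names are hypothetical.

```python
import numpy as np

def inject_details(low_band, high_band, factor):
    # Illustrative sketch only, NOT the paper's model: upsample a
    # low-resolution band by pixel replication, then add the high-frequency
    # detail of a co-registered high-resolution band (its deviation from its
    # own block mean). Sub-pixel geometry is propagated while each block of
    # the result keeps the low-resolution band's mean value.
    up = np.kron(low_band, np.ones((factor, factor)))  # replicate pixels
    h, w = high_band.shape
    means = high_band.reshape(h // factor, factor,
                              w // factor, factor).mean(axis=(1, 3))
    detail = high_band - np.kron(means, np.ones((factor, factor)))
    return up + detail
```

    Because the injected detail has zero mean within every block, averaging the fused result back down reproduces the original low-resolution band exactly — a simple radiometry-preservation check.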

    Land use/cover classification in the Brazilian Amazon using satellite images.

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping land use/cover spatial distribution accurately is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that the spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images alongside multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.
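    Two of the review's conclusions — that textural images complement multispectral bands, and that the maximum likelihood classifier remains a useful baseline — can be sketched in a few lines of NumPy. This is an illustrative sketch, not the review's experiments: the moving-window variance texture measure and the function names are assumptions.

```python
import numpy as np

def local_variance(band, size=3):
    # A simple textural image: moving-window variance. The window size is
    # an arbitrary choice; many other texture measures exist.
    pad = size // 2
    padded = np.pad(band, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    return windows.var(axis=(-2, -1))

def max_likelihood_classify(pixels, class_samples):
    # Gaussian maximum-likelihood classification: fit a multivariate normal
    # to each class's training samples and assign every pixel feature vector
    # (spectral bands plus texture) to the class with the highest
    # log-likelihood.
    scores = []
    for samples in class_samples:
        mu = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        scores.append(-0.5 * (np.einsum("...i,ij,...j", d, inv, d) + logdet))
    return np.argmax(scores, axis=0)
```

    Texture layers produced by `local_variance` would simply be stacked with the spectral bands as extra feature dimensions before classification.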

    Fusion of Multisource Images for Update of Urban GIS


    FUSION OF HYPERSPECTRAL AND PANCHROMATIC IMAGES USING SPECTRAL UNMIXING RESULTS


    Multi-Decadal Changes in Mangrove Extent, Age and Species in the Red River Estuaries of Viet Nam

    This research investigated the performance of four machine learning supervised image classifiers — artificial neural network (ANN), decision tree (DT), random forest (RF), and support vector machine (SVM) — using SPOT-7 and Sentinel-1 images to classify mangrove age and species in 2019 in a Red River estuary typical of others found in northern Viet Nam. The four classifiers were chosen because they are considered to have high accuracy; however, their use in mangrove age and species classification has thus far been limited. A time series of Landsat images from 1975 to 2019 was used to map changes in mangrove extent using the unsupervised iterative self-organizing data analysis technique (ISODATA), with its accuracy compared against K-means classification. This showed that mangrove extent has increased, despite a fall in the 1980s, indicating the success of mangrove plantation and forest protection efforts by local people in the study area. To evaluate the supervised classifiers, 183 in situ training plots were assessed: 70% were used to train the algorithms and 30% to validate the results. To improve mangrove species separation, Gram–Schmidt and principal component analysis image fusion techniques were applied to generate better quality images. All supervised and unsupervised (2019) results for mangrove age, species, and extent were mapped and their accuracy evaluated. Confusion matrices showed that the classified layers agreed with the ground-truth data, with most producer and user accuracies greater than 80%. The overall accuracies and Kappa coefficients (around 0.9) indicated that the classifications were very good. The tests showed that SVM was the most accurate, followed by DT, ANN, and RF in this case study.
    The changes in mangrove extent identified in this study, and the methods tested for using remotely sensed data, will be valuable for monitoring and evaluating mangrove plantation projects.
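    The accuracy assessment described above — confusion matrix, producer and user accuracies, overall accuracy, and the Kappa coefficient — follows standard definitions, which can be sketched as follows (function name hypothetical; this is not the authors' code):

```python
import numpy as np

def accuracy_report(reference, predicted, n_classes):
    # Confusion matrix: rows = reference (ground truth), cols = predicted.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (reference, predicted), 1)
    producer = np.diag(cm) / cm.sum(axis=1)   # 1 - omission error, per class
    user = np.diag(cm) / cm.sum(axis=0)       # 1 - commission error, per class
    n = cm.sum()
    overall = np.trace(cm) / n                # observed agreement p_o
    chance = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2  # expected agreement p_e
    kappa = (overall - chance) / (1 - chance)
    return cm, producer, user, overall, kappa
```

    Producer's accuracy reads down the reference rows (how much of each true class was found), user's accuracy reads across the predicted columns (how reliable each mapped class is), and Kappa discounts the agreement expected by chance.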

    Enhancing spatial resolution of remotely sensed data for mapping freshwater environments

    Freshwater environments are important for ecosystem services and biodiversity. These environments are subject to many natural and anthropogenic changes, which influence their quality; therefore, regular monitoring is required for their effective management. High biotic heterogeneity, elongated land/water interaction zones, and logistical difficulties of access make field-based monitoring on a large scale expensive, inconsistent, and often impractical. Remote sensing (RS) is an established mapping tool that overcomes these barriers. However, complex and heterogeneous vegetation, and the spectral variability introduced by water, make freshwater environments challenging to map using remote sensing technology. Satellite images available for New Zealand were reviewed in terms of cost and of spectral and spatial resolution. Particularly promising image data sets for freshwater mapping include QuickBird and SPOT-5. However, mapping freshwater environments requires a combination of images to obtain high spatial, spectral, radiometric, and temporal resolution. Data fusion (DF) is a framework of data processing tools and algorithms that combines images to improve their spectral and spatial qualities. A range of DF techniques were reviewed and tested for performance using panchromatic and multispectral QuickBird images of a semi-aquatic environment on the southern shores of Lake Taupo, New Zealand. To discuss the mechanics of the different DF techniques, a classification consisting of three groups was used: (i) spatially-centric, (ii) spectrally-centric, and (iii) hybrid. Subtract resolution merge (SRM) is a hybrid technique, and this research demonstrated that for a semi-aquatic QuickBird image it outperformed the Brovey transformation (BT), principal component substitution (PCS), local mean and variance matching (LMVM), and optimised high pass filter addition (OHPFA).
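    Of the fusion techniques compared, the Brovey transformation (BT) is the simplest to state: each multispectral band, resampled to the panchromatic grid, is scaled by the ratio of the panchromatic band to the sum of the multispectral bands, injecting the pan detail. A minimal sketch (function name hypothetical):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-9):
    # Brovey transform pan-sharpening.
    # ms:  (bands, H, W) multispectral stack, already resampled to the
    #      panchromatic grid; pan: (H, W) panchromatic band.
    # eps guards against division by zero over dark pixels.
    total = ms.sum(axis=0) + eps
    return ms * (pan / total)[None, :, :]
```

    By construction the fused bands sum (almost exactly) to the panchromatic band, which is also why BT is known to distort spectral values — the trade-off that motivates the spectrally gentler techniques discussed above.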
However, some limitations were identified with SRM, including the requirement for predetermined band weights and the over-representation of spatial edges in the NIR bands due to their high spectral variance. This research developed three modifications to the SRM technique that addressed these limitations. These were tested on QuickBird (QB), SPOT-5, and Vexcel aerial digital images, as well as a scanned colour aerial photograph. A visual qualitative assessment and a range of spectral and spatial quantitative metrics were used to evaluate the modifications, including spectral correlation and root mean squared error (RMSE), Sobel-filter-based spatial-edge RMSE, and unsupervised classification. The first modification addressed the issue of predetermined spectral weights by exploring two alternative regression methods, Least Absolute Deviation (LAD) and Ordinary Least Squares (OLS), to derive image-specific band weights for use in SRM. Both methods were found equally effective; however, OLS was preferred as it was more efficient than LAD in computing band weights. The second modification used a pixel block averaging function on high-resolution panchromatic images to derive spatial edges for data fusion. This eliminated the need for spectral band weights, minimised spectral infidelity, and enabled the fusion of multi-platform data. The third modification addressed the issue of over-represented spatial edges by introducing a contrast and luminance index to develop a new normalising function. This improved the spatial representation of the NIR band, which is particularly important for mapping vegetation. A combination of the second and third modifications was effective in simultaneously minimising overall spectral infidelity and undesired spatial errors in the NIR band of the fused image.
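    The pixel block averaging step in the second modification — reducing the high-resolution panchromatic band to the multispectral grid so spatial edges can be derived without band weights — can be sketched as follows (the exact averaging used is an assumption; function name hypothetical):

```python
import numpy as np

def block_average(pan, factor):
    # Average `factor` x `factor` pixel blocks of the high-resolution
    # panchromatic band, producing an image on the coarser multispectral
    # grid. Any trailing rows/columns that do not fill a block are dropped.
    h, w = pan.shape
    h2, w2 = (h // factor) * factor, (w // factor) * factor
    blocks = pan[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))
```

    The difference between the original panchromatic band and this block-averaged version (re-expanded to the fine grid) isolates the high-frequency spatial detail that SRM-style fusion injects.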
This new method has been labelled Contrast and Luminance Normalised (CLN) data fusion, and has been demonstrated to make a significant contribution to fusing multi-platform, multi-sensor, multi-resolution, and multi-temporal data. This contributes to improvements in the classification and monitoring of freshwater environments using remote sensing.