10,956 research outputs found

    Special Section Guest Editorial: Airborne Hyperspectral Remote Sensing of Urban Environments

    Get PDF
    University of Pavia, Department of Electrical, Computer and Biomedical Engineering, Italy

    Remote sensing is a very useful tool in retrieving urban information in a timely, detailed, and cost-effective manner to assist various planning and management activities. Hyperspectral remote sensing has been of great interest to the scientific community since its emergence in the 1980s, due to its very high spectral resolution providing the potential of finer material detection, classification, identification, and quantification, compared to traditional multispectral remote sensing. With the advance of computing facilities and more airborne high-spatial-resolution hyperspectral image data becoming available, many investigations on its real applications are taking place. In particular, urban environments are characterized by heterogeneous surface covers with significant spatial and spectral variations, and airborne hyperspectral imagery with high spatial and spectral resolutions offers an effective tool to analyze complex urban scenes. The objective of this special section of the Journal of Applied Remote Sensing is to provide a snapshot of the status, potential, and challenges of high-spatial-resolution hyperspectral imagery in urban feature extraction and land use interpretation in support of urban monitoring and management decisions. This section includes twelve papers that cover four major topics: urban land use and land cover classification, impervious surface mapping, built-up land analysis, and urban surface water mapping.

    There are nine papers about urban land use and land cover classification. "Hyperspectral image classification with improved local-region filters" by Ran et al. proposes two local-region filters, i.e., a spatial adaptive weighted filter and a collaborative-representation-based filter, for spatial feature extraction, thereby improving classification of urban hyperspectral imagery. "Edge-constrained Markov random field classification by integrating hyperspectral image with LiDAR data over urban areas" by Ni et al. adopts an edge-constrained Markov random field method for accurate land cover classification over urban areas with hyperspectral image and LiDAR data. "Combining data mining algorithm and object-based image analysis for detailed urban mapping of hyperspectral images" by Hamedianfar et al. explores the combined performance of a data mining algorithm and object-based image analysis, which can produce high accuracy of urban surface mapping. "Dynamic classifier selection using spectral-spatial information for hyperspectral image classification" by Su et al. proposes the integration of spectral features with volumetric textural features to improve the classification performance for urban hyperspectral images. "Representation-based classifications with Markov random field model for hyperspectral urban data" by Xiong et al. improves representation-based classification by considering spatial-contextual information derived from a Markov random field. "Classification of hyperspectral urban data using adaptive simultaneous orthogonal matching pursuit" by Zou et al. improves the classification performance of a joint sparsity model, i.e., simultaneous orthogonal matching pursuit, by using an a priori segmentation map.

    Other techniques, such as linear unmixing and dimensionality reduction, are also investigated in conjunction with urban surface mapping. Among the nine papers on classification, two consider linear unmixing: "Unsupervised classification strategy utilizing an endmember extraction technique for airborne hyperspectral remotely sensed imagery" by Xu et al., and "Endmember number estimation for hyperspectral imagery based on vertex component analysis" by Liu et al. One paper studies the impact of dimensionality reduction (through band selection) on classification accuracy: "Ant colony optimization-based supervised and unsupervised band selections for hyperspectral urban data classification" by Gao et al.
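    None of the papers above is reproduced here, but as a rough illustration of the kind of local-region spatial filtering used for spectral-spatial feature extraction, the sketch below applies a plain window-mean filter band by band to a synthetic hyperspectral cube. The function name, window size, and data are illustrative assumptions only, not any author's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_region_features(cube, window=5):
    """Illustrative local-region filter: replace each pixel's spectrum with the
    mean spectrum of its spatial window, band by band (cube: rows x cols x bands)."""
    smoothed = np.empty(cube.shape, dtype=float)
    for b in range(cube.shape[-1]):
        smoothed[..., b] = uniform_filter(cube[..., b].astype(float), size=window)
    return smoothed

# Toy usage: a synthetic 50-band "urban" scene; the filtered cube keeps the same
# shape but is spatially regularized before pixel-wise classification.
cube = np.random.rand(64, 64, 50)
features = local_region_features(cube, window=5)
```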

    Fusion of Heterogeneous Earth Observation Data for the Classification of Local Climate Zones

    Get PDF
    This paper proposes a novel framework for fusing multi-temporal, multispectral satellite images and OpenStreetMap (OSM) data for the classification of local climate zones (LCZs). Feature stacking is the most commonly used method of data fusion, but its main drawback is that it does not consider the heterogeneity of multimodal optical images and OSM data. The proposed framework processes the two data sources separately and then combines them at the model level through two fusion models (a landuse fusion model and a building fusion model), which fuse optical images with the landuse and buildings layers of OSM data, respectively. In addition, a new approach to detecting the incompleteness of OSM building data is proposed. The proposed framework was trained and tested using data from the 2017 IEEE GRSS Data Fusion Contest, and further validated on an additional test set containing samples manually labeled in Munich and New York. Experimental results indicate that, compared to the feature-stacking baseline framework, the proposed framework is effective in fusing optical images with OSM data for the classification of LCZs, with high generalization capability at large scale. The classification accuracy of the proposed framework outperforms the baseline framework by more than 6% and 2% when tested on the 2017 IEEE GRSS Data Fusion Contest test set and the additional test set, respectively. In addition, the proposed framework is less sensitive to spectral diversity of optical satellite images and thus achieves more stable classification performance than state-of-the-art frameworks. Comment: accepted by TGR
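    The paper's landuse and building fusion models are not described above in enough detail to reproduce; the following minimal sketch, using synthetic stand-in features and off-the-shelf random forests, only illustrates the general difference between feature stacking and model-level (decision-level) fusion of two modalities.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy stand-ins: spectral features from imagery and landuse/building features from OSM.
X_img = rng.normal(size=(500, 10))
X_osm = rng.normal(size=(500, 4))
y = rng.integers(0, 17, size=500)          # 17 LCZ classes (toy labels)

# Feature stacking: one model on the concatenated features (the baseline criticized above).
stacked = RandomForestClassifier(n_estimators=100, random_state=0)
stacked.fit(np.hstack([X_img, X_osm]), y)

# Model-level fusion: one model per modality, decisions combined afterwards.
m_img = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_img, y)
m_osm = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_osm, y)
# Both models were fit on the same labels, so their class orderings match.
fused_proba = 0.5 * m_img.predict_proba(X_img) + 0.5 * m_osm.predict_proba(X_osm)
fused_label = m_img.classes_[fused_proba.argmax(axis=1)]
```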

    Downscaling Landsat land surface temperature over the urban area of Florence

    Get PDF
    A new downscaling algorithm for land surface temperature (LST) images retrieved from Landsat Thematic Mapper (TM) was developed over the city of Florence, and the results were assessed against a high-resolution aerial image. The Landsat TM thermal band has a spatial resolution of 120 m, resampled to 30 m by the US Geological Survey (USGS), whilst the airborne ground spatial resolution was 1 m. Substantial differences between Landsat USGS and airborne thermal data were observed on a 30 m grid; therefore a new statistical downscaling method at 30 m was developed. The overall root mean square error with respect to aircraft data improved from 3.3 °C (USGS) to 3.0 °C with the new method, which also showed better results than other regression-based downscaling techniques frequently used in the literature. Such improvements can be ascribed to the selection of independent variables capable of representing the heterogeneous urban landscape.
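    The paper's specific set of independent variables is not listed above; the sketch below shows generic regression-based LST downscaling with residual correction, assuming hypothetical predictor rasters available at both the coarse (120 m) and fine (30 m) resolutions, and is not the authors' algorithm.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def downscale_lst(lst_coarse, preds_coarse, preds_fine, scale):
    """Hedged sketch of statistical LST downscaling (not the paper's exact method).
    lst_coarse: (H, W) coarse LST; preds_coarse: (H, W, P) predictors at coarse
    resolution; preds_fine: (H*scale, W*scale, P) the same predictors at fine resolution."""
    H, W = lst_coarse.shape
    model = LinearRegression().fit(preds_coarse.reshape(H * W, -1), lst_coarse.ravel())
    # Coarse-scale residuals keep the downscaled field consistent with the input LST.
    resid = lst_coarse - model.predict(preds_coarse.reshape(H * W, -1)).reshape(H, W)
    resid_fine = np.kron(resid, np.ones((scale, scale)))     # nearest-neighbour upsampling
    lst_fine = model.predict(preds_fine.reshape(-1, preds_fine.shape[-1]))
    return lst_fine.reshape(H * scale, W * scale) + resid_fine

# Toy usage: 120 m LST downscaled by a factor of 4 (i.e., to 30 m) with two synthetic predictors.
lst120 = 290 + 5 * np.random.rand(30, 30)
p120 = np.random.rand(30, 30, 2)
p30 = np.random.rand(120, 120, 2)
lst30 = downscale_lst(lst120, p120, p30, scale=4)
```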

    Using high resolution optical imagery to detect earthquake-induced liquefaction: the 2011 Christchurch earthquake

    Get PDF
    Using automated supervised methods with satellite and aerial imagery for liquefaction mapping is a promising step toward providing detailed, region-scale maps of liquefaction extent immediately after an earthquake. The accuracy of these methods depends on the quantity and quality of training samples and on the number of available spectral bands. Digitizing a large number of high-quality training samples from an event may not be feasible in the timeframe required for rapid response, as the training pixels for each class should be typical of, and accurately represent, the spectral diversity of that class. To perform automated classification for liquefaction detection, we need to understand how to build an optimal and accurate training dataset. Using multispectral optical imagery from the 22 February 2011 Christchurch earthquake, we investigate the effects of the quantity of high-quality training pixels and of the number of spectral bands on the performance of a pixel-based parametric supervised maximum likelihood classifier for liquefaction detection. We find that liquefaction surface effects are bimodal in terms of spectral signature and should therefore be classified as either wet liquefaction or dry liquefaction, owing to the difference in water content between the two modes. Using 5-fold cross-validation, we evaluate the performance of the classifier on training datasets of 50, 100, 500, 2000, and 4000 pixels. We also investigate the effect of adding spectral information, first by adding only the near-infrared (NIR) band to the visible red, green, and blue (RGB) bands, and then by using all eight spectral bands of the WorldView-2 satellite imagery. We find that the classifier achieves high accuracies (75%–95%) with the 2000-pixel dataset and the RGB+NIR bands, so increasing to a 4000-pixel dataset and/or eight spectral bands may not be worth the required time and cost. We also compare classifier accuracies when using aerial imagery with the same number of training pixels and either RGB or RGB+NIR bands, and find that accuracies are higher with satellite imagery given the same number of training pixels and the same spectral information. The classifier identifies dry liquefaction with higher user accuracy than wet liquefaction across all evaluated scenarios. To improve classification performance for wet liquefaction, we also investigate adding geospatial information on building footprints: using a building footprint mask to remove buildings from the classification process increases wet liquefaction user accuracy by roughly 10%. Published version
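    As a hedged illustration of the evaluation protocol described above, the sketch below runs 5-fold cross-validation of a Gaussian maximum likelihood classifier on synthetic spectra, comparing RGB with RGB+NIR. A pixel-based Gaussian maximum likelihood classifier corresponds, up to class priors, to quadratic discriminant analysis, so scikit-learn's QuadraticDiscriminantAnalysis is used as a stand-in; the class structure and sample counts are toy values, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000                                   # one of the training-set sizes tested in the paper
# Toy spectra for three classes: 0 = background, 1 = dry liquefaction, 2 = wet liquefaction.
y = rng.integers(0, 3, size=n)
X_rgbnir = rng.normal(loc=y[:, None] * np.array([0.5, 0.3, 0.2, 0.8]),
                      scale=0.4, size=(n, 4))
X_rgb = X_rgbnir[:, :3]

# Compare band sets with 5-fold cross-validation, as in the study's protocol.
for name, X in [("RGB", X_rgb), ("RGB+NIR", X_rgbnir)]:
    scores = cross_val_score(QuadraticDiscriminantAnalysis(), X, y, cv=5)
    print(f"{name}: mean 5-fold accuracy = {scores.mean():.2f}")
```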

    Evaluation of Skylab EREP data for land resource management

    Get PDF
    There are no author-identified significant results in this report

    Mapping Chestnut Stands Using Bi-Temporal VHR Data

    Get PDF
    This study analyzes the potential of very high resolution (VHR) remote sensing images and extended morphological profiles for mapping chestnut stands on Tenerife Island (Canary Islands, Spain). Given their relevance for ecosystem services in the region (cultural and provisioning services), the public sector demands up-to-date information on chestnut stands, and a simple, straightforward approach is presented in this study. We used two VHR WorldView images (March and May 2015) to cover different phenological phases. Moreover, we included spatial information in the classification process through extended morphological profiles (EMPs). Random forest is used for the classification, and we analyzed the impact of the bi-temporal information as well as of the spatial information on the classification accuracies. The detailed accuracy assessment clearly reveals the benefit of bi-temporal VHR WorldView images and of spatial information derived by EMPs in terms of mapping accuracy. The bi-temporal classification outperforms, or at least performs as well as, the classifications based on mono-temporal data. The inclusion of spatial information by EMPs further increases the classification accuracy by 5% and reduces the quantity and allocation disagreements in the final map. Overall, the proposed classification strategy proves useful for mapping chestnut stands in a heterogeneous and complex landscape, such as the municipality of La Orotava, Tenerife.
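    The authors' exact processing chain is not reproduced here; the following sketch shows, under toy data and hypothetical labels, how extended morphological profiles can be computed on principal components of a bi-temporal band stack and fed to a random forest, which is the general pattern described above.

```python
import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def emp(band, radii=(3, 5, 7)):
    """Extended morphological profile of one band: openings and closings by
    reconstruction with structuring elements of increasing size."""
    feats = [band]
    for r in radii:
        se = disk(r)
        feats.append(reconstruction(erosion(band, se), band, method='dilation'))  # opening by rec.
        feats.append(reconstruction(dilation(band, se), band, method='erosion'))  # closing by rec.
    return np.stack(feats, axis=-1)

# Toy bi-temporal 4-band WorldView-like scene (rows, cols, bands).
img_t1 = np.random.rand(100, 100, 4)
img_t2 = np.random.rand(100, 100, 4)
stacked = np.concatenate([img_t1, img_t2], axis=-1)

# EMPs are commonly built on the first principal components of the spectral stack.
pcs = PCA(n_components=2).fit_transform(stacked.reshape(-1, 8)).reshape(100, 100, 2)
spatial = np.concatenate([emp(pcs[..., i]) for i in range(2)], axis=-1)
features = np.concatenate([stacked, spatial], axis=-1)
features = features.reshape(-1, features.shape[-1])

# Hypothetical labels (0 = other, 1 = chestnut); real labels would come from reference polygons.
labels = np.random.randint(0, 2, size=features.shape[0])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
```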

    Topology, homogeneity and scale factors for object detection: application of eCognition software for urban mapping using multispectral satellite image

    Full text link
    The research scope of this paper is to apply a spatial object-based image analysis (OBIA) method to a panchromatic multispectral image covering the study area of Brussels for urban mapping. The aim is to map different land cover types and, more specifically, built-up areas from the very high resolution (VHR) satellite image using an OBIA approach. The case study covers urban landscapes in the eastern areas of the city of Brussels, Belgium. Technically, this research was performed in the eCognition raster processing software, demonstrating excellent results of image segmentation and classification. The tools embedded in eCognition made it possible to perform image segmentation and object classification in a semi-automated regime, which is useful for city planning, spatial analysis, and urban growth analysis. The combination of the OBIA method with the technical tools of eCognition demonstrated the applicability of this method for urban mapping in densely populated areas, e.g., megacities and capital cities. The methodology included multiresolution segmentation and classification of the created objects. Comment: 6 pages, 12 figures, INSO2015, Ed. by A. Girgvliani et al., Akaki Tsereteli State University, Kutaisi (Imereti), Georgia
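    eCognition's multiresolution segmentation is proprietary and is not reproduced here; as a rough open-source analogue only, the sketch below segments a synthetic multispectral image into objects with SLIC superpixels and classifies per-object mean spectra, with hypothetical labels standing in for training polygons.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

# Toy 4-band image standing in for a VHR multispectral scene.
img = np.random.rand(200, 200, 4)

# Rough analogue of multiresolution segmentation: superpixel segmentation, where
# n_segments and compactness loosely play the role of scale/homogeneity parameters.
segments = slic(img, n_segments=500, compactness=10, channel_axis=-1)

# Object-level features: mean band values per segment.
ids = np.unique(segments)
obj_feats = np.array([img[segments == i].mean(axis=0) for i in ids])

# Hypothetical object labels (e.g., built-up vs. not) that would come from training areas.
obj_labels = np.random.randint(0, 2, size=len(ids))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(obj_feats, obj_labels)
classified = clf.predict(obj_feats)[np.searchsorted(ids, segments)]   # map back to pixels
```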