578 research outputs found

    An intelligent classification system for land use and land cover mapping using spaceborne remote sensing and GIS

    Get PDF
    The objectives of this study were to experiment with and extend current methods of Synthetic Aperture Radar (SAR) image classification, and to design and implement a prototype intelligent remote sensing image processing and classification system for land use and land cover mapping under wet-season conditions in Bangladesh, incorporating SAR images and other geodata. To meet these objectives, the problem of classifying spaceborne SAR images and integrating Geographic Information System (GIS) data and ground truth data was studied first. In this phase of the study, traditional techniques were extended by applying a Self-Organizing Feature Map (SOM) to include GIS data alongside the remote sensing data during image segmentation. The experimental results were compared with those of traditional statistical classifiers, such as the Maximum Likelihood, Mahalanobis Distance, and Minimum Distance classifiers. The performance of the classifiers was evaluated in terms of classification accuracy with respect to the collected ground truth data. The SOM neural network provided the highest overall accuracy when a GIS layer of land type classification (with respect to the period of inundation by regular flooding) was used in the network. Using this method, the overall accuracy was around 15% higher than that of the traditional classifiers mentioned above, and higher accuracies were achieved for more classes than with the other classifiers. However, it was also observed that different classifiers produced better accuracy for different classes. Therefore, the investigation was extended to Multiple Classifier Combination (MCC) techniques, an emerging research area in pattern recognition. The study tested several of these techniques to improve classification accuracy by harnessing the strengths of the constituent classifiers. A Rule-based Contention Resolution method of combination was developed, which improved overall accuracy by about 2% over its best constituent classifier (the SOM). The next phase of the study involved the design of an architecture for an intelligent image processing and classification system (named ISRIPaC) that could integrate the extended methodologies mentioned above. Finally, the architecture was implemented in a prototype and its viability was evaluated using a set of real data. The originality of the ISRIPaC architecture lies in realising the concept of a complete system that can intelligently cover all the steps of image processing and classification and utilise standardised metadata, in addition to a knowledge base, in determining the appropriate methods and course of action for a given task. The implemented prototype of the ISRIPaC architecture is a federated system that integrates the CLIPS expert system shell, the IDRISI Kilimanjaro image processing and GIS software, and domain experts' knowledge via a control agent written in Visual C++. It starts with data assessment and pre-processing and ends with image classification and accuracy assessment. The system is designed to run automatically: the user merely provides the initial information regarding the intended task and the sources of available data, and the system itself acquires the necessary information about the data from metadata files in order to make decisions and perform the tasks. Testing and evaluation of the prototype demonstrate the viability of the proposed architecture and the possibility of extending the system to other image processing tasks and different sources of data. The system design presented in this study thus suggests directions for the development of the next generation of remote sensing image processing and classification systems.
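    The abstract above describes combining a SOM with an auxiliary GIS layer and then deriving class labels for the trained map. Below is a minimal sketch of that idea, assuming the GIS land-type layer is simply stacked as one extra input feature and SOM nodes are labelled by majority vote of the ground-truth samples mapped to them; feature names, grid size, and data shapes are illustrative, not taken from the thesis.

```python
# Hedged sketch: a tiny from-scratch SOM over SAR bands plus a GIS layer,
# with majority-vote labelling of the trained nodes. Illustrative only.
import numpy as np

def train_som(features, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """features: (n_samples, n_features) array, already scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, features.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    n_steps = epochs * len(features)
    for step in range(n_steps):
        x = features[rng.integers(len(features))]
        frac = step / n_steps
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        # best-matching unit
        d = np.linalg.norm(weights - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood update around the winning node
        g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_indices(weights, features):
    d = np.linalg.norm(weights[None, ...] - features[:, None, None, :], axis=3)
    return np.argmin(d.reshape(len(features), -1), axis=1)

# Illustrative data: two SAR bands plus a GIS flood-duration class per pixel (all assumed).
n = 500
sar = np.random.rand(n, 2)                      # e.g. HH and HV backscatter, scaled
gis = np.random.randint(0, 4, size=(n, 1)) / 3  # land-type layer, scaled
X = np.hstack([sar, gis])
y = np.random.randint(0, 5, size=n)             # dummy ground-truth classes

W = train_som(X)
nodes = bmu_indices(W, X)
# Majority-vote label per SOM node, then predict by node label.
node_label = {node: np.bincount(y[nodes == node]).argmax() for node in np.unique(nodes)}
pred = np.array([node_label[node] for node in nodes])
print("training-set agreement:", (pred == y).mean())
```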

    Sea Ice Extraction via Remote Sensed Imagery: Algorithms, Datasets, Applications and Challenges

    Full text link
    Deep learning, a dominant technique in artificial intelligence, has completely changed image understanding over the past decade. As a consequence, the sea ice extraction (SIE) problem has entered a new era. We present a comprehensive review of four important aspects of SIE: algorithms, datasets, applications, and future trends. Our review covers research published from 2016 to the present, with a specific focus on deep learning-based approaches in the last five years. We divide the relevant algorithms into three categories: classical image segmentation approaches, machine learning-based approaches, and deep learning-based methods. We review the accessible sea ice datasets, including SAR-based datasets, optical-based datasets, and others. Applications are presented in four areas: climate research, navigation, geographic information system (GIS) production, and others. The review also provides insightful observations and inspiring future research directions. Comment: 24 pages, 6 figures

    Probabilistic inference method to discriminate closed water from sea ice using SENTINEL-1 SAR signatures

    Get PDF
    Consistent sea ice monitoring requires accurate estimates of sea ice concentration. Current retrieval algorithms are based on low-resolution microwave radiometry data with limited penetration depth and are unable to resolve the surface characteristics of sea ice in the detail necessary to discriminate intact sea ice from closed water. Important information about surface roughness conditions is contained in the distribution of radar backscatter, which can in principle be used to detect melt ponds and different sea ice types. In this work, a two-step probabilistic approach based on Expectation-Maximization and Bayesian inference considers the spatial and statistical characteristics of medium-resolution, daily available Sentinel-1 SAR images. The presented method segments sea ice into a number of separable classes and makes it possible to discriminate surface water from the remaining sea ice types. The lead author was supported by “la Caixa” Foundation (ID 100010434) with the fellowship code LCF/BQ/D118/11660050, and received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 713673. The project was also funded through the award “Unidad de Excelencia María de Maeztu” MDM-2016-0600, by the Spanish Ministry of Science and Innovation through the project “L-band” ESP2017-89463-C3-2-R, and the project “Sensing with Pioneering Opportunistic Techniques (SPOT)” RTI2018-099008-B-C21/AEI/10.13039/501100011033.
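    As a rough illustration of the two-step idea (EM fitting followed by Bayesian class assignment), the sketch below fits a Gaussian mixture to dual-polarisation backscatter and picks the maximum-posterior class per pixel. The band names, number of classes, and the "lowest-backscatter cluster is water" heuristic are assumptions made for illustration, not the paper's actual model.

```python
# Hedged sketch: EM (GaussianMixture) + posterior-based class assignment on
# dual-pol SAR backscatter; the water heuristic below is an assumption.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_sar(hh_db, hv_db, n_classes=4):
    """hh_db, hv_db: 2-D arrays of calibrated backscatter in dB (illustrative)."""
    X = np.column_stack([hh_db.ravel(), hv_db.ravel()])
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          max_iter=200, random_state=0)
    gmm.fit(X)                                   # Expectation-Maximization step
    posteriors = gmm.predict_proba(X)            # Bayesian class probabilities
    labels = posteriors.argmax(axis=1).reshape(hh_db.shape)
    # Crude assumption: the cluster with the lowest mean backscatter is open water.
    water_cluster = gmm.means_.sum(axis=1).argmin()
    return labels, labels == water_cluster

if __name__ == "__main__":
    hh = np.random.normal(-18, 3, (256, 256))    # dummy scene instead of real Sentinel-1 data
    hv = np.random.normal(-25, 3, (256, 256))
    labels, water_mask = segment_sar(hh, hv)
    print("water fraction:", water_mask.mean())
```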

    Mapping urban forest extent and modeling sequestered carbon across Chattanooga, TN using GIS and remote sensing

    Get PDF
    Chattanooga, Tennessee is among many cities experiencing rapid urbanization and subsequent losses of urban forest area. Using remote sensing and digital image processing, this research 1) applied supervised hybrid classification to Landsat imagery to quantify the extent of urban forest loss across Chattanooga between 1984 and 2021, 2) modeled the carbon sequestered in the biomass of Chattanooga’s urban trees using field data and vegetation indices, and 3) developed the first city-wide high-resolution land cover map of Chattanooga using SkySat imagery and object-based classification. Results show that Chattanooga has lost up to 43% of its urban tree canopy while its urban land area has grown by up to 134%. Additionally, a methodology for modeling sequestered carbon across urban forests was identified. Finally, using high-resolution imagery, the object-based workflow described here is capable of producing accurate maps of urban tree canopy distribution, with overall accuracy quantified in excess of 93%.
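    A minimal sketch of the carbon-modelling step (2) above, assuming a simple linear regression between plot-level NDVI and field-measured carbon that is then applied per pixel; the band math is standard NDVI, but the regression form, variable names, plot values, and pixel area are illustrative assumptions rather than the thesis' actual model.

```python
# Hedged sketch: NDVI from red/NIR bands, a toy NDVI-to-carbon regression
# fitted on dummy field plots, and a city-wide total. Illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

# Dummy plot-level training data: NDVI at field plots vs. measured carbon (t/ha).
plot_ndvi = np.array([[0.2], [0.35], [0.5], [0.65], [0.8]])
plot_carbon = np.array([10.0, 35.0, 60.0, 90.0, 120.0])
model = LinearRegression().fit(plot_ndvi, plot_carbon)

# Apply per pixel across a (dummy) scene and sum to a city-wide estimate.
nir_band = np.random.rand(512, 512)
red_band = np.random.rand(512, 512) * 0.5
scene_ndvi = ndvi(nir_band, red_band)
carbon_map = np.clip(model.predict(scene_ndvi.reshape(-1, 1)).reshape(scene_ndvi.shape), 0, None)
pixel_area_ha = 0.09                       # one 30 m Landsat pixel = 0.09 ha
print("total sequestered carbon (t):", (carbon_map * pixel_area_ha).sum())
```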

    Deep learning-based change detection in remote sensing images: a review

    Get PDF
    Images gathered from different satellites are now vastly available due to the fast development of remote sensing (RS) technology, and they significantly enhance the data sources for change detection (CD). CD is the technique of recognizing dissimilarities between images acquired at distinct times and is used for numerous applications, such as urban area development, disaster management, and land cover object identification. In recent years, deep learning (DL) techniques have been used extensively in change detection and have achieved great success in practical applications. Some researchers have even claimed that DL approaches outperform traditional approaches and enhance change detection accuracy. Therefore, this review focuses on supervised, unsupervised, and semi-supervised deep learning techniques for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages. Finally, some significant challenges are discussed to contextualize improvements in change detection datasets and deep learning models. Overall, this review will be beneficial for the future development of CD methods.
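    To make the supervised deep-learning flavour of CD concrete, the toy sketch below uses a tiny Siamese encoder with shared weights on the two acquisition dates and maps the feature difference to a per-pixel change probability. This is an illustrative pattern in PyTorch, not a method proposed by the review; real CD networks are considerably deeper and are trained on labelled change masks.

```python
# Hedged sketch: minimal Siamese change-detection network (shared encoder,
# feature difference, 1x1 head). Shapes and channel counts are illustrative.
import torch
import torch.nn as nn

class SiameseCD(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, kernel_size=1)   # change logit per pixel

    def forward(self, t1, t2):
        f1, f2 = self.encoder(t1), self.encoder(t2)   # shared weights on both dates
        return torch.sigmoid(self.head(torch.abs(f1 - f2)))

# Dummy co-registered image pair (batch, channels, H, W).
model = SiameseCD()
img_t1, img_t2 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
change_prob = model(img_t1, img_t2)
change_mask = (change_prob > 0.5).squeeze()
print("changed pixels:", int(change_mask.sum()))
```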

    Flood mapping in vegetated areas using an unsupervised clustering approach on Sentinel-1 and -2 imagery

    Get PDF
    The European Space Agency's Sentinel-1 constellation provides timely and freely available dual-polarized C-band Synthetic Aperture Radar (SAR) imagery. The launch of these and other SAR sensors has boosted the field of SAR-based flood mapping. However, flood mapping in vegetated areas remains a topic under investigation, as backscatter is the result of a complex mixture of scattering mechanisms and strongly depends on wave and vegetation characteristics. In this paper, we present an unsupervised object-based clustering framework capable of mapping flooding in both the presence and absence of flooded vegetation, based only on freely and globally available data. Starting from a SAR image pair, the region of interest is segmented into objects, which are mapped into a SAR-optical feature space and clustered using K-means. These clusters are then classified based on automatically determined thresholds, and the resulting classification is refined by means of several region-growing post-processing steps. The final outcome discriminates between dry land, permanent water, open flooding, and flooded vegetation. Forested areas, which might hide flooding, are indicated as well. The framework is demonstrated on four case studies, two of which contain flooded vegetation. For the optimal parameter combination, three-class F1 scores between 0.76 and 0.91 are obtained depending on the case, and the pixel- and object-based thresholding benchmarks are outperformed. Furthermore, the framework allows easy integration of additional data sources when they become available.
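    A condensed sketch of the clustering core described above (objects, SAR-optical feature space, K-means, threshold-based labelling), assuming SLIC superpixels stand in for the object segmentation and using toy thresholds on VV backscatter and NDWI. The thresholds, band choices, and cluster count are illustrative assumptions, not the paper's calibrated pipeline.

```python
# Hedged sketch: superpixel objects -> per-object SAR-optical features ->
# K-means -> rule-based cluster labelling. All values are illustrative.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

# Dummy co-registered inputs: Sentinel-1 VV backscatter (dB) and Sentinel-2 NDWI.
vv = np.random.normal(-15, 4, (200, 200))
ndwi = np.random.uniform(-0.5, 0.5, (200, 200))
stack = np.dstack([vv, ndwi])

# 1) object segmentation on normalised features, 2) per-object mean features
norm = (stack - stack.min(axis=(0, 1))) / np.ptp(stack, axis=(0, 1))
segments = slic(norm, n_segments=400, compactness=0.1, channel_axis=-1)
ids = np.unique(segments)
obj_feats = np.array([stack[segments == i].mean(axis=0) for i in ids])

# 3) unsupervised clustering of objects
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(obj_feats)

# 4) rule-based labelling of clusters (toy thresholds, not the paper's values)
labels = {}
for c in range(4):
    mean_vv, mean_ndwi = obj_feats[clusters == c].mean(axis=0)
    if mean_ndwi > 0.2:
        labels[c] = "open water / flooding"
    elif mean_vv > -10:
        labels[c] = "flooded vegetation (double bounce)"
    else:
        labels[c] = "dry land"
print(labels)
```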

    Multisource Data Integration in Remote Sensing

    Get PDF
    Papers presented at the workshop on Multisource Data Integration in Remote Sensing are compiled, and the full text of these papers is included. New instruments and sensors are discussed that can provide a large variety of new views of the real world. This huge amount of data has to be combined and integrated into a (computer) model of this world. Multiple sources may give complementary views of the world: consistent observations from different (and independent) data sources support each other and increase their credibility, while contradictions may be caused by noise, errors during processing, or misinterpretations, and can be identified as such. As a consequence, integration results are highly reliable and represent a valid source of information for any geographical information system.

    Change detection of isolated housing using a new hybrid approach based on object classification with optical and TerraSAR-X data

    Full text link
    Optical and microwave high-spatial-resolution images are now available for a wide range of applications. In this work, they have been applied to the semi-automatic change detection of isolated housing in agricultural areas. This article presents a new hybrid methodology based on segmentation of high-resolution images and image differencing. The approach combines the main techniques used in change detection methods and adds a final segmentation step in order to classify the change detection product. First, isolated building classification is carried out using only optical data. Then, synthetic aperture radar (SAR) information is added to the classification process, obtaining excellent results at a lower complexity cost. Since the first classification step is improved, the overall change detection scheme is also enhanced when the radar data are used for classification. Finally, a comparison between the different methods is presented and some conclusions are drawn from the study. © 2011 Taylor & Francis. Vidal Pantaleoni, A.; Moreno Cambroreno, M. D. R. (2011). Change detection of isolated housing using a new hybrid approach based on object classification with optical and TerraSAR-X data. International Journal of Remote Sensing, 32(24), 9621-9635. doi:10.1080/01431161.2011.571297
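    A minimal sketch of the hybrid pattern summarised above, assuming a simple relative radiometric normalisation, image differencing with a mean-plus-k-sigma threshold, and a final connected-component segmentation that turns change pixels into objects; the threshold, minimum object area, and single-band input are illustrative assumptions, not the article's actual parameters.

```python
# Hedged sketch: normalise, difference, threshold, then segment the change
# mask into objects so each detected change can be screened by size.
import numpy as np
from scipy import ndimage

def detect_changes(img_t1, img_t2, k=2.0, min_area=20):
    """img_t1, img_t2: co-registered 2-D arrays (single optical band, illustrative)."""
    # simple relative radiometric normalisation of date 2 to date 1
    img_t2n = (img_t2 - img_t2.mean()) / (img_t2.std() + 1e-9) * img_t1.std() + img_t1.mean()
    diff = np.abs(img_t1 - img_t2n)
    mask = diff > (diff.mean() + k * diff.std())        # image differencing + threshold
    # final segmentation step: group change pixels into objects, drop tiny ones
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=np.arange(1, n + 1))
    keep = np.isin(labeled, np.flatnonzero(sizes >= min_area) + 1)
    return keep, int((sizes >= min_area).sum())

if __name__ == "__main__":
    t1 = np.random.rand(300, 300)
    t2 = t1.copy()
    t2[100:120, 200:220] += 0.8                         # synthetic new building
    mask, n_objects = detect_changes(t1, t2)
    print("detected change objects:", n_objects)
```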