
    Monitoring soil erosion in the Souss basin, Morocco, with a multiscale object-based remote sensing approach using UAV and satellite data

    This article presents a multiscale approach for detecting and monitoring soil erosion phenomena (i.e. gully erosion) in the agro-industrial area around the city of Taroudannt, Souss basin, Morocco. The study area is semi-arid, with an annual average precipitation of 200 mm. Water scarcity, high population dynamics and a shift in land use towards vast areas of irrigation farming pose numerous threats to sustainability. The agro-industry produces citrus fruits and vegetables in monocropping, mainly for the European market. Badland areas strongly affected by gully erosion border both the agricultural and residential areas. To counteract the significant loss of land, land-leveling measures are attempted to create space for plantations and greenhouses. In order to develop sustainable approaches that limit gully growth, the detection and monitoring of gully systems is fundamental. Specific gully sites are monitored with an unmanned aerial vehicle (UAV) taking small-format aerial photographs (SFAP). This enables extremely high-resolution analysis (SFAP resolution: 2-10 cm) of the actual size of the gully channels as well as detailed continued surveillance of their growth. Transferring the methodology to a larger scale using QuickBird satellite data (resolution: 60 cm) makes a large-scale analysis of the whole area around the city of Taroudannt possible (area extent: ca. 350 km²). The results should reveal possible relationships between gully growth and agro-industrial management and may illustrate further interdependencies. The main objective is the identification of areas with high gully-erosion risk due to non-sustainable land use and the development of mitigation strategies for the study area.
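
    The monitoring step described above, comparing repeat UAV surveys of the same gully site, amounts to differencing two co-registered digital elevation models. The sketch below is a minimal, hypothetical illustration of that idea; the function name, the noise threshold, and the toy DEMs are assumptions, not taken from the study.

```python
import numpy as np

def gully_volume_change(dem_t0, dem_t1, cell_size, noise_threshold=0.05):
    """Estimate erosion and deposition volumes between two co-registered DEMs.

    dem_t0, dem_t1  -- 2-D elevation arrays (m), e.g. from SFAP photogrammetry
    cell_size       -- ground resolution of one pixel (m); SFAP: ~0.02-0.10
    noise_threshold -- differences below this (m) are treated as survey noise
    """
    diff = dem_t1 - dem_t0                       # negative = material removed
    diff[np.abs(diff) < noise_threshold] = 0.0   # suppress co-registration noise
    cell_area = cell_size ** 2
    erosion = -diff[diff < 0].sum() * cell_area      # m^3 lost
    deposition = diff[diff > 0].sum() * cell_area    # m^3 gained
    return erosion, deposition

# toy check: a 1 m deep incision over a 2x2-pixel patch at 0.1 m resolution
t0 = np.zeros((4, 4))
t1 = t0.copy()
t1[1:3, 1:3] -= 1.0
erosion, deposition = gully_volume_change(t0, t1, cell_size=0.1)
print(round(float(erosion), 3), float(deposition))  # 0.04 0.0
```

    Repeating this between survey dates gives the growth rates that the continued surveillance aims at.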

    Deep learning-based change detection in remote sensing images: a review

    Images gathered from different satellites are widely available these days due to the fast development of remote sensing (RS) technology. These images significantly enhance the data sources for change detection (CD). CD is a technique for recognizing dissimilarities between images acquired at distinct times and is used for numerous applications, such as urban development, disaster management, and land cover object identification. In recent years, deep learning (DL) techniques have been used extensively in change detection, where they have achieved great success in practical applications. Some researchers have even claimed that DL approaches outperform traditional approaches and enhance change detection accuracy. This review therefore focuses on deep learning techniques (supervised, unsupervised, and semi-supervised) for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages. Finally, some significant challenges are discussed to put improvements in change detection datasets and deep learning models into context. Overall, this review will be beneficial for the future development of CD methods.
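
    As a concrete example of the traditional approaches such DL methods are compared against, a classic unsupervised CD pipeline simply thresholds the absolute difference image. A minimal sketch follows; the mean-plus-k-sigma threshold and the synthetic image pair are illustrative assumptions.

```python
import numpy as np

def change_map(img_t1, img_t2, k=1.5):
    """Flag pixels whose absolute difference exceeds mean + k * std of the
    difference image -- the classic unsupervised CD baseline."""
    diff = np.abs(img_t2.astype(float) - img_t1.astype(float))
    return diff > diff.mean() + k * diff.std()

# synthetic dual-phase pair with one injected change
rng = np.random.default_rng(0)
t1 = rng.normal(100.0, 2.0, (64, 64))
t2 = t1 + rng.normal(0.0, 2.0, (64, 64))   # unchanged background + noise
t2[20:30, 20:30] += 40.0                   # a "new building" block
cm = change_map(t1, t2)
print(bool(cm[20:30, 20:30].all()))  # True: the injected block is flagged
```

    The DL approaches surveyed in the review replace this hand-set threshold with learned feature representations.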

    A novel feature fusion approach for VHR remote sensing image classification

    This article develops a robust feature fusion approach to enhance the classification performance of very high resolution (VHR) remote sensing images. Specifically, a novel two-stage multiple feature fusion (TsF) approach is proposed, which includes an intragroup and an intergroup feature fusion stage. In the first fusion stage, multiple features are grouped by clustering, and redundant information between different types of features is eliminated within each group. Then, features are fused pairwise in an intergroup fusion model based on the guided filtering method. Finally, the fused feature set is imported into a classifier to generate the classification map. In this work, the original VHR spectral bands and their attribute profiles are taken as example input spectral and spatial features, respectively, in order to test the performance of the proposed TsF approach. Experimental results obtained on two QuickBird datasets covering complex urban scenarios demonstrate the effectiveness of the proposed approach in terms of generating more discriminative fused features and enhancing classification performance. More importantly, the fused feature dimensionality is limited to a certain level; thus, the computational cost is not significantly increased even if multiple features are considered.
    Liu, S.; Zheng, Y.; Du, Q.; Samat, A.; Tong, X.; Dalponte, M.
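
    The intergroup stage relies on guided filtering. The numpy sketch below shows a guided filter and a symmetric pairwise fusion of two feature maps; it is a simplified stand-in for the TsF intergroup model, and the window radius and regularization values are assumptions.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image
    (the window shrinks at the borders)."""
    H, W = x.shape
    s = np.pad(x, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    i0 = np.clip(np.arange(H) - r, 0, H); i1 = np.clip(np.arange(H) + r + 1, 0, H)
    j0 = np.clip(np.arange(W) - r, 0, W); j1 = np.clip(np.arange(W) + r + 1, 0, W)
    total = s[i1][:, j1] - s[i0][:, j1] - s[i1][:, j0] + s[i0][:, j0]
    count = (i1 - i0)[:, None] * (j1 - j0)[None, :]
    return total / count

def guided_filter(I, p, r=2, eps=1e-4):
    """Edge-preserving guided filter: smooth p while following edges of guidance I."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    a = (box_mean(I * p, r) - mI * mp) / (box_mean(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

def pairwise_fuse(f1, f2, r=2, eps=1e-4):
    """Fuse two feature maps by filtering each under the other's guidance
    and averaging the results."""
    return 0.5 * (guided_filter(f1, f2, r, eps) + guided_filter(f2, f1, r, eps))

print(np.allclose(pairwise_fuse(np.ones((8, 8)), 2 * np.ones((8, 8))), 1.5))  # True
```

    In the actual TsF pipeline, such pairwise fusions would be applied between the clustered spectral and spatial feature groups before classification.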

    High-resolution optical and SAR image fusion for building database updating

    This paper addresses the issue of cartographic database (DB) creation or updating using high-resolution synthetic aperture radar and optical images. In cartographic applications, the objects of interest are mainly buildings and roads. This paper proposes a processing chain to create or update building DBs. The approach is composed of two steps. First, if a DB is available, the presence of each DB object is checked in the images. Then, we verify whether objects coming from an image segmentation should be included in the DB. For both steps, relevant features are extracted from the images in the neighborhood of the considered object. The object's removal from or inclusion in the DB is based on a score obtained by fusing the features in the framework of Dempster–Shafer evidence theory.
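
    For a two-hypothesis frame {building, not building} plus ignorance, Dempster's rule of combination, on which such score fusion rests, can be sketched as follows. The mass values assigned to the optical and SAR sources here are hypothetical, not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {B, NB} with ignorance 'T'.

    m1, m2: dicts with keys 'B' (building), 'NB' (not building), 'T' (full
    frame / uncertainty), each summing to 1. Returns the combined masses
    after normalizing out the conflict.
    """
    combos = {
        'B':  m1['B'] * m2['B'] + m1['B'] * m2['T'] + m1['T'] * m2['B'],
        'NB': m1['NB'] * m2['NB'] + m1['NB'] * m2['T'] + m1['T'] * m2['NB'],
        'T':  m1['T'] * m2['T'],
    }
    conflict = m1['B'] * m2['NB'] + m1['NB'] * m2['B']
    return {h: v / (1.0 - conflict) for h, v in combos.items()}

optical = {'B': 0.6, 'NB': 0.1, 'T': 0.3}   # hypothetical optical-feature evidence
sar     = {'B': 0.5, 'NB': 0.2, 'T': 0.3}   # hypothetical SAR-feature evidence
fused = dempster_combine(optical, sar)
print(fused['B'] > max(fused['NB'], fused['T']))  # True: keep the object in the DB
```

    Combining partially conflicting sources this way lets weak optical evidence and weak SAR evidence reinforce each other while ignorance mass shrinks.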

    Building change detection in multitemporal very high resolution SAR images


    TransY-Net:Learning Fully Transformer Networks for Change Detection of Remote Sensing Images

    In the remote sensing field, change detection (CD) aims to identify and localize the changed regions between dual-phase images of the same places. Recently, it has achieved great progress with the advances of deep learning. However, current methods generally deliver incomplete CD regions and irregular CD boundaries due to the limited representation ability of the extracted visual features. To address these issues, in this work we propose a novel Transformer-based learning framework named TransY-Net for remote sensing image CD, which improves feature extraction from a global view and combines multi-level visual features in a pyramid manner. More specifically, the proposed framework first exploits the advantages of Transformers in long-range dependency modeling, which helps to learn more discriminative global-level features and obtain complete CD regions. Then, we introduce a novel pyramid structure to aggregate multi-level visual features from the Transformer for feature enhancement. The pyramid structure, grafted with a Progressive Attention Module (PAM), improves the feature representation ability with additional inter-dependencies through spatial and channel attentions. Finally, to better train the whole framework, we utilize deeply-supervised learning with multiple boundary-aware loss functions. Extensive experiments demonstrate that our proposed method achieves new state-of-the-art performance on four optical and two SAR image CD benchmarks. The source code is released at https://github.com/Drchip61/TransYNet.
    Comment: This work is accepted by TGRS 2023. It is an extension of our ACCV 2022 paper and arXiv:2210.0075.
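
    The spatial and channel attentions that such a module adds can be illustrated generically. The numpy sketch below is not the authors' PAM, only a minimal self-attention over spatial positions and over channels with a residual sum, to show what "inter-dependencies through spatial and channel attentions" computes.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_channel_attention(F):
    """Residual enhancement of a (C, H, W) feature map with generic spatial
    and channel self-attention (a minimal stand-in, not the paper's PAM)."""
    C, H, W = F.shape
    X = F.reshape(C, H * W)
    A_sp = softmax(X.T @ X / np.sqrt(C), axis=-1)       # (HW, HW) position affinities
    F_sp = (X @ A_sp.T).reshape(C, H, W)                # re-weight spatial positions
    A_ch = softmax(X @ X.T / np.sqrt(H * W), axis=-1)   # (C, C) channel affinities
    F_ch = (A_ch @ X).reshape(C, H, W)                  # re-weight channels
    return F + F_sp + F_ch

rng = np.random.default_rng(1)
feat = rng.normal(size=(4, 8, 8))
out = spatial_channel_attention(feat)
print(out.shape)  # (4, 8, 8)
```

    In the full framework these attentions would operate on Transformer features at each pyramid level rather than on raw arrays.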

    Uncertainties of Human Perception in Visual Image Interpretation in Complex Urban Environments

    Today, satellite images are mostly exploited automatically thanks to advances in image classification methods. Manual visual image interpretation (MVII), however, still plays a significant role, e.g., to generate training data for machine-learning algorithms or for validation purposes. In certain urban environments of, e.g., highest densities and structural complexity, textural and spectral complications in overlapping roof structures still demand a human interpreter if one aims to capture individual building structures. Cognitive perception and real-world experience remain indispensable. Against this background, this article aims at quantifying and interpreting the uncertainties of mapping rooftop footprints in such areas. We focus on the agreement among interpreters and on which aspects of perception and elements of image interpretation affect mapping. Ten test persons digitized six complex built-up areas. Hereby, we obtain quantitative information about spatial variables of buildings to systematically check the consistency and congruence of results. An additional questionnaire reveals qualitative information about obstacles. Generally, we find large differences among interpreters' mapping results and a high consistency of results for the same interpreter. We measure rising deviations that correlate with rising morphologic complexity. High degrees of individuality are expressed, e.g., in time consumption and in in-situ or geographic information system (GIS) pre-knowledge, whereas the data source mostly influences the mapping procedure. With this study, we aim to fill a gap, as prior research using MVII often does not implement an uncertainty analysis or quantify mapping aberrations. We conclude that remote sensing studies should not rely unquestioningly on MVII for validation; furthermore, data and methods are needed to reduce uncertainty.
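
    One simple way to quantify agreement among interpreters is the mean pairwise intersection-over-union of the digitized footprint masks. This is a hypothetical sketch; the toy masks and the choice of IoU are illustrative, not the study's actual congruence measures.

```python
import numpy as np
from itertools import combinations

def pairwise_agreement(masks):
    """Mean pairwise intersection-over-union between interpreters'
    building masks (boolean arrays of the same shape)."""
    ious = []
    for a, b in combinations(masks, 2):
        union = np.logical_or(a, b).sum()
        inter = np.logical_and(a, b).sum()
        ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))

# three hypothetical interpreters digitizing the same rooftop
base = np.zeros((6, 6), dtype=bool); base[1:4, 1:4] = True       # 3x3 footprint
shifted = np.zeros((6, 6), dtype=bool); shifted[2:5, 2:5] = True  # offset by one pixel
masks = [base, base.copy(), shifted]
print(round(pairwise_agreement(masks), 3))  # 0.524
```

    A score well below 1.0 even for a one-pixel offset illustrates how quickly footprint congruence degrades with interpreter disagreement.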

    Towards a 20m global building map from Sentinel-1 SAR Data

    This study introduces a technique for automatically mapping built-up areas using synthetic aperture radar (SAR) backscattering intensity and interferometric multi-temporal coherence generated from Sentinel-1 data in the framework of the Copernicus program. The underlying hypothesis is that, in SAR images, built-up areas exhibit very high backscattering values that are coherent in time. Several particular characteristics of the Sentinel-1 mission are put to good use, such as its short revisit time, the availability of dual-polarized data, and its small orbital tube. The newly developed algorithm is based on adaptive parametric thresholding that first identifies pixels with high backscattering values in both the VV and VH polarimetric channels. The interferometric SAR coherence is then used to reduce false alarms, which are caused by land cover classes (other than buildings) characterized by high backscattering values that are not coherent in time (e.g., certain types of vegetated areas). The algorithm was tested on Sentinel-1 Interferometric Wide Swath data from five test sites located in semiarid and arid regions of the Mediterranean and Northern Africa. The resulting building maps were compared with the Global Urban Footprint (GUF) derived from TerraSAR-X mission data and, on average, a 92% agreement was obtained.
    Peer reviewed. Postprint (published version).
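
    The two-step rule described in the abstract can be sketched directly. The threshold values below are illustrative placeholders, not the paper's adaptive parametric thresholds.

```python
import numpy as np

def building_mask(vv_db, vh_db, coherence, vv_thr=-5.0, vh_thr=-12.0, coh_thr=0.6):
    """Two-step built-up mask: (1) keep pixels bright in BOTH VV and VH
    backscatter; (2) require temporal coherence to reject bright but
    incoherent false alarms such as some vegetated areas."""
    bright = (vv_db > vv_thr) & (vh_db > vh_thr)   # high backscatter in VV and VH
    return bright & (coherence > coh_thr)          # coherent in time

# toy pixels: [building, bright-but-incoherent vegetation, bare soil]
vv  = np.array([-2.0, -3.0, -15.0])
vh  = np.array([-8.0, -9.0, -20.0])
coh = np.array([0.8, 0.3, 0.7])
print(building_mask(vv, vh, coh))  # [ True False False]
```

    The coherence test is what removes the second pixel, exactly the class of false alarm the abstract singles out.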