7 research outputs found

    Image Segmentation in a Remote Sensing Perspective

    Image segmentation is generally defined as the process of partitioning an image into suitable groups of pixels such that each region is homogeneous, but the union of two adjacent regions is not, according to an application-specific homogeneity criterion. In most automatic image processing tasks, efficient image segmentation is one of the most critical steps and, in general, no unique solution can be provided for all possible applications. My thesis is mainly focused on Remote Sensing (RS) images, a domain in which growing attention has been devoted to image segmentation in the last decades, as a fundamental step for various applications such as land cover/land use classification and change detection. In particular, several different aspects have been addressed, spanning from the design of novel low-level image segmentation techniques to the definition of new application scenarios leveraging Object-based Image Analysis (OBIA). More specifically, this summary covers the three main activities carried out during my PhD: first, the development of two segmentation techniques for object layer extraction from multi/hyper-spectral and multi-resolution images is presented, based on morphological image analysis and graph clustering, respectively; finally, a new paradigm for the interactive segmentation of Synthetic Aperture Radar (SAR) multi-temporal series is introduced.
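
    As a minimal illustration of the homogeneity-based definition given above, the sketch below (Python/NumPy) greedily merges adjacent regions as long as their union still satisfies an assumed variance-based criterion. The criterion, the 4-adjacency, and the threshold are illustrative assumptions and do not reproduce the morphological or graph-clustering techniques developed in the thesis.

        import numpy as np

        def is_homogeneous(pixels, max_std=10.0):
            # Assumed application-specific criterion: low intra-region spread.
            return np.std(pixels) < max_std

        def merge_adjacent_regions(image, labels, max_std=10.0):
            # Greedily merge 4-adjacent regions whose union is still homogeneous.
            labels = labels.copy()
            merged = True
            while merged:
                merged = False
                pairs = set()
                # Horizontally and vertically adjacent label pairs.
                pairs |= {tuple(sorted(p)) for p in
                          zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()) if p[0] != p[1]}
                pairs |= {tuple(sorted(p)) for p in
                          zip(labels[:-1, :].ravel(), labels[1:, :].ravel()) if p[0] != p[1]}
                for a, b in pairs:
                    union = image[(labels == a) | (labels == b)]
                    if is_homogeneous(union, max_std):
                        labels[labels == b] = a   # the union is homogeneous: merge
                        merged = True
                        break
            return labels

        if __name__ == "__main__":
            img = np.zeros((8, 8))
            img[:, 4:] = 100.0                       # two flat halves
            seeds = np.arange(64).reshape(8, 8)      # start from single-pixel regions
            print(len(np.unique(merge_adjacent_regions(img, seeds))))   # -> 2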

    Fusion of VNIR Optical and C-Band Polarimetric SAR Satellite Data for Accurate Detection of Temporal Changes in Vegetated Areas

    In this paper, we propose a processing chain jointly employing Sentinel-1 and Sentinel-2 data, aiming to monitor changes in the status of the vegetation cover by integrating the four 10 m visible and near-infrared (VNIR) bands with the three red-edge (RE) bands of Sentinel-2. The latter approximately span the gap between the red and NIR bands (700 nm–800 nm), with bandwidths of 15/20 nm and 20 m pixel spacing. The RE bands are sharpened to 10 m following the hypersharpening protocol, which holds, unlike pansharpening, when the sharpening band is not unique. The resulting 10 m fusion product may be integrated with polarimetric features calculated from the Interferometric Wide (IW) Ground Range Detected (GRD) product of Sentinel-1, available at 10 m pixel spacing, before the fused data are analyzed for change detection. A key point of the proposed scheme is that the fusion of optical and synthetic aperture radar (SAR) data is accomplished at the level of change, through modulation of the optical change feature, namely the difference in the normalized area over (reflectance) curve (NAOC) calculated from the sharpened RE bands, by the polarimetric SAR change feature, obtained as the temporal ratio of polarimetric features, each being the pixel-wise ratio between the co-polar and the cross-polar channels. Hyper-sharpening of the Sentinel-2 RE bands, calculation of the NAOC, and modulation-based integration of the Sentinel-1 polarimetric change feature are applied to multitemporal datasets acquired before and after a fire event over Mount Serra, in Italy. The optical change feature captures variations in chlorophyll content. The polarimetric SAR temporal change feature describes depolarization effects and changes in the volumetric scattering of canopies. Their fusion shows an increased ability to highlight changes in vegetation status. In a performance comparison carried out by means of receiver operating characteristic (ROC) curves, the proposed change-feature-based fusion approach surpasses a traditional area-based approach and the normalized burned ratio (NBR) index, which is widely used for the detection of burnt vegetation.
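
    The change-level fusion described above can be sketched as follows: the optical change feature (difference of NAOC between the two dates) is modulated by the SAR change feature (temporal ratio of the co-polar/cross-polar intensity ratio). The trapezoidal NAOC approximation, the array layout, and the choice of VV/VH channels are assumptions made for illustration, not the exact implementation of the paper.

        import numpy as np

        def naoc(reflectance, wavelengths):
            # Normalized area over the reflectance curve on the red-edge interval.
            # reflectance: (bands, rows, cols) stack of sharpened red/RE/NIR bands
            # wavelengths: centre wavelengths (nm) of those bands, ascending
            wl = np.asarray(wavelengths, dtype=float)
            steps = np.diff(wl)[:, None, None]
            # Per-pixel trapezoidal area under the reflectance curve.
            area = np.sum(0.5 * (reflectance[:-1] + reflectance[1:]) * steps, axis=0)
            width = wl[-1] - wl[0]
            rho_max = reflectance.max(axis=0)
            return 1.0 - area / (rho_max * width + 1e-12)

        def polarimetric_change(vv_t1, vh_t1, vv_t2, vh_t2):
            # Temporal ratio of the co-polar / cross-polar intensity ratio.
            r1 = vv_t1 / (vh_t1 + 1e-12)
            r2 = vv_t2 / (vh_t2 + 1e-12)
            return r2 / (r1 + 1e-12)

        def fused_change(refl_t1, refl_t2, wavelengths, vv_t1, vh_t1, vv_t2, vh_t2):
            # Optical change (delta NAOC) modulated by the SAR polarimetric change.
            delta_naoc = naoc(refl_t2, wavelengths) - naoc(refl_t1, wavelengths)
            return delta_naoc * polarimetric_change(vv_t1, vh_t1, vv_t2, vh_t2)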

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combined application of synthetic aperture radar and deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor, whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has promoted significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations with multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews, and technical reports.
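
    As a purely illustrative sketch of the "multiple processing layers" idea mentioned above, the following small convolutional network (PyTorch) classifies single-channel SAR patches. The architecture, patch size, and number of classes are assumptions and do not correspond to any specific method in the reprint.

        import torch
        import torch.nn as nn

        class TinySARNet(nn.Module):
            def __init__(self, num_classes=4):
                super().__init__()
                # Each block learns a progressively more abstract representation.
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(64, num_classes)

            def forward(self, x):       # x: (batch, 1, 64, 64) SAR intensity patches
                return self.classifier(self.features(x).flatten(1))

        if __name__ == "__main__":
            model = TinySARNet(num_classes=4)
            logits = model(torch.randn(2, 1, 64, 64))
            print(logits.shape)         # torch.Size([2, 4])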

    Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models

    To deal with highly complex data, such as remote sensing images with metric resolution over large areas, an innovative, fast, and robust image processing system is presented. The modeling of increasing levels of information is used to extract, represent, and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from RS images.

    Advanced Techniques based on Mathematical Morphology for the Analysis of Remote Sensing Images

    Remote sensing optical images of very high geometrical resolution can provide a precise and detailed representation of the surveyed scene. Thus, the spatial information contained in these images is fundamental for any application requiring image analysis. However, modeling the spatial information is not a trivial task. We addressed this problem by using operators defined in the mathematical morphology framework in order to extract spatial features from the image. In this thesis, novel techniques based on mathematical morphology are presented and investigated for the analysis of remote sensing optical images, addressing different applications. Attribute Profiles (APs) are proposed as a novel generalization of the Morphological Profile operator based on attribute filters. Attribute filters are connected operators which can process an image by removing flat zones according to a given criterion. They are flexible operators, since they can transform an image according to many different attributes (e.g., geometrical, textural, and spectral). Furthermore, Extended Attribute Profiles (EAPs), a generalization of APs, are presented for the analysis of hyperspectral images. The EAPs are employed for including spatial features in the thematic classification of hyperspectral images. Two techniques dealing with EAPs and dimensionality reduction transformations are proposed and applied to image classification: one is based on Independent Component Analysis, and the other deals with feature extraction techniques. Moreover, a technique based on APs for extracting features for the detection of buildings in a scene is investigated. Approaches that process an image by considering both the bright and dark components of a scene are investigated; in particular, the effect of applying attribute filters in an alternating sequential setting is studied. Furthermore, the concept of the Self-Dual Attribute Profile (SDAP) is introduced. SDAPs are APs built on an inclusion tree instead of a min- and max-tree, providing an operator that performs a multilevel filtering of both the bright and dark components of an image. Techniques developed for applications other than image classification are also considered: a general approach for image simplification based on attribute filters is proposed, and two change detection techniques are developed. The experimental analysis performed with the novel techniques developed in this thesis demonstrates an improvement in accuracy in different fields of application when compared to other state-of-the-art methods.
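
    A minimal sketch of an area-attribute profile is given below, assuming scikit-image's area_opening/area_closing as the attribute filters and an arbitrary set of area thresholds; the thesis considers more general attributes (geometrical, textural, spectral) and the min-, max-, and inclusion-tree implementations discussed above.

        import numpy as np
        from skimage.morphology import area_opening, area_closing

        def area_attribute_profile(band, thresholds=(25, 100, 400)):
            # Stack attribute closings, the original band, and attribute openings,
            # i.e., a profile built with the area attribute at increasing thresholds.
            # band: 2-D single-band image (e.g., one principal component of a
            # hyperspectral cube, as in the EAP construction).
            closings = [area_closing(band, area_threshold=t) for t in reversed(thresholds)]
            openings = [area_opening(band, area_threshold=t) for t in thresholds]
            return np.stack(closings + [band] + openings, axis=0)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            band = (rng.random((64, 64)) * 255).astype(np.uint8)
            profile = area_attribute_profile(band)
            print(profile.shape)   # (2 * len(thresholds) + 1, 64, 64)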

    Remote sensing methods for biodiversity monitoring with emphasis on vegetation height estimation and habitat classification

    Biodiversity is a principal factor for ecosystem stability and functioning, and the need for its protection has been identified as imperative globally. Remote sensing can contribute to timely and accurate monitoring of various elements related to biodiversity, but a knowledge gap with user communities hinders its widespread operational use. This study advances biodiversity monitoring through earth observation data by initially identifying, reviewing, and proposing state-of-the-art remote sensing methods which can be used for the extraction of a number of widely adopted indicators of global biodiversity assessment. Then, a cost- and resource-effective approach is proposed for vegetation height estimation, using satellite imagery from very high resolution passive sensors. A number of texture features are extracted, based on local variance, entropy, and local binary patterns, and processed through several data processing, dimensionality reduction, and classification techniques. The approach manages to discriminate six vegetation height categories, useful for ecological studies, with accuracies over 90%. It thus offers an effective approach for landscape analysis and for habitat and land use monitoring, extending previous approaches with regard to the range of heights and vegetation species, the synergy of multi-date imagery, data processing, and resource economy. Finally, two approaches are introduced to advance the state of the art in habitat classification using remote sensing data and pre-existing land cover information. The first proposes a methodology to express land cover information as numerical features and a supervised classification framework, automating the previously labour- and time-consuming rule-based approach used as a reference. The second incorporates Dempster–Shafer evidential theory and fuzzy sets, and proves successful in handling uncertainties arising from missing data or vague rules while offering wide user-defined parameterization potential. Both approaches outperform the reference study in classification accuracy, proving promising for biodiversity monitoring, ecosystem preservation, and sustainability management tasks.
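
    A hedged sketch of the texture-feature pipeline is given below: local variance, local entropy, and local binary patterns are computed per pixel and passed to a supervised classifier predicting a vegetation-height category. The window sizes, LBP parameters, and the random-forest classifier are illustrative assumptions, not the exact configuration of the study.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage.feature import local_binary_pattern
        from skimage.filters.rank import entropy
        from skimage.morphology import disk
        from sklearn.ensemble import RandomForestClassifier

        def texture_features(band, window=7):
            # Per-pixel texture features from a single VHR grey-level band.
            band_f = band.astype(np.float64)
            band8 = ((band_f - band_f.min()) /
                     (band_f.max() - band_f.min() + 1e-12) * 255).astype(np.uint8)
            local_mean = uniform_filter(band_f, size=window)
            local_var = uniform_filter(band_f ** 2, size=window) - local_mean ** 2
            local_ent = entropy(band8, disk(window // 2))       # local entropy (8-bit input)
            lbp = local_binary_pattern(band8, P=8, R=1, method="uniform")
            return np.stack([local_var, local_ent, lbp], axis=-1).reshape(-1, 3)

        def train_height_classifier(band, height_labels, window=7):
            # Fit a classifier mapping texture features to height categories (e.g., 6 classes).
            X = texture_features(band, window)
            y = np.asarray(height_labels).ravel()
            return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)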