
    The contribution of multitemporal information from multispectral satellite images for automatic land cover classification at the national scale

    Thesis submitted to the Instituto Superior de Estatística e Gestão de Informação da Universidade Nova de Lisboa in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Information Management – Geographic Information Systems.

    Imaging and sensing technologies are constantly evolving, so that the latest generations of satellites now commonly provide snapshots of the Earth's surface at very short sampling periods (i.e., daily images). This tendency towards continuous-time observation will unquestionably broaden the scope of remote sensing activities. Inevitably, such an increasing amount of information will also prompt methodological approaches that combine digital image processing techniques with time series analysis for characterizing land cover distribution and monitoring its dynamics on a frequent basis. Nonetheless, quantitative analyses that convey the proficiency of three-dimensional satellite image data sets (i.e., spatial, spectral and temporal) for the automatic mapping of land cover and of its evolution over time have not been thoroughly explored. In this dissertation, we investigate the usefulness of multispectral time series of medium spatial resolution satellite images for regular land cover characterization at the national scale. The study is carried out on the territory of Continental Portugal and exploits satellite images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) and the MEdium Resolution Imaging Spectrometer (MERIS). In detail, we first analyse the contribution of multitemporal information from multispectral satellite images to the automatic discrimination of land cover classes. The outcomes show that multispectral information contributes more significantly than multitemporal information to the automatic classification of land cover types. Subsequently, we review some of the most important steps of a standard protocol for automatic land cover mapping from satellite images, and we delineate a methodological approach for the production and assessment of land cover maps from multitemporal satellite images that guides the production of a land cover map with high thematic accuracy for the study area. Finally, we develop a nonlinear harmonic model for fitting time series of multispectral reflectances and vegetation indices from satellite images for numerous land cover classes. The simplified multitemporal information retrieved with the model proves adequate to describe the main characteristics of the land cover classes and to predict the time evolution of individual class members.
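    The dissertation's exact model is not reproduced in the abstract, but a minimal sketch of the general idea, fitting a two-term harmonic (annual plus semi-annual) curve to a vegetation-index time series with SciPy, might look as follows; the coefficient names and the synthetic series are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def harmonic(t, a0, a1, phi1, a2, phi2):
        """Two-term harmonic model of an annual reflectance/VI time series."""
        w = 2.0 * np.pi / 365.0
        return a0 + a1 * np.cos(w * t - phi1) + a2 * np.cos(2.0 * w * t - phi2)

    # Synthetic NDVI-like series: one observation every 8 days for a year.
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 365.0, 8.0)
    y = (0.5 + 0.25 * np.cos(2.0 * np.pi * t / 365.0 - 2.0)
         + 0.03 * rng.standard_normal(t.size))

    params, _ = curve_fit(harmonic, t, y, p0=[0.5, 0.2, 0.0, 0.05, 0.0])
    fitted = harmonic(t, *params)  # smoothed seasonal profile for a class
    ```

    The fitted coefficients give a compact, per-class summary of the seasonal trajectory, which is the kind of simplified multitemporal descriptor the abstract refers to.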

    Robust Normalized Softmax Loss for Deep Metric Learning-Based Characterization of Remote Sensing Images With Label Noise

    Most deep metric learning-based image characterization methods exploit supervised information to model the semantic relations among remote sensing (RS) scenes. Nonetheless, the unprecedented availability of large-scale RS data makes the annotation of such images very challenging, requiring automated supportive processes. Whether the annotation is assisted by aggregation or crowd-sourcing, the RS large-variance problem, together with other important factors (e.g., geo-location/registration errors, land-cover changes, and even low-quality Volunteered Geographic Information (VGI)), often introduces so-called label noise, i.e., semantic annotation errors. In this article, we first investigate the deep metric learning-based characterization of RS images with label noise and propose a novel loss formulation, named robust normalized softmax loss (RNSL), for robustly learning the metrics among RS scenes. Specifically, our RNSL improves the robustness of the normalized softmax loss (NSL), commonly utilized for deep metric learning, by replacing its logarithmic function with the negative Box–Cox transformation in order to down-weight the contributions of noisy images to the learning of the corresponding class prototypes. Moreover, by truncating the loss with a certain threshold, we also propose a truncated robust normalized softmax loss (t-RNSL), which further enforces the learning of class prototypes from image features with high similarities between them, so that intraclass features are well grouped and interclass features well separated. Our experiments, conducted on two benchmark RS data sets, validate the effectiveness of the proposed approach against different state-of-the-art methods in three downstream applications (classification, clustering, and retrieval). The code of this article will be made publicly available at https://github.com/jiankang1991
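    A minimal sketch of the loss described above, reconstructed from the abstract alone (the paper's exact formulation may differ; the temperature, lam, and threshold values are assumed hyperparameters):

    ```python
    import torch
    import torch.nn.functional as F

    def rnsl(features, prototypes, labels, temperature=0.1, lam=0.5, threshold=None):
        """RNSL sketch: cosine-similarity softmax over class prototypes, with
        -log(p) replaced by the negative Box-Cox transform (1 - p**lam) / lam."""
        f = F.normalize(features, dim=1)       # L2-normalize image embeddings
        w = F.normalize(prototypes, dim=1)     # L2-normalize class prototypes
        probs = F.softmax(f @ w.t() / temperature, dim=1)
        p_true = probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # true-class prob
        loss = (1.0 - p_true.pow(lam)) / lam   # robust replacement for -log(p)
        if threshold is not None:              # t-RNSL: truncate large losses
            loss = torch.clamp(loss, max=threshold)
        return loss.mean()
    ```

    As lam approaches 0, (1 - p**lam) / lam recovers -log(p), so the transform interpolates between the standard NSL and a bounded loss that caps the gradient contribution of badly labeled images.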

    M3Fusion: A Deep Learning Architecture for Multi-{Scale/Modal/Temporal} satellite data fusion

    Modern Earth Observation systems provide sensing data at different temporal and spatial resolutions. Among optical sensors, the Sentinel-2 programme today supplies images at high temporal resolution (every 5 days) and high spatial resolution (10 m) that are useful for monitoring land cover dynamics. On the other hand, Very High Spatial Resolution (VHSR) images remain an essential tool for mapping land cover characterized by fine spatial patterns. Understanding how to efficiently leverage these complementary sources of information for land cover mapping is still challenging. With the aim of tackling land cover mapping through the fusion of multi-temporal High Spatial Resolution and Very High Spatial Resolution satellite images, we propose an end-to-end deep learning framework, named M3Fusion, able to simultaneously leverage the temporal knowledge contained in time series data as well as the fine spatial information available in VHSR images. Experiments carried out on the Reunion Island study area assess the quality of our proposal considering both quantitative and qualitative aspects.
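    As a rough illustration of such a two-branch fusion design (much simplified relative to the published M3Fusion architecture; all layer sizes here are placeholder assumptions), one branch can summarize the time series while a CNN branch summarizes the VHSR patch:

    ```python
    import torch
    import torch.nn as nn

    class TwoBranchFusion(nn.Module):
        """Illustrative fusion net: a GRU summarizes the HSR time series, a
        small CNN summarizes the VHSR patch, and the concatenated features
        feed a linear classifier. Sizes are placeholders."""
        def __init__(self, n_bands, n_classes):
            super().__init__()
            self.gru = nn.GRU(n_bands, 64, batch_first=True)
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(64 + 32, n_classes)

        def forward(self, ts, patch):
            _, h = self.gru(ts)                   # ts: (batch, steps, bands)
            feats = torch.cat([h.squeeze(0), self.cnn(patch)], dim=1)
            return self.head(feats)               # patch: (batch, 3, H, W)
    ```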

    Dense semantic labeling of sub-decimeter resolution images with convolutional neural networks

    Semantic labeling (or pixel-level land-cover classification) of ultra-high-resolution imagery (< 10 cm) requires statistical models able to learn high-level concepts from spatial data with large appearance variations. Convolutional Neural Networks (CNNs) achieve this goal by discriminatively learning a hierarchy of representations of increasing abstraction. In this paper we present a CNN-based system relying on a downsample-then-upsample architecture: it first learns a rough spatial map of high-level representations by means of convolutions and then learns to upsample this map back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This yields many advantages, including (i) state-of-the-art numerical accuracy, (ii) improved geometric accuracy of predictions, and (iii) high efficiency at inference time. We test the proposed system on the Vaihingen and Potsdam sub-decimeter resolution datasets, involving semantic labeling of aerial images at 9 cm and 5 cm resolution, respectively. These datasets are composed of many large, fully annotated tiles, allowing an unbiased evaluation of models that make use of spatial information. We compare two standard CNN architectures to the proposed one: standard patch classification, prediction of local label patches employing only convolutions, and full patch labeling employing deconvolutions. All the systems compare favorably to or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors, and the proposed full patch labeling CNN outperforms these models by a large margin while also showing a very appealing inference time.

    Comment: Accepted in IEEE Transactions on Geoscience and Remote Sensing, 201
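    A minimal downsample-then-upsample sketch in the spirit of this architecture (the actual network is much deeper; the channel widths and class count below are assumptions):

    ```python
    import torch.nn as nn

    class DownUpNet(nn.Module):
        """Strided convolutions learn a coarse map of high-level features;
        transposed convolutions ("deconvolutions") upsample it back to the
        input resolution so every pixel receives a label."""
        def __init__(self, in_ch=3, n_classes=6):
            super().__init__()
            self.down = nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.up = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
            )

        def forward(self, x):
            return self.up(self.down(x))   # (batch, n_classes, H, W) logits
    ```

    Each transposed convolution with kernel 4, stride 2, padding 1 exactly doubles the spatial resolution, undoing one stride-2 downsampling step.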

    Deep Learning Approaches for Seagrass Detection in Multispectral Imagery

    Seagrass forms the basis of critically important marine ecosystems and plays an important role in balancing marine ecological systems, so monitoring its distribution in different parts of the world is of great interest. Remote sensing imagery is considered an effective data modality with which seagrass monitoring and quantification can be performed remotely. Traditionally, researchers used multispectral satellite images to map seagrass manually. Automatic machine learning techniques, especially deep learning algorithms, have recently achieved state-of-the-art performance in many computer vision applications. This dissertation presents a set of deep learning models for seagrass detection in multispectral satellite images, and introduces novel domain adaptation approaches to adapt the models to new locations and to temporal image series. In Chapter 3, I compare a deep capsule network (DCN) with a deep convolutional neural network (DCNN) for seagrass detection in high-resolution multispectral satellite images. These methods are tested on three satellite images of Florida coastal areas and obtain comparable performances. In addition, I propose a few-shot deep learning strategy to transfer knowledge learned by the DCN from one location to others for seagrass detection. In Chapter 4, I develop a semi-supervised domain adaptation method to generalize a trained DCNN model to multiple locations for seagrass detection. First, the method utilizes a generative adversarial network (GAN) to align the marginal distribution of data in the source domain to that in the target domain using unlabeled data from both domains. Second, it uses a few labeled samples from the target domain to align class-specific data distributions between the two. The model achieves the best results in 28 out of 36 scenarios compared with other state-of-the-art domain adaptation methods. In Chapter 5, I develop a semantic segmentation method for seagrass detection in multispectral time-series images. First, I train a state-of-the-art image segmentation method using an active learning approach with the DCNN classifier in the loop. Then, I develop an unsupervised domain adaptation (UDA) algorithm to detect seagrass across temporal images, and extend this unsupervised domain adaptation work to seagrass detection across locations. In Chapter 6, I present an automated bathymetry estimation model based on multispectral satellite images. Bathymetry refers to the depth of the ocean floor and plays a predominant role in identifying marine species in seawater. Accurate bathymetry information for coastal areas will facilitate seagrass detection by reducing false positives, because seagrass usually does not grow beyond a certain depth. However, bathymetry information for most parts of the world is obsolete or missing, and traditional bathymetry measurement systems require extensive labor. I utilize an ensemble machine learning approach to estimate bathymetry from a few in-situ sonar measurements and evaluate the proposed model at three coastal locations in Florida.
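    As an illustration of the bathymetry step, a generic ensemble regressor trained on sparse sonar depths might look like the following sketch; the dissertation's actual ensemble and features are not specified in the abstract, and the arrays below are synthetic placeholders:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((500, 8))                         # 8 spectral bands per pixel (synthetic)
    y = -20.0 * X[:, 0] + rng.normal(0.0, 0.5, 500)  # stand-in for sonar depths (m)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))  # skill on unseen sonar points
    ```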

    Automatic methods for crop classification by merging satellite radar (Sentinel-1) and optical (Sentinel-2) data and artificial intelligence analysis

    Land use and land cover maps can support our understanding of coupled human-environment systems and provide important information for environmental modelling and water resource management. Satellite data are a valuable source for land use and land cover mapping; however, cloud-free or weather-independent data are necessary to map cloud-prone regions, and merging radar with optical images increases classification accuracy. Agricultural land cover is characterized by strong variations within relatively short time intervals. These dynamics are challenging for land cover classification on the one hand, but deliver crucial information that can improve a machine learning classifier's performance on the other. A parcel-based map of the main crop classes of the Netherlands was produced by implementing a script on Google Earth Engine (GEE) using Copernicus data. The machine learning model used is a Random Forest classifier, fed with combined time series of radar and multispectral images from the Sentinel-1 and Sentinel-2 satellites, respectively. The results show the potential of useful information delivered by entirely open-source data and a cloud-computing-based approach. The algorithm combines one year of data from the two satellites into a multiband image to feed the classifier. The standard deviation and several vegetation indices were added in order to have more variables for each 15-day median image composite. The process paid particular attention to the time variability of the mean values of each field, which provides useful information both for distinguishing crops and for tracking variability over the phenology of the plant. The accuracy assessment demonstrates that several crop types (e.g., corn, tulip) are better classified with both radar and optical images, while others (e.g., sugar beet, barley) reach higher accuracy with radar alone. The overall accuracy of the Random Forest classifier with optical and radar is 76%, versus 74% when only radar is used.
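    A condensed Google Earth Engine (Python API) sketch of this kind of workflow, showing a single composite period rather than the thesis's full year of 15-day composites; the parcel asset path, date window, and area of interest are hypothetical:

    ```python
    import ee
    ee.Initialize()

    aoi = ee.Geometry.Rectangle([4.5, 52.0, 5.5, 52.7])          # placeholder AOI
    parcels = ee.FeatureCollection('users/example/crop_parcels')  # hypothetical asset with a 'crop' property

    # One 15-day median composite each for Sentinel-2 (optical) and Sentinel-1 (radar).
    s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
          .filterBounds(aoi).filterDate('2019-05-01', '2019-05-16')
          .median().select(['B2', 'B3', 'B4', 'B8']))
    s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
          .filterBounds(aoi).filterDate('2019-05-01', '2019-05-16')
          .filter(ee.Filter.eq('instrumentMode', 'IW'))
          .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
          .median().select(['VV', 'VH']))

    ndvi = s2.normalizedDifference(['B8', 'B4']).rename('NDVI')
    stack = s2.addBands(s1).addBands(ndvi)       # one multiband composite

    training = stack.sampleRegions(collection=parcels, properties=['crop'], scale=10)
    clf = ee.Classifier.smileRandomForest(200).train(
        features=training, classProperty='crop', inputProperties=stack.bandNames())
    classified = stack.classify(clf)             # per-pixel crop map
    ```

    In the full workflow each 15-day composite would contribute its own set of bands (plus per-field statistics) to the stacked image before training.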

    Continental-scale land cover mapping at 10 m resolution over Europe (ELC10)

    Widely used European land cover maps such as CORINE are produced at medium spatial resolution (100 m) and rely on diverse data with complex workflows requiring significant institutional capacity. We present a high-resolution (10 m) land cover map (ELC10) of Europe based on a satellite-driven machine learning workflow that is annually updatable. A Random Forest classification model was trained on 70K ground-truth points from the LUCAS (Land Use/Cover Area frame Survey) dataset. Within the Google Earth Engine cloud computing environment, the ELC10 map can be generated from approx. 700 TB of Sentinel imagery within approx. 4 days from a single research user account. The map achieved an overall accuracy of 90% across 8 land cover classes and could account for statistical unit land cover proportions within 3.9% (R² = 0.83) of the actual value. These accuracies are higher than those of CORINE (100 m) and other 10 m land cover maps, including S2GLC and FROM-GLC10. We found that atmospheric correction of Sentinel-2 and speckle filtering of Sentinel-1 imagery had minimal effect on classification accuracy (< 1%). However, combining optical and radar imagery increased accuracy by 3% compared with Sentinel-2 alone and by 10% compared with Sentinel-1 alone. The conversion of LUCAS points into homogeneous polygons under the Copernicus module increased accuracy by < 1%, revealing that Random Forests are robust against contaminated training data. Furthermore, the model requires very little training data to achieve moderate accuracies: the difference between 5K and 50K LUCAS points is only 3% (86% vs. 89%). At 10 m resolution, the ELC10 map can distinguish detailed landscape features like hedgerows and gardens, and therefore holds potential for areal statistics at the city borough level and for monitoring property-level environmental interventions (e.g. tree planting).
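    For context, the standard accuracy-assessment computations behind figures like the 90% overall accuracy reported above can be sketched as follows; the label arrays are synthetic stand-ins for real validation points:

    ```python
    import numpy as np
    from sklearn.metrics import accuracy_score, confusion_matrix

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 8, 1000)            # 8 classes, synthetic reference labels
    y_pred = np.where(rng.random(1000) < 0.9, y_true, rng.integers(0, 8, 1000))

    oa = accuracy_score(y_true, y_pred)          # overall accuracy
    cm = confusion_matrix(y_true, y_pred)        # rows: reference, cols: predicted
    producers = np.diag(cm) / cm.sum(axis=1)     # producer's accuracy (recall)
    users = np.diag(cm) / cm.sum(axis=0)         # user's accuracy (precision)
    ```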