
    Explaining Multimodal Data Fusion: Occlusion Analysis for Wilderness Mapping

    Jointly harnessing complementary features of multi-modal input data in a common latent space has long been known to be beneficial. However, the influence of each modality on the model's decision remains a puzzle. This study proposes a deep learning framework for the modality-level interpretation of multimodal earth observation data in an end-to-end fashion. Leveraging an explainable machine learning method, namely Occlusion Sensitivity, the proposed framework investigates the influence of modalities under an early-fusion scenario in which the modalities are fused before the learning process. We show that the task of wilderness mapping benefits substantially from auxiliary data such as land cover and nighttime light data.
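    As a minimal sketch of modality-level occlusion sensitivity under early fusion, assume a trained PyTorch model that takes a single tensor whose channels stack all modalities; the channel grouping and the zero baseline below are illustrative assumptions, not details from the paper.

        import torch

        def modality_occlusion(model, x, modality_channels, baseline=0.0):
            """Score each modality by the output change when its channels are occluded."""
            model.eval()
            with torch.no_grad():
                reference = model(x)  # prediction on the intact early-fused input
                scores = {}
                for name, channels in modality_channels.items():
                    occluded = x.clone()
                    occluded[:, channels, :, :] = baseline  # mask one modality's channels
                    scores[name] = (reference - model(occluded)).abs().mean().item()
            return scores

        # Hypothetical channel layout: 0-3 optical, 4 land cover, 5 nighttime light
        # scores = modality_occlusion(model, batch,
        #                             {"optical": [0, 1, 2, 3], "landcover": [4], "ntl": [5]})

    A large score means that masking that modality changes the prediction strongly, i.e. the model relies on it.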

    Land use and land cover mapping using deep learning based segmentation approaches and VHR Worldview-3 images

    Deep learning-based segmentation of very high-resolution (VHR) satellite images is a significant task providing valuable information for various geospatial applications, specifically for land use/land cover (LULC) mapping. The segmentation task becomes more challenging with the increasing number and complexity of LULC classes. In this research, we generated a new benchmark dataset from VHR Worldview-3 images for twelve distinct LULC classes of two different geographical locations. We evaluated the performance of different segmentation architectures and encoders to find the best design for creating highly accurate LULC maps. Our results showed that the DeepLabv3+ architecture with a ResNeXt50 encoder achieved the best performance across metrics, with an IoU of 89.46%, an F-1 score of 94.35%, a precision of 94.25%, and a recall of 94.49%. This design could be used by other researchers for LULC mapping of similar classes from different satellite images or for different geographical regions. Moreover, our benchmark dataset can be used as a reference for implementing new segmentation models via supervised, semi-supervised, or weakly supervised deep learning. In addition, our trained models can serve as a starting point for transfer learning and for assessing the generalizability of different methodologies.
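    The reported best design can be instantiated, for example, with the segmentation_models_pytorch library; using that library is an assumption here, since the paper's exact implementation is not given, and the input channel count depends on which WorldView-3 bands are used.

        import segmentation_models_pytorch as smp

        model = smp.DeepLabV3Plus(
            encoder_name="resnext50_32x4d",  # ResNeXt50 encoder
            encoder_weights="imagenet",      # ImageNet pre-trained weights
            in_channels=3,                   # e.g. RGB; WorldView-3 provides more bands
            classes=12,                      # twelve LULC classes in the benchmark
        )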

    Deep neural network ensembles for remote sensing land cover and land use classification

    With the advancement of satellite technology, a considerable amount of very high-resolution imagery has become available for the Land Cover and Land Use (LCLU) classification task, which aims to categorize remotely sensed images based on their semantic content. Recently, Deep Neural Networks (DNNs) have been widely used for different applications in the field of remote sensing, with profound impact; however, their generalizability and robustness must be improved further to achieve higher accuracy across a variety of sensing geometries and categories. We address this problem by deploying three different Deep Neural Network Ensemble (DNNE) methods and presenting a comparative analysis for the LCLU classification task. DNNE improves the performance of DNNs by ensuring the diversity of the combined models, thus enhancing generalizability and producing more robust outcomes for LCLU classification tasks. The experimental results on the NWPU-RESISC45 and AID datasets demonstrate that utilizing the aggregated information from multiple DNNs increases classification performance, achieves state-of-the-art results, and encourages researchers to make use of DNNE methods.
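    One common DNNE strategy consistent with this description is averaging the softmax outputs of independently trained networks; the sketch below assumes PyTorch classifiers and is illustrative rather than the paper's exact method.

        import torch
        import torch.nn.functional as F

        def ensemble_predict(models, x):
            """Average class probabilities over diverse trained models, then take the argmax."""
            with torch.no_grad():
                probs = [F.softmax(m(x), dim=1) for m in models]  # one (B, num_classes) tensor each
            return torch.stack(probs).mean(dim=0).argmax(dim=1)   # ensemble class prediction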