
    Fusion of Hyperspectral and LiDAR Data Using Sparse and Low-Rank Component Analysis

    The availability of diverse data captured over the same region makes it possible to develop multisensor data fusion techniques that further improve the discrimination ability of classifiers. In this paper, a new sparse and low-rank technique is proposed for the fusion of hyperspectral and light detection and ranging (LiDAR)-derived features. The proposed fusion technique consists of two main steps. First, extinction profiles are used to extract spatial and elevation information from the hyperspectral and LiDAR data, respectively. Then, the sparse and low-rank technique is used to estimate low-rank fused features from the extracted ones, which are eventually used to produce the final classification map. The proposed approach is evaluated on an urban data set captured over Houston, USA, and a rural one captured over Trento, Italy. Experimental results confirm that the proposed fusion technique outperforms the other techniques considered in the experiments in terms of the classification accuracies obtained with random forest and support vector machine classifiers. Moreover, the proposed approach can effectively classify joint LiDAR and hyperspectral data in the ill-posed situation where only a limited number of training samples is available.
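    The sparse-plus-low-rank step lends itself to a short illustration. Below is a minimal sketch assuming a robust-PCA-style principal component pursuit, solved by alternating singular-value thresholding and elementwise soft-thresholding; this is not necessarily the authors' exact solver, and all array shapes, names, and parameters are illustrative.

```python
import numpy as np

def sparse_lowrank_split(X, lam=None, mu=None, n_iter=200, tol=1e-6):
    """Principal-component-pursuit-style split X ~ L (low-rank) + S (sparse)."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))          # common default
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(X).sum() + 1e-12)
    shrink = lambda M, t: np.sign(M) * np.maximum(np.abs(M) - t, 0.0)   # soft-threshold
    S = np.zeros_like(X)
    Y = np.zeros_like(X)  # Lagrange multiplier
    for _ in range(n_iter):
        # Singular-value thresholding gives the low-rank estimate
        U, s, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt
        # Elementwise soft-thresholding gives the sparse residual
        S = shrink(X - L + Y / mu, lam / mu)
        Y += mu * (X - L - S)
        if np.linalg.norm(X - L - S) <= tol * np.linalg.norm(X):
            break
    return L, S

# Stand-ins for per-pixel extinction-profile features (names are illustrative)
rng = np.random.default_rng(0)
hsi_feats = rng.random((500, 40))    # spatial/spectral features from hyperspectral data
lidar_feats = rng.random((500, 10))  # elevation features from LiDAR data
X = np.hstack([hsi_feats, lidar_feats])
L, S = sparse_lowrank_split(X)       # L: fused low-rank features for an RF/SVM classifier
```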

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of the processing approaches for the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths in each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across different research communities and provides a thorough, discipline-specific starting point, with sufficient detail and references, for researchers at different levels (i.e., students, researchers, and senior researchers) wishing to conduct novel investigations on this challenging topic.
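    The move from 2D/3D representations to a 4D data structure can be made concrete in a few lines of NumPy. This is only an illustrative sketch of the idea described above; the shapes and the assumption of co-registered acquisitions are mine, not the paper's.

```python
import numpy as np

n_times, n_bands, rows, cols = 6, 4, 128, 128
# Stand-ins for co-registered multitemporal, multiband acquisitions
scenes = [np.random.rand(n_bands, rows, cols) for _ in range(n_times)]
cube = np.stack(scenes, axis=0)     # 4D structure: (time, band, row, col)

pixel_series = cube[:, :, 64, 64]   # spectral evolution of one pixel over time
temporal_mean = cube.mean(axis=0)   # collapse time -> a 3D spatial-spectral composite
```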

    Classification of Hyperspectral and LiDAR Data Using Coupled CNNs

    In this paper, we propose an efficient and effective framework to fuse hyperspectral and Light Detection And Ranging (LiDAR) data using two coupled convolutional neural networks (CNNs). One CNN is designed to learn spectral-spatial features from hyperspectral data, and the other is used to capture the elevation information from LiDAR data. Both consist of three convolutional layers, and the last two convolutional layers are coupled together via a parameter-sharing strategy. In the fusion phase, feature-level and decision-level fusion methods are used simultaneously to fully integrate these heterogeneous features. For the feature-level fusion, three different fusion strategies are evaluated: concatenation, maximization, and summation. For the decision-level fusion, a weighted summation strategy is adopted, where the weights are determined by the classification accuracy of each output. The proposed model is evaluated on an urban data set acquired over Houston, USA, and a rural one captured over Trento, Italy. On the Houston data, our model achieves a new record overall accuracy of 96.03%; on the Trento data, it achieves an overall accuracy of 99.12%. These results confirm the effectiveness of the proposed model.
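    A hedged PyTorch sketch of the coupling described above: two three-layer CNN branches whose last two convolutional layers share parameters, with concatenation-based feature-level fusion and a weighted decision-level summation. Layer widths, patch sizes, band counts, and the fusion weights are assumptions, not the paper's exact settings (in the paper, the decision weights come from each output's classification accuracy).

```python
import torch
import torch.nn as nn

class CoupledCNNs(nn.Module):
    def __init__(self, hsi_bands=144, lidar_bands=1, n_classes=15, width=64):
        super().__init__()
        # Modality-specific first convolutional layers
        self.hsi_conv1 = nn.Sequential(nn.Conv2d(hsi_bands, width, 3, padding=1), nn.ReLU())
        self.lidar_conv1 = nn.Sequential(nn.Conv2d(lidar_bands, width, 3, padding=1), nn.ReLU())
        # Last two convolutional layers are shared (the parameter-sharing coupling)
        self.shared = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head_fused = nn.Linear(2 * width, n_classes)  # feature-level (concatenation)
        self.head_hsi = nn.Linear(width, n_classes)        # per-branch outputs feeding
        self.head_lidar = nn.Linear(width, n_classes)      # the decision-level fusion

    def forward(self, hsi_patch, lidar_patch, w=(0.4, 0.3, 0.3)):
        f_h = self.shared(self.hsi_conv1(hsi_patch))
        f_l = self.shared(self.lidar_conv1(lidar_patch))
        logits_fused = self.head_fused(torch.cat([f_h, f_l], dim=1))
        # Weighted decision-level summation (dummy weights here)
        return w[0] * logits_fused + w[1] * self.head_hsi(f_h) + w[2] * self.head_lidar(f_l)

model = CoupledCNNs()
out = model(torch.randn(8, 144, 11, 11), torch.randn(8, 1, 11, 11))  # -> (8, 15) class scores
```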

    More Diverse Means Better: Multimodal Deep Learning Meets Remote Sensing Imagery Classification

    Classification and identification of the materials lying over or beneath the Earth's surface have long been a fundamental but challenging research topic in geoscience and remote sensing (RS) and have garnered growing attention owing to recent advances in deep learning techniques. Although deep networks have been successfully applied in single-modality-dominated classification tasks, their performance inevitably hits a bottleneck in complex scenes that need to be finely classified, due to limited information diversity. In this work, we provide a baseline solution to this difficulty by developing a general multimodal deep learning (MDL) framework. In particular, we also investigate a special case of multi-modality learning (MML) -- cross-modality learning (CML) -- that arises widely in RS image classification applications. By focusing on "what", "where", and "how" to fuse, we show different fusion strategies as well as how to train deep networks and build the network architecture. Specifically, five fusion architectures are introduced and developed, and further unified in our MDL framework. More significantly, our framework is not limited to pixel-wise classification tasks but is also applicable to spatial information modeling with convolutional neural networks (CNNs). To validate the effectiveness and superiority of the MDL framework, extensive experiments under the MML and CML settings are conducted on two different multimodal RS datasets. Furthermore, the codes and datasets will be available at https://github.com/danfenghong/IEEE_TGRS_MDL-RS, contributing to the RS community.
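    The cross-modality learning (CML) setting can be sketched compactly: if two encoders are trained to map their respective modalities into one shared feature space, a classifier trained in that space can still operate when only one modality is available at test time. The sketch below is a minimal illustration under that assumption, not the architecture from the linked repository; all dimensions and the alignment loss are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_hsi, d_lidar, d_shared, n_classes = 144, 21, 64, 15
enc_hsi = nn.Sequential(nn.Linear(d_hsi, d_shared), nn.ReLU(), nn.Linear(d_shared, d_shared))
enc_lidar = nn.Sequential(nn.Linear(d_lidar, d_shared), nn.ReLU(), nn.Linear(d_shared, d_shared))
clf = nn.Linear(d_shared, n_classes)  # one classifier head shared by both modalities

opt = torch.optim.Adam([*enc_hsi.parameters(), *enc_lidar.parameters(), *clf.parameters()], lr=1e-3)
x_h, x_l = torch.randn(32, d_hsi), torch.randn(32, d_lidar)  # paired training pixels
y = torch.randint(0, n_classes, (32,))

z_h, z_l = enc_hsi(x_h), enc_lidar(x_l)
loss = (F.cross_entropy(clf(z_h), y)   # both modalities classified ...
        + F.cross_entropy(clf(z_l), y) # ... through the same head
        + F.mse_loss(z_h, z_l))        # alignment ties the two feature spaces together
opt.zero_grad(); loss.backward(); opt.step()

# At test time, a pixel with only LiDAR features can still be classified:
pred = clf(enc_lidar(torch.randn(1, d_lidar))).argmax(dim=1)
```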

    Remote Sensing

    This dual conception of remote sensing led us to the idea of preparing two different books: in addition to the first book, which presents recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.

    Deep learning for fusion of APEX hyperspectral and full-waveform LiDAR remote sensing data for tree species mapping

    Deep learning has been widely used to fuse multi-sensor data for classification. However, current deep learning architectures for multi-sensor data fusion might not always perform better than a single data source, especially for the fusion of hyperspectral and light detection and ranging (LiDAR) remote sensing data for tree species mapping in complex, closed forest canopies. In this paper, we propose a new deep fusion framework to integrate the complementary information of hyperspectral and LiDAR data for tree species mapping. We also investigate the fusion of either "single-band" or multi-band (i.e., full-waveform) LiDAR with hyperspectral data for tree species mapping. Additionally, we provide a solution for estimating the crown size of tree species through the fusion of multi-sensor data. Experimental results on fusing real APEX hyperspectral and LiDAR data demonstrate the effectiveness of the proposed deep fusion framework. Compared to using only a single data source or the current deep fusion architecture, our proposed method improves the overall classification accuracy from 82.21% to 87.10% and the average classification accuracy from 76.71% to 83.45%.
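    The single-source versus fusion comparison discussed above can be prototyped cheaply before training a deep network. The sketch below uses synthetic stand-in features and a random-forest baseline from scikit-learn; it is not the paper's deep fusion framework, and every shape, band count, and label here is fabricated for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_spec, n_wave = 2000, 285, 61  # pixels, hyperspectral bands, waveform bins (assumed)
y = rng.integers(0, 5, n)          # stand-in tree-species labels
# Synthetic features with a weak class signal, so the comparison runs end to end
hsi = rng.random((n, n_spec)) + y[:, None] * 0.05
wave = rng.random((n, n_wave)) + y[:, None] * 0.03

for name, X in [("HSI only", hsi), ("LiDAR only", wave),
                ("stacked fusion", np.hstack([hsi, wave]))]:
    acc = cross_val_score(RandomForestClassifier(200, random_state=0), X, y, cv=3).mean()
    print(f"{name}: {acc:.3f}")
```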
