3 research outputs found

    Learning transformer-based heterogeneously salient graph representation for multimodal fusion classification of hyperspectral image and LiDAR data

    Data collected by different modalities can provide a wealth of complementary information: hyperspectral images (HSI) offer rich spectral-spatial properties, synthetic aperture radar (SAR) provides structural information about the Earth's surface, and light detection and ranging (LiDAR) covers altitude information about ground elevation. A natural idea, therefore, is to combine multimodal images for refined and accurate land-cover interpretation. Although many efforts have been made to achieve multi-source remote sensing image classification, three issues remain: 1) indiscriminate feature representation that does not sufficiently consider modal heterogeneity, 2) abundant features and complex computations associated with modeling long-range dependencies, and 3) overfitting caused by sparsely labeled samples. To overcome these barriers, a transformer-based heterogeneously salient graph representation (THSGR) approach is proposed in this paper. First, a multimodal heterogeneous graph encoder is presented to encode distinctive non-Euclidean structural features from heterogeneous data. Then, a self-attention-free multi-convolutional modulator is designed for effective and efficient long-term dependency modeling. Finally, a mean forward module is introduced to avoid overfitting. Based on these structures, the proposed model is able to bridge modal gaps and obtain differentiated graph representations at competitive time cost, even with a small fraction of training samples. Experiments and analyses on three benchmark datasets against various state-of-the-art (SOTA) methods demonstrate the performance of the proposed approach.
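    As a rough illustration only (this is not the authors' released THSGR code), the sketch below shows the general shape such a pipeline could take: per-modality graph encoders aggregate non-Euclidean neighborhood structure, and a self-attention-free stack of dilated depthwise convolutions mixes long-range context before a fused classification head. All module names, shapes, and the dilation schedule are assumptions made for this example.

```python
import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    """Modality-specific graph encoder: one step of neighborhood
    aggregation (adj @ X @ W) over a precomputed adjacency matrix."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) row-normalized adjacency
        return torch.relu(adj @ self.proj(x))

class ConvModulator(nn.Module):
    """Self-attention-free long-range mixing: residual depthwise 1-D
    convolutions with growing dilation, standing in for the paper's
    multi-convolutional modulator."""
    def __init__(self, dim):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(dim, dim, kernel_size=3, padding=d, dilation=d, groups=dim)
            for d in (1, 2, 4)  # assumed dilation schedule
        ])
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        h = x.t().unsqueeze(0)              # (N, dim) -> (1, dim, N)
        for conv in self.convs:
            h = h + torch.relu(conv(h))     # residual dilated mixing
        return self.norm(h.squeeze(0).t())  # back to (N, dim)

class FusionClassifier(nn.Module):
    """Encode each modality as a graph, concatenate, modulate, classify."""
    def __init__(self, hsi_dim, lidar_dim, hid_dim, n_classes):
        super().__init__()
        self.enc_hsi = GraphEncoder(hsi_dim, hid_dim)
        self.enc_lidar = GraphEncoder(lidar_dim, hid_dim)
        self.modulator = ConvModulator(2 * hid_dim)
        self.head = nn.Linear(2 * hid_dim, n_classes)

    def forward(self, x_hsi, adj_hsi, x_lidar, adj_lidar):
        z = torch.cat([self.enc_hsi(x_hsi, adj_hsi),
                       self.enc_lidar(x_lidar, adj_lidar)], dim=-1)
        return self.head(self.modulator(z))

# Toy usage: 64 superpixel nodes, 100 HSI bands, 1 LiDAR elevation channel.
# An identity adjacency stands in for a real k-NN or superpixel graph.
N = 64
adj = torch.eye(N)
model = FusionClassifier(hsi_dim=100, lidar_dim=1, hid_dim=32, n_classes=6)
logits = model(torch.randn(N, 100), adj, torch.randn(N, 1), adj)
print(logits.shape)  # torch.Size([64, 6])
```

    A real implementation would build the adjacency from spatial or spectral neighborhoods per modality; the convolutional modulator avoids the quadratic cost of self-attention, which is the efficiency point the abstract makes.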

    Feature extraction and fusion for classification of remote sensing imagery


    Classification of cloudy hyperspectral image and LiDAR data based on feature fusion and decision fusion

    Hyperspectral and LiDAR data can provide plentiful information about objects on the Earth's surface. However, each has its shortcomings: hyperspectral sensors are easily affected by clouds and struggle to distinguish different objects composed of the same materials, while LiDAR cannot discriminate between different objects of similar altitude. Fusing these multi-source data for reliable classification attracts increasing interest but remains challenging. In this paper, we propose a new framework to fuse multi-source data for classification. The proposed method comprises three main steps: 1) cloud shadow extraction; 2) feature fusion of the spectral and spatial information extracted from the hyperspectral image with the elevation information extracted from the LiDAR data; 3) decision fusion over cloud and non-cloud regions. Experimental results on real HSI and LiDAR data demonstrate the effectiveness of the proposed method both visually and quantitatively.
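    A minimal sketch of the decision-fusion idea, under assumptions of my own (the function name, the per-pixel classifier interface, and the LiDAR-only fallback rule are hypothetical, not the paper's exact scheme): fused HSI+LiDAR features drive classification in clear regions, while cloud-shadowed pixels, whose spectra are unreliable, fall back on a LiDAR-only classifier.

```python
import numpy as np

def classify_with_cloud_decision_fusion(hsi, lidar, cloud_mask,
                                        clf_fused, clf_lidar):
    """Decision fusion over cloud and non-cloud regions.

    hsi:        (H, W, B) hyperspectral cube
    lidar:      (H, W) elevation raster
    cloud_mask: (H, W) boolean, True where cloud shadow was extracted
    clf_*:      fitted scikit-learn-style classifiers with .predict()
    """
    H, W, B = hsi.shape
    # Feature fusion: stack each pixel's spectrum with its elevation.
    fused = np.concatenate([hsi.reshape(-1, B),
                            lidar.reshape(-1, 1)], axis=1)
    labels = clf_fused.predict(fused).reshape(H, W)
    # Decision fusion: overwrite cloud-affected pixels with LiDAR-only labels.
    shadowed = cloud_mask.reshape(-1)
    if shadowed.any():
        lidar_labels = clf_lidar.predict(lidar.reshape(-1, 1)[shadowed])
        labels.reshape(-1)[shadowed] = lidar_labels
    return labels
```

    With scikit-learn, clf_fused and clf_lidar could be, for example, two RandomForestClassifier instances, one fitted on fused features from clear-sky training pixels and one on elevation alone, so that each region of the scene is labeled by the classifier whose inputs are trustworthy there.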