    Feature extraction and fusion for classification of remote sensing imagery


    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets jointly to further improve the performance of processing approaches for the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide across a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from 2D/3D data representations to 4D data structures in which the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have evolved along different paths in each research community. This paper brings together the advances in multisource and multitemporal data fusion approaches across these research communities and provides a thorough, discipline-specific starting point for researchers at all levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, by supplying sufficient detail and references.
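
    To make the 2D/3D-to-4D shift concrete, the sketch below lays out a multitemporal data cube as a NumPy array; the shapes, axis order, and synthetic values are our own illustration, not anything from the paper.

```python
import numpy as np

# Hypothetical multitemporal data cube: (time, band, height, width).
# Values are synthetic; a real stack would come from co-registered acquisitions.
n_dates, n_bands, h, w = 4, 6, 128, 128
cube = np.random.rand(n_dates, n_bands, h, w).astype(np.float32)

# A single date is a conventional 3D image cube ...
scene_t0 = cube[0]                      # shape (6, 128, 128)

# ... while the per-pixel view gains a temporal axis, so each pixel
# carries a (time x band) matrix instead of a single spectrum.
pixel_trajectory = cube[:, :, 64, 64]   # shape (4, 6)
print(scene_t0.shape, pixel_trajectory.shape)
```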

    Historical forest biomass dynamics modelled with Landsat spectral trajectories

    Acknowledgements: National Forest Inventory data are available online, provided by the Ministerio de Agricultura, Alimentación y Medio Ambiente (España). Landsat images are available online, provided by the USGS.

    Multi-source hierarchical conditional random field model for feature fusion of remote sensing images and LiDAR data

    Feature fusion of remote sensing images and LiDAR point cloud data, which are strongly complementary, can effectively exploit the advantages of multiple feature classes and provide more reliable information for remote sensing applications such as object classification and recognition. In this paper, we introduce a novel multi-source hierarchical conditional random field (MSHCRF) model to fuse features extracted from remote sensing images and LiDAR data for image classification. First, typical features are selected to obtain regions of interest from the multi-source data; then the MSHCRF model is constructed to exploit the features, the category compatibility within images, and the category consistency across the multi-source data based on these regions, and the output of the model represents the optimal image classification result. Competitive results demonstrate the precision and robustness of the proposed method.
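
    The MSHCRF implementation itself is not reproduced here; as a rough illustration of the underlying idea, combining per-source unary terms with a smoothness prior and inferring labels, the following minimal sketch fuses two synthetic unary cost volumes under a Potts pairwise model with ICM inference. All names, weights, and the inference scheme are assumptions for illustration, not the paper's hierarchical model.

```python
import numpy as np

def icm_fusion(unary_img, unary_lidar, beta=1.0, alpha=0.5, n_iter=5):
    """Minimal ICM inference on a grid CRF that fuses two unary terms.

    unary_img, unary_lidar: (H, W, K) per-pixel class costs from the
    image and LiDAR branches (lower is better). beta weighs the Potts
    smoothness prior; alpha balances the two sources. Illustrative only.
    """
    unary = alpha * unary_img + (1.0 - alpha) * unary_lidar
    H, W, K = unary.shape
    labels = unary.argmin(axis=2)            # independent-pixel init
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # Potts pairwise cost: penalize disagreement with 4-neighbours.
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        cost += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels

# Toy usage with random costs for K = 3 classes on a 32x32 grid.
rng = np.random.default_rng(0)
fused = icm_fusion(rng.random((32, 32, 3)), rng.random((32, 32, 3)))
```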

    Fusion of Hyperspectral and LiDAR Data Using Sparse and Low-Rank Component Analysis

    The availability of diverse data captured over the same region makes it possible to develop multisensor data fusion techniques that further improve the discrimination ability of classifiers. In this paper, a new sparse and low-rank technique is proposed for the fusion of hyperspectral and light detection and ranging (LiDAR)-derived features. The proposed fusion technique consists of two main steps. First, extinction profiles are used to extract spatial and elevation information from the hyperspectral and LiDAR data, respectively. Then, the sparse and low-rank technique is utilized to estimate low-rank fused features from the extracted ones, which are eventually used to produce a final classification map. The proposed approach is evaluated on an urban data set captured over Houston, USA, and a rural one captured over Trento, Italy. Experimental results confirm that the proposed fusion technique outperforms the other techniques used in the experiments in terms of the classification accuracies obtained by random forest and support vector machine classifiers. Moreover, the proposed approach can effectively classify joint LiDAR and hyperspectral data in the ill-posed situation where only a limited number of training samples is available.
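
    As a rough sketch of the low-rank-plus-sparse idea (not the authors' exact solver), the following decomposes a stacked feature matrix with a generic alternating proximal scheme: singular-value thresholding for the low-rank part and soft thresholding for the sparse part. Matrix shapes and parameters are illustrative assumptions.

```python
import numpy as np

def low_rank_sparse(X, lam=0.1, tau=1.0, n_iter=50):
    """Split a stacked feature matrix X into low-rank L plus sparse S.

    Generic alternating proximal scheme (singular-value thresholding for
    L, soft thresholding for S); illustrative, not the paper's solver.
    """
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # Low-rank step: shrink the singular values of the residual.
        U, sig, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U * np.maximum(sig - tau, 0.0)) @ Vt
        # Sparse step: entrywise soft thresholding of the residual.
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

# Toy usage: rows = stacked hyperspectral + LiDAR features (synthetic here),
# columns = pixels; the low-rank part L would feed a classifier.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 8)) @ rng.standard_normal((8, 500))  # low rank
X[rng.random(X.shape) < 0.02] += 5.0                              # sparse noise
L, S = low_rank_sparse(X)
```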

    EARLINET evaluation of the CATS Level 2 aerosol backscatter coefficient product

    We present the evaluation activity of the European Aerosol Research Lidar Network (EARLINET) for the quantitative assessment of the Level 2 aerosol backscatter coefficient product derived by the Cloud-Aerosol Transport System (CATS) aboard the International Space Station (ISS; Rodier et al., 2015). The study employs correlative CATS and EARLINET backscatter measurements within a 50 km distance between the ground station and the ISS overpass and as close in time as possible, typically with the start or stop time of the EARLINET measurement window within 90 min of the ISS overpass, for the period from February 2015 to September 2016. The results demonstrate good agreement between the CATS Level 2 backscatter coefficient and EARLINET. Three ISS overpasses close to the EARLINET stations of Leipzig, Germany; Évora, Portugal; and Dushanbe, Tajikistan, are analyzed here to demonstrate the performance of the CATS lidar system under different conditions. The results show that under cloud-free, relatively homogeneous aerosol conditions, CATS is in good agreement with EARLINET, independent of daytime and nighttime conditions. Low negative biases are observed for CATS, partially attributed to the inability of lidar systems to detect tenuous aerosol layers whose backscatter signal falls below the minimum detection thresholds; these biases may lead to systematic deviations and slight underestimations of the total aerosol optical depth (AOD) in climate studies. In addition, CATS misclassification of aerosol layers as clouds, and vice versa, in cases of coexistent and/or adjacent aerosol and cloud features occasionally leads to non-representative, unrealistic, and cloud-contaminated aerosol profiles. Regarding solar illumination conditions, low negative biases in the CATS backscatter coefficient profiles, of the order of 6.1 %, indicate the good nighttime performance of CATS. During daytime, the signal-to-noise ratio, reduced by solar background illumination, prevents retrievals of weakly scattering atmospheric layers that would otherwise be detectable during nighttime, leading to higher negative biases, of the order of 22.3 %.
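
    As a minimal sketch of the kind of profile comparison behind such bias numbers, assuming two co-located backscatter profiles on altitude grids, one can interpolate the ground-based profile onto the satellite grid and average the relative difference. The function, height range, and synthetic profiles below are our own simplification, not the paper's evaluation protocol.

```python
import numpy as np

def relative_bias(z_sat, beta_sat, z_gnd, beta_gnd, z_min=0.5, z_max=8.0):
    """Mean relative bias (%) between two backscatter profiles.

    Interpolates the ground-based profile onto the satellite altitude
    grid and averages (sat - gnd) / gnd over a common height range.
    A simplified stand-in for the paper's comparison method.
    """
    mask = (z_sat >= z_min) & (z_sat <= z_max)
    gnd_on_sat = np.interp(z_sat[mask], z_gnd, beta_gnd)
    return 100.0 * np.mean((beta_sat[mask] - gnd_on_sat) / gnd_on_sat)

# Toy profiles: exponentially decaying backscatter, one slightly biased.
z = np.linspace(0.0, 10.0, 200)                           # altitude, km
beta_e = 2e-3 * np.exp(-z / 2.0)                          # EARLINET-like
beta_c = 0.94 * beta_e + 1e-6 * np.random.randn(z.size)   # CATS-like
print(f"bias = {relative_bias(z, beta_c, z, beta_e):.1f} %")
```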

    Mapping Chestnut Stands Using Bi-Temporal VHR Data

    This study analyzes the potential of very high resolution (VHR) remote sensing images and extended morphological profiles for mapping chestnut stands on Tenerife Island (Canary Islands, Spain). Given their relevance for ecosystem services in the region (cultural and provisioning services), the public sector demands up-to-date information on chestnut stands, and a simple, straightforward approach is presented in this study. We used two VHR WorldView images (March and May 2015) to cover different phenological phases. Moreover, we included spatial information in the classification process via extended morphological profiles (EMPs). Random forest is used for the classification process, and we analyzed the impact of the bi-temporal as well as the spatial information on the classification accuracies. The detailed accuracy assessment clearly reveals the benefit of bi-temporal VHR WorldView images and of spatial information, derived by EMPs, in terms of mapping accuracy. The bi-temporal classification outperforms, or at least performs as well as, the classifications achieved with the mono-temporal data. The inclusion of spatial information by EMPs further increases the classification accuracy by 5% and reduces the quantity and allocation disagreements in the final map. Overall, the proposed classification strategy proves useful for mapping chestnut stands in a heterogeneous and complex landscape, such as the municipality of La Orotava, Tenerife.
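
    Below is a minimal sketch of the spectral-spatial pipeline described above, using plain morphological profiles (openings and closings with growing structuring elements) as a simplified stand-in for the paper's extended profiles, bi-temporal feature stacking, and a scikit-learn random forest; band counts, radii, and labels are synthetic assumptions.

```python
import numpy as np
from skimage.morphology import disk, opening, closing
from sklearn.ensemble import RandomForestClassifier

def morphological_profile(band, radii=(1, 2, 4)):
    """Stack openings and closings of one band with growing disks.

    A plain morphological profile; the paper uses *extended* profiles,
    so treat this as a simplified stand-in.
    """
    feats = [band]
    for r in radii:
        se = disk(r)
        feats.append(opening(band, se))
        feats.append(closing(band, se))
    return np.stack(feats, axis=-1)

# Synthetic stand-ins for the March and May WorldView scenes (one band each).
rng = np.random.default_rng(0)
march, may = rng.random((64, 64)), rng.random((64, 64))

# Bi-temporal spectral-spatial feature stack, flattened to pixels x features.
X = np.concatenate(
    [morphological_profile(march), morphological_profile(may)], axis=-1
).reshape(-1, 2 * 7)
y = rng.integers(0, 2, X.shape[0])  # dummy labels; real ones come from reference data
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```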

    More Diverse Means Better: Multimodal Deep Learning Meets Remote Sensing Imagery Classification

    Classification and identification of the materials lying over or beneath the Earth's surface have long been a fundamental but challenging research topic in geoscience and remote sensing (RS) and have garnered growing interest owing to recent advances in deep learning. Although deep networks have been successfully applied in single-modality-dominated classification tasks, their performance inevitably hits a bottleneck in complex scenes that need to be finely classified, due to the limited diversity of information. In this work, we provide a baseline solution to this difficulty by developing a general multimodal deep learning (MDL) framework. In particular, we also investigate a special case of multi-modality learning (MML) -- cross-modality learning (CML) -- that exists widely in RS image classification applications. By focusing on "what", "where", and "how" to fuse, we show different fusion strategies as well as how to train deep networks and build the network architecture. Specifically, five fusion architectures are introduced and developed, and further unified within our MDL framework. More significantly, our framework is not limited to pixel-wise classification tasks but is also applicable to spatial information modeling with convolutional neural networks (CNNs). To validate the effectiveness and superiority of the MDL framework, extensive experiments covering the MML and CML settings are conducted on two different multimodal RS datasets. The codes and datasets will be available at https://github.com/danfenghong/IEEE_TGRS_MDL-RS, contributing to the RS community.
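
    To make one fusion strategy concrete, here is a minimal two-branch network with feature-level fusion by concatenation, written in PyTorch; the channel counts, layer choices, and names are our assumptions, and the paper's five actual architectures live at the linked repository.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Minimal two-branch CNN with feature-level (middle) fusion.

    One of many possible fusion designs; illustrative only, not the
    paper's exact architecture.
    """
    def __init__(self, c_hsi, c_lidar, n_classes):
        super().__init__()
        def branch(c_in):
            return nn.Sequential(
                nn.Conv2d(c_in, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.hsi, self.lidar = branch(c_hsi), branch(c_lidar)
        self.head = nn.Linear(128, n_classes)   # 64 + 64 fused features

    def forward(self, x_hsi, x_lidar):
        # Encode each modality separately, then fuse by concatenation.
        return self.head(torch.cat([self.hsi(x_hsi), self.lidar(x_lidar)], dim=1))

# Toy forward pass on random patches: 144-band HSI plus 1-band LiDAR DSM.
model = TwoBranchFusion(c_hsi=144, c_lidar=1, n_classes=15)
logits = model(torch.randn(2, 144, 7, 7), torch.randn(2, 1, 7, 7))
```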