Prototypical Unknown-Aware Multiview Consistency Learning for Open-Set Cross-Domain Remote Sensing Image Classification
Developing a cross-domain classification model for remote sensing images has drawn significant attention in the literature. By leveraging the open-set unsupervised domain adaptation (UDA) technique, the generalization performance of deep learning models has been improved with the capability to recognize unknown categories. However, it remains challenging to explore distribution patterns in the target domain using uncertain category-wise supervision from unlabeled datasets while reducing negative transfer caused by unknown samples. To develop a robust open-set UDA framework, this article presents prototypical unknown-aware multiview consistency learning (PUMCL) designed for remote sensing scene classification across heterogeneous domains. Specifically, it employs a consistency learning scheme with multiview and multilevel perturbations to improve feature learning from unlabeled target samples. An entropy separation strategy is utilized to facilitate open-set detection and recognition during adaptation, enabling unknown-aware feature alignment. Furthermore, the introduction of prototypical constraints optimizes pseudo-label generation through online denoising and promotes a compact category-wise feature subspace for improved class separation across domains. Experiments conducted on six cross-domain scenarios using AID, NWPU, and UCMD datasets demonstrate the method’s superior performance compared to nine state-of-the-art approaches, achieving a gain of 4.5% to 21.2% in mIoU. More importantly, it shows promising class separability with clear boundaries between different classes and compact clustering of unknown samples in the feature space. The source code will be available at https://github.com/zxk688
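The entropy separation strategy mentioned above can be illustrated with a minimal sketch: target samples whose predictions have low entropy are treated as confident known-class samples, high-entropy samples as likely unknowns, and the ambiguous band in between is left out of alignment. The function name and thresholds below are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_separation(logits, low=0.4, high=1.0):
    """Split unlabeled target samples by prediction entropy:
    confident 'known' (low entropy), likely 'unknown' (high entropy),
    and an ambiguous band that is excluded from feature alignment."""
    p = softmax(logits)
    h = -(p * np.log(p + 1e-12)).sum(axis=1)
    known = h < low
    unknown = h > high
    ambiguous = ~known & ~unknown
    return known, unknown, ambiguous
```

A near-one-hot prediction falls in the known set, while a near-uniform prediction (entropy close to ln of the class count) is flagged as unknown.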
Open Data for Global Multimodal Land Use Classification: Outcome of the 2017 IEEE GRSS Data Fusion Contest
In this paper, we present the scientific outcomes of the 2017 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society. The 2017 Contest addressed the problem of local climate zone classification based on a multitemporal and multimodal dataset, including image data (Landsat 8 and Sentinel-2) and vector data (from OpenStreetMap). The competition, which used separate geographical locations for training and testing the proposed solutions, sought models that were accurate (assessed by accuracy metrics on an undisclosed reference for the test cities), general (assessed by spreading the test cities across the globe), and computationally feasible (assessed by limiting the duration of the test phase). The techniques proposed by the participants in the Contest spanned a rather broad range of topics, mixing ideas and methodologies from computer vision and machine learning while remaining deeply rooted in the specificities of remote sensing. In particular, rigorous atmospheric correction, the use of multidate images, and the use of ensemble methods fusing results obtained from different data sources and time instants made the difference.
Data science in economics: Comprehensive review of advanced machine learning and deep learning methods
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This paper provides a comprehensive state-of-the-art investigation of recent advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning models, and ensemble models. Application domains include a broad and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, is used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which outperform other learning algorithms, and it is further expected that the trends will converge toward the evolution of sophisticated hybrid deep learning models.
Reply to: Comments on “Particle Swarm Optimization with Fractional-Order Velocity”
We agree with the comments of Ling-Yun et al. [5] and of Zhang and Duan [2] about the typographical error in equation (9) of the manuscript [8]. The correct formula was initially proposed in [6, 7]. The formula actually adopted in the algorithms discussed in our papers [1, 3, 4, 8] is the following: …
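The corrected formula itself is elided in the abstract above. As an illustration only, the fractional-order velocity update used in the FO-PSO literature truncates the Grünwald–Letnikov fractional derivative of order alpha to four terms of velocity memory; the sketch below assumes that form, and all names, arguments, and defaults are hypothetical.

```python
def fo_velocity(alpha, v_hist, x, pbest, gbest,
                c1=1.0, c2=1.0, r1=1.0, r2=1.0):
    """Fractional-order velocity update: a four-term truncation of the
    Grunwald-Letnikov expansion over the last four velocities, plus the
    usual cognitive (pbest) and social (gbest) attraction terms."""
    v1, v2, v3, v4 = v_hist  # v[t], v[t-1], v[t-2], v[t-3]
    memory = (alpha * v1
              + 0.5 * alpha * (1 - alpha) * v2
              + (1.0 / 6.0) * alpha * (1 - alpha) * (2 - alpha) * v3
              + (1.0 / 24.0) * alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) * v4)
    return memory + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

With alpha = 1 the memory terms vanish except the most recent velocity, recovering the classical PSO update with unit inertia.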
Multisource and multitemporal data fusion in remote sensing: A comprehensive review of the state of the art
The recent, sharp increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary data sets, however, opens up the possibility of utilizing multimodal data sets in a joint manner to further improve the performance of the processing approaches with respect to applications at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several
Airborne Object Detection Using Hyperspectral Imaging: Deep Learning Review
© 2019, Springer Nature Switzerland AG. Hyperspectral images have become increasingly important in object detection applications, especially in remote sensing scenarios, and machine learning algorithms have emerged as tools for hyperspectral image analysis. The high dimensionality of hyperspectral images and the availability of simulated spectral sample libraries make deep learning an appealing approach. This report reviews recent data processing and object detection methods in the area, including hand-crafted and automated feature extraction based on deep learning neural networks. Accuracy was compared using existing reports as well as our own experiments (i.e., re-implementing methods and testing them on new datasets). CNN models provided reliable performance, with over 97% detection accuracy across a large set of HSI collections. A wide range of data was used, from a rural area (Indian Pines), an urban area (Pavia University), a wetland region (Botswana), and an industrial field (Kennedy Space Center) to a farm site (Salinas). Note that the Botswana set had not been reviewed in recent works, so selected high-accuracy methods were newly compared on it here. A plain CNN model was also found to perform comparably to its more complex variants in target detection applications.
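CNN pipelines for the HSI benchmarks named above typically classify each labeled pixel from a small spatial patch cut from the hyperspectral cube. A minimal sketch of that patch-extraction step is shown below; the function name and patch size are hypothetical, not from the reviewed works.

```python
import numpy as np

def extract_patches(cube, coords, size=5):
    """Extract size x size spatial patches (all bands) centered on the
    given labeled pixels of a hyperspectral cube; these patches are the
    usual input to a plain CNN classifier.
    cube: (H, W, B) array; coords: iterable of (row, col)."""
    r = size // 2
    # reflect-pad so border pixels still get a full patch
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    patches = [padded[i:i + size, j:j + size, :] for i, j in coords]
    return np.stack(patches)  # (N, size, size, B)
```

Each returned patch keeps every spectral band, so a network can learn joint spectral-spatial features from it.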
Hyperspectral Data Classification Using Extended Extinction Profiles
This letter proposes a new approach for the spectral-spatial classification of hyperspectral images, based on a novel extrema-oriented connected filtering technique termed extended extinction profiles. The proposed approach progressively simplifies the first informative features extracted from hyperspectral data while considering different attributes. The classification approach is then applied to two well-known hyperspectral datasets, i.e., Pavia University and Indian Pines, and compared with one of the most powerful filtering approaches in the literature, i.e., extended attribute profiles. Results indicate that the proposed approach extracts spatial information for the classification of hyperspectral images efficiently, automatically, and swiftly. In addition, an array-based node-oriented max-tree representation was used to implement the proposed approach efficiently.
Hyperspectral and LiDAR Data Fusion Using Extinction Profiles and Deep Convolutional Neural Network
This paper proposes a novel framework for the fusion of hyperspectral and light detection and ranging (LiDAR)-derived rasterized data using extinction profiles (EPs) and deep learning. In order to extract spatial and elevation information from both sources, EPs that include different attributes (e.g., height, area, volume, diagonal of the bounding box, and standard deviation) are taken into account. The derived features are then fused via either feature stacking or graph-based feature fusion. Finally, the fused features are fed to a deep learning-based classifier (a convolutional neural network with logistic regression) to produce the classification map. The proposed approach is applied to two datasets acquired in Houston, TX, USA, and Trento, Italy. Results indicate that the proposed approach achieves accurate classification results compared to other approaches. It should be noted that, in this paper, the concept of deep learning is used for the first time to fuse LiDAR and hyperspectral features, which provides new opportunities for further research.
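Of the two fusion routes mentioned, feature stacking is the simpler one: per-pixel EP feature vectors from the two sources are concatenated into one vector before classification. The sketch below assumes that form, with a hypothetical standardization step so neither source dominates the classifier; names are illustrative only.

```python
import numpy as np

def stack_features(hsi_ep, lidar_ep):
    """Feature-stacking fusion: per-pixel concatenation of extinction-
    profile features computed from hyperspectral and LiDAR rasters,
    followed by per-dimension z-scoring (an assumed preprocessing step).
    hsi_ep: (N, D1) array; lidar_ep: (N, D2) array."""
    assert hsi_ep.shape[0] == lidar_ep.shape[0], "one row per pixel"
    fused = np.concatenate([hsi_ep, lidar_ep], axis=1)  # (N, D1 + D2)
    mu = fused.mean(axis=0)
    sd = fused.std(axis=0) + 1e-12
    return (fused - mu) / sd
```

The fused (N, D1 + D2) matrix can then be fed to any downstream classifier, such as the CNN-with-logistic-regression stage described in the abstract.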