Nearest Neighbor Based Out-of-Distribution Detection in Remote Sensing Scene Classification
Deep learning models for image classification are typically trained under the
"closed-world" assumption with a predefined set of image classes. However, when
the models are deployed they may be faced with input images not belonging to
the classes encountered during training. This type of scenario is common in
remote sensing image classification where images come from different geographic
areas, sensors, and imaging conditions. In this paper we deal with the problem
of detecting remote sensing images coming from a different distribution
compared to the training data, i.e., out-of-distribution images. We propose a
benchmark for out-of-distribution detection in remote sensing scene
classification and evaluate detectors based on maximum softmax probability and
nearest neighbors. The experimental results show convincing advantages of the
method based on nearest neighbors.
Comment: 2023 22nd International Symposium INFOTEH-JAHORINA
Fused LISS IV Image Classification using Deep Convolution Neural Networks
Modern Earth observation systems deliver large volumes of heterogeneous remote sensing data, and exploiting the complementarity of these data sources is a key challenge in current remote sensing analysis. For optical Very High Spatial Resolution (VHSR) imagery, satellites acquire both multispectral (MS) and panchromatic (PAN) images at different spatial resolutions. Data fusion techniques address this by combining the complementary information from the different sensors. Classification of remote sensing images with deep learning, in particular Convolutional Neural Networks (CNNs), is gaining a solid foothold because of its promising results. The most significant property of CNN-based methods is that no prior feature extraction is required, which leads to good generalization. In this article, we propose a novel deep-learning-based SMDTR-CNN (Same Model with Different Training Rounds with Convolutional Neural Network) approach for classifying the fused (LISS IV + PAN) image after image fusion. The fusion of remote sensing images from the CARTOSAT-1 (PAN) and IRS-P6 (LISS IV) sensors is obtained by Quantization Index Modulation with the Discrete Contourlet Transform (QIM-DCT). To enhance fusion performance, specific noise is removed using a Bayesian filter with an Adaptive Type-2 Fuzzy System. The proposed techniques are evaluated in terms of precision, classification accuracy, and kappa coefficient. The results show that SMDTR-CNN achieved the best overall precision and kappa coefficient. Likewise, the per-class accuracy on the fused LISS IV + PAN dataset improved by 2% and 5%, respectively.
Backdoor Attacks for Remote Sensing Data with Wavelet Transform
Recent years have witnessed the great success of deep learning algorithms in
the geoscience and remote sensing realm. Nevertheless, the security and
robustness of deep learning models deserve special attention when addressing
safety-critical remote sensing tasks. In this paper, we provide a systematic
analysis of backdoor attacks for remote sensing data, where both scene
classification and semantic segmentation tasks are considered. While most of
the existing backdoor attack algorithms rely on visible triggers like squared
patches with well-designed patterns, we propose a novel wavelet transform-based
attack (WABA) method, which can achieve invisible attacks by injecting the
trigger image into the poisoned image in the low-frequency domain. In this way,
the high-frequency information in the trigger image can be filtered out in the
attack, resulting in stealthy data poisoning. Despite its simplicity, the
proposed method can significantly cheat the current state-of-the-art deep
learning models with a high attack success rate. We further analyze how
different trigger images and the hyper-parameters in the wavelet transform
would influence the performance of the proposed method. Extensive experiments
on four benchmark remote sensing datasets demonstrate the effectiveness of the
proposed method for both scene classification and semantic segmentation tasks
and thus highlight the importance of designing advanced backdoor defense
algorithms to address this threat in remote sensing scenarios. The code will be
available online at \url{https://github.com/ndraeger/waba}.
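The low-frequency injection idea can be illustrated with a hand-rolled one-level Haar transform: decompose both images, blend only the LL (low-frequency) subbands, and reconstruct, so the trigger's high-frequency detail is discarded. This is a sketch of the general principle under stated assumptions, not the paper's WABA implementation, which may use a different wavelet, decomposition depth, and blending rule; all function names are illustrative.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    a = np.empty((h, 2 * w))
    d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def poison(clean, trigger, alpha=0.1):
    """Blend the trigger's LL subband into the clean image.

    Only low-frequency trigger content enters the poisoned image,
    keeping the perturbation visually inconspicuous.
    """
    c_ll, c_lh, c_hl, c_hh = haar_dwt2(clean)
    t_ll, _, _, _ = haar_dwt2(trigger)
    mixed_ll = (1 - alpha) * c_ll + alpha * t_ll
    return haar_idwt2(mixed_ll, c_lh, c_hl, c_hh)
```

With `alpha = 0` the clean image is reconstructed exactly; increasing `alpha` strengthens the trigger at the cost of visibility, which mirrors the hyper-parameter trade-off the abstract analyzes.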
Deep learning for geometric and semantic tasks in photogrammetry and remote sensing
During the last few years, artificial intelligence based on deep learning, and particularly on convolutional neural networks, has acted as a game changer in just about all tasks related to photogrammetry and remote sensing. Results have shown partly significant improvements in many projects across the photogrammetric processing chain, from image orientation to surface reconstruction, scene classification, change detection, object extraction, and object tracking and recognition in image sequences. This paper summarizes the foundations of deep learning for photogrammetry and remote sensing before illustrating, by way of example, different projects being carried out at the Institute of Photogrammetry and GeoInformation, Leibniz University Hannover, in this exciting and fast-moving field of research and development.
Metric-based Few-shot Classification in Remote Sensing Image
Target recognition based on deep learning relies on a large quantity of samples, but in some specific remote sensing scenes samples are very scarce. Few-shot learning can deliver high-performing target classification models from only a few samples, yet most existing research addresses natural scenes. This paper therefore proposes a metric-based few-shot classification technique for remote sensing. First, we construct a dataset (RSD-FSC) for few-shot classification in remote sensing, containing sample slices of 21 typical target classes from remote sensing images. Second, based on metric learning, a k-nearest-neighbor classification network is proposed: it finds multiple training samples similar to the test target and then classifies the test target according to its similarity to those samples. Finally, 5-way 1-shot, 5-way 5-shot, and 5-way 10-shot experiments are conducted to improve the generalization of the model on few-shot classification tasks. The experimental results show that, for newly emerging classes with few samples, the average target recognition accuracy reaches 59.134%, 82.553%, and 87.796% when the number of training samples is 1, 5, and 10, respectively. This demonstrates that the proposed method can handle few-shot classification in remote sensing images and outperforms other few-shot classification methods.
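The k-nearest-neighbor episode described above can be sketched as follows: embed support and query samples (embedding assumed done elsewhere by a trained network), rank support samples by cosine similarity to each query, and take a similarity-weighted vote among the top k. This is a minimal illustration with made-up toy embeddings, not the paper's network; all names are illustrative.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity matrix between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def knn_episode(support, support_labels, query, k=1):
    """Classify each query embedding by its k most similar support
    embeddings, weighting the class vote by similarity."""
    sims = cosine_sim(query, support)           # (n_query, n_support)
    topk = np.argsort(-sims, axis=1)[:, :k]     # indices of top-k neighbors
    preds = []
    for qi, idx in enumerate(topk):
        votes = {}
        for si in idx:
            lbl = int(support_labels[si])
            votes[lbl] = votes.get(lbl, 0.0) + sims[qi, si]
        preds.append(max(votes, key=votes.get))
    return np.array(preds)

# Toy 2-way 1-shot episode with well-separated embeddings.
support = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
query = np.array([[0.9, 0.1], [0.1, 0.9]])
preds = knn_episode(support, labels, query, k=1)
print(preds)  # [0 1]
```

In an N-way K-shot episode the support set holds K labeled samples for each of N classes, so k would typically be chosen no larger than K.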
Classification of Remote Sensing Image Data Using Rsscn-7 Dataset
A novel technique for remote sensing image scene classification is presented using the Compact Vision Transformer (CVT) architecture. The model leverages deep learning and self-attention to significantly improve the accuracy and efficiency of scene classification in remote sensing imagery. Through extensive training and evaluation on the RSSCN7 dataset, our CVT-based model achieved an accuracy of 87.46% on the original dataset. This result underscores the promise of CVT models in the remote sensing domain and their applicability in real-world scenarios. Our report gives a detailed account of the model's architecture, training methodology, and evaluation process, shedding light on key insights and advances in remote sensing image analysis. This work holds promise for a variety of applications, including agriculture, environmental surveillance, and disaster control, where precise scene classification is of utmost importance.