57 research outputs found

    Sparse Signal Models for Data Augmentation in Deep Learning ATR

    Automatic Target Recognition (ATR) algorithms classify a given Synthetic Aperture Radar (SAR) image into one of the known target classes using a set of training images available for each class. Recently, learning-based methods have been shown to achieve state-of-the-art classification accuracy when abundant training data are available, sampled uniformly over the classes and their poses. In this paper, we consider the task of ATR with a limited set of training images. We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm, such as a convolutional neural network (CNN). The proposed data augmentation method employs a limited-persistence sparse modeling approach, capitalizing on commonly observed characteristics of wide-angle SAR imagery. Specifically, we exploit the sparsity of the scattering centers in the spatial domain and the smoothly varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting. Using this estimated model, we synthesize new images at poses and sub-pixel translations not available in the given data to augment the CNN's training data. The experimental results show that, in the training-data-starved regime, the proposed method provides a significant gain in the resulting ATR algorithm's generalization performance.
    Comment: 12 pages, 5 figures, to be submitted to IEEE Transactions on Geoscience and Remote Sensing
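The sub-pixel translations used for augmentation can be realized with the Fourier shift theorem. A minimal numpy sketch of that one ingredient (the function name is illustrative, not from the paper):

```python
import numpy as np

def subpixel_shift(img, dy, dx):
    """Translate a 2-D image by (dy, dx) pixels via a Fourier phase ramp.

    Multiplying the spectrum by exp(-2j*pi*(fy*dy + fx*dx)) shifts the
    image content by (+dy, +dx); dy and dx may be fractional.
    """
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]   # vertical frequency grid (cycles/pixel)
    fx = np.fft.fftfreq(W)[None, :]   # horizontal frequency grid
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))
```

For integer shifts this reproduces an exact circular shift; for fractional shifts it interpolates with the image's own band-limited basis, which suits coherent SAR magnitude/complex data better than naive pixel resampling.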

    Hierarchical Disentanglement-Alignment Network for Robust SAR Vehicle Recognition

    Vehicle recognition is a fundamental problem in SAR image interpretation. However, robustly recognizing vehicle targets in SAR is challenging due to large intraclass variations and small interclass variations; the lack of large datasets further complicates the task. Inspired by the analysis of target signature variations and deep learning explainability, this paper proposes a novel domain alignment framework named the Hierarchical Disentanglement-Alignment Network (HDANet) to achieve robustness under various operating conditions. Concisely, HDANet integrates feature disentanglement and alignment into a unified framework with three modules: domain data generation, multitask-assisted mask disentanglement, and domain alignment of target features. The first module generates diverse data for alignment, with three simple but effective data augmentation methods designed to simulate target signature variations. The second module disentangles the target features from background clutter using a multitask-assisted mask, preventing clutter from interfering with subsequent alignment. The third module employs a contrastive loss for domain alignment, extracting robust target features from the diverse generated data and the disentangled features. Lastly, the proposed method demonstrates impressive robustness across nine operating conditions in the MSTAR dataset, and extensive qualitative and quantitative analyses validate the effectiveness of the framework.
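The abstract does not specify the exact form of the contrastive loss; a common choice for aligning two views of the same target is a normalized-temperature cross-entropy over positive pairs, sketched below in numpy under that assumption:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style contrastive loss (an assumed, generic variant).

    z1[i] and z2[i] are embeddings of two views of the same target
    (a positive pair); every other row in the batch is a negative.
    """
    z = np.concatenate([z1, z2])                          # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # unit-normalize
    sim = z @ z.T / tau                                   # cosine similarities
    np.fill_diagonal(sim, -np.inf)                        # drop self-similarity
    N = len(z1)
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    # cross-entropy of each row's positive against all other rows
    logp = sim[np.arange(2 * N), pos] - np.log(np.exp(sim).sum(axis=1))
    return -logp.mean()
```

Minimizing this pulls the two views of each target together while pushing apart embeddings of different targets, which is the alignment behaviour the third module relies on.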

    Target recognition for synthetic aperture radar imagery based on convolutional neural network feature fusion

    Driven by the great success of deep convolutional neural networks (CNNs), currently used in many computer vision applications, we extend the usability of visual-domain CNNs to the synthetic aperture radar (SAR) data domain without employing transfer learning. Our SAR automatic target recognition (ATR) architecture efficiently extends the pretrained Visual Geometry Group (VGG) CNN from the visual domain into the X-band SAR data domain by clustering its neuron layers, bridging the visual-SAR modality gap by fusing the features extracted from the hidden layers, and employing a local feature matching scheme. Trials on the moving and stationary target acquisition dataset under various setups and nuisances demonstrate highly appealing ATR performance, achieving 100% and 99.79% accuracy on the 3-class and 10-class ATR problems, respectively. We also confirm the validity, robustness, and conceptual coherence of the proposed method by extending it to several state-of-the-art CNNs and commonly used local feature similarity/match metrics.
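The fuse-then-match idea can be caricatured as: normalize each hidden-layer descriptor, concatenate, then match by cosine similarity. A hypothetical numpy sketch (the paper's actual layer clustering and matching scheme is richer; all names here are illustrative):

```python
import numpy as np

def fuse(layer_features):
    """L2-normalize each per-layer descriptor and concatenate them,
    so no single layer dominates the fused representation."""
    parts = [f / (np.linalg.norm(f) + 1e-12) for f in layer_features]
    return np.concatenate(parts)

def match(query_layers, gallery):
    """Nearest neighbour over fused descriptors by cosine similarity.

    gallery: dict mapping class label -> list of per-layer descriptors.
    """
    q = fuse(query_layers)
    q = q / np.linalg.norm(q)
    best_label, best_sim = None, -np.inf
    for label, layers in gallery.items():
        g = fuse(layers)
        sim = float(q @ (g / np.linalg.norm(g)))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label
```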

    TAI-SARNET: Deep Transferred Atrous-Inception CNN for Small Samples SAR ATR

    Since Synthetic Aperture Radar (SAR) images of targets are corrupted by coherent speckle noise, traditional deep learning models struggle to extract the key features of the targets effectively and suffer from high computational complexity. To address this problem, an effective lightweight Convolutional Neural Network (CNN) model incorporating transfer learning is proposed for SAR target recognition tasks. In this work, we first propose the Atrous-Inception module, which combines atrous convolution with the inception module to obtain rich global receptive fields while strictly controlling the number of parameters and keeping the network architecture lightweight. Second, a transfer learning strategy is used to transfer prior knowledge from optical, non-optical, and hybrid optical/non-optical domains to the SAR target recognition task, thereby improving the model's recognition performance on small-sample SAR target datasets. Finally, the model constructed in this paper achieves 97.97% recognition accuracy on the ten-class MSTAR dataset under standard operating conditions, reaching a mainstream target recognition rate. Meanwhile, the method shows strong robustness and generalization performance on small, randomly sampled SAR target datasets.
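The receptive-field arithmetic behind atrous (dilated) convolution is standard: a kernel with k taps and dilation d covers (k-1)·d + 1 input samples while keeping only k parameters. A plain-Python 1-D illustration:

```python
def dilated_conv1d(x, w, dilation):
    """Valid 1-D convolution with dilated taps.

    The kernel's k = len(w) weights are applied to inputs spaced
    `dilation` apart, so each output sees a receptive field of
    (k - 1) * dilation + 1 samples with no extra parameters.
    """
    span = (len(w) - 1) * dilation + 1
    return [sum(w[k] * x[i + k * dilation] for k in range(len(w)))
            for i in range(len(x) - span + 1)]
```

With w of length 3 and dilation 2, each output aggregates a 5-sample window, which is how the module enlarges its receptive field without growing the parameter count.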

    Feedback-assisted automatic target and clutter discrimination using a Bayesian convolutional neural network for improved explainability in SAR applications

    DATA AVAILABILITY STATEMENT: The NATO-SET 250 dataset is not publicly available; however, the MSTAR dataset can be found at the following URL: https://www.sdms.afrl.af.mil/index.php?collection=mstar (accessed on 5 January 2022).
    In this paper, a feedback training approach for efficiently dealing with distribution shift in synthetic aperture radar target detection using a Bayesian convolutional neural network is proposed. After training the network on in-distribution data, it is tested on out-of-distribution data. Samples that are classified incorrectly with high certainty are fed back for a second round of training, which reduces false positives on the out-of-distribution dataset. False positive target detections burden human attention, sensor resource management, and mission engagement; in these applications, a reduction in false positives therefore often takes precedence over target detection and classification performance. The classifier discriminates targets from clutter and classifies the target type in a single step, as opposed to the traditional approach of a sequential chain of functions for target detection and localisation ahead of the machine learning algorithm. Another aspect of automated synthetic aperture radar detection and recognition addressed here is that human users of traditional classification systems are presented with decisions made by “black box” algorithms; consequently, the decisions are not explainable, even to an expert in the sensor domain. This paper makes use of explainable artificial intelligence via uncertainty heat maps that are overlaid onto synthetic aperture radar imagery to furnish the user with additional information about classification decisions. These uncertainty heat maps facilitate trust in the machine learning algorithm and are derived from the uncertainty estimates of the classifications produced by the Bayesian convolutional neural network. The overlays further enhance the users' ability to interpret why certain decisions were made by the algorithm. Further, it is demonstrated that feeding back the high-certainty, incorrectly classified out-of-distribution data results in an average improvement in detection performance and a reduction in uncertainty for all synthetic aperture radar images processed. Compared to the baseline method, an improvement in recall of 11.8% and a reduction in the false positive rate of 7.08% were demonstrated using the Feedback-assisted Bayesian Convolutional Neural Network (FaBCNN).
    Funding: The Radar and Electronic Warfare department at the CSIR.
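The feedback selection rule (misclassified with high certainty) can be sketched with predictive entropy computed from Monte Carlo samples of a Bayesian network, e.g. MC dropout. The names and the entropy criterion below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def predictive_stats(mc_probs):
    """mc_probs: (T, N, C) softmax outputs from T stochastic forward
    passes. Returns the mean prediction and its predictive entropy,
    the per-sample uncertainty used for the heat-map overlays."""
    mean = mc_probs.mean(axis=0)
    entropy = -(mean * np.log(mean + 1e-12)).sum(axis=1)
    return mean, entropy

def feedback_set(mc_probs, labels, entropy_thresh):
    """Indices of samples misclassified *with low uncertainty* — the
    high-certainty errors selected for a second round of training."""
    mean, H = predictive_stats(mc_probs)
    pred = mean.argmax(axis=1)
    return np.where((pred != labels) & (H < entropy_thresh))[0]
```

Confidently wrong samples are the most informative to feed back, while uncertain errors are left alone since the network already signals its doubt about them.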

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar with deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to tackle these significant challenges and present innovative, cutting-edge research results from applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    Spatial Modeling of Compact Polarimetric Synthetic Aperture Radar Imagery

    The RADARSAT Constellation Mission (RCM) utilizes compact polarimetric (CP) mode to provide data with varying resolutions, supporting a wide range of applications including oil spill detection, sea ice mapping, and land cover analysis. However, the complexity and variability of CP data, influenced by factors such as weather conditions and satellite infrastructure, introduce signature ambiguity. This ambiguity poses challenges in accurate object classification, reducing discriminability and increasing uncertainty. To address these challenges, this thesis introduces tailored spatial models in CP SAR imagery through the utilization of machine learning techniques. Firstly, to enhance oil spill monitoring, a novel conditional random field (CRF) is introduced. The CRF model leverages the statistical properties of CP SAR data and exploits similarities in labels and features among neighboring pixels to effectively model spatial interactions. By mitigating the impact of speckle noise and accurately distinguishing oil spill candidates from oil-free water, the CRF model achieves successful results even in scenarios where the availability of labeled samples is limited. This highlights the capability of CRF in handling situations with a scarcity of training data. Secondly, to improve the accuracy of sea ice mapping, a region-based automated classification methodology is developed. This methodology incorporates learned features, spatial context, and statistical properties from various SAR modes, resulting in enhanced classification accuracy and improved algorithmic efficiency. Thirdly, the presence of a high degree of heterogeneity in target distribution presents an additional challenge in land cover mapping tasks, further compounded by signature ambiguity. To address this, a novel transformer model is proposed. 
The transformer model incorporates both fine- and coarse-grained spatial dependencies between pixels and leverages different levels of features to enhance the accuracy of land cover type detection. The proposed approaches have undergone extensive experimentation in various remote sensing tasks, validating their effectiveness. By introducing tailored spatial models and innovative algorithms, this thesis addresses the inherent complexity and variability of CP data, thereby ensuring the accuracy and reliability of diverse applications in the field of remote sensing.
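The CRF's exploitation of neighboring-pixel label similarity can be illustrated with iterated conditional modes on a Potts-style model. This toy numpy sketch (not the thesis's actual CP-statistics model) shows how pairwise smoothness terms suppress isolated, speckle-like label flips:

```python
import numpy as np

def icm_denoise(unary, beta, iters=5):
    """Iterated conditional modes for a Potts-style CRF on a pixel grid.

    unary: (H, W, C) per-pixel, per-class costs (e.g. negative
    log-likelihoods); beta: weight penalizing label disagreement
    between 4-connected neighbours.
    """
    labels = unary.argmin(axis=2)
    H, W, C = unary.shape
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Potts term: pay beta for each disagreeing neighbour
                        costs += beta * (np.arange(C) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels
```

A single pixel whose unary term weakly prefers the wrong class is outvoted by its neighbours once beta is large enough, which is the speckle-mitigation behaviour described above.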