
    SAR ATR Method with Limited Training Data via an Embedded Feature Augmenter and Dynamic Hierarchical-Feature Refiner

    Obtaining sufficient synthetic aperture radar (SAR) training data is frequently challenging in practice, which constrains the amount of information available for supervised training. As a result, current SAR automatic target recognition (ATR) algorithms perform poorly when training data are limited, creating a critical need to improve SAR ATR performance under these conditions. In this study, a new method to improve SAR ATR with limited training data is proposed. First, an embedded feature augmenter is designed to enhance extracted virtual features that lie far from their class center. Based on the relative distribution of the features, the algorithm pulls the corresponding virtual features toward their class center with different strengths. The designed augmenter increases the amount of information available for supervised training and improves the separability of the extracted features. Second, a dynamic hierarchical-feature refiner is proposed to capture the discriminative local features of the samples. Through dynamically generated kernels, the proposed refiner integrates discriminative local features of different dimensions into the global features, further enhancing the intra-class compactness and inter-class separability of the extracted features. The proposed method thus both increases the amount of information available for supervised training and extracts discriminative features from the samples, yielding superior ATR performance when SAR training data are limited. Experimental results on the moving and stationary target acquisition and recognition (MSTAR), OpenSARShip, and FUSAR-Ship benchmark datasets demonstrate the robustness and outstanding ATR performance of the proposed method with limited SAR training data.
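
    The abstract describes the augmenter only at a high level; the following is a minimal sketch, assuming a PyTorch setting, of the general idea of pulling far-away virtual features toward their class center with a distance-dependent strength. The function name, the Gaussian perturbation used to create virtual features, and all hyperparameters are illustrative assumptions, not the authors' implementation.

    import torch

    def augment_and_pull(features, labels, num_classes, noise_std=0.1, pull=0.5):
        """features: (N, D) embeddings; labels: (N,) integer class ids.
        Assumes every class appears at least once in the batch."""
        # Create virtual features by perturbing the real embeddings.
        virtual = features + noise_std * torch.randn_like(features)

        # Class centers estimated from the real features.
        centers = torch.stack([features[labels == c].mean(dim=0)
                               for c in range(num_classes)])      # (C, D)
        assigned = centers[labels]                                 # (N, D)

        # Pull strength grows with distance from the class center, so virtual
        # features lying far away are moved back more strongly than nearby ones.
        dist = (virtual - assigned).norm(dim=1, keepdim=True)      # (N, 1)
        weight = pull * dist / (dist.max() + 1e-8)                 # in [0, pull]
        pulled = virtual + weight * (assigned - virtual)

        # Real and pulled virtual features both contribute to supervised training.
        return torch.cat([features, pulled]), torch.cat([labels, labels])

    Used this way, each batch roughly doubles the supervised signal while keeping the virtual features near their class centers, in line with the stated aims of increasing training information and improving feature separability.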

    Hierarchical Disentanglement-Alignment Network for Robust SAR Vehicle Recognition

    Vehicle recognition is a fundamental problem in SAR image interpretation. However, robustly recognizing vehicle targets in SAR imagery is challenging due to large intraclass variations and small interclass variations, and the lack of large datasets further complicates the task. Inspired by the analysis of target signature variations and by deep learning explainability, this paper proposes a novel domain alignment framework, the Hierarchical Disentanglement-Alignment Network (HDANet), to achieve robustness under various operating conditions. Concisely, HDANet integrates feature disentanglement and alignment into a unified framework with three modules: domain data generation, multitask-assisted mask disentanglement, and domain alignment of target features. The first module generates diverse data for alignment, using three simple but effective data augmentation methods designed to simulate target signature variations. The second module disentangles target features from background clutter using the multitask-assisted mask, preventing clutter from interfering with subsequent alignment. The third module employs a contrastive loss for domain alignment, extracting robust target features from the generated diverse data and the disentangled features. The proposed method demonstrates impressive robustness across nine operating conditions in the MSTAR dataset, and extensive qualitative and quantitative analyses validate the effectiveness of the framework.
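
    As a rough illustration of the third module's idea, the sketch below shows a contrastive alignment loss between masked target features extracted from two augmented views of the same samples. This is an assumption-based sketch (an NT-Xent-style loss in PyTorch), not HDANet's actual loss; the function name, temperature, and pairing scheme are illustrative.

    import torch
    import torch.nn.functional as F

    def alignment_loss(feat_a, feat_b, temperature=0.1):
        """feat_a, feat_b: (N, D) disentangled target features from two augmented
        views of the same N samples; row i of each tensor forms a positive pair."""
        za = F.normalize(feat_a, dim=1)
        zb = F.normalize(feat_b, dim=1)
        logits = za @ zb.t() / temperature            # (N, N) cosine similarities
        targets = torch.arange(za.size(0), device=za.device)
        # Cross-entropy over similarities: matching views attract, mismatches repel,
        # encouraging target features that are stable across signature variations.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))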

    Feedback-assisted automatic target and clutter discrimination using a Bayesian convolutional neural network for improved explainability in SAR applications

    Data availability statement: the NATO-SET 250 dataset is not publicly available; however, the MSTAR dataset can be found at https://www.sdms.afrl.af.mil/index.php?collection=mstar (accessed on 5 January 2022). In this paper, a feedback training approach for efficiently dealing with distribution shift in synthetic aperture radar (SAR) target detection using a Bayesian convolutional neural network is proposed. After training the network on in-distribution data, it is tested on out-of-distribution data. Samples that are classified incorrectly with high certainty are fed back for a second round of training, which reduces false positives on the out-of-distribution dataset. False positive target detections strain human attention, sensor resource management, and mission engagement; in such applications, a reduction in false positives therefore often takes precedence over target detection and classification performance. The classifier discriminates targets from clutter and classifies the target type in a single step, as opposed to the traditional approach of a sequential chain of functions for target detection and localisation ahead of the machine learning algorithm. Another aspect of automated SAR detection and recognition addressed here is that human users of traditional classification systems are presented with decisions made by “black box” algorithms; consequently, the decisions are not explainable, even to an expert in the sensor domain. This paper applies the concept of explainable artificial intelligence via uncertainty heat maps overlaid onto SAR imagery to furnish the user with additional information about classification decisions. These heat maps, derived from the uncertainty estimates of the classifications from the Bayesian convolutional neural network, facilitate trust in the machine learning algorithm and enhance the user's ability to interpret why certain decisions were made. Further, it is demonstrated that feeding back the high-certainty, incorrectly classified out-of-distribution data results in an average improvement in detection performance and a reduction in uncertainty for all SAR images processed. Compared to the baseline method, an improvement in recall of 11.8% and a reduction in the false positive rate of 7.08% were demonstrated using the Feedback-assisted Bayesian Convolutional Neural Network (FaBCNN).
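
    To make the feedback step concrete, here is a minimal sketch, under assumptions, of how high-certainty misclassified out-of-distribution samples could be selected for the second training round. Uncertainty is approximated with Monte Carlo dropout and predictive entropy; the function name, the number of passes, and the entropy threshold are illustrative choices, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def select_feedback_samples(model, ood_x, ood_y, mc_passes=20, entropy_thresh=0.2):
        """ood_x: (N, ...) out-of-distribution images; ood_y: (N,) true labels."""
        model.train()  # keep dropout active so repeated passes sample the posterior
        probs = torch.stack([F.softmax(model(ood_x), dim=1) for _ in range(mc_passes)])
        mean_probs = probs.mean(dim=0)                                    # (N, C)
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
        preds = mean_probs.argmax(dim=1)
        # "High certainty" is read here as low predictive entropy; only confident
        # mistakes are fed back for retraining.
        wrong_and_certain = (preds != ood_y) & (entropy < entropy_thresh)
        return ood_x[wrong_and_certain], ood_y[wrong_and_certain]

    The selected samples would then be appended to the training set for the second round of training, which, per the abstract, is what reduces false positives and uncertainty on the out-of-distribution data.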

    NASA Adaptive Multibeam Phased Array (AMPA): An application study

    The proposed orbital geometry for the adaptive multibeam phased array (AMPA) communication system is reviewed and some of the system's capabilities and preliminary specifications are highlighted. Typical AMPA user link models and calculations are presented, the principal AMPA features are described, and the implementation of the system is demonstrated. System tradeoffs and requirements are discussed. Recommendations are included.

    Analysis of nuclear waste disposal in space, phase 3. Volume 2: Technical report

    The options, reference definitions and/or requirements currently envisioned for the total nuclear waste disposal in space mission are summarized. The waste form evaluation and selection process is documented along with the physical characteristics of the iron nickel-base cermet matrix chosen for disposal of commercial and defense wastes. Safety aspects of radioisotope thermal generators, the general purpose heat source, and the Lewis Research Center concept for space disposal are assessed, as are the on-pad catastrophic accident environments for the uprated space shuttle and the heavy lift launch vehicle. The radionuclides that contribute most to the long-term risk of terrestrial disposal were determined, and the effects of resuspension of fallout particles from an accidental release of waste material were studied. Health effects are considered. Payload breakup and rescue technology are discussed, as are expected requirements for licensing, supporting research and technology, and safety testing.