Target recognition for synthetic aperture radar imagery based on convolutional neural network feature fusion
Driven by the great success of deep convolutional neural networks (CNNs), which are now widely used in computer vision applications, we extend the usability of visual-based CNNs into the synthetic aperture radar (SAR) data domain without employing transfer learning. Our SAR automatic target recognition (ATR) architecture efficiently extends the pretrained Visual Geometry Group (VGG) CNN from the visual domain into the X-band SAR data domain by clustering its neuron layers, bridging the visual-to-SAR modality gap by fusing features extracted from the hidden layers, and employing a local feature matching scheme. Trials on the moving and stationary target acquisition (MSTAR) dataset under various setups and nuisances demonstrate highly appealing ATR performance, achieving 100% and 99.79% accuracy on the 3-class and 10-class ATR problems, respectively. We also confirm the validity, robustness, and conceptual coherence of the proposed method by extending it to several state-of-the-art CNNs and commonly used local feature similarity/matching metrics.
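The core idea of fusing features drawn from several hidden layers of a pretrained CNN, then matching a query against a gallery by similarity, can be illustrated with a minimal NumPy sketch. The random arrays below are stand-ins for pooled activations from two hidden layers (the actual VGG layer choices, pooling, and matching metric of the paper are not reproduced here); each layer's descriptor is L2-normalised before concatenation so no single layer dominates, and matching uses cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1, eps=1e-12):
    # Scale vectors to unit L2 norm so cosine similarity reduces to a dot product.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def fuse_layer_features(layer_feats):
    # Normalise each layer's descriptor, then concatenate across layers.
    return np.concatenate([l2_normalize(f) for f in layer_feats], axis=-1)

# Stand-ins for per-image activations taken from two hidden layers of a
# pretrained CNN, pooled to fixed-length vectors (5 gallery images).
gallery_layers = [rng.standard_normal((5, 64)), rng.standard_normal((5, 128))]

# Query: a lightly perturbed copy of gallery image 2 in both layers.
query_layers = [gallery_layers[0][2] + 0.01 * rng.standard_normal(64),
                gallery_layers[1][2] + 0.01 * rng.standard_normal(128)]

gallery = l2_normalize(fuse_layer_features(gallery_layers))
query = l2_normalize(fuse_layer_features([q[None, :] for q in query_layers]))

# Cosine-similarity matching: the best-scoring gallery entry is the prediction.
scores = gallery @ query.ravel()
print(int(np.argmax(scores)))  # recovers index 2, the perturbed source image
```

In the paper's setting the gallery rows would come from SAR training chips and the fused descriptors from clustered VGG layers; the sketch only shows why per-layer normalisation plus concatenation gives a usable joint representation.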
Fusing Deep Learning and Sparse Coding for SAR ATR
We propose a multimodal, multidiscipline data fusion strategy for automatic target recognition (ATR) on synthetic aperture radar imagery. Our architecture fuses a proposed clustered version of the AlexNet convolutional neural network with sparse coding theory, extended to accommodate an adaptive elastic net optimization scheme. Evaluation on the MSTAR dataset yields the highest ATR performance reported to date: 99.33% and 99.86% for the three- and ten-class problems, respectively.
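The sparse-coding side of such a pipeline can be sketched in NumPy: solve an elastic net problem over a dictionary whose columns are grouped by class, then classify by class-wise reconstruction residual. This is a generic sparse-representation classification sketch under assumed toy data and a plain ISTA solver, not the paper's adaptive elastic net formulation or its CNN-derived features.

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(x, t):
    # Proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def elastic_net_ista(A, b, l1=0.01, l2=0.01, iters=1000):
    # Proximal gradient (ISTA) for
    #   min_x 0.5*||A x - b||^2 + l1*||x||_1 + 0.5*l2*||x||^2
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + l2)  # inverse Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + l2 * x
        x = soft_threshold(x - step * grad, step * l1)
    return x

# Toy per-class dictionaries: each class clusters around a random prototype.
dim, n_atoms, n_classes = 32, 6, 3
protos = rng.standard_normal((n_classes, dim))
D = np.concatenate(
    [protos[c] + 0.1 * rng.standard_normal((n_atoms, dim)) for c in range(n_classes)]
).T
D /= np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms

# Query feature vector drawn near class 1's prototype.
b = protos[1] + 0.1 * rng.standard_normal(dim)
b /= np.linalg.norm(b)

x = elastic_net_ista(D, b)

# Classify by class-wise reconstruction residual: keep only one class's
# coefficients at a time and pick the class that reconstructs b best.
residuals = []
for c in range(n_classes):
    mask = np.zeros_like(x)
    mask[c * n_atoms:(c + 1) * n_atoms] = 1.0
    residuals.append(float(np.linalg.norm(b - D @ (x * mask))))
print(int(np.argmin(residuals)))  # should pick class 1
```

The elastic net's L2 term stabilises the solution when atoms within a class are highly correlated, which is the motivation for preferring it over a pure lasso in this kind of dictionary-based classifier.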