
    Deep Learning Techniques in Radar Emitter Identification

    In the field of electronic warfare (EW), one of the crucial roles of electronic intelligence is the identification of radar signals. In an operational environment, it is essential to identify radar emitters as friend or foe so that appropriate countermeasures can be taken against them. With the electromagnetic environment becoming increasingly complex and signal features increasingly diverse, radar emitter identification with high recognition accuracy has become a significantly challenging task, and traditional identification methods have shown limitations in this setting. With the emergence of artificial neural networks, notably deep learning approaches, several radar classification and identification methods based on them have been proposed. Machine learning and deep learning algorithms are now frequently utilized to extract various types of information from radar signals more accurately and robustly. This paper illustrates the use of Deep Neural Networks (DNN) in radar applications for emitter classification and identification. Since deep learning approaches are capable of accurately classifying complicated patterns in radar signals, they have demonstrated significant promise for identifying radar emitters. By offering a thorough literature analysis of deep learning-based methodologies, the study intends to assist researchers and practitioners in better understanding the application of deep learning techniques to the classification and identification of radar emitters. The study demonstrates that DNNs can be used successfully in radar classification and identification applications.

    Radio frequency fingerprint collaborative intelligent identification using incremental learning

    For distributed sensor systems using neural networks, each sub-network operates in a different electromagnetic environment, and their recognition accuracies therefore differ. In this paper, we propose a distributed sensor system using incremental learning to solve the problem of radio frequency fingerprint identification. First, the intelligent representation of the received signal is linearly fused into a four-channel image. Then, a convolutional neural network is trained on the existing data to obtain a preliminary model, and decision fusion is used to combine the decisions of the sub-networks in the distributed system. Finally, when new data arrive, instead of retraining the model, we employ incremental learning by fine-tuning the preliminary model. The proposed method significantly reduces training time and adapts to streaming data. Extensive experiments show that the proposed method is computationally efficient and achieves satisfactory recognition accuracy, especially in the low signal-to-noise ratio (SNR) regime.
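
    The fine-tuning step described above can be illustrated with a short, hedged sketch: a small four-channel CNN is updated on newly arrived data rather than retrained from scratch. The architecture, input size, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FingerprintCNN(nn.Module):
    """Toy stand-in for the pretrained model; takes the 4-channel fused image."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 inputs

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))

def incremental_update(model, new_loader, epochs=3, lr=1e-4):
    """Fine-tune the existing (pretrained) model on new data instead of retraining."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in new_loader:  # images: (B, 4, H, W) fused channels
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```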

    CubeLearn: End-to-end Learning for Human Motion Recognition from Raw mmWave Radar Signals


    Learning Robust Radio Frequency Fingerprints Using Deep Convolutional Neural Networks

    Radio Frequency Fingerprinting (RFF) techniques, which attribute uniquely identifiable signal distortions to emitters via Machine Learning (ML) classifiers, are limited by fingerprint variability under different operational conditions. First, this work studied the effect of the frequency channel on typical RFF techniques. Performance characterization using the multi-class Matthews Correlation Coefficient (MCC) revealed that using frequency channels other than those used to train the models leads to a deterioration in MCC to under 0.05 (near random guessing), indicating that single-channel models are inadequate for realistic operation. Second, this work presented a novel way of studying fingerprint variability through Fingerprint Extraction through Distortion Reconstruction (FEDR), a neural network-based approach for quantifying signal distortions in a relative distortion latent space. Coupled with a Dense network, FEDR fingerprints were evaluated against common RFF techniques for up to 100 unseen classes, where FEDR achieved the best performance with MCC ranging from 0.945 (5 classes) to 0.746 (100 classes), using 73% fewer training parameters than the next-best technique.
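
    Since the study's headline results are reported as multi-class MCC scores, a minimal evaluation sketch is given below. It assumes generic fingerprint feature vectors captured on different frequency channels and uses a stand-in classifier; it is not the FEDR pipeline itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef

def cross_channel_mcc(X_train, y_train, X_test, y_test):
    """Train on one frequency channel, score on another with multi-class MCC.

    X_*: fingerprint feature vectors (placeholder features); y_*: emitter labels.
    """
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    mcc = matthews_corrcoef(y_test, clf.predict(X_test))
    # MCC near 1.0 -> fingerprints transfer across channels;
    # MCC near 0.0 -> no better than random guessing (the failure mode reported above)
    return mcc
```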

    Modulation recognition of low-SNR UAV radar signals based on bispectral slices and GA-BP neural network

    In this paper, we address the challenge of low recognition rates in existing methods for radar signals from unmanned aerial vehicles (UAVs) with low signal-to-noise ratios (SNRs). To overcome this challenge, we propose a bispectral slice approach for accurate recognition of complex UAV radar signals. Our approach involves extracting the bispectral diagonal slice and the maximum bispectral amplitude horizontal slice from the bispectrum amplitude spectrum of the received UAV radar signal. These slices serve as the basis for subsequent identification by calculating characteristic parameters such as convexity, box dimension, and sparseness. To accomplish the recognition task, we employ a GA-BP neural network. The significant variations observed in the bispectral slices of different signals, along with their robustness against Gaussian noise, contribute to the high separability and stability of the extracted bispectral convexity, bispectral box dimension, and bispectral sparseness. Through simulations involving five radar signals, our proposed method demonstrates superior performance: even under challenging conditions with an SNR as low as −3 dB, the recognition accuracy for the five different radar signals exceeds 90%. Our research aims to enhance the understanding and application of modulation recognition techniques for UAV radar signals, particularly in scenarios with low SNRs.
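
    As a rough illustration of the feature extraction described above, the sketch below estimates the bispectral diagonal slice of a signal by segment averaging with the direct FFT method. The segment length, windowing, and normalization are assumptions made for illustration; the characteristic parameters (convexity, box dimension, sparseness) and the GA-BP classifier are not reproduced here.

```python
import numpy as np

def bispectrum_diagonal_slice(x, seg_len=256):
    """Estimate the diagonal slice B(f, f) = E[X(f) * X(f) * conj(X(2f))]
    by averaging over non-overlapping, Hann-windowed segments of x."""
    n_segs = len(x) // seg_len
    f = np.arange(seg_len // 2)          # keep frequencies where 2f stays in band
    acc = np.zeros(seg_len // 2, dtype=complex)
    for k in range(n_segs):
        seg = x[k * seg_len:(k + 1) * seg_len] * np.hanning(seg_len)
        X = np.fft.fft(seg)
        acc += X[f] * X[f] * np.conj(X[2 * f])
    return np.abs(acc) / max(n_segs, 1)
```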

    Radar intra–pulse signal modulation classification with contrastive learning

    Existing research on deep learning for radar intra-pulse signal modulation classification is mainly based on supervised learning techniques, whose performance relies heavily on a large number of labeled samples. To overcome this limitation, a self-supervised learning framework combining contrastive learning (CL) with a convolutional neural network (CNN) and a focal loss function, called CL-CNN, is proposed. CL-CNN adopts a two-stage training strategy. In the first stage, the model is pretrained on abundant unlabeled time-frequency images, with data augmentation used to generate positive and negative pairs for self-supervised learning. In the second stage, the pretrained model is fine-tuned for classification using only a small number of labeled time-frequency images. The simulation results demonstrate that CL-CNN outperforms other deep models and traditional methods on signals affected by Gaussian noise and by impulsive noise, respectively. In addition, the proposed CL-CNN also shows good generalization ability, i.e., a model pretrained on Gaussian noise-affected samples also performs well on impulsive noise-affected samples.
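
    The self-supervised pretraining stage described above hinges on a contrastive loss over augmented positive and negative pairs. The sketch below shows an NT-Xent-style loss of the kind commonly used in contrastive learning; the encoder, the specific augmentations, and the focal-loss fine-tuning stage are omitted, and this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss for two augmented views (z1, z2: (N, D) embeddings)
    of the same batch of unlabeled time-frequency images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                         # scaled cosine similarity
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                 # exclude self-similarity
    # the positive for sample i is its other augmented view: i+n or i-n
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```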