
    Deep Learning Techniques in Radar Emitter Identification

    In the field of electronic warfare (EW), one of the crucial roles of electronic intelligence is the identification of radar signals. In an operational environment, it is essential to identify radar emitters, whether friend or foe, so that appropriate radar countermeasures can be taken against them. With the electromagnetic environment becoming increasingly complex and signal features growing more diverse, radar emitter identification with high recognition accuracy has become a significantly challenging task. Traditional radar identification methods have shown limitations in this complex electromagnetic scenario. With the emergence of artificial neural networks, notably deep learning approaches, several radar classification and identification methods based on them have appeared. Machine learning and deep learning algorithms are now frequently utilized to extract various types of information from radar signals more accurately and robustly. This paper illustrates the use of Deep Neural Networks (DNN) in radar applications for emitter classification and identification. Since deep learning approaches are capable of accurately classifying complicated patterns in radar signals, they have demonstrated significant promise for identifying radar emitters. By offering a thorough literature analysis of deep learning-based methodologies, the study intends to assist researchers and practitioners in better understanding the application of deep learning techniques to challenges related to the classification and identification of radar emitters. The study demonstrates that DNNs can be used successfully in radar classification and identification applications.
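    To make the surveyed setting concrete, the following is a minimal sketch of the kind of DNN emitter classifier discussed above, assuming purely for illustration that pulses are rendered as 64x64 single-channel time-frequency images; the architecture, sizes, and the PyTorch choice are assumptions, not any specific surveyed model.

        import torch
        import torch.nn as nn

        class EmitterCNN(nn.Module):
            """Toy CNN mapping a time-frequency image to emitter logits."""
            def __init__(self, num_emitters: int = 8):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                # After two 2x poolings a 64x64 input becomes 32 x 16 x 16.
                self.classifier = nn.Linear(32 * 16 * 16, num_emitters)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # Usage: logits for a batch of 4 hypothetical 64x64 spectrograms.
        logits = EmitterCNN()(torch.randn(4, 1, 64, 64))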

    Adversarial Attack on Radar-based Environment Perception Systems

    Due to their robustness to degraded capturing conditions, radars are widely used for environment perception, a critical task in applications such as autonomous vehicles. More specifically, Ultra-Wide Band (UWB) radars are particularly efficient in short-range settings, as they carry rich information about the environment. Recent UWB-based systems rely on Machine Learning (ML) to exploit the rich signature of these sensors. However, ML classifiers are susceptible to adversarial examples, which are crafted from raw data to fool the classifier into assigning the input to the wrong class. These attacks represent a serious threat to system integrity, especially for safety-critical applications. In this work, we present a new adversarial attack on UWB radars in which an adversary injects adversarial radio noise into the wireless channel to cause an obstacle recognition failure. First, based on signals collected in a real-life environment, we show that conventional attacks fail to generate robust noise under realistic conditions. We propose a-RNA, i.e., Adversarial Radio Noise Attack, to overcome these issues. Specifically, a-RNA generates an adversarial noise that is effective without synchronization between the input signal and the noise. Moreover, the noise generated by a-RNA is, by design, robust against pre-processing countermeasures such as filtering-based defenses. In addition to meeting the undetectability objective by limiting the noise magnitude budget, a-RNA also remains effective in the presence of sophisticated defenses in the spectral domain by introducing a frequency budget. We believe this work should raise awareness of potentially critical implementations of adversarial attacks on radar systems, which should be taken seriously.
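    The core idea lends itself to a short sketch. The following is a hedged, simplified illustration of a synchronization-free adversarial noise attack in the spirit described above, not the authors' a-RNA implementation: a single noise vector is optimized under an L-infinity magnitude budget while being applied at a random circular shift each step, so its effect does not depend on alignment with the input. The model interface, budget values, and PyTorch choice are assumptions.

        import torch
        import torch.nn.functional as F

        def adversarial_radio_noise(model, signals, labels,
                                    eps=0.05, steps=200, lr=0.01):
            """Optimize one reusable noise vector that misleads `model`."""
            noise = torch.zeros(signals.shape[-1], requires_grad=True)
            opt = torch.optim.Adam([noise], lr=lr)
            for _ in range(steps):
                shift = int(torch.randint(0, signals.shape[-1], (1,)))
                shifted = torch.roll(noise, shifts=shift)  # unsynchronized injection
                # Maximize the classification loss to force recognition failures.
                loss = -F.cross_entropy(model(signals + shifted), labels)
                opt.zero_grad()
                loss.backward()
                opt.step()
                with torch.no_grad():
                    # Magnitude budget: keeps the injected noise hard to detect.
                    noise.clamp_(-eps, eps)
            return noise.detach()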

    Radar intra-pulse signal modulation classification with contrastive learning

    Existing research on deep learning for radar intra-pulse signal modulation classification is mainly based on supervised learning techniques, whose performance relies heavily on a large number of labeled samples. To overcome this limitation, a self-supervised learning framework, contrastive learning (CL), combined with a convolutional neural network (CNN) and a focal loss function is proposed, called CL-CNN. A two-stage training strategy is adopted by CL-CNN. In the first stage, the model is pretrained using abundant unlabeled time-frequency images, and data augmentation is used to introduce positive-pair and negative-pair samples for self-supervised learning. In the second stage, the pretrained model is fine-tuned for classification using only a small number of labeled time-frequency images. The simulation results demonstrate that CL-CNN outperforms other deep models and traditional methods in scenarios with Gaussian noise and impulsive noise-affected signals, respectively. In addition, the proposed CL-CNN shows good generalization ability, i.e., the model pretrained with Gaussian noise-affected samples also performs well on impulsive noise-affected samples.
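    As a rough illustration of the two-stage recipe, the sketch below pairs an NT-Xent-style contrastive loss for the unlabeled pretraining stage with a focal loss for the labeled fine-tuning stage; these are standard formulations assumed for illustration, not necessarily the paper's exact choices.

        import torch
        import torch.nn.functional as F

        def nt_xent(z1, z2, tau=0.5):
            """Stage 1: contrastive loss; two augmented views are positives."""
            n = z1.shape[0]
            z = F.normalize(torch.cat([z1, z2]), dim=1)
            sim = z @ z.T / tau
            mask = torch.eye(2 * n, dtype=torch.bool)
            sim = sim.masked_fill(mask, float("-inf"))  # exclude self-similarity
            # Row i's positive is its other view at index (i + n) mod 2n.
            targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
            return F.cross_entropy(sim, targets)

        def focal_loss(logits, labels, gamma=2.0):
            """Stage 2: down-weight easy examples when labels are scarce."""
            ce = F.cross_entropy(logits, labels, reduction="none")
            pt = torch.exp(-ce)  # model's probability for the true class
            return ((1 - pt) ** gamma * ce).mean()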

    Smart Sensor Technologies for IoT

    Recent developments in wireless networks and devices have led to novel services that will utilize wireless communication on a new level. Considerable effort and resources have been dedicated to establishing new communication networks that will support machine-to-machine communication and the Internet of Things (IoT). In these systems, various smart and sensory devices are deployed and connected, enabling large amounts of data to be streamed. Smart services represent new trends in mobile services, i.e., a completely new spectrum of context-aware, personalized, and intelligent services and applications. A variety of existing services utilize information about the position of the user or mobile device. The position of a mobile device is often obtained using the Global Navigation Satellite System (GNSS) chips that are integrated into modern mobile devices (smartphones). However, GNSS is not always a reliable source of position estimates due to multipath propagation and signal blockage. Moreover, integrating GNSS chips into all devices might have a negative impact on the battery life of future IoT applications. Therefore, alternative position estimation solutions should be investigated and implemented in IoT applications. This Special Issue, “Smart Sensor Technologies for IoT”, aims to report on some of the recent research efforts on this increasingly important topic. The twelve accepted papers in this issue cover various aspects of smart sensor technologies for IoT.

    Radio frequency fingerprint identification for Internet of Things: A survey

    Radio frequency fingerprint (RFF) identification is a promising technique for identifying Internet of Things (IoT) devices. This paper presents a comprehensive survey of RFF identification, covering various aspects ranging from related definitions to the details of each stage in the identification process, namely signal preprocessing, RFF feature extraction, further processing, and RFF identification. Specifically, three main preprocessing steps are summarized: carrier frequency offset estimation, noise elimination, and channel cancellation. In addition, three kinds of RFFs are categorized: I/Q signal-based, parameter-based, and transformation-based features. Meanwhile, feature fusion and feature dimension reduction are elaborated as the two main further processing methods. Furthermore, a novel framework is established from the perspective of closed-set and open-set problems, and the related state-of-the-art methodologies are investigated, including approaches based on traditional machine learning, deep learning, and generative models. Finally, we highlight the challenges faced by RFF identification and point out future research trends in this field.
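    The preprocessing chain summarized above can be pictured with a short sketch. The estimators below (a mean-phase-increment carrier frequency offset estimate and a low-pass filter for noise elimination) are common textbook choices assumed here for illustration; the survey covers many alternatives, and channel cancellation is omitted.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def estimate_cfo(x: np.ndarray, fs: float) -> float:
            """Coarse carrier frequency offset from the mean phase step."""
            return np.angle(np.mean(x[1:] * np.conj(x[:-1]))) * fs / (2 * np.pi)

        def preprocess(x: np.ndarray, fs: float, cutoff: float = 0.2) -> np.ndarray:
            """CFO removal then low-pass filtering of a complex baseband capture."""
            n = np.arange(len(x))
            x = x * np.exp(-2j * np.pi * estimate_cfo(x, fs) * n / fs)  # CFO removal
            b, a = butter(4, cutoff)  # cutoff is normalized to Nyquist
            return filtfilt(b, a, x)  # zero-phase filtering preserves the RFF shape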

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Due to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to manually program decision-making logic for every eventuality. While recent developments in data-driven solutions such as deep learning enable machines to learn effectively from large datasets, the application of these techniques within safety-critical systems such as driverless cars remains scarce.

    Autonomous vehicles need to be able to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road or highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable. Only then will machines truly be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility. Specifically, it investigates multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive driving policy and therefore make autonomous decisions.

    To facilitate the autonomous decisions necessary to derive safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are specific datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research activity. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It utilizes an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which requires large datasets to generalize to unfamiliar surroundings.

    The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence. It is imperative within the spectrum of intelligent mobility that an autonomous vehicle be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging (LiDAR) sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding 3-dimensional data are converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%. When compared to an alternative point-cloud classifier, PointNet [1], [2], the proposed framework outperformed it on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, constitutes the major contribution of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
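    As an illustration of the point-cloud encoding step described above, the sketch below fits a Gaussian mixture model on 3-D points and computes a simplified, first-order Fisher Vector (gradients with respect to the component means only); the component count, data shapes, and scikit-learn choice are assumptions, and the thesis' full pipeline (camera-based segmentation, CNN classification) is omitted.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fisher_vector(points, gmm):
            """Encode an (N, 3) point cloud as a fixed-length descriptor."""
            resp = gmm.predict_proba(points)              # (N, K) soft assignments
            diff = points[:, None, :] - gmm.means_[None]  # (N, K, 3) offsets
            grad_mu = (resp[..., None] * diff / gmm.covariances_[None]).mean(0)
            return grad_mu.ravel()                        # first-order statistics only

        # Usage on random stand-in data; real inputs would be LiDAR points.
        gmm = GaussianMixture(n_components=8, covariance_type="diag")
        gmm.fit(np.random.randn(2048, 3))
        fv = fisher_vector(np.random.randn(500, 3), gmm)  # feed to a classifier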