
    Deep Learning Techniques in Radar Emitter Identification

    In the field of electronic warfare (EW), one of the crucial roles of electronic intelligence is the identification of radar signals. In an operational environment, it is essential to identify radar emitters as friend or foe so that appropriate countermeasures can be taken against them. With the electromagnetic environment becoming increasingly complex and signal features growing more diverse, radar emitter identification with high recognition accuracy has become a significantly challenging task. Traditional radar identification methods have shown limitations in this complex electromagnetic scenario. With the rise of artificial neural networks, notably deep learning approaches, several neural-network-based radar classification and identification methods have emerged. Machine learning and deep learning algorithms are now frequently utilized to extract various types of information from radar signals more accurately and robustly. This paper illustrates the use of Deep Neural Networks (DNNs) in radar applications for emitter classification and identification. Since deep learning approaches are capable of accurately classifying complicated patterns in radar signals, they have demonstrated significant promise for identifying radar emitters. By offering a thorough literature analysis of deep learning-based methodologies, the study aims to help researchers and practitioners better understand the application of deep learning techniques to the classification and identification of radar emitters. The study demonstrates that DNNs can be used successfully in radar classification and identification applications.
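    As an illustration of the kind of model such surveys examine, the sketch below classifies radar pulses from hand-picked pulse-descriptor features with a small feedforward network. It is a minimal illustrative example: the feature set, layer sizes, and number of emitter classes are assumptions, not details taken from the paper.

```python
# Minimal illustrative sketch (not from the paper): a small feedforward
# network that classifies radar pulses from pulse-descriptor features such
# as carrier frequency, pulse width, and pulse-repetition interval. The
# feature count, layer sizes, and class count are assumed for demonstration.
import torch
import torch.nn as nn

class EmitterClassifier(nn.Module):
    def __init__(self, n_features: int = 5, n_classes: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),  # logits over emitter classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = EmitterClassifier()
pulses = torch.randn(32, 5)        # batch of 32 pulse-descriptor vectors
logits = model(pulses)
predicted = logits.argmax(dim=1)   # predicted emitter class per pulse
```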

    Signal Processing Research Program

    Contains table of contents for Part III, table of contents for Section 1, an introduction, and reports on fourteen research projects.
    Sponsors:
    Charles S. Draper Laboratory Contract DL-H-404158
    U.S. Navy - Office of Naval Research Grant N00014-89-J-1489
    National Science Foundation Grant MIP 87-14969
    Battelle Laboratories
    Tel-Aviv University, Department of Electronic Systems
    U.S. Army Research Office Contract DAAL03-86-D-0001
    The Federative Republic of Brazil Scholarship
    Sanders Associates, Inc.
    Bell Northern Research, Ltd.
    Amoco Foundation Fellowship
    General Electric Fellowship
    National Science Foundation Fellowship
    U.S. Air Force - Office of Scientific Research Fellowship
    U.S. Navy - Office of Naval Research Grant N00014-85-K-0272
    Natural Science and Engineering Research Council of Canada - Science and Technology Scholarship

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions, and autonomous vehicles are at the heart of the developments that propel it. Owing to the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to program decision-making logic for every eventuality manually. While recent developments in data-driven solutions such as deep learning enable machines to learn effectively from large datasets, the application of these techniques within safety-critical systems such as driverless cars remains scarce.

    Autonomous vehicles need to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless vehicle research is heavily focused on road and highway environments but has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions are robust and reliable. Only then will machines truly be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks in intelligent mobility: multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive a driving policy and therefore make autonomous decisions.

    To facilitate the autonomous decisions necessary for safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset, both collected using an autonomous platform designed and developed in-house as part of this research. The proposed framework for free-space detection is based on an active learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It uses an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments, enabling an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that this online learning mechanism is superior to one-off training of deep neural networks, which requires large datasets to generalize to unfamiliar surroundings.

    The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence. This is imperative within intelligent mobility, where an autonomous vehicle should be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging (LiDAR) sensors. The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding three-dimensional data is converted to a Fisher Vector representation before being classified by a deep Convolutional Neural Network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3% and, compared to an alternative point-cloud classifier, PointNet [1], [2], outperforms it on all classes. The major contributions of this thesis are the autonomous testbed developed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by accelerating the development of intelligent driverless vehicles.
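    To make the uncertainty-driven fusion idea concrete, the sketch below combines per-cell free-space probabilities from a camera and an ultrasound sensor by weighting each with the inverse of its uncertainty, so the more confident modality dominates each cell. This is a minimal illustrative sketch under assumed inputs, not the framework implemented in the thesis.

```python
# Minimal illustrative sketch (not the thesis implementation): fuse
# free-space estimates from two modalities by inverse-uncertainty
# weighting, so the more confident sensor dominates each grid cell.
import numpy as np

def fuse_free_space(p_cam, u_cam, p_ult, u_ult, eps=1e-6):
    """Fuse per-cell free-space probabilities.

    p_* : free-space probabilities in [0, 1]
    u_* : per-cell uncertainties (e.g. predictive variance)
    """
    w_cam = 1.0 / (u_cam + eps)
    w_ult = 1.0 / (u_ult + eps)
    return (w_cam * p_cam + w_ult * p_ult) / (w_cam + w_ult)

# Toy 1-D strip of grid cells ahead of the vehicle.
p_cam = np.array([0.9, 0.8, 0.2, 0.1])
u_cam = np.array([0.1, 0.1, 0.5, 0.5])   # camera confident in near cells
p_ult = np.array([0.7, 0.6, 0.1, 0.0])
u_ult = np.array([0.4, 0.4, 0.1, 0.1])   # ultrasound confident in far cells
print(fuse_free_space(p_cam, u_cam, p_ult, u_ult).round(2))
```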

    One-stage blind source separation via a sparse autoencoder framework

    Blind source separation (BSS) is the process of recovering individual source transmissions from a received mixture of co-channel signals without a priori knowledge of the channel mixing matrix or the transmitted source signals. The received co-channel composite signal is considered to be captured across an antenna array or sensor network and is assumed to contain sparse transmissions, as users are active and inactive aperiodically over time. An unsupervised machine learning approach using an artificial feedforward neural network sparse autoencoder with one hidden layer is formulated for blindly recovering the channel matrix and source activity of co-channel transmissions. The BSS sparse autoencoder provides one-stage learning using the received signal data only, solving for the channel matrix and source signals simultaneously. The recovered co-channel source signals are produced at the encoded output of the sparse autoencoder's hidden layer. A complex-valued soft-threshold operator is used as the activation function at the hidden layer to preserve the ordered pairs of real and imaginary components. Once the weights of the sparse autoencoder are learned, the latent signals are recovered at the hidden layer without requiring any additional optimization steps. The generalization performance on future received data demonstrates the ability to recover signal transmissions from untrained data and to outperform the two-stage BSS process.
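    The complex-valued soft-threshold mentioned above shrinks each hidden unit's magnitude while preserving its phase, which is what keeps the real and imaginary components paired. A minimal sketch, with an arbitrary threshold chosen for illustration:

```python
# Minimal illustrative sketch (not the paper's code): a complex-valued
# soft-threshold operator. It shrinks each element's magnitude by lam and
# zeroes elements whose magnitude falls below the threshold, preserving
# the phase so the real/imaginary pairing of the signal stays intact.
import numpy as np

def complex_soft_threshold(z, lam):
    mag = np.abs(z)
    # Scale factor max(1 - lam/|z|, 0) shrinks the magnitude, keeps phase.
    scale = np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
    return scale * z

z = np.array([3 + 4j, 0.2 - 0.1j, -1 + 1j])
print(complex_soft_threshold(z, lam=0.5))
# The first element keeps its direction (magnitude 5 -> 4.5); the small
# second element is driven to zero, which is what promotes sparsity.
```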

    Radar-Based Human Motion Recognition by Using Vital Signs with ECA-CNN

    Radar technologies hold considerable latent potential for human motion recognition (HMR). To address the challenge of quickly and accurately classifying various complex motions, an HMR algorithm that combines an attention mechanism with a convolutional neural network (ECA-CNN) and uses vital signs is proposed. Firstly, the original radar signal is obtained from human chest-wall displacement. The Chirp-Z Transform (CZT) is adopted to refine and amplify the narrow spectral band of interest within the global spectrum of the signal, extracting accurate information from that specific band. Secondly, six time-domain features are extracted as input for the neural network. Finally, an ECA-CNN is designed to improve classification accuracy, achieving 98% accuracy with a small model size and fast inference. This method improves the classification accuracy and efficiency of the network to a large extent. Moreover, the network occupies only 100 KB, which makes it convenient to integrate into embedded devices.
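    The spectrum-refinement step can be pictured as evaluating the spectrum only across the narrow band of interest, which is what the CZT computes efficiently via Bluestein's algorithm. The sketch below produces the same band-limited samples by direct evaluation; the sampling rate, band edges, and test tone are illustrative assumptions, not values from the paper.

```python
# Minimal illustrative sketch: zooming the spectrum onto a narrow band,
# the effect the Chirp-Z Transform achieves efficiently. Here the
# band-limited spectral samples are computed by direct evaluation; the
# sampling rate, band edges, and test signal are assumed for demonstration.
import numpy as np

def zoom_spectrum(x, fs, f1, f2, m):
    """Evaluate m spectral samples of x over [f1, f2] Hz (direct method)."""
    n = np.arange(len(x))
    freqs = np.linspace(f1, f2, m)
    # One row per output frequency: X(f_k) = sum_n x[n] e^{-j 2 pi f_k n / fs}
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, kernel @ x

fs = 100.0                              # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 0.3 * t)         # e.g. a respiration-like tone
freqs, X = zoom_spectrum(x, fs, f1=0.1, f2=0.6, m=256)
print(freqs[np.argmax(np.abs(X))])      # spectral peak near 0.3 Hz
```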