    Personnel recognition and gait classification based on multistatic micro-Doppler signatures using deep convolutional neural networks

    In this letter, we propose two methods for personnel recognition and gait classification using deep convolutional neural networks (DCNNs) based on multistatic radar micro-Doppler signatures. Previous DCNN-based schemes have mainly focused on monostatic scenarios, whereas the directional diversity offered by multistatic radar is exploited in this letter to improve classification accuracy. We first propose the voted monostatic DCNN (VMo-DCNN) method, which trains DCNNs on each receiver node separately and fuses the results by binary voting. By merging the fusion step into the network architecture, we further propose the multistatic DCNN (Mul-DCNN) method, which performs slightly better than VMo-DCNN. Both methods are validated on real data measured with a 2.4-GHz multistatic radar system. Experimental results show that the Mul-DCNN achieves over 99% accuracy in armed/unarmed gait classification using only 20% of the data for training, and similar performance in two-class personnel recognition using 50% of the data for training, both higher than the accuracy obtained by a DCNN on a single radar node.
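
    The fusion step of VMo-DCNN can be pictured in a few lines: each receiver node's DCNN produces a class decision, and the decisions are combined by voting. The sketch below is an illustration under these assumptions, not the authors' implementation; all names and shapes are chosen for illustration.

        # Minimal sketch of decision-level voting fusion across radar receiver
        # nodes (illustrative, not the paper's code).
        import numpy as np

        def fuse_by_voting(per_receiver_probs):
            """per_receiver_probs: list of (n_classes,) score arrays, one per receiver node."""
            votes = [int(np.argmax(p)) for p in per_receiver_probs]       # hard decision per node
            counts = np.bincount(votes, minlength=len(per_receiver_probs[0]))
            return int(np.argmax(counts))                                 # majority-voted class

        # Hypothetical example: three receivers, two classes (e.g. armed/unarmed).
        probs = [np.array([0.8, 0.2]), np.array([0.4, 0.6]), np.array([0.7, 0.3])]
        print(fuse_by_voting(probs))   # -> 0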

    Dynamic Hand Gesture Recognition Using Ultrasonic Sonar Sensors and Deep Learning

    The space of hand gesture recognition using radar and sonar is dominated mostly by radar applications. In addition, the machine learning algorithms used by these systems are typically based on convolutional neural networks, with some applications exploring the use of long short-term memory networks. The first goal of this study was to design and build a sonar system that can classify hand gestures using a machine learning approach; the second was to compare convolutional neural networks to long short-term memory networks as a means of classifying hand gestures using sonar. A Doppler sonar system was designed and built to sense hand gestures. The sonar system is a multistatic system containing one transmitter and three receivers, and it measures the Doppler frequency shifts caused by dynamic hand gestures. Since the system uses three receivers, three different Doppler frequency channels are measured. Three additional differential frequency channels are formed by computing the differences between the frequencies of the receivers. These six channels are used as inputs to the deep learning models. Two different deep learning algorithms were used to classify the hand gestures: a Doppler biLSTM network [1] and a CNN [2]. Six basic hand gestures, two in each of the x-, y-, and z-axes, and two rotational hand gestures were recorded using both the left and right hands at different distances. Ten-fold cross-validation was used to evaluate the networks' performance and classification accuracy. The LSTM was able to classify the six basic gestures with an accuracy of at least 96%, but with the addition of the two rotational gestures the accuracy drops to 47%. This result is acceptable since the basic gestures are more commonly used than the rotational gestures. The CNN was able to classify all the gestures with an accuracy of at least 98%. Additionally, the LSTM network is able to classify separate left- and right-hand gestures with an accuracy of 80%, and the CNN with an accuracy of 83%. The study shows why the CNN is the most widely used algorithm for hand gesture recognition, as it can consistently classify gestures with various degrees of complexity. The study also shows that the LSTM network can classify hand gestures with a high degree of accuracy. More experimentation, however, needs to be done in order to increase the complexity of recognisable gestures.
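
    The six-channel input described above can be sketched directly: the three receiver Doppler channels are kept as measured and three differential channels are formed from their pairwise differences. The snippet below is a toy illustration of that preprocessing step, with shapes and variable names assumed rather than taken from the study.

        # Illustrative sketch (not the study's code): build the six input channels
        # from three receiver Doppler signals by adding pairwise differences.
        import numpy as np

        def build_six_channels(rx1, rx2, rx3):
            """rx1..rx3: 1-D arrays of Doppler frequency estimates per frame."""
            diffs = [rx1 - rx2, rx2 - rx3, rx1 - rx3]          # three differential channels
            return np.stack([rx1, rx2, rx3, *diffs], axis=0)   # shape: (6, n_frames)

        # Hypothetical toy input: 100 frames per receiver.
        rx = [np.random.randn(100) for _ in range(3)]
        channels = build_six_channels(*rx)
        print(channels.shape)   # (6, 100)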

    Effect of sparsity-aware time–frequency analysis on dynamic hand gesture classification with radar micro-Doppler signatures

    Dynamic hand gesture recognition is of great importance in human-computer interaction. In this study, the authors investigate the effect of sparsity-driven time-frequency analysis on hand gesture classification. The time-frequency spectrogram is first obtained by sparsity-driven time-frequency analysis. Three empirical micro-Doppler features are then extracted from the spectrogram, and a support vector machine is used to classify six kinds of dynamic hand gestures. Experimental results on measured data demonstrate that, compared to traditional time-frequency analysis techniques, sparsity-driven time-frequency analysis provides improved accuracy and robustness in dynamic hand gesture classification.
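
    The classification stage described above (a handful of empirical micro-Doppler features followed by a support vector machine) can be sketched as follows. The three features computed here (Doppler centroid, bandwidth, and total energy) are illustrative stand-ins, not necessarily the three features used in the paper.

        # Hedged sketch of the feature-plus-SVM stage (features are assumptions).
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def empirical_features(spectrogram, doppler_axis):
            """spectrogram: (n_doppler, n_time) magnitudes; doppler_axis: (n_doppler,) Hz."""
            profile = spectrogram.sum(axis=1)
            centroid = np.sum(doppler_axis * profile) / np.sum(profile)     # Doppler centroid
            bandwidth = np.sqrt(np.sum(((doppler_axis - centroid) ** 2) * profile) / np.sum(profile))
            energy = np.sum(spectrogram ** 2)                               # total energy
            return np.array([centroid, bandwidth, energy])

        # X: one feature vector per gesture sample, y: gesture labels.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        # clf.fit(X_train, y_train); clf.predict(X_test)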

    Highly-Optimized Radar-Based Gesture Recognition System with Depthwise Expansion Module

    The increasing integration of technology in our daily lives demands the development of more convenient human–computer interaction (HCI) methods. Most of the current hand-based HCI strategies exhibit various limitations, e.g., sensitivity to variable lighting conditions and restrictions on the operating environment. Further, such systems are often not deployed in resource-constrained contexts. Inspired by the MobileNetV1 deep learning network, this paper presents a novel hand gesture recognition system based on frequency-modulated continuous wave (FMCW) radar that exhibits higher recognition accuracy than state-of-the-art systems. First, the paper introduces a method to simplify radar preprocessing while preserving the main information of the performed gestures. Then, a deep neural classifier with the novel Depthwise Expansion Module based on depthwise separable convolutions is presented. The introduced classifier is optimized and deployed on the Coral Edge TPU board. The system defines and adopts eight different hand gestures performed by five users, offering a classification accuracy of 98.13% while operating in a low-power and resource-constrained environment. Funding: Electronic Components and Systems for European Leadership Joint Undertaking under grant agreement No. 826655 (Tempo); European Union's Horizon 2020 research and innovation programme and Belgium, France, Germany, Switzerland, and the Netherlands; Lodz University of Technology.
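
    The Depthwise Expansion Module itself is not reproduced here, but the MobileNetV1-style building block it is based on (a depthwise convolution followed by a pointwise convolution) can be sketched in PyTorch as below; layer sizes and input shapes are illustrative assumptions.

        # Sketch of a MobileNetV1-style depthwise separable convolution block,
        # shown only to illustrate the building block behind the paper's
        # Depthwise Expansion Module; the module's exact structure differs.
        import torch
        import torch.nn as nn

        class DepthwiseSeparableConv(nn.Module):
            def __init__(self, in_ch, out_ch, stride=1):
                super().__init__()
                self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                           padding=1, groups=in_ch, bias=False)   # per-channel filter
                self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)  # channel mixing
                self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
                self.act = nn.ReLU(inplace=True)

            def forward(self, x):
                x = self.act(self.bn1(self.depthwise(x)))
                return self.act(self.bn2(self.pointwise(x)))

        # Hypothetical input: a batch of 8 preprocessed radar maps with 16 channels.
        y = DepthwiseSeparableConv(16, 32)(torch.randn(8, 16, 32, 32))
        print(y.shape)   # torch.Size([8, 32, 32, 32])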

    Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform

    In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system to recognize gestures is proposed. In order to improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth and elevation, over multiple measurement cycles and encodes it into a feature cube. Rather than feeding the range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as the input of a shallow CNN for gesture recognition to reduce the computational complexity. In addition, we develop a hand activity detection (HAD) algorithm to automate the detection of gestures in the real-time case. The proposed HAD can capture the time-stamp at which a gesture finishes and feeds the hand profile of all the relevant measurement cycles before this time-stamp into the CNN with low latency. Since the proposed framework is able to detect and classify gestures at limited computational cost, it can be deployed on an edge-computing platform, whose performance is notably inferior to that of a state-of-the-art personal computer, for real-time applications. The experimental results show that the proposed framework is capable of classifying 12 gestures in real time with a high F1-score. Comment: Accepted for publication in IEEE Sensors Journal. A video is available at https://youtu.be/IR5NnZvZBL
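
    The feature-cube idea can be illustrated with a toy sketch: per measurement cycle the detected hand contributes a small profile (range, Doppler, azimuth, elevation), and the profiles of the cycles preceding the HAD-detected end of a gesture are stacked into the input tensor for the shallow CNN. All dimensions and field names below are assumptions made for illustration, not the paper's exact encoding.

        # Illustrative sketch of assembling a per-gesture input from per-cycle
        # hand profiles (dimensions and keys are assumptions).
        import numpy as np

        N_CYCLES = 32   # hypothetical number of measurement cycles kept per gesture

        def build_feature_cube(profiles):
            """profiles: list of dicts with 'range', 'doppler', 'azimuth', 'elevation'."""
            cube = np.array([[p["range"], p["doppler"], p["azimuth"], p["elevation"]]
                             for p in profiles[-N_CYCLES:]])      # keep the last N_CYCLES cycles
            return cube.reshape(1, N_CYCLES, 4)                   # (batch, cycles, features)

        # Toy usage with random profiles.
        profiles = [{"range": np.random.rand(), "doppler": np.random.rand(),
                     "azimuth": np.random.rand(), "elevation": np.random.rand()}
                    for _ in range(40)]
        print(build_feature_cube(profiles).shape)   # (1, 32, 4)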

    Dynamic Hand Gesture Classification Based on Multistatic Radar Micro-Doppler Signatures Using Convolutional Neural Network

    We propose a novel convolutional neural network (CNN) for dynamic hand gesture classification based on multistatic radar micro-Doppler signatures. The time-frequency spectrograms of the micro-Doppler signatures at all the receiver antennas are adopted as the input to the CNN, where data fusion of the different receivers is carried out at an adjustable position. The optimal fusion position, which achieves the highest classification accuracy, is determined by a series of experiments. Experimental results on measured data show that 1) the classification accuracy using multistatic radar is significantly higher than with monostatic radar, and 2) fusion at the middle of the CNN achieves the best classification accuracy.
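
    The adjustable-fusion idea can be sketched as a CNN with one convolutional branch per receiver whose feature maps are concatenated partway through the network. In the sketch below the fusion point, layer sizes, and class count are illustrative assumptions, not the paper's architecture.

        # Hedged sketch of mid-network fusion (not the authors' architecture):
        # each receiver's spectrogram passes through its own branch, the feature
        # maps are concatenated, and shared layers finish the classification.
        import torch
        import torch.nn as nn

        class MidFusionCNN(nn.Module):
            def __init__(self, n_receivers=3, n_classes=4):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
                    for _ in range(n_receivers)])
                self.shared = nn.Sequential(
                    nn.Conv2d(8 * n_receivers, 16, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes))

            def forward(self, spectrograms):           # list of (batch, 1, H, W) tensors
                feats = [branch(s) for branch, s in zip(self.branches, spectrograms)]
                return self.shared(torch.cat(feats, dim=1))   # fuse by channel concatenation

        # Toy usage: three receivers, 64x64 spectrograms, batch of 2.
        model = MidFusionCNN()
        out = model([torch.randn(2, 1, 64, 64) for _ in range(3)])
        print(out.shape)   # torch.Size([2, 4])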