
    Sparsity-Driven Micro-Doppler Feature Extraction for Dynamic Hand Gesture Recognition

    In this paper, a sparsity-driven method of micro-Doppler analysis is proposed for dynamic hand gesture recognition with radar sensors. First, sparse representations of the echoes reflected from dynamic hand gestures are obtained through a Gaussian-windowed Fourier dictionary. Second, the micro-Doppler features of dynamic hand gestures are extracted using the orthogonal matching pursuit algorithm. Finally, a nearest neighbor classifier combined with the modified Hausdorff distance recognizes dynamic hand gestures from the sparse micro-Doppler features. Experiments with real radar data show that the recognition accuracy of the proposed method exceeds 96% under moderate noise, and that the method outperforms approaches based on principal component analysis and deep convolutional neural networks when the training dataset is small.
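The first two steps of the pipeline above can be sketched generically: build a Gaussian-windowed Fourier dictionary and greedily select atoms with orthogonal matching pursuit. This is a simplified real-valued illustration, not the authors' implementation; the function names and dictionary parameters are assumptions.

```python
import numpy as np

def gaussian_fourier_dictionary(n, freqs, centers, sigma):
    """Dictionary of Gaussian-windowed cosines (real-valued sketch).
    Each atom is a cosine at one of `freqs` cycles per n samples,
    windowed by a Gaussian centered at one of `centers`, unit-normalized."""
    t = np.arange(n)
    atoms = []
    for c in centers:
        window = np.exp(-0.5 * ((t - c) / sigma) ** 2)
        for f in freqs:
            atom = window * np.cos(2 * np.pi * f * t / n)
            atoms.append(atom / np.linalg.norm(atom))
    return np.stack(atoms, axis=1)  # shape (n, n_atoms)

def omp(signal, D, k):
    """Orthogonal matching pursuit: greedily pick k atoms,
    re-fitting the coefficients by least squares at each step."""
    residual = signal.copy()
    support = []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], signal, rcond=None)
        residual = signal - D[:, support] @ coef
    return support, coef
```

The selected time-frequency atoms (which window center, which Doppler frequency) play the role of the sparse micro-Doppler features that a downstream classifier would consume.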

    Personnel recognition and gait classification based on multistatic micro-doppler signatures using deep convolutional neural networks

    In this letter, we propose two methods for personnel recognition and gait classification using deep convolutional neural networks (DCNNs) based on multistatic radar micro-Doppler signatures. Previous DCNN-based schemes have mainly focused on monostatic scenarios, whereas the directional diversity offered by multistatic radar is exploited in this letter to improve classification accuracy. We first propose the voted monostatic DCNN (VMo-DCNN) method, which trains a DCNN on each receiver node separately and fuses the results by binary voting. By merging the fusion step into the network architecture, we further propose the multistatic DCNN (Mul-DCNN) method, which performs slightly better than VMo-DCNN. These methods are validated on real data measured with a 2.4-GHz multistatic radar system. Experimental results show that Mul-DCNN achieves over 99% accuracy in armed/unarmed gait classification using only 20% of the data for training, and similar performance in two-class personnel recognition using 50% of the data for training, both higher than the accuracy obtained by running a DCNN on a single radar node.
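The voting-based fusion in VMo-DCNN can be illustrated as a plain majority vote over the per-node class predictions. This is a generic sketch; the function name and the tie-break rule (prefer the first node's label) are assumptions, not details from the letter.

```python
from collections import Counter

def fuse_by_voting(node_predictions):
    """Majority vote across per-node classifier outputs.

    node_predictions: list of per-node prediction lists, all the same
    length (one predicted class label per sample per receiver node).
    Returns one fused label per sample; ties go to the first node."""
    fused = []
    for preds in zip(*node_predictions):  # one tuple of labels per sample
        counts = Counter(preds)
        top = max(counts.values())
        fused.append(next(p for p in preds if counts[p] == top))
    return fused
```

Mul-DCNN instead moves this fusion step inside the network, so the combination of receiver nodes is learned jointly rather than decided by a fixed vote.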

    Dynamic Hand Gesture Classification Based on Multistatic Radar Micro-Doppler Signatures Using Convolutional Neural Network

    We propose a novel convolutional neural network (CNN) for dynamic hand gesture classification based on multistatic radar micro-Doppler signatures. The time-frequency spectrograms of micro-Doppler signatures at all the receiver antennas are adopted as the input to the CNN, where data fusion of the different receivers is carried out at an adjustable position. The optimal fusion position that achieves the highest classification accuracy is determined by a series of experiments. Experimental results on measured data show that 1) the classification accuracy using multistatic radar is significantly higher than that of monostatic radar, and 2) fusion at the middle of the CNN achieves the best classification accuracy.
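The adjustable-fusion idea can be sketched without a deep-learning framework: each receiver's spectrogram passes through its own branch, the branch features are concatenated at some depth (here, mid-network, the position the paper found best), and a shared head produces class logits. All shapes, weights, and names below are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mid_fusion_forward(spectrograms, branch_weights, head_weights):
    """Per-receiver branch (linear + ReLU), then concatenation mid-network,
    then a shared linear head. The concatenation line is the 'fusion position'."""
    feats = [relu(W @ s.ravel()) for s, W in zip(spectrograms, branch_weights)]
    fused = np.concatenate(feats)  # fusion at the middle of the network
    return head_weights @ fused    # one logit per gesture class

# toy setup: three receivers, 8x8 spectrograms, 16 features per branch, 4 classes
rng = np.random.default_rng(0)
spectrograms = [rng.standard_normal((8, 8)) for _ in range(3)]
branches = [rng.standard_normal((16, 64)) for _ in range(3)]
head = rng.standard_normal((4, 3 * 16))
logits = mid_fusion_forward(spectrograms, branches, head)
```

Moving the `np.concatenate` earlier (on the raw spectrograms) or later (on the per-branch logits) corresponds to the other fusion positions the paper's experiments compare.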

    Arm Motion Classification Using Curve Matching of Maximum Instantaneous Doppler Frequency Signatures

    Hand and arm gesture recognition using the radio frequency (RF) sensing modality proves valuable in man-machine interfaces and smart environments. In this paper, we use curve matching techniques for measuring the similarity of the maximum instantaneous Doppler frequencies corresponding to different arm gestures. In particular, we apply both the Fréchet and dynamic time warping (DTW) distances that, unlike the Euclidean (L2) and Manhattan (L1) distances, take into account both the location and the order of the points when deciding whether two curves are similar. It is shown that improved arm gesture classification can be achieved by using the DTW method, in lieu of the L2 and L1 distances, under the nearest neighbor (NN) classifier.
    Comment: 6 pages, 7 figures, 2020 IEEE Radar Conference. arXiv admin note: substantial text overlap with arXiv:1910.1117
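The DTW distance used above is the textbook dynamic-programming recurrence; a minimal implementation over 1-D maximum-Doppler curves (a generic sketch, not the authors' code) looks like:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.
    Cost of matching a[i] to b[j] is |a[i] - b[j]|; the recurrence allows
    a step to repeat a point of either sequence, so the alignment respects
    the order of the points without forcing a one-to-one pairing."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # repeat b[j]
                                 D[i][j - 1],      # repeat a[i]
                                 D[i - 1][j - 1])  # advance both
    return D[n][m]
```

Under an NN classifier, a test gesture's maximum-Doppler curve is simply assigned the label of the training curve with the smallest `dtw_distance`.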

    SimHumalator: An Open Source End-to-End Radar Simulator For Human Activity Recognition

    Radio-frequency based non-cooperative monitoring of humans has numerous applications, ranging from law enforcement to ubiquitous sensing applications such as ambient assisted living and biomedical applications for non-intrusively monitoring patients. Large training datasets, almost unlimited memory capacity, and ever-increasing processing speeds of computers could drive forward data-driven, deep-learning-focused research in the above applications. However, generating and labeling large volumes of high-quality, diverse radar datasets is an onerous task. Furthermore, unlike the fields of vision and image processing, the radar community has limited access to databases that contain large volumes of experimental data. Therefore, in this article, we present an open-source motion-capture-data-driven simulation tool, SimHumalator, that can generate large volumes of human micro-Doppler radar data in passive WiFi scenarios. The simulator integrates IEEE 802.11 WiFi standard (IEEE 802.11g, n, and ad) compliant transmissions with human animation data to generate micro-Doppler features that incorporate the diversity of human motion characteristics and the sensor parameters. The simulated signatures have been validated with experimental data gathered using an in-house-built hardware prototype. This article describes the simulation methodology in detail and provides case studies on the feasibility of using simulated micro-Doppler spectrograms for data augmentation tasks.
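A micro-Doppler spectrogram of the kind such a simulator emits can be sketched as a short-time Fourier transform of a frequency-modulated return. The signal model below (a sinusoidally varying Doppler frequency, standing in for a swinging limb) is synthetic and purely illustrative; it is not SimHumalator's signal model.

```python
import numpy as np

def spectrogram(x, win_len=64, hop=16):
    """Magnitude STFT with a Hann window.
    Rows are Doppler (frequency) bins, columns are time frames."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

# synthetic return: instantaneous Doppler oscillating around 100 Hz at 2 Hz
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
inst_freq = 100 + 50 * np.sin(2 * np.pi * 2 * t)   # Hz
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
S = spectrogram(np.cos(phase))
```

Stacks of such spectrograms, rendered for many motions and sensor parameters, are what a data-augmentation pipeline would mix with measured data.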

    Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform

    In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system to recognize gestures is proposed. To improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth, and elevation, over multiple measurement cycles and encodes it into a feature cube. Rather than feeding the range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as the input to a shallow CNN for gesture recognition, reducing computational complexity. In addition, we develop a hand activity detection (HAD) algorithm to automate gesture detection in the real-time case. The proposed HAD can capture the time-stamp at which a gesture finishes and feed the hand profile of all the relevant measurement cycles before this time-stamp into the CNN with low latency. Since the proposed framework is able to detect and classify gestures at limited computational cost, it can be deployed on an edge-computing platform for real-time applications, even though such a platform is notably less powerful than a state-of-the-art personal computer. The experimental results show that the proposed framework can classify 12 gestures in real time with a high F1-score.
    Comment: Accepted for publication in IEEE Sensors Journal. A video is available at https://youtu.be/IR5NnZvZBL
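The abstract does not spell out the HAD algorithm; one plausible end-of-gesture detector, sketched here purely as an assumption, thresholds per-cycle signal energy and declares the gesture finished after a run of quiet measurement cycles. The function name, threshold, and quiet-run heuristic are all hypothetical.

```python
def detect_gesture_end(frame_energies, threshold, quiet_frames=3):
    """Return the index of the measurement cycle at which a gesture finishes.

    A gesture is 'active' once energy exceeds `threshold`; it is declared
    finished at the first of `quiet_frames` consecutive sub-threshold cycles.
    Returns None if no gesture end is detected in the sequence."""
    active = False
    quiet = 0
    for i, e in enumerate(frame_energies):
        if e > threshold:
            active, quiet = True, 0
        elif active:
            quiet += 1
            if quiet >= quiet_frames:
                return i - quiet_frames + 1  # first quiet cycle = gesture end
    return None
```

On detection, the measurement cycles up to the returned index would be packed into the feature cube and handed to the shallow CNN, which keeps the end-to-end latency low.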