131 research outputs found
Efficient Convolutional Neural Network for FMCW Radar Based Hand Gesture Recognition
FMCW radar can detect an object's range, speed, and angle-of-arrival; its advantages include robustness to bad weather, good range resolution, and good speed resolution. In this paper, we consider FMCW radar as a novel interaction interface for laptops. We merge sequences of an object's range, speed, and azimuth information into a single input, then feed it to a convolutional neural network to learn spatial and temporal patterns. Our model achieved 96% accuracy on the test set and in real-time testing.
Comment: Poster in Ubicomp 201
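The merging step described above can be sketched as channel stacking: the per-frame range, speed, and azimuth sequences become the channels of one image-like tensor that a single 2-D CNN can consume. The shapes and array names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical per-frame measurements over one gesture: range, speed (Doppler),
# and azimuth profiles, each with 32 bins across 16 measurement frames.
n_frames, n_bins = 16, 32
rng = np.random.default_rng(0)
range_seq = rng.random((n_frames, n_bins))
speed_seq = rng.random((n_frames, n_bins))
azimuth_seq = rng.random((n_frames, n_bins))

# Merge the three sequences into one 3-channel image (frames x bins x 3),
# so a single 2-D CNN can learn spatial and temporal patterns jointly.
merged = np.stack([range_seq, speed_seq, azimuth_seq], axis=-1)
print(merged.shape)  # (16, 32, 3)
```

A classifier then treats the frame axis as "height" and the bin axis as "width", which is what lets one convolutional network capture both spatial and temporal structure.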
Highly-Optimized Radar-Based Gesture Recognition System with Depthwise Expansion Module
The increasing integration of technology into our daily lives demands the development of more convenient human-computer interaction (HCI) methods. Most current hand-based HCI strategies exhibit various limitations, e.g., sensitivity to variable lighting conditions and restrictions on the operating environment. Further, such systems are often not deployed in resource-constrained contexts. Inspired by the MobileNetV1 deep learning network, this paper presents a novel hand gesture recognition system based on frequency-modulated continuous wave (FMCW) radar, exhibiting higher recognition accuracy than state-of-the-art systems. First, the paper introduces a method to simplify radar preprocessing while preserving the main information of the performed gestures. Then, a deep neural classifier with the novel Depthwise Expansion Module, based on depthwise separable convolutions, is presented. The classifier is optimized and deployed on the Coral Edge TPU board. The system defines and adopts eight different hand gestures performed by five users, offering a classification accuracy of 98.13% while operating in a low-power and resource-constrained environment.
Funding: Electronic Components and Systems for European Leadership Joint Undertaking under grant agreement No. 826655 (Tempo); European Union's Horizon 2020 research and innovation programme and Belgium, France, Germany, Switzerland, and the Netherlands; Lodz University of Technology
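The efficiency gain behind MobileNetV1-style modules comes from factoring a standard convolution into a depthwise pass and a 1x1 pointwise pass. A quick parameter-count comparison (the channel sizes are illustrative, not the paper's actual layer dimensions):

```python
def depthwise_separable_params(k, c_in, c_out):
    """Weight count of a depthwise separable conv vs. a standard conv
    (bias terms omitted). k: kernel size, c_in/c_out: channel counts."""
    standard = k * k * c_in * c_out
    depthwise = k * k * c_in        # one k x k filter per input channel
    pointwise = c_in * c_out        # 1x1 conv mixing the channels
    return standard, depthwise + pointwise

std, sep = depthwise_separable_params(k=3, c_in=64, c_out=128)
print(std, sep)  # 73728 8768: the standard conv needs ~8.4x more weights
```

This roughly k^2-fold reduction is what makes such classifiers deployable on low-power accelerators like the Coral Edge TPU.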
Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform
In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system is proposed to recognize gestures. To improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth, and elevation over multiple measurement cycles, and encodes it into a feature cube. Rather than feeding a range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as input to a shallow CNN for gesture recognition, reducing the computational complexity. In addition, we develop a hand activity detection (HAD) algorithm to automate the detection of gestures in the real-time case. The proposed HAD can capture the time-stamp at which a gesture finishes and feed the hand profile of all relevant measurement cycles before this time-stamp into the CNN with low latency. Since the proposed framework is able to detect and classify gestures at limited computational cost, it can be deployed on an edge-computing platform, whose performance is markedly inferior to that of a state-of-the-art personal computer, for real-time applications. The experimental results show that the proposed framework is capable of classifying 12 gestures in real time with a high F1-score.
Comment: Accepted for publication in IEEE Sensors Journal. A video is available on https://youtu.be/IR5NnZvZBL
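One plausible way to realize the end-of-gesture detection the HAD algorithm performs is an energy-threshold rule over measurement cycles: a gesture "finishes" once per-cycle hand energy has been active and then stays quiet for a few consecutive cycles. The function, threshold, and energy values below are a minimal sketch under that assumption, not the paper's actual algorithm.

```python
def detect_gesture_end(energy, threshold=0.5, quiet_cycles=3):
    """Return the index of the first quiet cycle after activity, once the
    per-cycle energy has stayed below `threshold` for `quiet_cycles`
    consecutive cycles; None if no gesture end is found."""
    active = False
    quiet = 0
    for i, e in enumerate(energy):
        if e >= threshold:
            active, quiet = True, 0   # hand is moving: reset the quiet run
        elif active:
            quiet += 1
            if quiet >= quiet_cycles:
                return i - quiet_cycles + 1  # first quiet cycle after activity
    return None

# Hypothetical per-cycle "hand energy" (e.g., summed range-Doppler magnitude).
energy = [0.1, 0.2, 0.9, 1.2, 0.8, 0.3, 0.2, 0.1, 0.1]
print(detect_gesture_end(energy))  # 5
```

Once the end time-stamp is known, the feature-cube slices of the preceding cycles can be handed to the shallow CNN in one shot, which is what keeps the classification latency low.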
Novel Hybrid-Learning Algorithms for Improved Millimeter-Wave Imaging Systems
Increasing attention is being paid to millimeter-wave (mmWave), 30 GHz to 300
GHz, and terahertz (THz), 300 GHz to 10 THz, sensing applications including
security sensing, industrial packaging, medical imaging, and non-destructive
testing. Traditional methods for perception and imaging are challenged by novel
data-driven algorithms that offer improved resolution, localization, and
detection rates. Over the past decade, deep learning technology has garnered
substantial popularity, particularly in perception and computer vision
applications. Whereas conventional signal processing techniques are more easily
generalized to various applications, hybrid approaches where signal processing
and learning-based algorithms are interleaved pose a promising compromise
between performance and generalizability. Furthermore, such hybrid algorithms
improve model training by leveraging the known characteristics of radio
frequency (RF) waveforms, thus yielding more efficiently trained deep learning
algorithms and offering higher performance than conventional methods. This
dissertation introduces novel hybrid-learning algorithms for improved mmWave
imaging systems applicable to a host of problems in perception and sensing.
Various problem spaces are explored, including static and dynamic gesture
classification; precise hand localization for human-computer interaction;
high-resolution near-field mmWave imaging using forward synthetic aperture
radar (SAR); SAR under irregular scanning geometries; mmWave image
super-resolution using deep neural network (DNN) and Vision Transformer (ViT)
architectures; and data-level multiband radar fusion using a novel
hybrid-learning architecture. Furthermore, we introduce several novel
approaches for deep learning model training and dataset synthesis.
Comment: PhD Dissertation Submitted to UTD ECE Departmen
Low Complexity Radar Gesture Recognition Using Synthetic Training Data
Developments in radio detection and ranging (radar) technology have made hand gesture recognition feasible. In heat-map-based gesture recognition, feature images have a large size and require complex neural networks to extract information. Machine learning methods typically require large amounts of data, and collecting hand gestures with radar is time- and energy-consuming. Therefore, a low-computational-complexity algorithm for hand gesture recognition based on a frequency-modulated continuous-wave (FMCW) radar and a synthetic hand gesture feature generator are proposed. In the low-complexity algorithm, a two-dimensional fast Fourier transform is applied to the radar raw data to generate a range-Doppler matrix. After that, background modelling is applied to separate the dynamic object from the static background. Then the bin with the highest magnitude in the range-Doppler matrix is selected to locate the target and obtain its range and velocity. The bins at this location along the antenna dimension can be utilised to calculate the angle of the target using Fourier beam steering. In the synthetic generator, the Blender software is used to generate different hand gestures and trajectories, and the range, velocity, and angle of targets are then extracted directly from the trajectory. The experimental results demonstrate that the average recognition accuracy of the model on the test set can reach 89.13% when the synthetic data are used as the training set and the real data are used as the test set. This indicates that the generation of synthetic data can make a meaningful contribution in the pre-training phase.
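The preprocessing chain described above (2-D FFT, background removal, peak-bin selection, angle FFT across antennas) can be sketched end-to-end on simulated raw data. All shapes, the random input, and the simple mean-subtraction background stand-in are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Simulated raw FMCW data cube: (chirps, samples per chirp, antennas).
rng = np.random.default_rng(1)
n_chirps, n_samples, n_ant = 64, 128, 4
raw = rng.standard_normal((n_chirps, n_samples, n_ant))

# 1) 2-D FFT over samples (range) and chirps (Doppler) -> range-Doppler cube.
rd = np.fft.fft(np.fft.fft(raw, axis=1), axis=0)
rd_mag = np.abs(rd).sum(axis=2)          # combine antennas for detection

# 2) Crude background stand-in: subtract each range bin's mean across Doppler.
rd_mag -= rd_mag.mean(axis=0, keepdims=True)

# 3) Pick the strongest bin: its indices give the target's velocity and range.
doppler_bin, range_bin = np.unravel_index(np.argmax(rd_mag), rd_mag.shape)

# 4) FFT across the antenna dimension at that bin -> angle (beam steering).
angle_spectrum = np.abs(np.fft.fft(rd[doppler_bin, range_bin, :], n=32))
angle_bin = int(np.argmax(angle_spectrum))
print(range_bin, doppler_bin, angle_bin)
```

The resulting (range, velocity, angle) triple per frame is exactly the kind of compact feature the synthetic Blender-based generator can also produce, which is what allows synthetic and real data to share one feature space.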
Indoor human activity recognition using high-dimensional sensors and deep neural networks
Many smart home applications rely on indoor human activity recognition. This challenge is currently primarily tackled by employing video camera sensors. However, the use of such sensors is characterized by fundamental technical deficiencies in an indoor environment, often also resulting in a breach of privacy. In contrast, a radar sensor resolves most of these flaws and, in particular, maintains privacy. In this paper, we investigate a novel approach toward automatic indoor human activity recognition, feeding high-dimensional radar and video camera sensor data into several deep neural networks. Furthermore, we explore the efficacy of sensor fusion to provide a solution in less-than-ideal circumstances. We validate our approach on two newly constructed and published data sets that consist of 2347 and 1505 samples distributed over six different types of gestures and events, respectively. From our analysis, we conclude that, when considering a radar sensor, it is optimal to use a three-dimensional convolutional neural network that takes sequential range-Doppler maps as input. This model achieves 12.22% and 2.97% error rates on the gestures and the events data set, respectively. A pretrained residual network is employed to deal with the video camera sensor data and obtains 1.67% and 3.00% error rates on the same data sets. We show that there is a clear benefit in combining both sensors to enable activity recognition in less-than-ideal circumstances.
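A common way to combine two such classifiers, and one plausible reading of the fusion step above, is late fusion of their per-class probabilities. The scores below are made-up illustrations, and averaging is only one of several fusion schemes the paper might use.

```python
import numpy as np

# Hypothetical softmax outputs over the same activity classes.
radar_probs = np.array([0.10, 0.70, 0.20])   # radar model
camera_probs = np.array([0.05, 0.40, 0.55])  # camera model

# Late fusion: average the class probabilities, then pick the best class.
fused = (radar_probs + camera_probs) / 2
predicted_class = int(np.argmax(fused))
print(fused, predicted_class)  # class 1 wins after fusion
```

In less-than-ideal circumstances (poor lighting for the camera, clutter for the radar), the weaker sensor's flat distribution barely perturbs the average, which is one intuition for why fusion helps here.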
- …