
    Indoor human activity recognition using high-dimensional sensors and deep neural networks

    Many smart home applications rely on indoor human activity recognition. This challenge is currently tackled primarily by employing video camera sensors. However, such sensors suffer from fundamental technical deficiencies in an indoor environment and often also breach privacy. In contrast, a radar sensor resolves most of these flaws and, in particular, preserves privacy. In this paper, we investigate a novel approach to automatic indoor human activity recognition, feeding high-dimensional radar and video camera sensor data into several deep neural networks. Furthermore, we explore the efficacy of sensor fusion to provide a solution in less than ideal circumstances. We validate our approach on two newly constructed and published data sets that consist of 2347 and 1505 samples distributed over six different types of gestures and events, respectively. From our analysis, we conclude that, when considering a radar sensor, it is optimal to use a three-dimensional convolutional neural network that takes sequential range-Doppler maps as input. This model achieves 12.22% and 2.97% error rates on the gestures and events data sets, respectively. A pretrained residual network is employed to deal with the video camera sensor data and obtains 1.67% and 3.00% error rates on the same data sets. We show that there is a clear benefit in combining both sensors to enable activity recognition in less than ideal circumstances.
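As an illustration of the model family named in the abstract (not the authors' implementation), the following is a minimal NumPy sketch of a single 3-D convolution plus ReLU applied to a stack of sequential range-Doppler maps; all sizes and the random kernel are toy assumptions:

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid 3-D convolution (cross-correlation, as in deep learning) of a
    (time, range, Doppler) volume with a (kt, kr, kd) kernel."""
    kt, kr, kd = kernel.shape
    T, R, D = volume.shape
    out = np.zeros((T - kt + 1, R - kr + 1, D - kd + 1))
    for t in range(out.shape[0]):
        for r in range(out.shape[1]):
            for d in range(out.shape[2]):
                out[t, r, d] = np.sum(volume[t:t+kt, r:r+kr, d:d+kd] * kernel)
    return out

# Toy input: 8 sequential 16x16 range-Doppler maps, one 3x3x3 learnable filter.
rng = np.random.default_rng(0)
maps = rng.standard_normal((8, 16, 16))
kernel = rng.standard_normal((3, 3, 3))
features = np.maximum(conv3d(maps, kernel), 0.0)  # ReLU activation
print(features.shape)  # (6, 14, 14)
```

A real network would stack many such filters with pooling and fully connected layers; the point here is only that the convolution slides jointly over time, range, and Doppler, so motion patterns across consecutive maps are captured.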

    Multi-Person Continuous Tracking and Identification from mm-Wave micro-Doppler Signatures

    In this work, we investigate the use of backscattered mm-wave radio signals for the joint tracking and identification of humans as they move within indoor environments. We build a system that works effectively with multiple persons concurrently sharing and freely moving within the same indoor space. This leads to a complicated setting, which requires one to deal with the randomness and complexity of the resulting (composite) backscattered signal. The proposed system combines several processing steps: first, the signal is filtered to remove artifacts, reflections, and random noise that do not originate from humans. Next, a density-based classification algorithm is executed to separate the Doppler signatures of different users. The final blocks are trajectory tracking and user identification, based on Kalman filters and deep neural networks, respectively. Our results demonstrate that the integration of these processing stages is critical to achieving robustness and accuracy in multi-user settings. Our technique is tested both on a single-target public dataset, for which it outperforms state-of-the-art methods, and on our own measurements, obtained with a 77 GHz radar on multiple subjects simultaneously moving in two different indoor environments. The system works in an online fashion, permitting the continuous identification of multiple subjects with accuracies up to 98%, e.g., with four subjects sharing the same physical space, and with only a small accuracy reduction when tested on unseen data from a challenging real-life scenario that was not part of the model learning phase.
    Comment: 16 pages, 12 figures, 5 tables
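The trajectory-tracking stage mentioned above can be sketched as a constant-velocity Kalman filter over 2-D positions; the state layout, time step, and noise covariances below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

# Constant-velocity Kalman filter for 2-D trajectory tracking.
# State: [x, y, vx, vy]; measurement: [x, y].
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # observation model
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.05 * np.eye(2)                        # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle: state x, covariance P, measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a target moving at 1 m/s along x from noisy position estimates.
rng = np.random.default_rng(1)
x, P = np.zeros(4), np.eye(4)
for k in range(50):
    true_pos = np.array([k * dt * 1.0, 0.0])
    z = true_pos + rng.normal(scale=0.05, size=2)
    x, P = kalman_step(x, P, z)
print(np.round(x[:2], 2))  # estimated (x, y), near the final true position
```

In a multi-person setting, one such filter would run per separated Doppler signature, with data association linking each new detection to the nearest predicted track.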