
    GaitFi: Robust Device-Free Human Identification via WiFi and Vision Multimodal Learning


    SenseFi: A library and benchmark on deep-learning-empowered WiFi human sensing

    In recent years, WiFi sensing has been rapidly developed for privacy-preserving, ubiquitous human-sensing applications, enabled by signal processing and deep-learning methods. However, a comprehensive public benchmark for deep learning in WiFi sensing, similar to that available for visual recognition, does not yet exist. In this article, we review recent progress in topics ranging from WiFi hardware platforms to sensing algorithms and propose a new library with a comprehensive benchmark, SenseFi. On this basis, we evaluate various deep-learning models in terms of distinct sensing tasks, WiFi platforms, recognition accuracy, model size, computational complexity, and feature transferability. Extensive experiments are performed whose results provide valuable insights into model design, learning strategy, and training techniques for real-world applications. In summary, SenseFi is a comprehensive benchmark with an open-source library for deep learning in WiFi sensing research that offers researchers a convenient tool to validate learning-based WiFi-sensing methods on multiple datasets and platforms. (Nanyang Technological University, published version. This research is supported by the NTU Presidential Postdoctoral Fellowship, "Adaptive Multi-modal Learning for Robust Sensing and Recognition in Smart Cities" project fund (020977-00001), at the Nanyang Technological University, Singapore.)

    A Wi-Fi Signal-Based Human Activity Recognition Using High-Dimensional Factor Models

    Passive sensing techniques based on Wi-Fi signals have emerged as a promising technology in advanced wireless communication systems due to their widespread applicability and cost-effectiveness. However, the proliferation of low-cost Internet of Things (IoT) devices has led to dense network deployments, resulting in increased levels of noise and interference in Wi-Fi environments. This, in turn, leads to noisy and redundant Channel State Information (CSI) data. As a consequence, the accuracy of human activity recognition based on Wi-Fi signals is compromised. To address this issue, we propose a novel CSI data signal extraction method. We established a human activity recognition system based on Intel 5300 network interface cards (NICs) and collected a dataset containing six categories of human activities. Using our approach, signals extracted from the CSI data serve as inputs to machine learning (ML) classification algorithms to evaluate classification performance. In comparison to ML methods based on Principal Component Analysis (PCA), our proposed High-Dimensional Factor Model (HDFM) method improves recognition accuracy by 6.8%.
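    The abstract above compares its factor-model extraction against a PCA baseline. As a minimal, hedged sketch (not the paper's implementation), PCA-based CSI denoising typically projects the time-by-subcarrier CSI matrix onto its top principal components and discards the remainder as noise; all data here is synthetic and illustrative only:

    ```python
    import numpy as np

    def pca_denoise(csi, k=3):
        """Reconstruct a CSI matrix (time x subcarriers) from its top-k
        principal components, treating the discarded components as noise."""
        mean = csi.mean(axis=0, keepdims=True)
        centered = csi - mean
        # SVD gives the principal directions without forming a covariance matrix
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        approx = (u[:, :k] * s[:k]) @ vt[:k]
        return approx + mean

    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 200)
    # Hypothetical low-rank "activity" signal across 30 subcarriers, plus noise
    signal = np.outer(np.sin(t), rng.normal(size=30))
    noisy = signal + 0.3 * rng.normal(size=signal.shape)
    denoised = pca_denoise(noisy, k=1)
    err_noisy = np.linalg.norm(noisy - signal)
    err_denoised = np.linalg.norm(denoised - signal)
    print(err_denoised < err_noisy)  # the rank-1 reconstruction is closer to the clean signal
    ```

    The HDFM approach replaces this fixed low-rank projection with a factor model better suited to high-dimensional, noisy CSI, which is where the reported 6.8% gain comes from.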

    MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing

    4D human perception plays an essential role in a myriad of applications, such as home automation and metaverse avatar simulation. However, existing solutions, which mainly rely on cameras and wearable devices, are either privacy intrusive or inconvenient to use. To address these issues, wireless sensing has emerged as a promising alternative, leveraging LiDAR, mmWave radar, and WiFi signals for device-free human sensing. In this paper, we propose MM-Fi, the first multi-modal non-intrusive 4D human dataset with 27 daily or rehabilitation action categories, to bridge the gap between wireless sensing and high-level human perception tasks. MM-Fi consists of over 320k synchronized frames of five modalities from 40 human subjects. Various annotations are provided to support potential sensing tasks, e.g., human pose estimation and action recognition. Extensive experiments have been conducted to compare the sensing capacity of each or several modalities in terms of multiple tasks. We envision that MM-Fi can contribute to wireless sensing research with respect to action recognition, human pose estimation, multi-modal learning, cross-modal supervision, and interdisciplinary healthcare research. (Comment: The paper has been accepted by NeurIPS 2023 Datasets and Benchmarks Track. Project page: https://ntu-aiot-lab.github.io/mm-f)

    A Fast Deep Learning Technique for Wi-Fi-Based Human Activity Recognition

    Despite recent advances, fast and reliable Human Activity Recognition in confined spaces is still an open problem related to many real-world applications, especially in health and biomedical monitoring. With the ubiquitous presence of Wi-Fi networks, the activity recognition and classification problems can be solved by leveraging characteristics of the Channel State Information of the 802.11 standard. Given the well-documented advantages of Deep Learning algorithms in solving complex pattern recognition problems, many solutions in the Human Activity Recognition domain take advantage of those models. To improve the speed and precision of activity classification of time-series data stemming from Channel State Information, we propose herein a fast deep neural model encompassing concepts not only from state-of-the-art recurrent neural networks, but also using convolutional operators with added randomization. Experiments on real data collected in an experimental environment show promising results.
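    The "convolutional operators with added randomization" mentioned above follow the general idea (as in ROCKET-style methods) of convolving a time series with fixed random kernels and pooling the responses, which yields discriminative features with no kernel training and hence very fast classification. A hedged sketch of that idea, with all parameter values chosen for illustration only:

    ```python
    import numpy as np

    def random_conv_features(series, n_kernels=50, kernel_len=9, seed=0):
        """Convolve a 1-D series with fixed random kernels and pool each
        response into two summary features (no kernel training needed)."""
        rng = np.random.default_rng(seed)
        feats = []
        for _ in range(n_kernels):
            kernel = rng.normal(size=kernel_len)
            resp = np.convolve(series, kernel, mode="valid")
            # max value and proportion of positive responses per kernel
            feats.extend([resp.max(), (resp > 0).mean()])
        return np.array(feats)

    # Stand-in for a single CSI subcarrier amplitude trace
    series = np.sin(np.linspace(0, 6 * np.pi, 256))
    features = random_conv_features(series)
    print(features.shape)  # (100,): 2 pooled features per kernel
    ```

    The resulting fixed-length feature vector can then be fed to any lightweight classifier, which is what makes randomized-kernel approaches fast at training time.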

    Attention-Enhanced Deep Learning for Device-Free Through-the-Wall Presence Detection Using Indoor WiFi System

    Accurate detection of human presence in indoor environments is important for various applications, such as energy management and security. In this paper, we propose a novel system for human presence detection using the channel state information (CSI) of WiFi signals. Our system, named attention-enhanced deep learning for presence detection (ALPD), employs an attention mechanism to automatically select informative subcarriers from the CSI data and a bidirectional long short-term memory (LSTM) network to capture temporal dependencies in CSI. Additionally, we utilize a static feature to improve the accuracy of human presence detection in static states. We evaluate the proposed ALPD system by deploying a pair of WiFi access points (APs) for collecting a CSI dataset, which is further compared with several benchmarks. The results demonstrate that our ALPD system outperforms the benchmarks in terms of accuracy, especially in the presence of interference. Moreover, bidirectional transmission data is beneficial to training, improving stability and accuracy, as well as reducing the costs of data collection for training. Overall, our proposed ALPD system shows promising results for human presence detection using WiFi CSI signals.
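    The subcarrier-attention step described above can be pictured as learning one score per subcarrier, softmax-normalizing the scores into weights, and pooling the CSI matrix across subcarriers before the recurrent stage. A minimal numpy sketch of that weighting step only (the BiLSTM is omitted; the score vector here is a random stand-in for trained attention parameters):

    ```python
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attend_subcarriers(csi, scores):
        """Weight each subcarrier by its attention weight and pool,
        producing one weighted trace over time."""
        weights = softmax(scores)      # one weight per subcarrier, sums to 1
        return csi @ weights, weights  # (time,) weighted trace

    rng = np.random.default_rng(1)
    csi = rng.normal(size=(100, 30))   # hypothetical time x subcarrier CSI
    scores = rng.normal(size=30)       # stand-in for learned scores
    trace, weights = attend_subcarriers(csi, scores)
    print(trace.shape, round(weights.sum(), 6))  # (100,) 1.0
    ```

    In a trained system the scores are produced by a small network and updated by backpropagation, so informative subcarriers end up with larger weights.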

    Enhancing CSI-Based Human Activity Recognition by Edge Detection Techniques

    Human Activity Recognition (HAR) has been a popular area of research in the Internet of Things (IoT) and Human–Computer Interaction (HCI) over the past decade. The objective of this field is to detect human activities through numeric or visual representations, and its applications include smart homes and buildings, action prediction, crowd counting, patient rehabilitation, and elderly monitoring. Traditionally, HAR has been performed through vision-based, sensor-based, or radar-based approaches. However, vision-based and sensor-based methods can be intrusive and raise privacy concerns, while radar-based methods require special hardware, making them more expensive. WiFi-based HAR is a cost-effective alternative, where WiFi access points serve as transmitters and users’ smartphones serve as receivers. HAR in this setting is mainly performed using two wireless-channel metrics: Received Signal Strength Indicator (RSSI) and Channel State Information (CSI). CSI provides more stable and comprehensive information about the channel than RSSI. In this research, we used a convolutional neural network (CNN) as a classifier and applied edge-detection techniques as a preprocessing phase to improve the quality of activity detection. We used CSI data converted into RGB images and tested our methodology on three available CSI datasets. The results showed that the proposed method achieved better accuracy and faster training times than with the simple RGB-represented data. To justify the effectiveness of our approach, we repeated the experiment by applying raw CSI data to long short-term memory (LSTM) and bidirectional LSTM classifiers.
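    The edge-detection preprocessing described above can be illustrated with a standard Sobel operator applied to a CSI matrix treated as an image: gradients along time and subcarrier axes are combined into an edge-magnitude map that emphasizes activity-induced transitions. A hedged, numpy-only sketch on synthetic data (the paper's actual pipeline and datasets are not reproduced here):

    ```python
    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    SOBEL_Y = SOBEL_X.T

    def conv2d_valid(img, kernel):
        """Minimal 2-D sliding-window filter with no padding."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
        return out

    def sobel_edges(img):
        """Edge-magnitude map from horizontal and vertical Sobel gradients."""
        gx = conv2d_valid(img, SOBEL_X)
        gy = conv2d_valid(img, SOBEL_Y)
        return np.hypot(gx, gy)

    # Hypothetical CSI "image": amplitude over time x subcarrier, scaled to 0..255
    rng = np.random.default_rng(2)
    csi_img = rng.uniform(0, 255, size=(64, 30))
    edges = sobel_edges(csi_img)
    print(edges.shape)  # (62, 28)
    ```

    The edge map (or each RGB channel filtered this way) then replaces the raw image as CNN input, which is the preprocessing gain the abstract reports.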