110 research outputs found

    Fall Detection Using Channel State Information from WiFi Devices

    Get PDF
    Falls among the independently living elderly population are a major public health concern, leading to injuries, loss of confidence to live independently, and even death. Each year, one in three people aged 65 and older falls, and one in five of them suffers fatal or non-fatal injuries. Therefore, detecting a fall early and alerting caregivers can potentially save lives and improve quality of life. Existing solutions, e.g. push-button devices, wearables, cameras, radar, and pressure and vibration sensors, have seen limited public adoption, either because the device must be worn at all times or because specialized and expensive infrastructure must be installed. In this thesis, a device-free, low-cost indoor fall detection system using commodity WiFi devices is presented. The system uses physical layer Channel State Information (CSI) to detect falls. Commercial WiFi hardware is cheap and ubiquitous, and CSI provides a wealth of information that helps maintain good fall detection accuracy even in challenging environments. The goals of the research in this thesis are the design, implementation and experimental evaluation of a device-free fall detection system using CSI extracted from commercial WiFi devices. To achieve these objectives, the following contributions are made herein. A novel time-domain human presence detection scheme is developed as a precursor to detecting falls. As the next contribution, a novel fall detection system is designed and developed. Finally, two main enhancements to the fall detection system are proposed to improve its resilience to changes in the operating environment. Experiments were performed to validate system performance in diverse environments. It can be argued that the collection of real-world CSI traces, the study of CSI behavior during human motion, the development of a signal processing tool-set to facilitate the recognition of falls, and the validation of the system through real-world experiments significantly advance the state of the art by providing a more robust fall detection scheme.
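
    The presence-detection stage described above can be illustrated with a minimal sketch, assuming a simple sliding-window variance test over the CSI amplitude of one subcarrier; the window length and threshold below are illustrative assumptions, not the parameters used in the thesis.

    import numpy as np

    def presence_detected(csi_amplitude, window=256, threshold=0.5):
        """csi_amplitude: 1-D array of CSI amplitudes for one subcarrier over time."""
        x = np.asarray(csi_amplitude, dtype=float)
        # Normalize so the threshold is scale-independent across environments.
        x = (x - x.mean()) / (x.std() + 1e-9)
        n_windows = len(x) - window + 1
        if n_windows <= 0:
            return False
        # Human motion (including a fall) inflates short-term amplitude variance.
        variances = np.array([x[i:i + window].var() for i in range(n_windows)])
        return bool((variances > threshold).any())

    In practice such a test would only gate the downstream fall classifier, which in the thesis relies on a richer signal processing tool-set.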

    SiMWiSense: Simultaneous Multi-Subject Activity Classification Through Wi-Fi Signals

    Full text link
    Recent advances in Wi-Fi sensing have ushered in a plethora of pervasive applications in home surveillance, remote healthcare, road safety, and home entertainment, among others. Most existing works are limited to the activity classification of a single human subject at a given time. Conversely, a more realistic scenario is to achieve simultaneous, multi-subject activity classification. The first key challenge in that context is that the number of classes grows exponentially with the number of subjects and activities. Moreover, it is known that Wi-Fi sensing systems struggle to adapt to new environments and subjects. To address both issues, we propose SiMWiSense, the first framework for simultaneous multi-subject activity classification based on Wi-Fi that generalizes to multiple environments and subjects. We address the scalability issue by using the Channel State Information (CSI) computed from the device positioned closest to the subject. We experimentally prove this intuition by confirming that the best accuracy is achieved when the CSI computed by the transceiver positioned closest to the subject is used for classification. To address the generalization issue, we develop a brand-new few-shot learning algorithm named Feature Reusable Embedding Learning (FREL). Through an extensive data collection campaign in 3 different environments with 3 subjects performing 20 different activities simultaneously, we demonstrate that SiMWiSense achieves a classification accuracy of up to 97%, while FREL improves the accuracy by 85% in comparison to a traditional Convolutional Neural Network (CNN) and by up to 20% when compared to the state-of-the-art few-shot embedding learning (FSEL), using only 15 seconds of additional data for each class. For reproducibility purposes, we share our 1TB dataset and code repository.
    Comment: This work has been accepted for publication in IEEE WoWMoM 202
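
    FREL itself is not detailed in the abstract, so the following is only a generic few-shot embedding sketch in the same spirit: a frozen, pretrained encoder (assumed to map CSI windows to feature vectors) builds per-class prototypes from a few seconds of labeled data in the new environment, and queries are classified by nearest prototype. The encoder callable and all names are hypothetical.

    import numpy as np

    def build_prototypes(encoder, support_x, support_y):
        """support_x: (N, ...) CSI windows; support_y: (N,) integer class labels."""
        support_y = np.asarray(support_y)
        embeddings = encoder(support_x)  # assumed to return (N, D) feature vectors
        return {c: embeddings[support_y == c].mean(axis=0) for c in np.unique(support_y)}

    def classify(encoder, prototypes, query_x):
        emb = encoder(query_x)           # (M, D)
        classes = sorted(prototypes)
        # Euclidean distance from each query embedding to each class prototype.
        dists = np.stack([np.linalg.norm(emb - prototypes[c], axis=1) for c in classes], axis=1)
        return np.array(classes)[dists.argmin(axis=1)]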

    The passive operating mode of the linear optical gesture sensor

    Full text link
    The study evaluates the influence of natural light conditions on the effectiveness of the linear optical gesture sensor, which operates in the presence of ambient light only (passive mode). The orientation of the device with respect to the light source was varied in order to verify the sensitivity of the sensor. A criterion for differentiating between two states, "possible gesture" and "no gesture", was proposed. Additionally, different light conditions and candidate features relevant to the decision of switching between the passive and active modes of the device were investigated. The criterion was evaluated based on a specificity and sensitivity analysis of the binary ambient light condition classifier. The resulting classifier predicts ambient light conditions with an accuracy of 85.15%. Once the light conditions are known, the hand pose can be detected. The hand pose classifier trained on data obtained in the passive mode under favorable light conditions achieved an accuracy of 98.76%. It was also shown that the passive operating mode of the linear gesture sensor reduces the total energy consumption by 93.34%, resulting in a current draw of 0.132 mA. It was concluded that the linear optical sensor could be used efficiently in various lighting conditions.
    Comment: 10 pages, 14 figures
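
    A minimal sketch of how such a binary ambient-light decision could be scored, assuming a simple mean-intensity threshold as the classifier; the threshold and feature choice are assumptions for illustration, not the paper's actual criterion.

    import numpy as np

    def favorable_light(frame_intensities, threshold=120.0):
        """frame_intensities: per-pixel readings from the linear sensor for one frame."""
        return float(np.mean(frame_intensities)) > threshold

    def sensitivity_specificity(y_true, y_pred):
        """Score the binary classifier: sensitivity = true-positive rate, specificity = true-negative rate."""
        y_true = np.asarray(y_true, dtype=bool)
        y_pred = np.asarray(y_pred, dtype=bool)
        tp = np.sum(y_true & y_pred)
        tn = np.sum(~y_true & ~y_pred)
        fp = np.sum(~y_true & y_pred)
        fn = np.sum(y_true & ~y_pred)
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        return sensitivity, specificity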

    MUSE-Fi: Contactless MUti-person SEnsing Exploiting Near-field Wi-Fi Channel Variation

    Full text link
    Having been studied for more than a decade, Wi-Fi human sensing still faces a major challenge in the presence of multiple persons, simply because the limited bandwidth of Wi-Fi fails to provide sufficient range resolution to physically separate multiple subjects. Existing solutions mostly avoid this challenge by switching to radars with GHz bandwidth, at the cost of cumbersome deployments. Therefore, whether Wi-Fi human sensing can handle multiple subjects remains an open question. This paper presents MUSE-Fi, the first Wi-Fi multi-person sensing system with physical separability. The principle behind MUSE-Fi is that, given a Wi-Fi device (e.g., a smartphone) very close to a subject, the near-field channel variation caused by that subject significantly overwhelms variations caused by other, more distant subjects. Consequently, focusing on the channel state information (CSI) carried by the traffic in and out of this device naturally allows for physically separating multiple subjects. Based on this principle, we propose three sensing strategies for MUSE-Fi: i) uplink CSI, ii) downlink CSI, and iii) downlink beamforming feedback, where we specifically tackle signal recovery from sparse (per-user) traffic under realistic multi-user communication scenarios. Our extensive evaluations clearly demonstrate that MUSE-Fi is able to successfully handle multi-person sensing with respect to three typical applications: respiration monitoring, gesture detection, and activity recognition.
    Comment: 15 pages. Accepted by ACM MobiCom 202
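
    The "signal recovery from sparse (per-user) traffic" step lends itself to a small illustration. The sketch below simply resamples irregularly timed CSI amplitudes onto a uniform grid with linear interpolation; this stands in for MUSE-Fi's actual recovery method, and the target sampling rate is an assumed value.

    import numpy as np

    def resample_csi(timestamps, csi_amplitudes, target_rate_hz=50.0):
        """timestamps: irregular sample times in seconds; csi_amplitudes: one value per sample."""
        t = np.asarray(timestamps, dtype=float)
        v = np.asarray(csi_amplitudes, dtype=float)
        uniform_t = np.arange(t[0], t[-1], 1.0 / target_rate_hz)
        # Linear interpolation onto a uniform timeline; a real system would also
        # need to handle long traffic gaps and per-subcarrier phase sanitization.
        return uniform_t, np.interp(uniform_t, t, v)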

    A CSI-Based Human Activity Recognition Using Deep Learning

    Get PDF
    The Internet of Things (IoT) has become quite popular due to advancements in information and communication technologies and has revolutionized the research area of Human Activity Recognition (HAR). For the HAR task, vision-based and sensor-based methods can provide richer data, but at the cost of user inconvenience and social constraints such as privacy issues. Due to the ubiquity of WiFi devices, the use of WiFi for intelligent daily activity monitoring of elderly persons has gained popularity in modern healthcare applications. Channel State Information (CSI), one of the characteristics of WiFi signals, can be utilized to recognize different human activities. We employed a Raspberry Pi 4 to collect CSI data for seven different daily human activities, converted the CSI data to images, and then used these images as inputs to a 2D Convolutional Neural Network (CNN) classifier. Our experiments show that the proposed CSI-based HAR outperforms competing methods, including a 1D-CNN, Long Short-Term Memory (LSTM), and Bi-directional LSTM, and achieves an accuracy of around 95% across the seven activities.
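
    The described pipeline (CSI windows rendered as images and fed to a 2D CNN) can be sketched as follows; the window size, subcarrier count, layer sizes and the 7-class output are illustrative assumptions rather than the paper's exact configuration.

    import numpy as np
    import tensorflow as tf

    def csi_window_to_image(csi_window):
        """csi_window: (time_steps, subcarriers) CSI amplitudes -> (H, W, 1) image scaled to [0, 1]."""
        x = np.abs(np.asarray(csi_window, dtype=float))
        x = (x - x.min()) / (x.max() - x.min() + 1e-9)
        return x[..., np.newaxis]

    def build_cnn(input_shape=(128, 52, 1), num_classes=7):
        # Small 2D CNN classifier over image-like CSI windows.
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])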