    Fall Detection Using Channel State Information from WiFi Devices

    Falls among the independently living elderly population are a major public health concern, leading to injuries, loss of confidence to live independently, and even death. Each year, one in three people aged 65 and older falls, and one in five of them suffers fatal or non-fatal injuries. Detecting a fall early and alerting caregivers can therefore save lives and improve quality of life. Existing solutions, e.g. push-buttons, wearables, cameras, radar, and pressure and vibration sensors, have seen limited public adoption, either because the device must be worn at all times or because specialized and expensive infrastructure must be installed. In this thesis, a device-free, low-cost indoor fall detection system using commodity WiFi devices is presented. The system uses physical-layer Channel State Information (CSI) to detect falls. Commercial WiFi hardware is cheap and ubiquitous, and CSI provides a wealth of information that helps maintain good fall detection accuracy even in challenging environments. The goals of the research in this thesis are the design, implementation, and experimental evaluation of a device-free fall detection system using CSI extracted from commercial WiFi devices. To achieve these objectives, the following contributions are made. First, a novel time-domain human presence detection scheme is developed as a precursor to detecting falls. Second, a novel fall detection system is designed and developed. Finally, two main enhancements to the fall detection system are proposed to improve its resilience to changes in the operating environment. Experiments were performed to validate system performance in diverse environments. Through the collection of real-world CSI traces, an analysis of CSI behavior during human motion, the development of a signal-processing tool-set for recognizing falls, and validation through real-world experiments, this work advances the state of the art with a more robust fall detection scheme.
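
    A minimal sketch of the kind of time-domain detector the first contribution describes, assuming CSI amplitude matrices have already been extracted from a CSI-capable WiFi NIC; the window sizes and thresholds are illustrative placeholders, not values from the thesis:

    import numpy as np

    def detect_motion_events(csi_amp, win=100, hop=20, k=3.0):
        """csi_amp: (n_packets, n_subcarriers) CSI amplitudes over time.

        Flags windows whose mean subcarrier variance exceeds k times the
        median window variance -- a crude stand-in for the thesis's richer
        time-domain presence/fall tool-set.
        """
        variances, starts = [], []
        for s in range(0, csi_amp.shape[0] - win, hop):
            w = csi_amp[s:s + win]
            variances.append(np.var(w, axis=0).mean())  # variance per subcarrier, averaged
            starts.append(s)
        variances = np.asarray(variances)
        thresh = k * np.median(variances)
        return [s for s, v in zip(starts, variances) if v > thresh]

    # Usage with synthetic data: a quiet channel with one burst of "motion".
    rng = np.random.default_rng(0)
    csi = rng.normal(1.0, 0.01, size=(2000, 30))
    csi[900:1100] += rng.normal(0.0, 0.3, size=(200, 30))  # simulated disturbance
    print(detect_motion_events(csi))  # windows overlapping packets ~900-1100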

    Extracting Human Context Through Receiver-End Beamforming

    Device-free passive sensing of human targets using wireless signals has attracted much attention in the recent past because of its importance in many applications, including security, heating, ventilation and air conditioning, activity recognition, and elderly care. In this paper, we use receiver-side beamforming to isolate the array response of a human target when the line-of-sight array response is several orders of magnitude stronger than the human response. The solution is implemented in a 5G testbed using a software-defined radio (SDR) platform. Since beamforming with SDRs faces the challenge of training the beamformer for different azimuth angles, we present an algorithm that generates the steering vectors for all azimuth angles from a few training directions, despite imprecise prior information on the training steering vectors. We extract the direction of arrival (DoA) from the array response of the human target and, conducting experiments in a semi-anechoic chamber, detect the DoAs of up to four stationary human targets and track the DoAs of up to two walking persons simultaneously.
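
    For context, a hedged sketch of the classical narrowband model behind receiver-side beamforming: ideal steering vectors for a uniform linear array and a simple beam-scan DoA estimate. The paper's actual contribution, calibrating steering vectors from a few imprecise training directions on an SDR, is not reproduced here:

    import numpy as np

    def steering_vector(theta_deg, n_ant=8, d_over_lambda=0.5):
        """Ideal ULA steering vector for azimuth theta (degrees)."""
        k = np.arange(n_ant)
        phase = 2j * np.pi * d_over_lambda * k * np.sin(np.deg2rad(theta_deg))
        return np.exp(phase)

    def beam_scan_doa(snapshots, angles=np.arange(-90, 91)):
        """snapshots: (n_ant, n_samples) array output; returns the azimuth
        angle whose beamformer captures the most power."""
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
        power = [np.real(steering_vector(a).conj() @ R @ steering_vector(a))
                 for a in angles]
        return angles[int(np.argmax(power))]

    # Usage: one source at +25 degrees plus receiver noise.
    rng = np.random.default_rng(1)
    s = rng.normal(size=1000) * np.exp(2j * np.pi * rng.random(1000))
    x = np.outer(steering_vector(25.0), s)
    x += 0.1 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))
    print(beam_scan_doa(x))  # ~25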

    Beamsteering for Training-free Counting of Multiple Humans Performing Distinct Activities

    Recognition of human context plays an important role in pervasive applications such as intrusion detection, human density estimation for heating, ventilation and air conditioning in smart buildings, and safety guarantees for workers during human-robot interaction. Radio vision can provide these sensing capabilities with low privacy intrusion. A common challenge for current radio sensing solutions, though, is distinguishing simultaneous movement from multiple subjects. We present an approach that exploits antenna installations, for instance those found in upcoming 5G technology, to detect and extract activities from spatially scattered human targets in an ad-hoc manner, in arbitrary environments, and without prior training of the multi-subject detection. We perform receiver-side beamforming and beam-sweeping over different azimuth angles to detect human presence in those regions separately. We characterize the resulting fluctuations in the spatial streams due to human influence in a case study and make the traces publicly available. We demonstrate the potential of this approach through two applications: 1) by feeding the similarities of the resulting spatial streams into a clustering algorithm, we count the humans in a given area without prior training (up to 6 people in a 22.4 m² area, with an accuracy that significantly exceeds the related work); 2) we demonstrate that simultaneously conducted activities and gestures can be extracted from the spatial streams through blind source separation.
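
    A toy illustration of the first application's counting idea, under the assumption that each beam direction yields a magnitude trace: inactive beams are discarded by a variance test and the rest are clustered by correlation distance, each cluster counted as one subject. The similarity measure and clustering choices here are guesses, not the paper's pipeline:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def count_active_regions(streams, activity_thresh=0.05, dist_thresh=0.7):
        """streams: (n_beams, n_samples) magnitude traces, one per azimuth beam."""
        active = [s for s in streams if np.std(s) > activity_thresh]  # drop empty beams
        if len(active) < 2:
            return len(active)
        Z = linkage(np.asarray(active), method="average", metric="correlation")
        labels = fcluster(Z, t=dist_thresh, criterion="distance")
        return len(set(labels))  # one cluster per subject

    # Usage: two beams see the same walker, a third beam is empty.
    rng = np.random.default_rng(2)
    walker = np.cumsum(rng.normal(0, 0.1, 500))
    beams = np.stack([walker + rng.normal(0, 0.01, 500),
                      walker + rng.normal(0, 0.01, 500),
                      rng.normal(0, 0.01, 500)])
    print(count_active_regions(beams))  # 1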

    Capturing Human-Machine Interaction Events from Radio Sensors in Industry 4.0 Environments

    In manufacturing environments, human workers interact with increasingly autonomous machinery. To ensure workspace safety and production efficiency during human-robot cooperation, continuous and accurate tracking and perception of workers' activities is required. The RadioSense project aims to advance the state of the art in sensing and perception for the next-generation manufacturing workspace. In this paper, we describe our ongoing efforts towards multi-subject recognition, with multiple persons conducting several simultaneous activities. Perturbations induced by moving bodies and objects on the electromagnetic wavefield can be processed for environmental perception by leveraging next-generation (5G) New Radio (NR) technologies, including MIMO systems, high-performance edge-cloud computing, and novel or custom-designed deep learning tools.

    Motion pattern recognition in 4D point clouds

    We address an actively discussed problem in signal processing: recognizing patterns in spatial data in motion. In particular, we propose a neural network architecture to recognize motion patterns from 4D point clouds. We demonstrate the feasibility of our approach on point cloud datasets of hand gestures. The architecture, PointGest, feeds directly on unprocessed timelines of point cloud data, without any need for voxelization or projection. The model is resilient to noise in the input point cloud through abstraction to lower-density representations, especially for regions of high density. We evaluate the architecture on a benchmark dataset with ten gestures. PointGest achieves an accuracy of 98.8%, outperforming five state-of-the-art point cloud classification models.
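
    Since PointGest itself is not specified in this abstract, the following is only a minimal, hypothetical example of an order-invariant point-cloud encoder (a shared per-point MLP followed by max pooling, extended over time by per-frame pooling), the family of models this line of work builds on:

    import torch
    import torch.nn as nn

    class TinyPointCloudClassifier(nn.Module):
        def __init__(self, n_classes=10, feat=64):
            super().__init__()
            # Shared MLP applied to every (x, y, z) point independently.
            self.point_mlp = nn.Sequential(
                nn.Linear(3, feat), nn.ReLU(), nn.Linear(feat, feat), nn.ReLU())
            self.head = nn.Linear(feat, n_classes)

        def forward(self, clouds):
            # clouds: (batch, frames, points, 3) -- a 4D point cloud timeline.
            h = self.point_mlp(clouds)   # (B, T, N, feat)
            h = h.max(dim=2).values      # max-pool over points: order-invariant
            h = h.mean(dim=1)            # average over the timeline
            return self.head(h)          # (B, n_classes) gesture logits

    # Usage: two gesture clips, 16 frames of 128 points each.
    model = TinyPointCloudClassifier()
    print(model(torch.randn(2, 16, 128, 3)).shape)  # torch.Size([2, 10])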

    3D Head Motion Detection Using Millimeter-Wave Doppler Radar

    From advanced driver assistance systems to conditional automation systems, monitoring of driver state is vital for predicting the driver's capacity to supervise or maneuver the vehicle during unexpected road events and for facilitating better in-car services. This paper presents a technique that exploits millimeter-wave Doppler radar for 3D head tracking. Identifying bistatic and monostatic antenna geometries for detecting rotational versus translational movements, the authors propose the biscattering angle for computing a distinctive feature set that isolates dynamic movements via class memberships. Through data reduction and joint time-frequency analysis, movement boundaries are marked to create a simplified, uncorrelated, and highly separable feature set. The authors report a movement-prediction accuracy of 92%. This non-invasive and simplified head tracking has the potential to enhance driver state monitoring in autonomous vehicles and to aid intelligent car assistants in guaranteeing seamless and safe journeys.
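
    A hedged sketch of the joint time-frequency step: a short-time Fourier transform of a synthetic complex radar baseband signal yields a micro-Doppler spectrogram, from which a crude movement-boundary marker can be derived. The biscattering-angle feature is geometry-specific and not modeled here:

    import numpy as np
    from scipy.signal import stft

    fs = 1000.0                            # assumed pulse repetition rate, Hz
    t = np.arange(0, 2.0, 1 / fs)
    # Synthetic return: a slow head rotation as a sinusoidal Doppler shift.
    doppler_hz = 40 * np.sin(2 * np.pi * 0.5 * t)
    sig = np.exp(2j * np.pi * np.cumsum(doppler_hz) / fs)

    f, frames, Z = stft(sig, fs=fs, nperseg=128, return_onesided=False)
    spec = np.fft.fftshift(np.abs(Z), axes=0)      # micro-Doppler spectrogram
    freqs = np.fft.fftshift(f)
    # Crude boundary marker: frames where the spectral centroid jumps.
    centroid = (freqs[:, None] * spec).sum(axis=0) / spec.sum(axis=0)
    boundaries = frames[1:][np.abs(np.diff(centroid)) > 5.0]
    print(boundaries[:5])                  # candidate movement boundaries (s)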

    Tesla-Rapture

    We present Tesla-Rapture, a gesture recognition system for sparse point clouds generated by mmWave radars. State-of-the-art gesture recognition models are either too resource-consuming or not sufficiently accurate for integration into real-life scenarios on wearable or constrained equipment such as IoT devices (e.g. Raspberry Pi), XR hardware (e.g. HoloLens), or smartphones. To tackle this issue, we have developed Tesla, a Message Passing Neural Network (MPNN) graph convolution approach for mmWave radar point clouds. The model outperforms the state of the art on three datasets in terms of accuracy while reducing the computational complexity and, hence, the execution time. In particular, the approach is able to predict a gesture almost 8 times faster than the most accurate competitor. Our performance evaluation in different scenarios (environments, angles, distances) shows that Tesla generalizes well and improves accuracy by up to 20% in challenging scenarios, such as a through-wall setting and sensing at extreme angles. Utilizing Tesla, we developed Tesla-Rapture, a real-time implementation using a mmWave radar on a Raspberry Pi 4, and evaluated its accuracy and time complexity. We also publish the source code, the trained models, and the implementation of the model for embedded devices.
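
    A hypothetical, minimal message-passing step over a kNN graph of radar points, illustrating the MPNN graph-convolution idea Tesla is built on; layer sizes, aggregation, and graph construction are assumptions (the authors publish the real model's source code):

    import numpy as np

    def knn_edges(points, k=4):
        """points: (N, 3) radar detections; returns (N, k) neighbor indices."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                # exclude self-loops
        return np.argsort(d, axis=1)[:, :k]

    def message_pass(feats, edges, W_msg, W_self):
        """One step: every point aggregates the mean message of its neighbors."""
        msgs = feats[edges].mean(axis=1)           # (N, F) mean neighbor feature
        return np.maximum(feats @ W_self + msgs @ W_msg, 0.0)  # linear + ReLU

    # Usage on a toy sparse cloud (e.g. one mmWave frame of 32 points).
    rng = np.random.default_rng(3)
    pts = rng.normal(size=(32, 3))
    W_msg, W_self = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
    out = message_pass(pts, knn_edges(pts), W_msg, W_self)
    print(out.shape)  # (32, 3)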