7,228 research outputs found
Higher order feature extraction and selection for robust human gesture recognition using CSI of COTS Wi-Fi devices
Device-free human gesture recognition (HGR) using commercial off-the-shelf (COTS) Wi-Fi
devices has gained attention with recent advances in wireless technology. HGR recognizes the human
activity performed by capturing the reflections of Wi-Fi signals from moving humans and storing
them as raw channel state information (CSI) traces. Existing work on HGR applies noise reduction
and transformation to pre-process the raw CSI traces. However, these methods fail to capture
the non-Gaussian information in the raw CSI data because they deal with linear signal
representations alone. The proposed higher order statistics-based recognition (HOS-Re) model extracts
higher order statistical (HOS) features from raw CSI traces and selects a robust feature subset for the
recognition task. HOS-Re addresses the limitations of existing methods by extracting third-order
cumulant features that maximize the recognition accuracy. Subsequently, feature selection methods
derived from information theory construct a robust and highly informative feature subset, fed as
input to the multilevel support vector machine (SVM) classifier in order to measure the performance.
The proposed methodology is validated using a public database SignFi, consisting of 276 gestures
with 8280 gesture instances, out of which 5520 are from the laboratory and 2760 from the home
environment, using a 10 × 5 cross-validation. HOS-Re achieved an average recognition accuracy of
97.84%, 98.26% and 96.34% for the lab, home and lab + home environments, respectively. The average
recognition accuracy for 150 sign gestures with 7500 instances, collected from five different users, was
96.23% in the laboratory environment.
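The third-order cumulant features that HOS-Re builds on can be illustrated with a short sketch. This is not the authors' implementation; the lag grid, `max_lag`, and the toy signals are illustrative assumptions:

```python
import numpy as np

def third_order_cumulant(x, tau1, tau2):
    """Estimate the third-order cumulant c3(tau1, tau2) of a 1-D signal.

    For a zero-mean signal x, c3(t1, t2) = E[x(n) * x(n+t1) * x(n+t2)].
    Gaussian noise has asymptotically zero third-order cumulants, which is
    why HOS features expose non-Gaussian structure that second-order
    (linear) statistics miss.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                 # cumulants are defined for zero-mean data
    m = len(x) - max(tau1, tau2)     # valid overlap length for the lagged products
    return float(np.mean(x[:m] * x[tau1:tau1 + m] * x[tau2:tau2 + m]))

def cumulant_feature_vector(x, max_lag=3):
    """Stack c3 estimates over a small grid of lags into one feature vector."""
    return np.array([third_order_cumulant(x, t1, t2)
                     for t1 in range(max_lag + 1)
                     for t2 in range(t1, max_lag + 1)])

# Toy illustration: a skewed (non-Gaussian) trace vs. a Gaussian one.
rng = np.random.default_rng(0)
f_gauss = cumulant_feature_vector(rng.normal(size=4096))
f_skew = cumulant_feature_vector(rng.exponential(size=4096))
```

For the Gaussian trace the feature vector stays near zero, while the skewed trace yields clearly nonzero cumulants; in a pipeline like HOS-Re such vectors would then pass through feature selection before the SVM.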
GUARDIANS final report
Emergencies in industrial warehouses are a major concern for fire fighters. The large dimensions, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The Guardians robot swarm is designed to assist fire fighters in searching a
large warehouse. In this report we discuss the technology developed for a swarm of robots searching and assisting fire fighters. We explain the swarming algorithms which provide the functionality by which the robots react to and follow humans while no communication is required. Next we
discuss the wireless communication system, which is a so-called mobile ad-hoc network. The communication network also provides one of the means to locate the robots and humans. Thus the robot swarm is able to locate itself and provide guidance information to the humans. Together with
the fire fighters we explored how the robot swarm should feed information back to the human fire fighter. We have designed and experimented with interfaces for presenting swarm-based information to human beings.
Contactless WiFi Sensing and Monitoring for Future Healthcare:Emerging Trends, Challenges and Opportunities
WiFi sensing has recently received significant interest from academics, industry, healthcare professionals and other caregivers (including family members) as a potential mechanism to monitor our aging population at a distance, without deploying devices on users' bodies. In particular, these methods have gained significant interest for efficiently detecting critical events such as falls, sleep disturbances, wandering behavior, respiratory disorders, and abnormal cardiac activity experienced by vulnerable people. The interest in such WiFi-based sensing systems stems from their practical deployability in indoor settings and compliance from monitored persons, unlike other sensing modalities such as wearable, camera-based, and acoustic solutions. This paper reviews state-of-the-art research on collecting and analysing channel state information, extracted using ubiquitous WiFi signals, describing a range of healthcare applications and identifying a series of open research challenges, untapped areas, and related trends. This work aims to provide an overarching view of the technology and discusses its use-cases from a perspective that considers hardware, advanced signal processing, and data acquisition.
Radar and RGB-depth sensors for fall detection: a review
This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems to reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of them falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users' acceptance and compliance, compared with other sensor technologies, such as video-cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.
Sign language gesture recognition with bispectrum features using SVM
Wi-Fi-based sensing systems capture the signal reflections caused by human gestures as Channel State Information (CSI) values at the subcarrier level, enabling accurate prediction of fine-grained gestures. The proposed work explores the Higher Order Statistical (HOS) method by deriving bispectrum features (BF) from the raw signal and adopting the Conditional Infomax Feature Extraction (CIFE) technique from information theory to form a subset of the most informative features. A Support Vector Machine (SVM) classifier is adopted in the present work to classify the gestures and to measure the prediction accuracy. The present work is validated on a secondary dataset, SignFi, with data collected from two different environments and varying numbers of users and sign gestures. SVM reports an overall accuracy of 83.8%, 94.1%, 74.9% and 75.6% in the different environments/scenarios.
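A minimal sketch of a direct (FFT-based) bispectrum estimate, the kind of higher-order feature the abstract describes. The segment length, frequency bins, and the phase-coupled test signal below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def bispectrum(x, nfft=64):
    """Direct (FFT-based) bispectrum estimate via segment averaging.

    B(f1, f2) = E[X(f1) * X(f2) * conj(X(f1 + f2))]. For Gaussian signals
    the bispectrum is asymptotically zero, so its peaks reveal non-Gaussian,
    quadratically phase-coupled structure that power spectra cannot see.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    nseg = len(x) // nfft
    # Index table for the wrapped sum frequency f1 + f2.
    f1f2 = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
    B = np.zeros((nfft, nfft), dtype=complex)
    for k in range(nseg):
        X = np.fft.fft(x[k * nfft:(k + 1) * nfft])
        B += np.outer(X, X) * np.conj(X[f1f2])
    return B / nseg

# Toy trace: tones at bins 8 and 12 with random phases per segment, plus a
# quadratically phase-coupled tone at bin 8 + 12 = 20.
rng = np.random.default_rng(1)
nfft, nseg = 64, 100
n = np.arange(nfft)
segments = []
for _ in range(nseg):
    p1, p2 = rng.uniform(0.0, 2.0 * np.pi, size=2)
    segments.append(np.cos(2 * np.pi * 8 * n / nfft + p1)
                    + np.cos(2 * np.pi * 12 * n / nfft + p2)
                    + np.cos(2 * np.pi * 20 * n / nfft + p1 + p2))
B = bispectrum(np.concatenate(segments), nfft=nfft)
```

The phase coupling survives the averaging and produces a strong bispectral peak at (f1, f2) = (8, 12), while neighbouring uncoupled bins average to roughly zero; flattening selected bispectral magnitudes into a feature vector is one plausible way such features could feed a classifier.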
- …