    Mini-Batch Alignment: A Deep-Learning Model for Domain Factor-Independent Feature Extraction for Wi-Fi–CSI Data

    Unobtrusive sensing (device-free sensing) aims to embed sensing into our daily lives by re-purposing communication technologies already present in our environments. Wireless Fidelity (Wi-Fi) sensing, using Channel State Information (CSI) measurement data, is a natural fit for this purpose since Wi-Fi networks are already omnipresent. A major challenge, however, is that CSI data are sensitive to ‘domain factors’ such as the position and orientation of the subject performing an activity or gesture. Because of these factors, CSI signal disturbances vary, causing domain shifts. Such shifts hurt inference generalization, i.e., the model does not always perform well on data unseen during testing. We present a domain factor-independent feature-extraction pipeline called ‘mini-batch alignment’. Mini-batch alignment steers a feature-extraction model’s training process such that it cannot separate the intermediate feature-probability density functions of previously seen input data batches from those of the current input data batch. By means of this steering technique, we hypothesize that mini-batch alignment (i) removes the need to provide a domain label, (ii) reduces the likelihood of pipeline re-building and re-training when latent domain factors are encountered, and (iii) avoids extra model storage and training time. We test this hypothesis via an extensive set of performance-evaluation experiments. The experiments involve both one- and two-domain-factor leave-out cross-validation, two open-source gesture-recognition datasets (SignFi and Widar3), two pre-processed input types (Doppler Frequency Spectrum (DFS) and Gramian Angular Difference Field (GADF)), and several existing domain-shift mitigation techniques. We show that mini-batch alignment performs on a par with other domain-shift mitigation techniques in both position and orientation one-domain leave-out cross-validation on the Widar3 dataset with DFS as input type. With a memory-complexity-reduced version of the GADF as input type, mini-batch alignment shows hints of recovering performance relative to a standard baseline model, to the extent that no additional performance is lost due to weight steering in either the one-domain-factor leave-out or the two-orientation-domain-factor leave-out cross-validation scenario. However, this is not enough evidence that the mini-batch alignment hypothesis is valid. We identified several pitfalls that led to the hypothesis being invalidated: (i) a lack of good-quality benchmark datasets, (ii) invalid probability-distribution assumptions, and (iii) non-linear distribution-scaling issues.
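
    The core idea above, steering the extractor so features of the current mini-batch cannot be told apart from features of previously seen mini-batches, can be illustrated with a small training loop. The sketch below is not the authors' implementation; it assumes PyTorch and uses an illustrative moment-matching penalty (mean and variance gap) between the current batch's features and the detached features of the previous batch. Names such as FeatureExtractor, moment_alignment, and lambda_align are made up for this sketch.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Toy feature extractor standing in for the CSI feature-extraction model."""
    def __init__(self, in_dim: int, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def moment_alignment(curr: torch.Tensor, prev: torch.Tensor) -> torch.Tensor:
    """Penalize gaps in per-feature mean and variance between two feature batches."""
    mean_gap = (curr.mean(dim=0) - prev.mean(dim=0)).pow(2).sum()
    var_gap = (curr.var(dim=0) - prev.var(dim=0)).pow(2).sum()
    return mean_gap + var_gap

extractor = FeatureExtractor(in_dim=90, feat_dim=64)   # 90 CSI subcarriers: illustrative
classifier = nn.Linear(64, 6)                          # 6 gesture classes: illustrative
optimizer = torch.optim.Adam(
    list(extractor.parameters()) + list(classifier.parameters()), lr=1e-3
)
criterion = nn.CrossEntropyLoss()
lambda_align = 0.1        # weight of the alignment penalty: assumed, not from the paper
prev_features = None

for step in range(3):     # toy loop with random stand-in data
    x = torch.randn(32, 90)
    y = torch.randint(0, 6, (32,))
    feats = extractor(x)
    loss = criterion(classifier(feats), y)
    if prev_features is not None:
        # Steer the extractor so current-batch feature statistics match the
        # statistics remembered from the previous batch (no domain label used).
        loss = loss + lambda_align * moment_alignment(feats, prev_features)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    prev_features = feats.detach()
```

    Note that no domain label enters this loss: only batch-to-batch feature statistics are constrained, which is what would make the approach domain-factor-independent if the alignment assumption holds.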

    TRANSFER: Cross Modality Knowledge Transfer using Adversarial Networks -- A Study on Gesture Recognition

    Knowledge transfer across sensing technologies is a novel concept that has recently been explored in many application domains, including gesture-based human-computer interaction. The main aim is to gather semantic or data-driven information from a source technology to classify/recognize instances of unseen classes in the target technology. The primary challenge is the significant difference in dimensionality and distribution of the feature sets between the source and target technologies. In this paper, we propose TRANSFER, a generic framework for knowledge transfer between a source and a target technology. TRANSFER uses a language-based representation of a hand gesture, which captures a temporal combination of concepts such as handshape, location, and movement that are semantically related to the meaning of a word. By utilizing a pre-specified syntactic structure and tokenizer, TRANSFER segments a hand gesture into tokens and identifies the individual components using a token recognizer. The tokenizer in this language-based recognition system abstracts the low-level, technology-specific characteristics to the machine interface, enabling the design of a discriminator that learns technology-invariant features essential for recognizing gestures in both the source and target technologies. We demonstrate the use of TRANSFER in three different scenarios: a) transferring knowledge across technologies by learning gesture models from video and recognizing gestures using WiFi, b) transferring knowledge from video to accelerometer, and c) transferring knowledge from accelerometer to WiFi signals.
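
    One common way to realize a discriminator that learns technology-invariant features is adversarial training with a gradient-reversal layer, as popularized by domain-adversarial networks. The sketch below is a hedged illustration of that general idea, not TRANSFER's actual architecture; PyTorch is assumed, and names such as feature_net, token_head, and modality_head are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, gradient negation in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

feature_net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
token_head = nn.Linear(64, 20)                                    # e.g. 20 token classes
modality_head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                              nn.Linear(32, 2))                   # e.g. video vs. WiFi

x = torch.randn(16, 64)                # toy pre-extracted gesture-segment features
tokens = torch.randint(0, 20, (16,))   # toy token labels (handshape/location/movement)
modality = torch.randint(0, 2, (16,))  # toy source/target technology labels

feats = feature_net(x)
token_loss = F.cross_entropy(token_head(feats), tokens)
# Gradient reversal: the modality head learns to tell technologies apart while
# the shared features are pushed to become technology-invariant.
adv_loss = F.cross_entropy(modality_head(grad_reverse(feats)), modality)
(token_loss + adv_loss).backward()
```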

    WiMorse: a contactless Morse code text input system using ambient WiFi signals

    Recent years have witnessed advances in Internet of Things (IoT) technologies and their application to contactless sensing and human-computer interaction in smart homes. For people with Motor Neurone Disease (MND), motion capabilities are severely impaired, and they have difficulty interacting with IoT devices or even communicating with other people. As the disease progresses, most patients eventually lose their speech function, which makes the widely adopted voice-based solutions fail. In contrast, most patients can still move their fingers slightly even after they have lost control of their arms and hands. We therefore propose a Morse-code-based text input system, called WiMorse, which allows patients with minimal single-finger control to input text and communicate with other people without attaching any sensor to their fingers. WiMorse leverages ubiquitous commodity WiFi devices to track subtle finger movements contactlessly and encode them as Morse code input. In order to sense these very subtle finger movements, we propose to employ the ratio of the Channel State Information (CSI) between two antennas to enhance the signal-to-noise ratio. To address the severe location-dependency issue in wireless sensing, we propose, with accurate theoretical underpinning and experiments, a signal transformation mechanism that automatically converts signals based on the input position, achieving stable sensing performance. Comprehensive experiments demonstrate that WiMorse can achieve higher than 95% recognition accuracy for finger-generated Morse code, and is robust against input position, environment changes, and user diversity.
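
    The antenna-ratio idea mentioned in the abstract can be sketched briefly: dividing the complex CSI of one receive antenna by that of a second antenna on the same receiver cancels the random phase offset shared by both chains, leaving a much cleaner signal in which subtle finger motion stands out. The NumPy sketch below is illustrative only; the array shapes, the csi_ratio helper, and the toy data are assumptions, not WiMorse's code.

```python
import numpy as np

def csi_ratio(csi_ant_a: np.ndarray, csi_ant_b: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Complex ratio of CSI streams from two receive antennas.

    Both inputs are complex arrays of shape (packets, subcarriers); the random
    phase offset shared by the two receive chains cancels out in the division.
    """
    return csi_ant_a / (csi_ant_b + eps)

# Toy data: 1000 packets x 30 subcarriers with a per-packet random phase offset
# common to both antennas (as produced by unsynchronized commodity WiFi hardware).
rng = np.random.default_rng(0)
phase_offset = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(1000, 1)))
csi_a = phase_offset * (1.0 + 0.01 * rng.standard_normal((1000, 30)))
csi_b = phase_offset * (0.8 + 0.01 * rng.standard_normal((1000, 30)))

ratio = csi_ratio(csi_a, csi_b)
print("raw phase std:  ", np.angle(csi_a).std())    # dominated by the random offset
print("ratio phase std:", np.angle(ratio).std())    # far more stable after the division
```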