31 research outputs found

    GaitFi: Robust Device-Free Human Identification via WiFi and Vision Multimodal Learning

    Gait Recognition as a Service for Unobtrusive User Identification in Smart Spaces

    Recently, the Internet of Things (IoT) has emerged as an important research area that combines environmental sensing and machine learning to realize the concept of smart spaces, in which intelligent and customized services can be provided to users in a smart manner. In smart spaces, one fundamental service that needs to be provided is accurate and unobtrusive user identification. In this work, to address this challenge, we propose a Gait Recognition as a Service (GRaaS) model, which is an instantiation of the traditional Sensing as a Service (S2aaS) model and is specially designed for user identification via gait in smart spaces. To illustrate the idea, a Radio Frequency Identification (RFID)-based gait recognition service is designed and implemented following the GRaaS concept. Novel tag selection algorithms and attention-based Long Short-Term Memory (At-LSTM) models are designed to realize the device layer and edge layer, achieving robust recognition with 96.3% accuracy. Extensive evaluations show that the proposed service achieves accurate and robust performance and has great potential to support future smart space applications.
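
    A minimal sketch of what such an attention-based LSTM classifier could look like is given below, in PyTorch; the layer sizes, input shape (batches of fixed-length RFID feature sequences), and all names are illustrative assumptions, not the paper's actual architecture.

        # Minimal attention-based LSTM (At-LSTM) classifier sketch.
        # Assumes input of shape (batch, time, features), e.g. RFID phase streams.
        import torch
        import torch.nn as nn

        class AtLSTM(nn.Module):
            def __init__(self, n_features, n_hidden, n_classes):
                super().__init__()
                self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
                self.attn = nn.Linear(n_hidden, 1)      # scores each time step
                self.head = nn.Linear(n_hidden, n_classes)

            def forward(self, x):                       # x: (B, T, F)
                h, _ = self.lstm(x)                     # h: (B, T, H)
                w = torch.softmax(self.attn(h), dim=1)  # attention over time
                ctx = (w * h).sum(dim=1)                # weighted sum -> (B, H)
                return self.head(ctx)                   # class logits

        model = AtLSTM(n_features=8, n_hidden=64, n_classes=10)
        logits = model(torch.randn(4, 100, 8))          # 4 sequences, 100 steps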

    Contactless WiFi Sensing and Monitoring for Future Healthcare: Emerging Trends, Challenges and Opportunities

    WiFi sensing has recently received significant interest from academics, industry, healthcare professionals and other caregivers (including family members) as a potential mechanism to monitor our aging population at a distance, without deploying devices on users' bodies. In particular, these methods have gained significant interest for efficiently detecting critical events such as falls, sleep disturbances, wandering behavior, respiratory disorders, and abnormal cardiac activity experienced by vulnerable people. The interest in such WiFi-based sensing systems stems from their practical deployability in indoor settings and the compliance they achieve from monitored persons, unlike wearable, camera-based, and acoustic alternatives. This paper reviews state-of-the-art research on collecting and analysing channel state information, extracted using ubiquitous WiFi signals, describing a range of healthcare applications and identifying a series of open research challenges, untapped areas, and related trends. This work aims to provide an overarching view of the technology and discusses its use cases from a perspective that considers hardware, advanced signal processing, and data acquisition.
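
    As a rough illustration of the signal path such surveys cover, the sketch below converts a window of CSI amplitudes into a time-frequency representation with SciPy; the packet rate, subcarrier count, and the simple subcarrier averaging are assumptions for illustration, not taken from the paper.

        # Sketch: time-frequency view of WiFi CSI amplitude for activity cues.
        # Assumes CSI sampled at 100 Hz over 30 subcarriers (illustrative).
        import numpy as np
        from scipy.signal import spectrogram

        fs = 100                                    # CSI packet rate in Hz (assumed)
        csi = np.abs(np.random.randn(30, 60 * fs)   # placeholder for real CSI
                     + 1j * np.random.randn(30, 60 * fs))
        csi -= csi.mean(axis=1, keepdims=True)      # remove static (DC) component
        signal = csi.mean(axis=0)                   # combine subcarriers (average)
        f, t, Sxx = spectrogram(signal, fs=fs, nperseg=128, noverlap=96)
        # Low-frequency bands of Sxx (< ~0.5 Hz) carry respiration-rate cues;
        # larger body motions such as falls appear as broadband bursts.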

    Towards Domain-Independent and Real-Time Gesture Recognition Using mmWave Signal

    Human gesture recognition using millimeter wave (mmWave) signals enables attractive applications, including smart home and in-car interfaces. While existing works achieve promising performance under controlled settings, practical applications are still limited by the need for intensive data collection, extra training effort when adapting to new domains (i.e., environments, persons and locations) and poor performance in real-time recognition. In this paper, we propose DI-Gesture, a domain-independent and real-time mmWave gesture recognition system. Specifically, we first derive the signal variation corresponding to human gestures with spatial-temporal processing. To enhance the robustness of the system and reduce data collection effort, we design a data augmentation framework based on the correlation between signal patterns and gesture variations. Furthermore, we propose a dynamic window mechanism to perform gesture segmentation automatically and accurately, thus enabling real-time recognition. Finally, we build a lightweight neural network to extract spatial-temporal information from the data for gesture classification. Extensive experimental results show that DI-Gesture achieves an average accuracy of 97.92%, 99.18% and 98.76% for new users, environments and locations, respectively. In real-time scenarios, the accuracy of DI-Gesture reaches over 97% with an average inference time of 2.87 ms, which demonstrates the superior robustness and effectiveness of our system. Comment: The paper has been submitted to IEEE Transactions on Mobile Computing and is still under review.
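
    The abstract does not spell out the dynamic window mechanism, so the following is only a hypothetical sketch of energy-threshold gesture segmentation over mmWave frames, meant to convey the general idea of automatic segmentation rather than DI-Gesture's actual algorithm; all thresholds and shapes are made up.

        # Sketch: gesture segmentation by thresholding per-frame motion energy.
        # 'frames' stands in for mmWave Doppler-FFT magnitudes; values assumed.
        import numpy as np

        def segment_gestures(frames, thresh=3.0, min_len=5):
            """Return (start, end) frame indices where energy exceeds thresh."""
            energy = frames.reshape(len(frames), -1).sum(axis=1)
            active = energy > thresh
            segments, start = [], None
            for i, a in enumerate(active):
                if a and start is None:
                    start = i
                elif not a and start is not None:
                    if i - start >= min_len:      # drop spurious short bursts
                        segments.append((start, i))
                    start = None
            if start is not None and len(active) - start >= min_len:
                segments.append((start, len(active)))
            return segments

        demo = np.zeros((50, 16, 16)); demo[12:30] = 1.0   # synthetic motion burst
        print(segment_gestures(demo))                      # -> [(12, 30)]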

    Synthetic Micro-Doppler Signatures of Non-Stationary Channels for the Design of Human Activity Recognition Systems

    The main aim of this dissertation is to generate synthetic micro-Doppler signatures and time-variant mean Doppler shifts (TV-MDSs) to train HACs. This is achieved by developing non-stationary fixed-to-fixed (F2F) indoor channel models. Such models provide an in-depth understanding of the channel parameters that influence the micro-Doppler signatures and TV-MDSs. Hence, the proposed non-stationary channel models help to generate micro-Doppler signatures and TV-MDSs that fit those of the collected measurement data. First, we start with a simple two-dimensional (2D) non-stationary F2F channel model with fixed and moving scatterers. Such a model assumes that the moving scatterers move in a 2D geometry along simple time-variant (TV) trajectories and have the same height as the transmitter and receiver antennas. A model of the Doppler shifts caused by the moving scatterers in 2D space is provided. The micro-Doppler signature of this model is explored by employing the spectrogram, for which a closed-form expression is derived. Moreover, we demonstrate how the TV-MDSs can be computed from the spectrograms. The aforementioned model is extended to provide two three-dimensional (3D) non-stationary F2F channel models. Such models allow simulating the micro-Doppler signatures influenced by the 3D trajectories of human activities, such as walking and falling. Expressions for the trajectories of these human activities are also given. Approximate solutions of the spectrograms of these channels are provided by approximating the Doppler shifts caused by the human activities as piecewise-linear functions of time. The impact of these activities on the micro-Doppler signatures and TV-MDSs of the simulated channel models is explored. The work done in this dissertation is not limited to analyzing the micro-Doppler signatures and TV-MDSs of the simulated channel models, but also covers those of measured channels. A channel-state-information (CSI) software tool installed on commercial-off-the-shelf (COTS) devices is utilized to capture complex channel transfer function (CTF) data under the influence of human activities. To mitigate the TV phase distortions caused by clock asynchronization between the transmitter and receiver stations, a back-to-back (B2B) connection is employed. Models of the measured CTF and its true phases are also shown. The true micro-Doppler signatures and TV-MDSs of the measured CTF are analyzed. The results show that the CSI tool is reliable for validating the proposed channel models. This allows the micro-Doppler signatures and TV-MDSs extracted from data collected with this tool to be used to train HACs.
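
    The core computation described above (a spectrogram of a Doppler signal, then the TV-MDS as its first spectral moment) can be sketched in a few lines of Python; the carrier wavelength, speed profile, and sampling rate below are illustrative assumptions, not the dissertation's parameters.

        # Sketch: micro-Doppler spectrogram and TV-MDS of one moving scatterer.
        # f_D(t) = v(t) / wavelength for purely radial motion (assumed here).
        import numpy as np
        from scipy.signal import spectrogram

        fs, dur = 1000, 4.0                    # sample rate (Hz), duration (s)
        t = np.arange(0, dur, 1 / fs)
        wavelength = 0.06                      # ~5 GHz carrier (illustrative)
        v = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)  # gait-like speed (m/s)
        f_dopp = v / wavelength                # instantaneous Doppler shift
        phase = 2 * np.pi * np.cumsum(f_dopp) / fs
        x = np.exp(1j * phase)                 # complex baseband channel response

        f, tt, S = spectrogram(x, fs=fs, nperseg=256, noverlap=224,
                               return_onesided=False)
        f = np.fft.fftshift(f); S = np.fft.fftshift(S, axes=0)
        tv_mds = (f[:, None] * S).sum(axis=0) / S.sum(axis=0)
        # tv_mds tracks f_dopp(t), i.e. the time-variant mean Doppler shift.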

    Robust Audio and WiFi Sensing via Domain Adaptation and Knowledge Sharing From External Domains

    Recent advancements in machine learning have initiated a revolution in embedded sensing and inference systems. Acoustic and WiFi-based sensing and inference systems have enabled a wide variety of applications, ranging from home activity detection to health vitals monitoring. While many existing solutions have paved the way for acoustic event recognition and WiFi-based activity detection, the diverse characteristics of the sensors, systems, and environments used for data capture cause a shift in the distribution of data and thus result in sub-optimal classification performance when sensor or environment discrepancies occur between the training and inference stages. Moreover, large-scale acoustic and WiFi data collection is non-trivial and cumbersome. Therefore, current acoustic and WiFi-based sensing systems suffer when there is a lack of labeled samples, as they rely solely on the provided training data. In this thesis, we aim to address the performance loss of machine learning-based classifiers for acoustic and WiFi-based sensing systems due to sensor and environment heterogeneity and the lack of labeled examples. We show that discovering latent domains (sensor type, environment, etc.) and removing domain bias from machine learning classifiers makes acoustic and WiFi-based sensing robust and generalized. We also propose a few-shot domain adaptation method that requires only one labeled sample for a new domain, relieving users and developers of the painstaking task of data collection in each new domain. Furthermore, to address the lack of labeled examples, we propose to exploit information or learned knowledge from sources where data already exists in volume, such as textual descriptions and the visual domain. We implemented our algorithms on mobile and embedded platforms and collected data from participants to evaluate our proposed algorithms and frameworks extensively.
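
    One standard way to remove domain bias of this kind is domain-adversarial training with a gradient reversal layer (DANN); the PyTorch sketch below shows that generic technique, not necessarily the thesis's exact method, and all layer sizes are assumed.

        # Sketch: gradient reversal layer (GRL) for domain-adversarial learning.
        # The feature extractor learns to fool a domain classifier, so features
        # become domain-invariant (sensor/environment-invariant here).
        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x, lam):
                ctx.lam = lam
                return x.view_as(x)

            @staticmethod
            def backward(ctx, grad):
                return -ctx.lam * grad, None   # flip gradient sign going back

        features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        label_head = nn.Linear(64, 10)         # activity / event classes
        domain_head = nn.Linear(64, 4)         # latent domains (e.g. sensors)

        x = torch.randn(32, 128)               # batch of acoustic/WiFi features
        z = features(x)
        label_logits = label_head(z)           # trained with labeled task data
        domain_logits = domain_head(GradReverse.apply(z, 1.0))  # adversarial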

    Multimodal radar sensing for ambient assisted living

    Data acquired from health and behavioural monitoring of daily life activities can be exploited to provide real-time medical and nursing services at affordable cost and higher efficiency. A variety of sensing technologies for this purpose have been developed and presented in the literature, for instance, wearable IMUs (inertial measurement units) to measure a person's acceleration and angular speed, cameras to record images or video sequences, PIR (pyroelectric infrared) sensors to detect a person's presence based on the pyroelectric effect, and radar to estimate a person's distance and radial velocity. Each sensing technology has pros and cons and may not be optimal for every task. It is possible to leverage the strengths of all these sensors through information fusion in a multimodal fashion. The fusion can take place at three different levels: i) signal level, where commensurate data are combined; ii) feature level, where feature vectors of different sensors are concatenated; and iii) decision level, where the confidence levels or prediction labels of classifiers are used to generate a new output. For each level there are different fusion algorithms, and the key challenge is choosing the best existing fusion algorithm and developing novel fusion algorithms more suitable for the application at hand. The fundamental contribution of this thesis is therefore exploring possible information fusion between radar, primarily FMCW (frequency-modulated continuous wave) radar, and wearable IMUs, between distributed radar sensors, and between UWB (ultra-wideband) impulse radar and a pressure sensor array. The objective is to sense and classify daily activity patterns, gait styles and micro-gestures, as well as to produce early warnings of high-risk events such as falls. Initially, only "snapshot" activities (a single activity within a short X-s measurement) were collected and analysed to verify the accuracy improvement due to information fusion. Then continuous activities (activities performed one after another with random durations and transitions) were collected to simulate real-world scenarios. To overcome the drawbacks of the conventional sliding-window approach on continuous data, a Bi-LSTM (bidirectional long short-term memory) network is proposed to identify the transitions between daily activities. Meanwhile, a hybrid fusion framework is presented to exploit the power of both soft and hard fusion. Moreover, a trilateration-based signal-level fusion method has been successfully applied to the range information of three UWB impulse radars, and the results show performance comparable to using micro-Doppler signatures, at a much lower computational load. For classifying "snapshot" activities, fusion between radar and wearable sensors shows approximately 12% accuracy improvement compared to using radar only, whereas for classifying continuous activities and gaits, the proposed hybrid fusion and trilateration-based signal-level fusion improve accuracy by roughly 6.8% (from 89% to 95.8%) and 7.3% (from 85.4% to 92.7%), respectively.
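
    The feature-level and decision-level fusion options listed in this abstract can be contrasted in a few lines; the sketch below uses placeholder radar and IMU feature vectors and random classifier scores, so every shape and number is an assumption made for illustration.

        # Sketch: feature-level vs. decision-level fusion of radar and IMU.
        import numpy as np

        rng = np.random.default_rng(0)
        radar_feat = rng.normal(size=(1, 32))   # e.g. micro-Doppler features
        imu_feat = rng.normal(size=(1, 16))     # e.g. acceleration statistics

        # ii) feature-level fusion: concatenate, then feed one classifier
        fused_feat = np.concatenate([radar_feat, imu_feat], axis=1)  # (1, 48)

        # iii) decision-level (soft) fusion: average per-class confidences
        def softmax(z):
            e = np.exp(z - z.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)

        radar_scores = softmax(rng.normal(size=(1, 6)))  # 6 activity classes
        imu_scores = softmax(rng.normal(size=(1, 6)))
        fused_scores = 0.5 * radar_scores + 0.5 * imu_scores
        print(fused_scores.argmax(axis=1))               # fused prediction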

    Indoor localization using place and motion signatures

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2013, by Jun-geun Park. Includes bibliographical references (p. 141-153). Most current methods for 802.11-based indoor localization depend on either simple radio propagation models or exhaustive, costly surveys conducted by skilled technicians. These methods are not satisfactory for long-term, large-scale positioning of mobile devices in practice. This thesis describes two approaches to the indoor localization problem, which we formulate as discovering user locations using place and motion signatures. The first approach, organic indoor localization, builds on the idea of crowd-sourcing, encouraging end-users to contribute place signatures (location RF fingerprints) in an organic fashion. Based on prior work on organic localization systems, we study the algorithmic challenges of structuring such systems: the design of localization algorithms suitable for organic localization systems, qualitative and quantitative control of user inputs to "grow" an organic system from the very beginning, and handling the device heterogeneity problem, in which different devices have different RF characteristics. In the second approach, motion compatibility-based indoor localization, we formulate the localization problem as matching a user motion sequence onto a prior map. Our method estimates indoor location with respect to a prior map consisting of a set of 2D floor plans linked through horizontal and vertical adjacencies. To enable the localization system, we present a motion classification algorithm that estimates user motions from the sensors available in commodity mobile devices. We also present a route network generation method, which constructs a graph representation of all user routes from legacy floor plans. Given these inputs, our HMM-based trajectory matching algorithm recovers user trajectories. The main contribution is the notion of path compatibility, in which the sequential output of a classifier of inertial data producing low-level motion estimates (standing still, walking straight, going upstairs, turning left, etc.) is examined for metric, topological, and semantic agreement with the prior map. We show that, using only proprioceptive data of the quality typically available on a modern smartphone, our method can recover the user's location to within several meters in one to two minutes after a "cold start."
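
    The HMM-based trajectory matching step can be illustrated with a plain Viterbi decoder in which states are route-network nodes and emissions score how compatible each low-level motion estimate (e.g. walking straight, turning, going upstairs) is with each node; the toy transition and emission matrices below are invented for illustration and are not from the thesis.

        # Sketch: Viterbi decoding of motion-classifier output onto map nodes.
        import numpy as np

        def viterbi(log_trans, log_emit, obs):
            """log_trans: (S, S) node-to-node scores; log_emit: (S, M) motion
            compatibility scores; obs: observed motion labels. Returns path."""
            dp = log_emit[:, obs[0]].copy()      # uniform prior over start nodes
            back = []
            for o in obs[1:]:
                cand = dp[:, None] + log_trans   # score of every node transition
                back.append(cand.argmax(axis=0))
                dp = cand.max(axis=0) + log_emit[:, o]
            path = [int(dp.argmax())]
            for b in reversed(back):
                path.append(int(b[path[-1]]))
            return path[::-1]

        # 3 map nodes (hall, corner, stairs) x 3 motions (straight, turn, up)
        log_emit = np.log(np.array([[.8, .1, .1], [.2, .7, .1], [.1, .1, .8]]))
        log_trans = np.log(np.array([[.6, .3, .1], [.3, .4, .3], [.1, .3, .6]]))
        print(viterbi(log_trans, log_emit, obs=[0, 0, 1, 2]))  # -> [0, 0, 1, 2]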