
    GaitFi: Robust Device-Free Human Identification via WiFi and Vision Multimodal Learning


    AutoFi: Towards Automatic WiFi Human Sensing via Geometric Self-Supervised Learning

    WiFi sensing technology has shown superiority among various smart-home sensors for its cost-effective and privacy-preserving merits. It is empowered by Channel State Information (CSI) extracted from WiFi signals and by advanced machine learning models that analyze motion patterns in CSI. Many learning-based models have been proposed for various applications, but they severely suffer from environmental dependency. Though domain adaptation methods have been proposed to tackle this issue, it is not practical to collect high-quality, well-segmented, and balanced CSI samples in a new environment for adaptation algorithms, whereas randomly captured CSI samples can be collected easily. In this paper, we first explore how to learn a robust model from these low-quality CSI samples, and we propose AutoFi, an annotation-efficient WiFi sensing model based on a novel geometric self-supervised learning algorithm. AutoFi fully utilizes unlabeled, randomly captured low-quality CSI samples and then transfers the knowledge to specific tasks defined by users; it is the first work to achieve cross-task transfer in WiFi sensing. AutoFi is implemented on a pair of Atheros WiFi APs for evaluation. It transfers knowledge from randomly collected CSI samples to human gait recognition and achieves state-of-the-art performance. Furthermore, we simulate cross-task transfer using public datasets to further demonstrate its capacity for cross-task learning. On the UT-HAR and Widar datasets, AutoFi achieves satisfactory results on activity recognition and gesture recognition without any prior training. We believe that AutoFi takes a significant step toward automatic WiFi sensing without any developer engagement. Comment: The paper has been accepted by IEEE Internet of Things Journal.
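    A minimal sketch of the two-stage idea described above, assuming a PyTorch environment: an encoder is pretrained on unlabeled, randomly captured CSI with a self-supervised objective, then transferred to a user-defined task by training only a small head. The CSI shapes, the consistency loss, and the class count are illustrative assumptions, not AutoFi's actual geometric self-supervised algorithm.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CSIEncoder(nn.Module):
        """Maps a CSI sample (antennas x subcarriers x packets) to an embedding."""
        def __init__(self, in_dim=3 * 30 * 100, emb_dim=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, emb_dim))
        def forward(self, x):
            return self.net(x.flatten(1))

    def consistency_loss(z1, z2):
        # Placeholder self-supervised objective: pull two noisy views of the
        # same unlabeled CSI sample together (AutoFi's geometric loss differs).
        return (1 - F.cosine_similarity(z1, z2, dim=1)).mean()

    encoder = CSIEncoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    # Stage 1: self-supervised pretraining on unlabeled CSI (random tensors here).
    for _ in range(10):
        x = torch.randn(8, 3, 30, 100)                 # unlabeled CSI batch
        v1 = x + 0.01 * torch.randn_like(x)            # augmented view 1
        v2 = x + 0.01 * torch.randn_like(x)            # augmented view 2
        loss = consistency_loss(encoder(v1), encoder(v2))
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: transfer to a downstream task (e.g., gait recognition) with few labels.
    head = nn.Linear(128, 10)                          # 10 hypothetical identities
    head_opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    x, y = torch.randn(8, 3, 30, 100), torch.randint(0, 10, (8,))
    loss = F.cross_entropy(head(encoder(x).detach()), y)   # encoder kept frozen
    head_opt.zero_grad(); loss.backward(); head_opt.step()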

    The Emerging Internet of Things Marketplace From an Industrial Perspective: A Survey

    The Internet of Things (IoT) is a dynamic global information network consisting of internet-connected objects, such as radio-frequency identification (RFID) tags, sensors, actuators, and other instruments and smart appliances, that are becoming an integral component of the future internet. Over the last decade, we have seen a large number of IoT solutions developed by start-ups, small and medium enterprises, large corporations, academic research institutes (such as universities), and private and public research organisations making their way into the market. In this paper, we survey over one hundred IoT smart solutions in the marketplace and examine them closely in order to identify the technologies used, their functionalities, and their applications. More importantly, we identify the trends, opportunities, and open challenges in industry-based IoT solutions. Based on the application domain, we classify and discuss these solutions under five categories: smart wearable, smart home, smart city, smart environment, and smart enterprise. This survey is intended to serve as a guideline and conceptual framework for future research in the IoT and to motivate and inspire further developments. It also provides a systematic exploration of existing research and suggests a number of potentially significant research directions. Comment: IEEE Transactions on Emerging Topics in Computing 201

    MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing

    4D human perception plays an essential role in a myriad of applications, such as home automation and metaverse avatar simulation. However, existing solutions, which mainly rely on cameras and wearable devices, are either privacy intrusive or inconvenient to use. To address these issues, wireless sensing has emerged as a promising alternative, leveraging LiDAR, mmWave radar, and WiFi signals for device-free human sensing. In this paper, we propose MM-Fi, the first multi-modal non-intrusive 4D human dataset with 27 daily or rehabilitation action categories, to bridge the gap between wireless sensing and high-level human perception tasks. MM-Fi consists of over 320k synchronized frames of five modalities from 40 human subjects. Various annotations are provided to support potential sensing tasks, e.g., human pose estimation and action recognition. Extensive experiments have been conducted to compare the sensing capacity of each modality, alone and in combination, across multiple tasks. We envision that MM-Fi can contribute to wireless sensing research with respect to action recognition, human pose estimation, multi-modal learning, cross-modal supervision, and interdisciplinary healthcare research. Comment: The paper has been accepted by the NeurIPS 2023 Datasets and Benchmarks Track. Project page: https://ntu-aiot-lab.github.io/mm-f
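    To make concrete what a synchronized multi-modal frame from such a dataset might look like to a downstream consumer, here is an illustrative Python container, assuming NumPy. The field names, array shapes, and joint count are assumptions for the sketch, not the dataset's actual schema or API.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class MultiModalFrame:
        rgb: np.ndarray           # (H, W, 3) image
        depth: np.ndarray         # (H, W) depth map
        lidar: np.ndarray         # (N, 3) point cloud
        mmwave: np.ndarray        # (M, 5) radar points with Doppler/intensity
        wifi_csi: np.ndarray      # (antennas, subcarriers, packets)
        keypoints_3d: np.ndarray  # (J, 3) pose annotation
        action_label: int         # index into the action categories

    # Build one dummy frame to show the intended access pattern.
    frame = MultiModalFrame(
        rgb=np.zeros((480, 640, 3), np.uint8),
        depth=np.zeros((480, 640), np.float32),
        lidar=np.zeros((1024, 3), np.float32),
        mmwave=np.zeros((128, 5), np.float32),
        wifi_csi=np.zeros((3, 114, 10), np.float32),
        keypoints_3d=np.zeros((17, 3), np.float32),
        action_label=0,
    )
    print(frame.rgb.shape, frame.wifi_csi.shape, frame.action_label)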

    SenseFi: A library and benchmark on deep-learning-empowered WiFi human sensing

    Over recent years, WiFi sensing has been rapidly developed for privacy-preserving, ubiquitous human-sensing applications, enabled by signal processing and deep-learning methods. However, a comprehensive public benchmark for deep learning in WiFi sensing, similar to those available for visual recognition, does not yet exist. In this article, we review recent progress in topics ranging from WiFi hardware platforms to sensing algorithms and propose a new library with a comprehensive benchmark, SenseFi. On this basis, we evaluate various deep-learning models in terms of distinct sensing tasks, WiFi platforms, recognition accuracy, model size, computational complexity, and feature transferability. Extensive experiments are performed whose results provide valuable insights into model design, learning strategy, and training techniques for real-world applications. In summary, SenseFi is a comprehensive benchmark with an open-source library for deep learning in WiFi sensing research that offers researchers a convenient tool to validate learning-based WiFi-sensing methods on multiple datasets and platforms. This research is supported by the NTU Presidential Postdoctoral Fellowship, ‘‘Adaptive Multi-modal Learning for Robust Sensing and Recognition in Smart Cities’’ project fund (020977-00001), at Nanyang Technological University, Singapore.
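    The kind of comparison the benchmark describes can be illustrated with a minimal sketch, assuming PyTorch: candidate models are evaluated on a synthetic CSI classification batch and reported by parameter count and accuracy. The shapes, class count, and model choices are placeholders, not SenseFi's library API.

    import torch
    import torch.nn as nn

    def count_params(model):
        return sum(p.numel() for p in model.parameters())

    def accuracy(model, x, y):
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    # Synthetic CSI batch: (batch, antennas * subcarriers * packets), 6 hypothetical classes.
    x = torch.randn(64, 3 * 30 * 50)
    y = torch.randint(0, 6, (64,))

    candidates = {
        "linear": nn.Linear(x.shape[1], 6),
        "mlp": nn.Sequential(nn.Linear(x.shape[1], 128), nn.ReLU(), nn.Linear(128, 6)),
    }

    # Report model size and (untrained) accuracy; a real benchmark would train first.
    for name, model in candidates.items():
        print(f"{name}: params={count_params(model):,} acc={accuracy(model, x, y):.2f}")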

    IoT driven ambient intelligence architecture for indoor intelligent mobility

    Personal robots are set to assist humans in their daily tasks. Assisted living is one of the major applications of personal assistive robots, where the robots will support the health and wellbeing of humans in need, especially the elderly and disabled. Indoor environments are extremely challenging from a robot perception and navigation point of view because of ever-changing decorations, internal organization, and clutter. Furthermore, human-robot interaction in personal assistive robots demands intuitive, human-like intelligence and interactions. The above challenges are aggravated by stringent and often tacit requirements surrounding personal privacy, which may be invaded by continuous monitoring through sensors. Towards addressing these problems, in this paper we present an "Ambient Intelligence" architecture for indoor intelligent mobility by leveraging IoT devices within a Scalable Multi-layered Context Mapping Framework. Our objective is to utilize sensors in home settings in the least invasive manner for the robot to learn about its dynamic surroundings and interact in a human-like manner. The paper takes a semi-survey approach to presenting and illustrating preliminary results from our in-house-built, fully autonomous electric quadbike.
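    As a rough illustration of the layered-context-map idea, and only an assumed simplification of the paper's Scalable Multi-layered Context Mapping Framework, sensor updates can be kept in separate layers (e.g. static structure, semi-dynamic objects, dynamic agents) that the robot queries independently. Layer names and the API below are assumptions for the sketch.

    from collections import defaultdict

    class ContextMap:
        """Layered context store: each layer maps entity id -> attribute dict."""
        def __init__(self):
            self.layers = defaultdict(dict)

        def update(self, layer, entity_id, **attrs):
            self.layers[layer].setdefault(entity_id, {}).update(attrs)

        def query(self, layer):
            return dict(self.layers[layer])

    cmap = ContextMap()
    cmap.update("static", "wall_1", extent=(0.0, 0.0, 5.0, 3.0))
    cmap.update("semi_dynamic", "chair_2", position=(2.5, 1.0), last_seen="14:02")
    cmap.update("dynamic", "person_A", position=(3.1, 0.8), activity="walking")
    print(cmap.query("dynamic"))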

    Major requirements for building Smart Homes in Smart Cities based on Internet of Things technologies

    The recent boom in the Internet of Things (IoT) will turn Smart Cities and Smart Homes (SH) from hype into reality. SH are the major building block of Smart Cities and have long been a dream; hobbyists in the late 1970s made Home Automation (HA) possible when personal computers started entering home spaces. While SH can share most IoT technologies, they have unique characteristics that make them special. Based on the results of a recent research survey on SH and IoT technologies, this paper defines the major requirements for building SH. Seven unique requirement recommendations are defined and classified according to the specific qualities of the SH building blocks.

    EYECOM: an innovative approach for computer interaction

    The world is innovating rapidly, and there is a need for continuous interaction with technology. Unfortunately, there are few promising options for paralyzed people to interact with machines such as laptops, smartphones, and tablets. The available commercial solutions, such as Google Glass, are costly and cannot be afforded by every paralyzed person. To this end, this thesis proposes a retina-controlled device called EYECOM. The proposed device is constructed from off-the-shelf, cost-effective yet robust IoT components (i.e., Arduino microcontrollers, Xbee wireless modules, IR diodes, and an accelerometer). The device can easily be mounted onto glasses; a paralyzed person using it can interact with a machine through simple head movements and eye blinks. An IR diode located in front of the eye illuminates the eye region. The eye reflects the IR light, which the detector converts into an electrical signal; as the eyelids close, the reflection off the eye surface is disrupted, and this change in the measured value is recorded. Further, to enable cursor movement on the computer screen, an accelerometer is used. The accelerometer is a small device, roughly the size of a thumb phalange (bone), that operates on the principle of axis-based motion sensing and can be worn as a ring by the paralyzed person. A microcontroller processes the inputs from the IR sensor and the accelerometer and transmits them wirelessly via an Xbee radio to another microcontroller attached to the computer. Using the proposed algorithm, the computer-side microcontroller translates the received signals into cursor movements on the screen and facilitates actions ranging from opening a document to operating word-to-speech software. EYECOM can help paralyzed persons continue contributing to the technological world and remain an active part of society. As a result, they will be able to perform a range of tasks without depending on others, from reading a newspaper on the computer to activating word-to-speech software.
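    A minimal computer-side sketch of the receive-and-act loop described above, assuming the third-party pyserial and pyautogui packages and a hypothetical comma-separated message format ("dx,dy,blink") forwarded from the receiving microcontroller; the thesis's actual firmware and protocol are not reproduced here.

    import serial        # third-party: pyserial
    import pyautogui     # third-party: pyautogui

    PORT = "/dev/ttyUSB0"     # hypothetical port of the receiving microcontroller
    GAIN = 5                  # scale accelerometer deltas to pixels

    def run():
        with serial.Serial(PORT, 9600, timeout=1) as link:
            while True:
                packet = link.readline().decode(errors="ignore").strip()
                if not packet:
                    continue
                try:
                    dx, dy, blink = (int(v) for v in packet.split(","))
                except ValueError:
                    continue                              # skip malformed packets
                pyautogui.moveRel(dx * GAIN, dy * GAIN)   # ring motion -> cursor move
                if blink:
                    pyautogui.click()                     # eye blink -> click

    if __name__ == "__main__":
        run()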