
    When learning meets RFIDs: The case of activity identification

    The past decades have seen booming interest in human activity identification, which is widely used in a range of Internet-of-Things applications such as healthcare and smart homes. It has attracted significant attention from both academia and industry, with a wide range of solutions based on cameras, radars, and/or various inertial sensors. These solutions generally require the object of identification to carry sensors or wireless transceivers, which are non-negligible in both size and weight, not to mention the constraints imposed by the battery. Radio frequency identification (RFID) is a promising technology that can overcome these difficulties thanks to its low cost, small form factor, and battery-free operation, making it widely used in a range of mobile applications. The information offered by today's RFID tags, however, is quite limited, and the typical raw data (RSSI and phase angles) are not necessarily good indicators of human activities, being either insensitive or unreliable as revealed by our real-world experiments. As such, existing RFID-based activity identification solutions are far from satisfactory. It is also well known that the accuracy of the readings can be noticeably affected by multipath, which is unfortunately inevitable in indoor environments and further complicated by the use of multiple reference tags. In this thesis, we first review the literature and research challenges of multipath effects in activity identification with RFIDs. We then introduce three advanced RFID learning-based activity identification frameworks, i.e., i2tag, TagFree and M2AI, for tag mobility profiling, RFID-based device-free activity identification, and tag-attached multi-object activity identification, respectively. Our extensive experiments further demonstrate their superiority in activity identification in multipath-rich environments.
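    As a concrete illustration of why raw phase readings need care before they can feed a learning model, the following minimal sketch (not part of the thesis's i2tag/TagFree/M2AI frameworks; the function name and feature choices are hypothetical) unwraps a reader's modulo-2π phase series and derives simple statistics a classifier could consume:

```python
import numpy as np

def rfid_features(rssi, phase):
    """Hypothetical feature extraction from raw RFID readings.

    Commercial readers report the backscatter phase modulo 2*pi, so
    the series must be unwrapped before it reflects continuous tag
    motion; RSSI is used as-is.
    """
    phase = np.unwrap(np.asarray(phase, dtype=float))
    rssi = np.asarray(rssi, dtype=float)
    return {
        "rssi_mean": rssi.mean(),
        "rssi_std": rssi.std(),
        "phase_range": phase.max() - phase.min(),          # total phase excursion
        "phase_vel_std": np.diff(phase).std(),             # variability of phase change
    }

# A tag moving steadily away from the reader: the true phase grows
# linearly but is reported wrapped into [0, 2*pi).
true_phase = np.linspace(0.0, 12.0, 60)
wrapped = np.mod(true_phase, 2 * np.pi)
feats = rfid_features(rssi=np.full(60, -60.0), phase=wrapped)
```

    Without the unwrapping step, the same trace would show a near-zero, motion-independent phase range, which is one way the raw readings mislead a naive classifier.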

    Classification of skateboarding tricks by synthesizing transfer learning models and machine learning classifiers using different input signal transformations

    Skateboarding made its Olympic debut at the delayed Tokyo 2020 Olympic Games. Conventionally, in the competition scene, scoring is done manually and subjectively by the judges through observation of the trick executions. The complexity of the manoeuvres, however, makes such scoring difficult and prone to human error and bias. Therefore, the aim of this study is to classify five skateboarding flat-ground tricks, namely Ollie, Kickflip, Shove-it, Nollie and Frontside 180. This is achieved by using three optimized machine learning models, k-Nearest Neighbor (kNN), Random Forest (RF) and Support Vector Machine (SVM), on features extracted via eighteen transfer learning models. Six amateur skaters performed the five tricks on a customized ORY skateboard, and the raw data from the inertial measurement unit (IMU) embedded in the device attached to the skateboard were extracted. Four types of input images were generated from the IMU signals via the Fast Fourier Transform (FFT), Continuous Wavelet Transform (CWT), Discrete Wavelet Transform (DWT) and a synthesized raw image (RAW). The optimized form of each classifier was obtained by running a grid-search optimization on the training dataset with 3-fold cross-validation, using a 4:1:1 split of the 150 transformed images into training, validation and testing sets, respectively. The CWT and RAW images fed into the MobileNet transfer learning model, coupled with the optimized SVM and RF classifiers, exhibited a test accuracy of 100%. To identify the best pipeline, computational time was also used to evaluate the models; the RAW-MobileNet-optimized-RF approach was the most effective, with a computational time of 24.796875 seconds. The results of the study revealed that the proposed approach could improve the classification of skateboarding tricks.
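    The FFT image transform named above can be sketched as follows; the windowing parameters and the synthetic one-axis accelerometer trace are illustrative assumptions, not the study's actual settings:

```python
import numpy as np

def fft_image(signal, win=32, hop=16):
    """Sketch of the FFT input-image transform: slice an IMU signal
    into overlapping windows and stack the magnitude spectra as rows
    of a 2D array that a transfer-learning CNN could ingest.
    """
    windows = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    # rfft keeps only the non-negative frequency bins of a real signal
    return np.abs(np.fft.rfft(np.stack(windows), axis=1))

# Synthetic accelerometer trace: a 4 Hz oscillation sampled at 64 Hz,
# standing in for one axis of a recorded trick.
t = np.arange(0, 4, 1 / 64)           # 4 s -> 256 samples
acc = np.sin(2 * np.pi * 4 * t)
img = fft_image(acc)                  # 15 windows x 17 frequency bins
```

    Each 32-sample window spans exactly two cycles of the 4 Hz tone, so every row of the resulting image peaks at frequency bin 2 (4 Hz × 32 / 64 Hz); for real multi-axis IMU data, one such image per axis would be stacked or tiled before being passed to MobileNet.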

    Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges and Opportunities

    The vast proliferation of sensor devices and the Internet of Things enables applications of sensor-based activity recognition. However, substantial challenges can affect the performance of recognition systems in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, many deep learning methods have been investigated to address these challenges. In this study, we present a survey of state-of-the-art deep learning methods for sensor-based human activity recognition. We first introduce the multi-modality of the sensory data and provide information on public datasets that can be used for evaluation in different challenge tasks. We then propose a new taxonomy that structures the deep methods by the challenges they address. Challenges and challenge-related deep methods are summarized and analyzed to form an overview of the current research progress. At the end of this work, we discuss open issues and provide insights for future directions.