
    An Imperceptible Method to Monitor Human Activity by Using Sensor Data with CNN and Bi-directional LSTM

    Deep learning (DL) algorithms have substantially advanced research on recognizing day-to-day human activities. However, DL-based methods for recognizing human activities are only useful if they perform well in real-time applications. The activities of elderly people need to be monitored to detect any abnormalities in their health and to suggest a healthy lifestyle based on their day-to-day activities. Most existing approaches use videos or static photographs to recognize activities; such methods make individuals anxious that they are being monitored. To address this limitation, we exploit the capabilities of DL algorithms and feed the proposed model sensor data collected from a smart-home dataset, recognizing the activities of elderly people without intruding on their privacy. Early approaches to human activity recognition fed DL models data from a single, static sensor and therefore struggled to recognize dynamic, multi-sensor data. In this research we propose a DL architecture that blends a deep Convolutional Neural Network (CNN) with a Bi-directional Long Short-Term Memory (Bi-LSTM) network, replacing human intervention by automatically extracting features from multifunctional sensing devices to reliably recognize activities. Throughout the investigation we used Tulum, a benchmark dataset containing logs of sensor data. Our method outperforms existing approaches, achieving an accuracy of 98.76% and an F1 score of 0.98.
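    As a hedged illustration (not the paper's actual pipeline), the following NumPy sketch shows the kind of preprocessing a CNN + Bi-LSTM stack typically needs: segmenting a continuous multi-sensor stream into fixed-length overlapping windows, each labelled by majority vote. The function name and window parameters are hypothetical.

```python
import numpy as np

def make_windows(samples, labels, win_len=64, step=32):
    """Segment a multi-channel sensor stream into fixed-length,
    overlapping windows, labelling each window by majority vote."""
    windows, window_labels = [], []
    for start in range(0, len(samples) - win_len + 1, step):
        seg = samples[start:start + win_len]
        seg_labels = labels[start:start + win_len]
        windows.append(seg)
        # majority activity label within the window
        vals, counts = np.unique(seg_labels, return_counts=True)
        window_labels.append(vals[np.argmax(counts)])
    return np.stack(windows), np.array(window_labels)

# toy stream: 200 time steps, 5 sensor channels, two activities
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.repeat([0, 1], 100)
Xw, yw = make_windows(X, y)  # Xw has shape (num_windows, win_len, channels)
```

    The resulting (window, time, channel) tensor is the usual input shape for a CNN feature extractor followed by a recurrent layer.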

    Comparing Handcrafted Features and Deep Neural Representations for Domain Generalization in Human Activity Recognition

    Human Activity Recognition (HAR) has been studied extensively, yet current approaches are not capable of generalizing across different domains (i.e., subjects, devices, or datasets) with acceptable performance. This lack of generalization hinders the applicability of these models in real-world environments. As deep neural networks are becoming increasingly popular in recent work, there is a need for an explicit comparison between handcrafted and deep representations in Out-of-Distribution (OOD) settings. This paper compares both approaches in multiple domains using homogenized public datasets. First, we compare several metrics to validate three different OOD settings. In our main experiments, we then verify that even though deep learning initially outperforms models with handcrafted features, the situation is reversed as the distance from the training distribution increases. These findings support the hypothesis that handcrafted features may generalize better across specific domains.
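    To make the handcrafted side of the comparison concrete, here is a minimal sketch assuming classic per-channel window statistics (mean, standard deviation, signal energy); the exact feature set used in the paper may differ.

```python
import numpy as np

def handcrafted_features(window):
    """Classic per-channel statistics used as handcrafted HAR features:
    mean, standard deviation, and mean signal energy."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    energy = (window ** 2).sum(axis=0) / len(window)
    return np.concatenate([mean, std, energy])

# one 64-sample window of 3-axis accelerometer data
rng = np.random.default_rng(1)
w = rng.normal(size=(64, 3))
feats = handcrafted_features(w)  # 3 statistics x 3 channels = 9 features
```

    Such fixed statistics carry no learned, dataset-specific structure, which is one intuition for why they can degrade more gracefully under distribution shift.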

    Domain Adaptation for Inertial Measurement Unit-based Human Activity Recognition: A Survey

    Machine learning-based wearable human activity recognition (WHAR) models enable the development of various smart and connected community applications such as sleep pattern monitoring, medication reminders, cognitive health assessment, sports analytics, etc. However, the widespread adoption of these WHAR models is impeded by their degraded performance in the presence of data distribution heterogeneities caused by sensor placement at different body positions, inherent biases and heterogeneities across devices, and personal and environmental diversities. Various traditional machine learning algorithms and transfer learning techniques have been proposed in the literature to address the underpinning challenges of handling such data heterogeneities. Domain adaptation is one such transfer learning technique that has gained significant popularity in recent literature. In this paper, we survey the recent progress of domain adaptation techniques in the Inertial Measurement Unit (IMU)-based human activity recognition area and discuss potential future directions.
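    One widely used domain adaptation technique in this area is CORrelation ALignment (CORAL), which matches second-order statistics between source and target features. The NumPy sketch below is an illustrative example of the general idea, not a specific method from the survey.

```python
import numpy as np

def coral(source, target, eps=1e-6):
    """CORrelation ALignment: whiten the source features, then
    re-colour them with the target covariance so that second-order
    statistics of the two domains match."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)

    def mat_pow(m, p):
        # matrix power of a symmetric PSD matrix via eigendecomposition
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(vals ** p) @ vecs.T

    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)

# source IMU features with channel scales unlike the target's
rng = np.random.default_rng(2)
src = rng.normal(size=(500, 4)) * np.array([1.0, 2.0, 0.5, 3.0])
tgt = rng.normal(size=(500, 4))
aligned = coral(src, tgt)  # covariance of `aligned` now matches `tgt`
```

    A classifier trained on the aligned source features then sees inputs whose covariance structure matches the target domain.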

    Cross-position Activity Recognition with Stratified Transfer Learning

    Human activity recognition aims to recognize the activities of daily living by utilizing sensors on different body parts. However, when labeled data from a certain body position (i.e., the target domain) is missing, how can data from other positions (i.e., source domains) be leveraged to help learn the activity labels for that position? When several source domains are available, it is often difficult to select the source domain most similar to the target domain, and once a source domain is selected, accurate knowledge transfer between domains is still required. Existing methods only learn the global distance between domains while ignoring their local properties. In this paper, we propose a Stratified Transfer Learning (STL) framework to perform both source domain selection and knowledge transfer. STL is based on our proposed Stratified distance, which captures the local properties of domains, and consists of two components: Stratified Domain Selection (STL-SDS), which selects the source domain most similar to the target domain, and Stratified Activity Transfer (STL-SAT), which performs accurate knowledge transfer. Extensive experiments on three public activity recognition datasets demonstrate the superiority of STL. Furthermore, we extensively investigate the performance of transfer learning across different degrees of similarity and activity levels between domains. We also discuss potential applications of STL in other fields of pervasive computing for future research. Comment: Submitted to Pervasive and Mobile Computing as an extension of the PerCom 18 paper; first revision. arXiv admin note: substantial text overlap with arXiv:1801.0082
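    The "global distance" style of source selection that STL improves upon can be sketched as follows. This is an illustrative stand-in (gap between domain means plus gap between covariances), not the paper's Stratified distance.

```python
import numpy as np

def domain_distance(a, b):
    """Crude global distance between two domains: the gap between
    their feature means plus the gap between their covariances."""
    mean_gap = np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
    cov_gap = np.linalg.norm(np.cov(a, rowvar=False) - np.cov(b, rowvar=False))
    return mean_gap + cov_gap

def select_source(sources, target):
    """Pick the source domain whose global statistics lie closest
    to the target domain."""
    dists = [domain_distance(s, target) for s in sources]
    return int(np.argmin(dists))

# three candidate body-position domains with increasing mean shift
rng = np.random.default_rng(3)
target = rng.normal(loc=0.0, size=(300, 6))
sources = [rng.normal(loc=shift, size=(300, 6)) for shift in (5.0, 0.1, 2.0)]
best = select_source(sources, target)  # the least-shifted domain wins
```

    Because this distance summarizes each domain by two global statistics, it can pick a source that matches on average yet differs per activity class, which is exactly the local property the Stratified distance is designed to capture.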