
    Single Input Single Head CNN-GRU-LSTM Architecture for Recognition of Human Activities

    Get PDF
    Due to its applications for the betterment of human life, human activity recognition has attracted growing research interest in recent years. Anticipating the intention behind a motion and recognizing behaviour are intensively studied applications within human activity recognition. Gyroscope, accelerometer, and magnetometer sensors are widely used to obtain time-series data at every timestep. Successful recognition of human motion primitives requires the selection of suitable temporal features. Most past approaches relied on various data pre-processing and feature extraction techniques, which demand substantial domain knowledge. These approaches depend heavily on the quality of handcrafted features, are time-consuming, and generalize poorly. In this paper, a single-head deep neural network combining a convolutional neural network, a gated recurrent unit, and long short-term memory is proposed. Raw data from wearable sensors are used with minimal pre-processing and without any feature extraction method. Accuracies of 93.48% and 98.51% are obtained on the UCI-HAR and WISDM datasets, respectively. This single-head model shows higher classification performance than other deep neural network architectures.
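    For orientation, a minimal PyTorch sketch of the kind of single-input, single-head CNN-GRU-LSTM stack the abstract describes is given below. The layer widths, kernel size, and class count are illustrative assumptions, not the paper's reported configuration; only the input shape follows UCI-HAR's standard 128-timestep, 9-channel windows.

    import torch
    import torch.nn as nn

    class CNNGRULSTM(nn.Module):
        # Single input head: one windowed inertial signal flows through
        # CNN -> GRU -> LSTM -> linear classifier.
        def __init__(self, n_channels=9, n_classes=6):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),  # local motion patterns
                nn.ReLU(),
                nn.MaxPool1d(2),                                      # halve the time axis
            )
            self.gru = nn.GRU(64, 64, batch_first=True)    # short-range temporal context
            self.lstm = nn.LSTM(64, 64, batch_first=True)  # longer-range dependencies
            self.head = nn.Linear(64, n_classes)

        def forward(self, x):                  # x: (batch, time, channels)
            z = self.conv(x.transpose(1, 2))   # Conv1d expects (batch, channels, time)
            z, _ = self.gru(z.transpose(1, 2))
            z, _ = self.lstm(z)
            return self.head(z[:, -1])         # classify from the final timestep

    logits = CNNGRULSTM()(torch.randn(8, 128, 9))  # UCI-HAR windows: 128 steps x 9 signals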

    WSense: A Robust Feature Learning Module for Lightweight Human Activity Recognition

    Full text link
    In recent times, various modules, such as squeeze-and-excitation, have been proposed to improve the quality of features learned from wearable sensor signals. However, these modules often greatly increase the number of parameters, which is unsuitable for building lightweight human activity recognition models that can be easily deployed on end devices. In this research, we propose a feature learning module, termed WSense, which uses two 1D CNN layers and global max pooling to extract features of similar quality from wearable sensor data regardless of the size of the sliding window. Experiments were carried out using CNN and ConvLSTM feature learning pipelines on a dataset obtained with a single accelerometer (WISDM) and another obtained by fusing accelerometers, gyroscopes, and magnetometers (PAMAP2), under various sliding-window sizes. A total of 960 experiments were conducted to validate the WSense module against baselines and existing methods on the two datasets. The results show that the WSense module helped the pipelines learn features of similar quality and outperform the baselines and existing models, with a minimal and uniform model size across all sliding-window segmentations. The code is available at https://github.com/AOige/WSense
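    The exact WSense configuration is in the linked repository; the sketch below is a simplification, with filter counts and kernel sizes chosen arbitrarily, that only illustrates the key property the abstract claims: global max pooling collapses the time axis, so the feature vector, and hence the downstream model size, is identical for any sliding-window length.

    import torch
    import torch.nn as nn

    class WSenseLike(nn.Module):
        # Two 1D convolutions followed by global max pooling. The pooled
        # output has shape (batch, n_filters) whatever the window length,
        # so the classifier that follows never changes size.
        def __init__(self, n_channels=3, n_filters=64):
            super().__init__()
            self.conv1 = nn.Conv1d(n_channels, n_filters, kernel_size=3, padding=1)
            self.conv2 = nn.Conv1d(n_filters, n_filters, kernel_size=3, padding=1)

        def forward(self, x):                        # x: (batch, time, channels)
            z = torch.relu(self.conv1(x.transpose(1, 2)))
            z = torch.relu(self.conv2(z))
            return torch.amax(z, dim=2)              # global max pool over time

    feats_small = WSenseLike()(torch.randn(8, 64, 3))   # 64-step windows
    feats_large = WSenseLike()(torch.randn(8, 256, 3))  # 256-step windows
    assert feats_small.shape == feats_large.shape       # both (8, 64)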

    A novel feature relearning method for automatic sleep staging based on single-channel EEG

    Get PDF
    Correctly identifying sleep stages is essential for assessing sleep quality and treating sleep disorders. However, current sleep staging methods have the following problems: (1) manual or semi-automatic feature extraction requires professional knowledge and is time-consuming and laborious; (2) because the features of different stages are similar, feature learning must be strengthened; (3) acquiring multiple types of data places high demands on equipment. This paper therefore proposes a novel feature relearning method for automatic sleep staging based on single-channel electroencephalography (EEG) to solve these three problems. Specifically, we design a bottom-up and top-down network and use an attention mechanism to learn the EEG information fully. A cascading step with an imbalance strategy is used to further improve overall classification performance and realize automatic sleep classification. Experimental results on the public Sleep-EDF dataset show that the proposed method outperforms state-of-the-art methods. The code and supplementary materials are available on GitHub: https://github.com/raintyj/A-novel-feature-relearning-method
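    The paper's specific attention design is in the linked repository; as a generic reminder of the mechanism the abstract invokes, a minimal scaled dot-product self-attention over a sequence of per-epoch EEG feature vectors can be written as follows (all shapes are illustrative assumptions):

    import torch

    def self_attention(x):
        # x: (batch, n_epochs, dim) -- one feature vector per 30 s EEG epoch.
        # Each epoch re-weights its features by similarity to the others,
        # emphasizing the discriminative parts of otherwise similar stages.
        d = x.size(-1)
        scores = x @ x.transpose(1, 2) / d ** 0.5   # (batch, n_epochs, n_epochs)
        weights = torch.softmax(scores, dim=-1)
        return weights @ x                          # (batch, n_epochs, dim)

    attended = self_attention(torch.randn(4, 20, 128))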

    Unsupervised Domain Adaptation for Estimating Occupancy and Recognizing Activities in Smart Buildings

    Get PDF
    Activity Recognition (AR) and Occupancy Estimation (OE) are topics of current interest. AR and OE enable many smart building applications, such as energy management, and can help provide good services for residents. Prior research on AR and OE has typically focused on supervised machine learning methods: for a specific smart building, a model is trained on data collected from the current environment (domain). Such a model will not generalize well when evaluated in a new, related domain because of differences in data distribution, yet creating a model for each smart building environment is infeasible due to the lack of labeled data; indeed, data collection is a tedious and time-consuming task. Unsupervised Domain Adaptation (UDA) is a good solution for this setting: it addresses the lack of labeled data in the target domain by allowing knowledge transfer across domains. In this research, we provide several UDA methods that mitigate the data distribution shift between source and target domains using unlabeled target data for OE and AR, with and without direct access to labeled source data. Firstly, we consider techniques that perform domain adaptation using only a trained source model instead of a huge amount of labeled source data; we adapted and tested several UDA methods, such as Source HypOthesis Transfer (SHOT), Higher-Order Moment Matching (HoMM), and Source data Free Domain Adaptation (SFDA), on smart building data. Secondly, we adapt and develop several UDA methods that use labeled source data to estimate the number of occupants and recognize activities; the developed methods with direct access to the source data are Virtual Adversarial Domain Adaptation (VADA), Sliced Wasserstein Discrepancy (SWD), and Adaptive Feature Norm (AFN). Finally, we present a comparative analysis of several newly adapted deep UDA methods applied to the tasks of AR and OE, with and without access to labeled source data.
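    For a flavour of how such alignment methods work, the sketch below matches the first few central moments of source and target feature batches; this is a simplified stand-in for HoMM's tensor-based higher-order matching, with the moment order and feature shapes chosen purely for illustration.

    import torch

    def moment_matching_loss(f_src, f_tgt, order=3):
        # f_src, f_tgt: (batch, dim) feature batches from the two domains.
        # Penalize differences in the mean, then in higher central moments,
        # pushing the two feature distributions toward each other.
        loss = ((f_src.mean(0) - f_tgt.mean(0)) ** 2).sum()
        for k in range(2, order + 1):
            m_src = ((f_src - f_src.mean(0)) ** k).mean(0)
            m_tgt = ((f_tgt - f_tgt.mean(0)) ** k).mean(0)
            loss = loss + ((m_src - m_tgt) ** 2).sum()
        return loss

    # Added to the supervised task loss during training, e.g.:
    # total = cls_loss(src_logits, src_labels) + lam * moment_matching_loss(f_src, f_tgt)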