14 research outputs found

    Single Input Single Head CNN-GRU-LSTM Architecture for Recognition of Human Activities

    Due to its applications for the betterment of human life, human activity recognition has attracted increasing research attention in recent years. Anticipating the intention behind a motion and recognizing behaviour are intensively researched applications within human activity recognition. Gyroscope, accelerometer, and magnetometer sensors are widely used to obtain time-series data at every timestep. Selecting temporal features is required for the successful recognition of human motion primitives. Most past approaches relied on various data pre-processing and feature extraction techniques that demand substantial domain knowledge. These approaches depend heavily on the quality of handcrafted features, are time-consuming, and do not generalize well. In this paper, a single-head deep neural network combining a convolutional neural network (CNN), a gated recurrent unit (GRU), and long short-term memory (LSTM) is proposed. Raw data from wearable sensors are used with minimal pre-processing and without any separate feature extraction method. Accuracies of 93.48% and 98.51% are obtained on the UCI-HAR and WISDM datasets, respectively. This single-head model shows higher classification performance than other deep neural network architectures.
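
    As a rough, hedged illustration of the kind of single-head CNN-GRU-LSTM stack described above (not the authors' code), the following Keras sketch feeds a windowed raw-sensor input through convolutional, GRU, and LSTM layers in sequence; the window length, channel count, layer sizes, and optimizer settings are illustrative assumptions rather than values reported in the abstract.

    # Minimal single-head CNN-GRU-LSTM sketch for windowed raw inertial data.
    # All hyperparameters below are assumed, not taken from the paper.
    import numpy as np
    from tensorflow.keras import layers, models

    TIMESTEPS, CHANNELS, N_CLASSES = 128, 9, 6  # UCI-HAR-style window shape (assumed)

    def build_single_head_cnn_gru_lstm():
        inp = layers.Input(shape=(TIMESTEPS, CHANNELS))                # one input head: raw sensor window
        x = layers.Conv1D(64, kernel_size=3, activation="relu")(inp)   # local temporal features
        x = layers.Conv1D(64, kernel_size=3, activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
        x = layers.GRU(64, return_sequences=True)(x)                   # GRU keeps the sequence for the LSTM
        x = layers.LSTM(64)(x)                                         # LSTM summarizes the whole window
        x = layers.Dense(64, activation="relu")(x)
        out = layers.Dense(N_CLASSES, activation="softmax")(x)
        model = models.Model(inp, out)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    if __name__ == "__main__":
        model = build_single_head_cnn_gru_lstm()
        model.summary()
        # Shape check only; real training would use UCI-HAR or WISDM windows.
        X = np.random.randn(32, TIMESTEPS, CHANNELS).astype("float32")
        y = np.random.randint(0, N_CLASSES, size=(32,))
        model.fit(X, y, epochs=1, batch_size=16, verbose=0)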

    Human physical activity recognition algorithm based on smartphone data, convolutional neural network and long short-term memory

    A deep learning framework for activity recognition based on smartphone acceleration sensor data, a convolutional neural network (CNN), and long short-term memory (LSTM) is proposed in this paper. The framework aims to improve the accuracy of human activity recognition (HAR) by combining the strengths of CNN and LSTM: the CNN extracts features from the acceleration data, and the LSTM models the temporal dependencies of those features. The framework is evaluated on a publicly available dataset that includes six different actions: walking, walking upstairs, walking downstairs, sitting, standing, and laying. The physical activity recognition accuracy reaches 94%.
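
    The CNN-plus-LSTM pipeline above operates on fixed-length windows cut from the raw acceleration stream. As a hedged sketch of that common pre-processing step (the 128-sample window and 50% overlap are typical choices, not values confirmed by the paper), raw tri-axial samples can be segmented like this before being passed to the network:

    # Sliding-window segmentation of a raw acceleration stream (assumed setup).
    import numpy as np

    def sliding_windows(signal, labels, window=128, step=64):
        """Split a (T, 3) acceleration stream into overlapping windows.

        Each window is labelled with the majority activity inside it.
        Returns X of shape (n_windows, window, 3) and y of shape (n_windows,).
        """
        X, y = [], []
        for start in range(0, len(signal) - window + 1, step):
            X.append(signal[start:start + window])
            y.append(np.bincount(labels[start:start + window]).argmax())  # majority label
        return np.stack(X), np.array(y)

    if __name__ == "__main__":
        # Synthetic stream standing in for tri-axial accelerometer data.
        stream = np.random.randn(10_000, 3).astype("float32")
        acts = np.random.randint(0, 6, size=10_000)   # 6 activity classes
        X, y = sliding_windows(stream, acts)
        print(X.shape, y.shape)                       # e.g. (155, 128, 3) (155,)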

    Exploration of Deep Learning Models for Video Based Multiple Human Activity Recognition

    Human Activity Recognition (HAR) with deep learning is a challenging and highly demanding classification task; the complexity of activity detection and the number of subjects are the main issues. Data mining approaches have improved decision-making performance. This work presents such a model for human activity recognition with multiple subjects carrying out multiple activities. Using real-time datasets, the work develops a rapid algorithm that minimizes the problems of neural network classifiers, performs optimal feature extraction, builds a multi-modal classification technique, and predicts solutions with better accuracy than traditional methods. The paper discusses HAR prediction in four phases: (i) Depthwise Separable Convolution with BiLSTM (DSC-BLSTM); (ii) Enhanced Bidirectional Gated Recurrent Unit with Long Short-Term Memory (BGRU-LSTM); (iii) Enhanced TimeSformer Model with Multi-Layer Perceptron (ETMLP) classification; and (iv) Filtering Single Activity Recognition. Compared with previous efforts such as the DSC-BLSTM and BGRU-LSTM classifications, the ETMLP classification attained 98.90% accuracy and was more efficient. The results show that the new model outperforms the other models in terms of accuracy.
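
    As a hedged illustration of the first phase listed above, a DSC-BLSTM-style block can be sketched in Keras as a depthwise separable 1D convolution followed by a bidirectional LSTM; the input shape and layer sizes are assumptions, not values reported in the paper.

    # DSC-BLSTM-style block: depthwise separable conv + BiLSTM (assumed sizes).
    from tensorflow.keras import layers, models

    def build_dsc_blstm(timesteps=64, features=32, n_classes=6):
        inp = layers.Input(shape=(timesteps, features))
        x = layers.SeparableConv1D(64, kernel_size=3, padding="same",
                                   activation="relu")(inp)   # depthwise separable convolution
        x = layers.MaxPooling1D(2)(x)
        x = layers.Bidirectional(layers.LSTM(64))(x)          # BiLSTM over the feature sequence
        out = layers.Dense(n_classes, activation="softmax")(x)
        return models.Model(inp, out)

    model = build_dsc_blstm()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()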

    Outlier Detection in Wearable Sensor Data for Human Activity Recognition (HAR) Based on DRNNs

    Wearable sensors provide a user-friendly and non-intrusive mechanism to extract user-related data that paves the way to the development of personalized applications. Within those applications, human activity recognition (HAR) plays an important role in the characterization of the user context. Outlier detection methods focus on finding anomalous data samples that are likely to have been generated by a different mechanism. This paper combines outlier detection and HAR by introducing a novel algorithm that is able both to detect information from secondary activities inside the main activity and to extract data segments of a particular sub-activity from a different activity. Several machine learning algorithms have previously been used in the area of HAR based on the analysis of the time sequences generated by wearable sensors. Deep recurrent neural networks (DRNNs) have proven to be optimally adapted to the sequential characteristics of wearable sensor data in previous studies. A DRNN-based algorithm is proposed in this paper for outlier detection in HAR. The results are validated both for intra- and inter-subject cases and both for outlier detection and sub-activity recognition using two different datasets. A first dataset comprising 4 major activities (walking, running, climbing up, and climbing down) from 15 users is used to train and validate the proposal. Intra-subject outlier detection is able to detect all major outliers in the walking activity in this dataset, while inter-subject outlier detection only fails for one participant executing the activity in a peculiar way. Sub-activity detection has been validated by finding and extracting walking segments present in the other three activities in this dataset. A second dataset using four different users, a different setting, and different sensor devices is used to assess the generalization of the results. This work was supported by the "ANALYTICS USING SENSOR DATA FOR FLATCITY" Project (MINECO/ERDF, EU), funded in part by the Spanish Agencia Estatal de Investigación (AEI) under Grant TIN2016-77158-C4-1-R and in part by the European Regional Development Fund (ERDF).
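
    The abstract does not spell out the detection rule, so the sketch below shows one generic way (not necessarily the authors' exact algorithm) a DRNN can flag outliers in HAR: a stacked recurrent classifier is trained on the major activities, and windows recorded during a declared activity are flagged when the model disagrees with that label or is not confident. Shapes and the confidence threshold are assumptions.

    # Generic DRNN-based outlier flagging for HAR windows (illustrative only).
    import numpy as np
    from tensorflow.keras import layers, models

    TIMESTEPS, CHANNELS, N_ACTIVITIES = 100, 6, 4   # walking, running, climbing up/down (assumed shapes)

    def build_drnn():
        inp = layers.Input(shape=(TIMESTEPS, CHANNELS))
        x = layers.LSTM(64, return_sequences=True)(inp)   # stacked (deep) recurrent layers
        x = layers.LSTM(32)(x)
        out = layers.Dense(N_ACTIVITIES, activation="softmax")(x)
        return models.Model(inp, out)

    def flag_outliers(model, windows, declared_activity, conf_threshold=0.6):
        """Return indices of windows that look like outliers inside one activity."""
        probs = model.predict(windows, verbose=0)
        pred = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        # Outlier: the model disagrees with the declared activity, or is unsure.
        return np.where((pred != declared_activity) | (conf < conf_threshold))[0]

    if __name__ == "__main__":
        model = build_drnn()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # Shape check only; real use would train on labelled windows first.
        walking_windows = np.random.randn(20, TIMESTEPS, CHANNELS).astype("float32")
        print(flag_outliers(model, walking_windows, declared_activity=0))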