
    Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained through heuristic processes. Current research suggests that deep convolutional neural networks are suited to automating feature extraction from raw sensor inputs. However, human activities are made up of complex sequences of motor movements, and capturing these temporal dynamics is fundamental to successful HAR. Building on the recent success of recurrent neural networks in time-series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, and outperforms some of the previously reported results by up to 9%. The framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise how key architectural hyperparameters influence performance to provide insights into their optimisation.
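    The abstract describes a pipeline of convolutional feature extraction over raw multimodal sensor windows followed by LSTM layers that model temporal dynamics. The minimal PyTorch sketch below illustrates that general shape; the number of sensor channels, filter counts, kernel widths, and LSTM sizes are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a Conv + LSTM activity recogniser in the spirit of the framework above.
import torch
import torch.nn as nn

class ConvLSTMHAR(nn.Module):
    def __init__(self, n_channels=113, n_classes=18, conv_filters=64, lstm_units=128):
        super().__init__()
        # 1-D convolutions over the time axis extract local features from raw sensor data
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_filters, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(conv_filters, conv_filters, kernel_size=5, padding=2), nn.ReLU(),
        )
        # LSTM layers model the temporal dynamics of the convolutional feature activations
        self.lstm = nn.LSTM(conv_filters, lstm_units, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(lstm_units, n_classes)

    def forward(self, x):                                   # x: (batch, time, channels)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)    # back to (batch, time, features)
        out, _ = self.lstm(x)
        return self.classifier(out[:, -1])                  # classify from the last time step

logits = ConvLSTMHAR()(torch.randn(8, 128, 113))            # 8 windows of 128 samples each
```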

    A Fast Deep Learning Technique for Wi-Fi-Based Human Activity Recognition

    Despite recent advances, fast and reliable Human Activity Recognition in confined spaces is still an open problem relevant to many real-world applications, especially in health and biomedical monitoring. With the ubiquitous presence of Wi-Fi networks, activity recognition and classification problems can be solved by leveraging characteristics of the Channel State Information of the 802.11 standard. Given the well-documented advantages of Deep Learning algorithms in solving complex pattern recognition problems, many solutions in the Human Activity Recognition domain take advantage of these models. To improve the speed and precision of activity classification on time-series data stemming from Channel State Information, we propose a fast deep neural model that combines concepts from state-of-the-art recurrent neural networks with convolutional operators that incorporate added randomization. Experiments on real data collected in an experimental environment show promising results.
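    One common way to combine recurrent units with randomized convolutional operators is to use fixed, untrained random convolution kernels as a cheap feature extractor over CSI time series and train only a small recurrent classifier on top. The sketch below shows that interpretation; it is an assumption about one reasonable realisation, and the architecture in the paper may differ.

```python
# Hedged sketch: frozen random convolutional features over CSI, followed by a trainable GRU.
import torch
import torch.nn as nn

class RandomConvGRU(nn.Module):
    def __init__(self, n_subcarriers=30, n_classes=6, n_kernels=64, hidden=64):
        super().__init__()
        self.rand_conv = nn.Conv1d(n_subcarriers, n_kernels, kernel_size=7, padding=3)
        for p in self.rand_conv.parameters():   # freeze: randomisation replaces training
            p.requires_grad = False
        self.gru = nn.GRU(n_kernels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, csi):                     # csi: (batch, time, subcarriers)
        feats = torch.relu(self.rand_conv(csi.transpose(1, 2))).transpose(1, 2)
        _, h = self.gru(feats)
        return self.head(h[-1])                 # last hidden state -> class scores

scores = RandomConvGRU()(torch.randn(4, 200, 30))   # 4 CSI windows, 200 samples, 30 subcarriers
```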

    Human Activity Recognition Using Deep Learning Networks with Enhanced Channel State Information

    © 2018 IEEE. Channel State Information (CSI) is widely used for device-free human activity recognition. Feature extraction remains one of the most challenging tasks in dynamic and complex environments. In this paper, we propose a human activity recognition scheme using Deep Learning Networks with enhanced Channel State Information (DLN-eCSI). We develop a CSI feature enhancement scheme (CFES), comprising two modules for background reduction and correlation feature enhancement, to preprocess the data fed into the DLN. After cleaning and compressing the signals with CFES, we apply a recurrent neural network (RNN) to automatically extract deeper features, followed by a softmax regression algorithm for activity classification. Extensive experiments are conducted to validate the effectiveness of the proposed scheme.
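    The two CFES preprocessing ideas named above (background reduction and correlation feature enhancement) could be realised, for example, as removal of the static per-subcarrier component followed by a correlation-based compression of each window before the RNN. The NumPy sketch below shows that interpretation; the concrete operations used in the paper are not specified in the abstract, so both steps are assumptions.

```python
# Illustrative CFES-style preprocessing: background reduction + correlation features.
import numpy as np

def background_reduction(csi):                 # csi: (time, subcarriers) amplitude matrix
    """Remove the static environment component by subtracting the per-subcarrier mean."""
    return csi - csi.mean(axis=0, keepdims=True)

def correlation_features(csi, window=50):
    """Compress each window into the upper triangle of its subcarrier correlation matrix."""
    feats = []
    for start in range(0, csi.shape[0] - window + 1, window):
        corr = np.corrcoef(csi[start:start + window].T)      # (subcarriers, subcarriers)
        feats.append(corr[np.triu_indices_from(corr, k=1)])  # keep cross-subcarrier structure
    return np.stack(feats)                                   # (n_windows, n_pairs) -> RNN input

enhanced = correlation_features(background_reduction(np.random.randn(500, 30)))
```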

    Human Action Recognition in Videos using Convolution Long Short-Term Memory Network with Spatio-Temporal Networks

    Two-stream convolutional networks play an essential role as powerful feature extractors for human action recognition in videos. Recent studies have shown the importance of two-stream Convolutional Neural Networks (CNN) for recognizing human actions, and Recurrent Neural Networks (RNN) combined with CNNs have achieved the best performance in video activity recognition. Encouraged by these results, we present a two-stream network with two CNNs and a Convolution Long Short-Term Memory (CLSTM). First, we extract spatio-temporal features using two CNNs initialised with pre-trained ImageNet models. Second, the outputs of the two CNNs are combined and fed as input to the CLSTM to obtain the overall classification score. We also explore the performance of various fusion functions for combining the two CNN streams and the effect of fusing feature maps at different layers, and identify the best fusion function together with the best fusion layer. To avoid overfitting, we adopt data augmentation techniques. Our proposed model demonstrates a substantial improvement over current two-stream methods on the benchmark datasets, reaching 70.4% on HMDB-51 and 95.4% on UCF-101 using pre-trained ImageNet models. Doi: 10.28991/esj-2021-01254
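    The overall shape of such a pipeline is two ImageNet-pretrained CNN backbones producing per-frame feature maps for the appearance and motion streams, a fusion of those maps, and a ConvLSTM aggregating them over time. The PyTorch sketch below illustrates this under stated assumptions: ResNet-18 backbones for both streams, element-wise sum as the fusion function, 224×224 inputs, and a single ConvLSTM cell. None of these choices are taken from the paper.

```python
# Minimal two-stream CNN + ConvLSTM sketch (backbones, fusion, and sizes are assumptions).
import torch
import torch.nn as nn
import torchvision.models as models

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size=3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class TwoStreamCLSTM(nn.Module):
    def __init__(self, n_classes=51):
        super().__init__()
        # Spatial stream on RGB frames, temporal stream on optical-flow frames
        # (both simplified to 3-channel ResNet-18 trunks for brevity).
        self.rgb = nn.Sequential(*list(models.resnet18(weights="IMAGENET1K_V1").children())[:-2])
        self.flow = nn.Sequential(*list(models.resnet18(weights="IMAGENET1K_V1").children())[:-2])
        self.clstm = ConvLSTMCell(512, 128)
        self.head = nn.Linear(128, n_classes)

    def forward(self, rgb_clip, flow_clip):     # both: (batch, time, 3, 224, 224)
        b, t = rgb_clip.shape[:2]
        h = c = torch.zeros(b, 128, 7, 7, device=rgb_clip.device)  # 7x7 maps for 224x224 input
        for s in range(t):
            fused = self.rgb(rgb_clip[:, s]) + self.flow(flow_clip[:, s])  # sum fusion
            h, c = self.clstm(fused, (h, c))
        return self.head(h.mean(dim=(2, 3)))    # global-average-pool the final hidden map
```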

    Human Activity Classification Using Radar Signal and RNN Networks

    Radar-based human activity recognition is still an open problem and is key to detecting anomalous behaviour in security and health applications. Deep learning networks such as convolutional neural networks (CNN) have been proposed for such tasks and have shown better performance than traditional supervised learning paradigms. However, it is hard to deploy CNN networks on embedded systems due to their limited computational power. With this concern in mind, this paper proposes the use of a recurrent neural network (RNN) for human activity classification. We also propose an innovative data augmentation method to train the neural network with a limited amount of data. Experiments show that our network can achieve a mean accuracy of 94.3% in human activity classification.
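    A lightweight realisation of this idea is to treat each radar spectrogram as a sequence of Doppler-bin column vectors and classify it with a compact GRU, which is cheaper to run on embedded hardware than a deep CNN. The sketch below shows that shape; the simple shift-and-noise augmentation is only a stand-in for the paper's data augmentation method, which is not detailed in the abstract, and all sizes are assumptions.

```python
# Hedged sketch: compact GRU classifier over radar spectrograms, with toy augmentation.
import torch
import torch.nn as nn

def augment(spectrogram, max_shift=10, noise_std=0.01):
    """Randomly circular-shift in time and add Gaussian noise to enlarge a small dataset."""
    shift = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return torch.roll(spectrogram, shifts=shift, dims=0) + noise_std * torch.randn_like(spectrogram)

class RadarGRU(nn.Module):
    def __init__(self, n_doppler_bins=128, n_classes=6, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_doppler_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, spec):                    # spec: (batch, time, doppler_bins)
        _, h = self.gru(spec)
        return self.head(h[-1])

x = augment(torch.randn(200, 128)).unsqueeze(0) # one augmented spectrogram as a batch of 1
logits = RadarGRU()(x)
```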