
    Feature fusion H-ELM based learned features and hand-crafted features for human activity recognition

    Recognizing human activities is one of the main goals of human-centered intelligent systems. Smartphone sensors produce a continuous sequence of observations that are noisy, unstructured, and high-dimensional. Efficient features therefore have to be extracted in order to perform accurate classification. This paper proposes a combination of Hierarchical and Kernel Extreme Learning Machine (HK-ELM) methods to learn features and map them to specific classes in a short time. Moreover, a feature fusion approach is proposed to combine H-ELM-based learned features with hand-crafted ones. The proposed method was found to outperform state-of-the-art methods in terms of accuracy and training time, achieving an accuracy of 97.62% with a training time of 3.4 seconds on a standard Central Processing Unit (CPU).
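    The fusion idea above can be sketched in a few lines: an ELM hidden layer is just a fixed random projection followed by a nonlinearity, so "learned" features can be emulated with random weights and then concatenated with simple hand-crafted statistics. The window sizes, hidden-layer width, and statistics below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated windows of tri-axial accelerometer data: 100 windows, 128 samples, 3 axes.
windows = rng.normal(size=(100, 128, 3))
flat = windows.reshape(100, -1)

# ELM-style "learned" features: a fixed random projection plus a sigmoid,
# as in the hidden layer of an Extreme Learning Machine (weights stay untrained).
n_hidden = 64
W = rng.normal(size=(flat.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
learned = 1.0 / (1.0 + np.exp(-(flat @ W + b)))

# Hand-crafted statistical features per axis: mean, std, min, max.
hand = np.concatenate(
    [windows.mean(axis=1), windows.std(axis=1),
     windows.min(axis=1), windows.max(axis=1)], axis=1)

# Feature fusion by simple concatenation; a classifier would consume `fused`.
fused = np.concatenate([learned, hand], axis=1)
print(fused.shape)  # (100, 76)
```

    In a full pipeline, `fused` would feed the kernel ELM output layer; here the point is only that the two feature families live in one matrix.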

    Single Input Single Head CNN-GRU-LSTM Architecture for Recognition of Human Activities

    Due to its applications for the betterment of human life, human activity recognition has attracted more researchers in the recent past. Anticipating the intention behind motion and recognizing behaviour are intensive research applications within human activity recognition. Gyroscope, accelerometer, and magnetometer sensors are heavily used to obtain time-series data at every timestep. Selecting temporal features is required for the successful recognition of human motion primitives. Most past approaches used different data pre-processing and feature extraction techniques, which require sufficient domain knowledge. These approaches depend heavily on the quality of handcrafted features and are also time-consuming and poorly generalized. In this paper, a single-head deep neural network approach is proposed, combining a Convolutional Neural Network, a Gated Recurrent Unit, and Long Short-Term Memory. Raw data from wearable sensors are used with minimal pre-processing and without any separate feature extraction method. Accuracies of 93.48% and 98.51% are obtained on the UCI-HAR and WISDM datasets, respectively. This single-head deep neural network model shows higher classification performance than other deep neural network architectures.
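    The "minimal pre-processing" step for models like this is typically sliding-window segmentation of the raw sensor stream into fixed-length windows that the network consumes directly. A minimal sketch, assuming 128-sample windows with 50% overlap (the convention commonly used for UCI-HAR-style data; the exact sizes here are assumptions):

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Segment a (timesteps, channels) sensor stream into fixed-size windows."""
    starts = range(0, len(signal) - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

# Hypothetical stream: 1,000 timesteps of accelerometer + gyroscope (6 channels).
stream = np.zeros((1000, 6))
# 128-sample windows with 50% overlap (step of 64 samples).
X = sliding_windows(stream, window=128, step=64)
print(X.shape)  # (14, 128, 6)
```

    Each window in `X` would then be fed as one input sample to the CNN-GRU-LSTM stack, with no hand-crafted features computed in between.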

    Human activity recognition for static and dynamic activity using convolutional neural network

    Evaluating activity as a detail of human physical movement has become a leading subject for researchers. Activity recognition is applied in several areas, such as daily living, health, games, medical care, rehabilitation, and other smart home applications. The accelerometer is a popular sensor for recognizing activity, as is the gyroscope; both can be embedded in a smartphone. The signal generated by the accelerometer is time-series data that closely follows the pattern of human activity. Motion data were acquired from 30 volunteers. Dynamic activities (walking, walking upstairs, walking downstairs), denoted DA, and static activities (laying, standing, sitting), denoted SA, were collected from the volunteers. Distinguishing SA from DA is a challenging problem because of their different signal patterns: SA signals coincide between activities but are separated by a clear threshold, whereas DA signals are clearly distributed but have adjacent upper thresholds. The proposed network structure achieves significant performance, with a best overall accuracy of 97%. The result indicates the model's suitability for human activity recognition purposes.
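    The SA/DA threshold intuition can be illustrated without a network at all: static windows keep a nearly constant acceleration magnitude near gravity, while dynamic windows vary strongly, so the variance of the magnitude separates the two classes. The threshold value and the synthetic windows below are assumptions for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

def is_dynamic(window, threshold=0.05):
    """Label a window dynamic if the variance of its acceleration
    magnitude exceeds a (hypothetical) threshold."""
    magnitude = np.linalg.norm(window, axis=1)   # per-sample |a| in m/s^2
    return magnitude.var() > threshold

# Static window: gravity split across three axes plus tiny sensor noise.
static_win = np.full((128, 3), 9.81 / np.sqrt(3)) + rng.normal(scale=0.01, size=(128, 3))
# Dynamic window: the same baseline with large motion-induced fluctuations.
dynamic_win = 9.81 / np.sqrt(3) + rng.normal(scale=1.0, size=(128, 3))

print(is_dynamic(static_win), is_dynamic(dynamic_win))  # False True
```

    A CNN learns a far richer boundary than this single statistic, which is why it handles the adjacent DA thresholds that a fixed cutoff cannot.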

    Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition

    The extensive, ubiquitous availability of sensors in smart devices and the Internet of Things (IoT) has opened up the possibility of implementing sensor-based activity recognition. As opposed to traditional sensor time-series processing and hand-engineered feature extraction, and in light of deep learning's proven effectiveness across various domains, numerous deep methods have been explored to tackle the challenges in activity recognition, outperforming traditional signal processing and machine learning approaches. In this work, through extensive experimental studies on two human activity recognition datasets, we investigate the performance of common deep learning and machine learning approaches, different training mechanisms (such as contrastive learning), and various feature representations extracted from the sensor time-series data, and measure their effectiveness for the human activity recognition task. Comment: Seventh International Conference on Internet of Things and Applications (IoT 2023)

    Triaxial accelerometer-based human activity recognition using 1D convolution neural network

    Deep learning has been instrumental for human activity recognition (HAR). In spite of its strong potential, significant challenges remain: in real-world cases, a deep learning model requires a massive dataset for training, and existing research still needs improvement in classifying static and dynamic activities with greater accuracy. To address these challenges, we propose a model utilizing a 1-dimensional Convolutional Neural Network (CNN) to classify static and dynamic activities using a public dataset. Experiments with the proposed scheme show that it obtains better performance than state-of-the-art methods.
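    The core building block named above, a 1-D convolution over a multi-channel sensor window, can be sketched directly. This is a generic valid-mode convolution with ReLU, not the paper's specific architecture; the filter count and kernel size are illustrative assumptions.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid-mode 1-D convolution with ReLU.
    x: (timesteps, in_channels); kernels: (out_channels, ksize, in_channels)."""
    out_ch, ksize, _ = kernels.shape
    steps = x.shape[0] - ksize + 1
    out = np.empty((steps, out_ch))
    for t in range(steps):
        patch = x[t:t + ksize]  # (ksize, in_channels)
        # Each output channel is the full inner product of its kernel with the patch.
        out[t] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0.0)  # ReLU activation

rng = np.random.default_rng(2)
x = rng.normal(size=(128, 3))    # one tri-axial accelerometer window
k = rng.normal(size=(8, 5, 3))   # 8 filters of length 5
y = conv1d(x, k, bias=np.zeros(8))
print(y.shape)  # (124, 8)
```

    Stacking a few such layers with pooling, then a dense softmax head, yields the kind of 1-D CNN classifier the abstract describes.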

    Deep human activity recognition using wearable sensors

    This paper addresses the problem of classifying motion signals acquired via wearable sensors for the recognition of human activity. Automatic and accurate classification of motion signals is important in facilitating the development of an effective automated health monitoring system for the elderly. We gathered hip motion signals from two different waist-mounted sensors and, for each individual sensor, converted the motion signal into a spectral image sequence. We use these images as inputs to independently train two Convolutional Neural Networks (CNNs), one for each of the image sequences generated from the two sensors. The outputs of the trained CNNs are then fused to predict the final class of the human activity. We evaluate the performance of the proposed method using a cross-subject testing approach. Our method achieves a recognition accuracy (F1 score) of 0.87 on a publicly available real-world human activity dataset, which is superior to that reported by another state-of-the-art method on the same dataset.
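    Converting a motion signal into a spectral image is typically done with a short-time Fourier transform: the signal is cut into overlapping frames, each frame is windowed and transformed, and the magnitudes are stacked into a 2-D image. A minimal sketch under assumed frame sizes (the paper's exact spectral representation may differ):

```python
import numpy as np

def spectrogram(signal, window=64, step=32):
    """Magnitude spectrogram of a 1-D motion signal: each column is the
    FFT magnitude of one Hann-windowed, overlapping segment."""
    hann = np.hanning(window)
    starts = range(0, len(signal) - window + 1, step)
    frames = np.stack([signal[s:s + window] * hann for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, time_frames)

rng = np.random.default_rng(3)
# Hypothetical hip motion signal: a 2 Hz oscillation at 50 Hz sampling plus noise.
hip = np.sin(2 * np.pi * 2.0 * np.arange(512) / 50.0) + 0.1 * rng.normal(size=512)
img = spectrogram(hip)
print(img.shape)  # (33, 15)
```

    Each such image (one per sensor) becomes one CNN input; the two networks' outputs are then fused for the final prediction.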