3 research outputs found

    Stochastic Recognition of Physical Activity and Healthcare Using Tri-Axial Inertial Wearable Sensors

    The classification of human activity is becoming one of the most important areas of human health monitoring and physical fitness. With physical activity recognition applications, people suffering from various diseases can be efficiently monitored and medical treatment can be administered in a timely fashion. These applications could improve remote services for healthcare monitoring and delivery. However, the fixed health monitoring devices provided in hospitals limit the subjects’ movement. Our work therefore reports on wearable sensors that provide remote monitoring, periodically checking human health through different postures and activities so that timely and effective treatment can be given. In this paper, we propose a novel human activity recognition (HAR) system with multiple combined features to monitor human physical movements from continuous sequences via tri-axial inertial sensors. The proposed HAR system filters the 1D signals using a notch filter that examines the lower/upper cutoff frequencies to obtain optimal wearable sensor data. It then computes multiple combined features, i.e., statistical features, Mel Frequency Cepstral Coefficients, and Gaussian Mixture Model features. For the classification and recognition engine, a Decision Tree classifier optimized by the Binary Grey Wolf Optimization algorithm is proposed. The proposed system is tested on three challenging benchmark datasets to assess the feasibility of the model. The experimental results show that it attains an exceptional level of performance compared to conventional solutions, achieving accuracy rates of 88.25%, 93.95%, and 96.83% on MOTIONSENSE, MHEALTH, and the proposed self-annotated IM-AccGyro human-machine dataset, respectively.
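    A minimal sketch of the filtering and feature-extraction stages described in this abstract is given below, assuming a 50 Hz tri-axial accelerometer, an illustrative notch frequency, and simple statistical features; the MFCC/GMM features and the Binary Grey Wolf Optimization step are omitted, so this is not the authors' exact pipeline.

```python
# Sketch only: sampling rate, notch frequency, and feature set are assumptions,
# not the paper's configuration; MFCC/GMM features and BGWO tuning are omitted.
import numpy as np
from scipy.signal import iirnotch, filtfilt
from sklearn.tree import DecisionTreeClassifier

FS = 50.0  # assumed sampling rate of the tri-axial sensor (Hz)

def notch_filter(x, notch_hz=1.0, q=30.0, fs=FS):
    """Suppress a narrow frequency band in a 1-D inertial signal."""
    b, a = iirnotch(notch_hz, q, fs)
    return filtfilt(b, a, x)

def statistical_features(window):
    """Per-axis statistics for one window of shape (n_samples, 3)."""
    return np.concatenate([
        window.mean(axis=0), window.std(axis=0),
        window.min(axis=0), window.max(axis=0),
    ])

def extract_features(windows):
    feats = []
    for w in windows:
        filtered = np.column_stack([notch_filter(w[:, i]) for i in range(3)])
        feats.append(statistical_features(filtered))
    return np.asarray(feats)

# Toy example with synthetic windows standing in for accelerometer segments.
rng = np.random.default_rng(0)
windows = rng.normal(size=(20, 128, 3))   # 20 windows, 128 samples, 3 axes
labels = rng.integers(0, 3, size=20)      # 3 hypothetical activity classes
clf = DecisionTreeClassifier(max_depth=8).fit(extract_features(windows), labels)
print(clf.predict(extract_features(windows[:2])))
```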

    Sustainable Wearable System: Human Behavior Modeling for Life-Logging Activities Using K-Ary Tree Hashing Classifier

    Human behavior modeling (HBM) is a challenging classification task for researchers seeking to develop sustainable systems that precisely monitor and record human life-logs. In recent years, several models have been proposed; however, HBM remains an open problem that is only partly solved. This paper proposes a novel human behavior modeling framework based on wearable inertial sensors; the framework is composed of data acquisition, feature extraction, optimization and classification stages. First, the inertial data are filtered via three different filters, i.e., Chebyshev, Elliptic and Bessel filters. Next, six different features from the time and frequency domains are extracted to determine the optimal feature values. Then, the Probability Based Incremental Learning (PBIL) optimizer and the K-Ary tree hashing classifier are applied to model different human activities. The proposed model is evaluated on two benchmark datasets, namely DALIAC and PAMAP2, and one self-annotated dataset, namely IM-LifeLog, using a leave-one-out cross-validation scheme. The experimental results show that our model outperforms existing state-of-the-art methods with accuracy rates of 94.23%, 94.07% and 96.40% on the DALIAC, PAMAP2 and IM-LifeLog datasets, respectively. The proposed system can be used in healthcare, physical activity detection, surveillance systems and medical fitness fields.
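    The three-filter stage and time/frequency feature extraction described above can be sketched as follows, assuming low-pass variants with an illustrative 5 Hz cutoff at a 50 Hz sampling rate and a simple choice of six descriptors per axis; the PBIL optimizer and K-Ary tree hashing classifier are not reproduced here.

```python
# Sketch only: filter orders, cutoffs, and the six descriptors are illustrative
# assumptions, not the authors' exact configuration.
import numpy as np
from scipy.signal import cheby1, ellip, bessel, filtfilt

FS, CUTOFF, ORDER = 50.0, 5.0, 4  # assumed sampling rate, cutoff (Hz), order

def low_pass(kind, x):
    """Low-pass one inertial window (n_samples, 3) with the chosen filter family."""
    if kind == "chebyshev":
        b, a = cheby1(ORDER, rp=1, Wn=CUTOFF, btype="low", fs=FS)
    elif kind == "elliptic":
        b, a = ellip(ORDER, rp=1, rs=40, Wn=CUTOFF, btype="low", fs=FS)
    elif kind == "bessel":
        b, a = bessel(ORDER, Wn=CUTOFF, btype="low", fs=FS)
    else:
        raise ValueError(kind)
    return filtfilt(b, a, x, axis=0)

def time_frequency_features(window):
    """Six simple time/frequency descriptors per axis (illustrative choice)."""
    spectrum = np.abs(np.fft.rfft(window, axis=0))
    return np.concatenate([
        window.mean(axis=0), window.std(axis=0),
        np.ptp(window, axis=0),                 # peak-to-peak amplitude
        np.sqrt((window ** 2).mean(axis=0)),    # root mean square
        spectrum.argmax(axis=0).astype(float),  # dominant frequency bin
        (spectrum ** 2).sum(axis=0),            # spectral energy
    ])

# Toy example on a synthetic window standing in for one inertial segment.
rng = np.random.default_rng(0)
window = rng.normal(size=(256, 3))
print(time_frequency_features(low_pass("bessel", window)).shape)  # (18,)
```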

    Depth Sensors-Based Action Recognition Using a Modified K-Ary Entropy Classifier

    Surveillance systems are attracting considerable interest in the field of computer vision. Existing surveillance systems usually rely on optical or wearable sensors for indoor and outdoor activities. These sensors give reasonable performance in a simulated environment; however, when used under realistic settings, they can cause a large number of false alarms. Moreover, in a real-world scenario, positioning a depth camera too far from the subject can compromise image quality and result in the loss of depth information. Furthermore, depth information may be lost when converting a 3D image to a 2D image. Therefore, much surveillance system research is moving toward fused sensors, which have greatly improved action recognition performance. Taking the concept of fused sensors into account, this paper proposes a modified K-Ary entropy classifier algorithm that maps vectors of arbitrary size to a fixed-size subtree pattern for graph classification and solves complex feature selection and classification problems using RGB-D data. The main aim of this paper is to increase the spacing between the intra-substructure nodes of a tree through entropy accumulation; hence, the likelihood of classifying the minority class as belonging to the majority class is reduced. The proposed model works as follows: First, depth and RGB images from three benchmark datasets are taken as the input to the model. Then, full-body and point-based features are retrieved using 2.5D point cloud modeling and ridge extraction. Finally, a modified K-Ary entropy accumulation classifier optimized by the probability-based incremental learning (PBIL) algorithm is used. In both qualitative and quantitative experiments, the proposed system achieves 95.05%, 95.56%, and 95.08% performance on the SYSU-ACTION, PRECIS HAR, and Northwestern-UCLA (N-UCLA) datasets, respectively. The proposed system could be applied to various emerging real-world applications such as human target tracking, security-critical human event detection, perimeter security, internet security and public safety.
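    The 2.5D point-cloud step mentioned above can be illustrated with a standard pinhole-camera back-projection, assuming placeholder intrinsics rather than the datasets' actual depth-sensor calibration; ridge extraction and the modified K-Ary entropy classifier are beyond this sketch.

```python
# Sketch only: the intrinsics (fx, fy, cx, cy) are placeholder values typical of
# a VGA depth sensor, not those used by the SYSU-ACTION, PRECIS HAR, or N-UCLA data.
import numpy as np

def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth map (metres, shape HxW) into an Nx3 2.5D point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth reading

# Toy example with a synthetic depth frame standing in for an RGB-D capture.
rng = np.random.default_rng(0)
depth_frame = rng.uniform(0.5, 4.0, size=(480, 640))  # metres
cloud = depth_to_point_cloud(depth_frame)
print(cloud.shape)  # (307200, 3)
```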