
    Sensor-activity relevance in human activity recognition with wearable motion sensors and mutual information criterion

    Selecting a suitable sensor configuration is an important aspect of recognizing human activities with wearable motion sensors. This problem encompasses selecting the number and type of the sensors, configuring them on the human body, and identifying the most informative sensor axes. In earlier work, researchers have used customized sensor configurations and compared their activity recognition rates with those of others. However, the results of these comparisons are dependent on the feature sets and the classifiers employed. In this study, we propose a novel approach that utilizes the time-domain distributions of the raw sensor measurements. We determine the most informative sensor types (among accelerometers, gyroscopes, and magnetometers), sensor locations (among torso, arms, and legs), and measurement axes (among three perpendicular coordinate axes at each sensor) based on the mutual information criterion. © 2013 Springer International Publishing
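    The abstract does not give implementation details, but the core idea — ranking individual sensor axes by their mutual information with the activity label — can be sketched with a simple histogram-based MI estimate. The data below are synthetic and the bin count is an arbitrary choice, not the paper's:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Estimate I(X;Y) in bits between a continuous sensor axis x
    and integer activity labels y via histogram discretization."""
    x_disc = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    joint = np.zeros((x_disc.max() + 1, y.max() + 1))
    for xi, yi in zip(x_disc, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of the axis
    py = joint.sum(axis=0, keepdims=True)   # marginal of the labels
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=3000)                   # three activities
informative = labels * 2.0 + rng.normal(0, 0.5, 3000)    # axis tracks activity
noise = rng.normal(0, 1.0, 3000)                         # axis carries no signal

mi_signal = mutual_information(informative, labels)
mi_noise = mutual_information(noise, labels)
print(round(mi_signal, 2), round(mi_noise, 2))
```

Axes (or sensors, by summing over their axes) can then be ranked by this score, independently of any downstream feature set or classifier — which is precisely the motivation stated in the abstract.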

    Human activity recognition using wearable sensors: a deep learning approach

    In the past decades, Human Activity Recognition (HAR) has attracted considerable research attention from a wide range of pattern recognition and human–computer interaction researchers, owing to prominent applications such as smart-home health care. The wealth of sensor information requires efficient classification and analysis methods, and deep learning represents a promising technique for large-scale data analytics. Among the various ways of using sensors for activity recognition in a smart environment, physical human activity recognition through wearable sensors provides valuable information about an individual’s degree of functional ability and lifestyle. Much existing research relies on real-time processing, which increases the power consumption of mobile devices; since mobile phones are resource-limited, implementing and evaluating different recognition systems on them is a challenging task. This work proposes a Deep Belief Network (DBN) model for human activity recognition. Various experiments are performed on a real-world wearable sensor dataset to verify the effectiveness of the deep learning algorithm. The results show that the proposed DBN performs competitively in comparison with other algorithms and achieves satisfactory activity recognition performance. Some open problems and ideas are also presented for future research.
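    The abstract does not describe the DBN's architecture, so as a rough, generic illustration of its building block, here is a single restricted Boltzmann machine trained with one-step contrastive divergence (CD-1) in NumPy; a DBN greedily stacks several such layers. All sizes and hyperparameters below are arbitrary, not the paper's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden, epochs=500, lr=0.1, seed=0):
    """Binary-unit RBM trained with CD-1 on data matrix V (rows = samples)."""
    rng = np.random.default_rng(seed)
    n_visible = V.shape[1]
    W = rng.normal(0, 0.1, (n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        ph = sigmoid(V @ W + b_h)                     # positive hidden probs
        h = (rng.random(ph.shape) < ph).astype(float) # sample hidden states
        pv = sigmoid(h @ W.T + b_v)                   # one-step reconstruction
        ph2 = sigmoid(pv @ W + b_h)                   # negative hidden probs
        W += lr * (V.T @ ph - pv.T @ ph2) / len(V)
        b_v += lr * (V - pv).mean(0)
        b_h += lr * (ph - ph2).mean(0)
    return W, b_v, b_h

# Toy binary "sensor" data with two repeating patterns
rng = np.random.default_rng(1)
patterns = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
V = patterns[rng.integers(0, 2, 200)]

W, b_v, b_h = train_rbm(V, n_hidden=2)
recon = sigmoid(sigmoid(V @ W + b_h) @ W.T + b_v)
err = float(np.abs(V - recon).mean())
print(round(err, 3))
```

In a full DBN, the hidden activations of one trained RBM become the input to the next, and the stack is finally fine-tuned with a supervised layer for activity classification.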

    Mechanical lifting energy consumption in work activities designed by means of the "revised NIOSH lifting equation"

    The aims of the present work were: to calculate lifting energy consumption (LEC) in work activities designed to have a growing lifting index (LI) by means of the revised NIOSH lifting equation; and to evaluate the relationship between LEC and forces at the L5-S1 joint. The kinematic and kinetic data of 20 workers were recorded during the execution of lifting tasks in three conditions. We computed kinetic, potential and mechanical energy and the corresponding LEC by considering three different centers of mass: 1) the load (CoML); 2) the multi-segment upper body model and load together (CoMUpp+L); 3) the whole body and load together (CoMTot). We also estimated compression and shear forces. Results show that LEC calculated for CoMUpp+L and CoMTot grew significantly with the LI and discriminated between all pairs of lifting conditions. The correlation analysis highlighted a relationship between LEC and the forces that determine injuries at the L5-S1 joint.
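    The revised NIOSH lifting equation that the study builds on computes a recommended weight limit (RWL) as a product of multipliers, and the lifting index (LI) as the ratio of the actual load to the RWL. A minimal sketch of the metric form follows; the frequency (FM) and coupling (CM) multipliers are passed in directly rather than looked up from the NIOSH tables, and the clamping of inputs to their valid NIOSH ranges is omitted for brevity:

```python
def lifting_index(load_kg, H, V, D, A, FM=1.0, CM=1.0):
    """Revised NIOSH lifting equation, metric form.
    H: horizontal hand distance (cm), V: vertical hand height (cm),
    D: vertical travel distance (cm), A: asymmetry angle (degrees).
    FM and CM come from the NIOSH frequency/coupling tables."""
    LC = 23.0                       # load constant, kg
    HM = 25.0 / H                   # horizontal multiplier
    VM = 1 - 0.003 * abs(V - 75)    # vertical multiplier
    DM = 0.82 + 4.5 / D             # distance multiplier
    AM = 1 - 0.0032 * A             # asymmetry multiplier
    rwl = LC * HM * VM * DM * AM * FM * CM
    return load_kg / rwl            # LI = load / recommended weight limit

# Near-ideal geometry (H=25 cm, V=75 cm, D=25 cm, no asymmetry): RWL ~ 23 kg
li = lifting_index(11.5, H=25, V=75, D=25, A=0)
print(round(li, 2))
```

Designing tasks with a "growing LI", as the abstract describes, amounts to varying these geometric parameters (or the load) so that the ratio rises across conditions.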

    A data fusion-based hybrid sensory system for older people’s daily activity recognition.

    The population aged 60 and over is growing rapidly. Ageing-related changes, such as physical or cognitive decline, can affect people’s quality of life, leading to injuries, mental health problems or a lack of physical activity. Sensor-based human activity recognition (HAR) has become one of the most promising assistive technologies for older people’s daily life. Literature in HAR suggests that each sensor modality has its strengths and limitations, and single sensor modalities may not cope with complex situations in practice. This research aims to design and implement a hybrid sensory HAR system to provide more comprehensive, practical and accurate surveillance for older people, to assist them in living independently. This research: 1) designs and develops a hybrid HAR system which provides a spatio-temporal surveillance system for older people by combining wrist-worn sensors and room-mounted ambient sensors (passive infrared); the wearable data are used to recognize the defined specific daily activities, and the ambient information is used to infer the occupant’s room-level daily routine; 2) proposes a unique and effective data fusion method to hybridize the two-source sensory data, in which the room-level location information captured by the ambient sensors is also used to trigger sub-classification models pretrained with room-assigned wearable data; 3) implements augmented features extracted from the attitude angles of the wearable device and explores the contribution of the new features to HAR; 4) proposes a feature selection (FS) method based on kernel canonical correlation analysis (KCCA), named mRMJR-KCCA, which maximizes the relevance between a feature candidate and the target class labels while simultaneously minimizing the joint redundancy between the already selected features and the candidate; 5) demonstrates all the proposed methods with ground-truth data collected from recruited participants in home settings.
The proposed system has three function modes: 1) the pure wearable sensing mode (the whole classification model), which can identify all the defined specific daily activities on its own and continues to function when ambient sensing fails; 2) the pure ambient sensing mode, which can deliver the occupant’s room-level daily routine without wearable sensing; and 3) the data fusion mode (room-based sub-classification mode), which provides more comprehensive and accurate HAR surveillance when both wearable and ambient sensing function properly. The research also applies mutual information (MI)-based FS methods for feature selection, and Support Vector Machine (SVM) and Random Forest (RF) classifiers for classification. The experimental results demonstrate that the proposed hybrid sensory system improves the recognition accuracy to 98.96% after applying data fusion with RF classification and mRMJR-KCCA feature selection. Furthermore, the improved results are achieved with a much smaller number of features than in the scenario of recognizing all the defined activities using wearable data alone. The data fusion method and the new feature set proposed in this thesis have few directly comparable counterparts in the existing literature.
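    mRMJR-KCCA is the thesis's own method and its details are not reproduced here. As a rough illustration of the max-relevance / min-redundancy idea it extends, here is a plain mutual-information-based greedy selector over discrete features; the data are synthetic and the scoring rule is the classical mRMR difference form, not the thesis's kernel formulation:

```python
import numpy as np

def discrete_mi(a, b):
    """I(A;B) in bits for integer-coded arrays a and b."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1)
    joint /= joint.sum()
    pa = joint.sum(1, keepdims=True)
    pb = joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

def mrmr_select(features, labels, k):
    """Greedily pick k columns maximizing
    I(f; labels) - mean over selected s of I(f; s)."""
    selected, remaining = [], list(range(features.shape[1]))
    while len(selected) < k:
        def score(j):
            rel = discrete_mi(features[:, j], labels)
            red = (np.mean([discrete_mi(features[:, j], features[:, s])
                            for s in selected]) if selected else 0.0)
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Feature 1 duplicates feature 0; feature 2 carries the complementary bit
rng = np.random.default_rng(0)
b1 = rng.integers(0, 2, 2000)
b2 = rng.integers(0, 2, 2000)
labels = 2 * b1 + b2            # four classes from two independent bits
F = np.column_stack([b1, b1, b2])

sel = mrmr_select(F, labels, 2)
print(sel)
```

The redundancy penalty is what keeps the duplicated feature out of the selection: it is maximally relevant, but carries nothing the already-selected feature lacks.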

    NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding

    Research on depth-based human activity analysis has achieved outstanding performance and demonstrated the effectiveness of 3D representation for action recognition. The existing depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of large-scale training samples, realistic numbers of distinct class categories, diversity in camera views, varied environmental conditions, and variety of human subjects. In this work, we introduce a large-scale dataset for RGB+D human action recognition, which is collected from 106 distinct subjects and contains more than 114 thousand video samples and 8 million frames. This dataset contains 120 different action classes including daily, mutual, and health-related activities. We evaluate the performance of a series of existing 3D activity analysis methods on this dataset, and show the advantage of applying deep learning methods for 3D-based human action recognition. Furthermore, we investigate a novel one-shot 3D activity recognition problem on our dataset, and a simple yet effective Action-Part Semantic Relevance-aware (APSR) framework is proposed for this task, which yields promising results for recognition of the novel action classes. We believe the introduction of this large-scale dataset will enable the community to apply, adapt, and develop various data-hungry learning techniques for depth-based and RGB+D-based human activity understanding. [The dataset is available at: http://rose1.ntu.edu.sg/Datasets/actionRecognition.asp] (IEEE Transactions on Pattern Analysis and Machine Intelligence, TPAMI)

    Automatic identification of physical activity intensity and modality from the fusion of accelerometry and heart rate data

    Background: Physical activity (PA) is essential to prevent and to treat a variety of chronic diseases. The automated detection and quantification of PA over time empowers lifestyle interventions, facilitating reliable exercise tracking and data-driven counseling. Methods: We propose and compare various combinations of machine learning (ML) schemes for the automatic classification of PA from multi-modal data, simultaneously captured by a biaxial accelerometer and a heart rate (HR) monitor. Intensity levels (low/moderate/vigorous) were recognized, as well as, for vigorous exercise, its modality (sustained aerobic/resistance/mixed). In total, 178.63 h of data about PA intensity (65.55% low/18.96% moderate/15.49% vigorous) and 17.00 h about modality were collected in two experiments: one in free-living conditions, the other in a fitness center under controlled protocols. The structure used for automatic classification comprised: a) definition of 42 time-domain signal features, b) dimensionality reduction, c) data clustering, and d) temporal filtering to exploit time redundancy by means of a Hidden Markov Model (HMM). Four dimensionality reduction techniques and four clustering algorithms were studied. In order to cope with class imbalance in the dataset, a custom performance metric was defined to aggregate recognition accuracy, precision and recall. Results: The best scheme, which comprised a projection through Linear Discriminant Analysis (LDA) and k-means clustering, was evaluated in leave-one-subject-out cross-validation, notably outperforming the standard industry procedures for PA intensity classification: score 84.65%, versus up to 63.60%. Errors tended to be brief and to appear around transients. Conclusions: The application of ML techniques for pattern identification and temporal filtering made it possible to merge accelerometry and HR data in a robust manner, and achieved markedly better recognition performance than the standard methods for PA intensity estimation.
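    Leaving the HMM smoothing stage aside, the winning pipeline (Fisher LDA projection followed by k-means) can be sketched in plain NumPy. The data below are synthetic stand-ins for the 42 time-domain features, and the deterministic farthest-point seeding of k-means is a simplification of this sketch, not the paper's procedure:

```python
import numpy as np

def lda_projection(X, y, dims):
    """Fisher LDA: project X onto the leading eigenvectors of Sw^-1 Sb."""
    mean = X.mean(0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(0)
        Sw += (Xc - mc).T @ (Xc - mc)                   # within-class scatter
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class scatter
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1]
    return X @ vecs[:, order[:dims]].real

def kmeans(X, k, iters=50):
    """Lloyd's algorithm with deterministic farthest-point seeding."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)) ** 2).sum(-1).min(1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        assign = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[assign == j].mean(0) for j in range(k)])
    return assign

# Synthetic stand-in: 3 intensity classes, 6 features per window
rng = np.random.default_rng(1)
y = np.repeat([0, 1, 2], 100)
M = rng.normal(0, 3, (3, 6))                 # class-dependent feature means
X = M[y] + rng.normal(0, 0.5, (300, 6))

Z = lda_projection(X, y, dims=2)             # 3 classes -> at most 2 axes
clusters = kmeans(Z, 3)
```

The clusters recovered in the projected space can then be mapped to intensity levels by majority vote, with the HMM applied on top to exploit the temporal redundancy the abstract mentions.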

    Detecting Falls with Wearable Sensors Using Machine Learning Techniques

    Falls are a serious public health problem and possibly life threatening for people in fall risk groups. We develop an automated fall detection system with wearable motion sensor units fitted to the subjects' body at six different positions. Each unit comprises three tri-axial devices (accelerometer, gyroscope, and magnetometer/compass). Fourteen volunteers perform a standardized set of movements including 20 voluntary falls and 16 activities of daily living (ADLs), resulting in a large dataset with 2520 trials. To reduce the computational complexity of training and testing the classifiers, we focus on the raw data for each sensor in a 4 s time window around the point of peak total acceleration of the waist sensor, and then perform feature extraction and reduction. Most earlier studies on fall detection employ rule-based approaches that rely on simple thresholding of the sensor outputs. We successfully distinguish falls from ADLs using six machine learning techniques (classifiers): the k-nearest neighbor (k-NN) classifier, least squares method (LSM), support vector machines (SVM), Bayesian decision making (BDM), dynamic time warping (DTW), and artificial neural networks (ANNs). We compare the performance and the computational complexity of the classifiers and achieve the best results with the k-NN classifier and LSM, with sensitivity, specificity, and accuracy all above 99%. These classifiers also have acceptable computational requirements for training and testing. Our approach would be applicable in real-world scenarios where data records of indeterminate length, containing multiple activities in sequence, are recorded.
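    The peak-centred 4 s windowing described above, followed by simple features and a k-NN vote, can be sketched as follows. The synthetic signals, the two-feature representation, and the sampling rate are all illustrative assumptions, not the paper's actual protocol:

```python
import numpy as np

FS = 25  # assumed sampling rate, Hz

def peak_window(acc_xyz, fs=FS, width_s=4.0):
    """Slice a window of width_s seconds centred on the sample with
    peak total (vector-magnitude) acceleration."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    half = int(width_s * fs / 2)
    c = int(np.argmax(mag))
    lo = max(0, min(c - half, len(mag) - 2 * half))  # keep window in bounds
    return acc_xyz[lo:lo + 2 * half]

def extract_features(acc_xyz):
    """Two toy features of the windowed magnitude: peak and spread."""
    mag = np.linalg.norm(peak_window(acc_xyz), axis=1)
    return np.array([mag.max(), mag.std()])

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote on Euclidean distance."""
    idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    return np.bincount(train_y[idx]).argmax()

def make_trial(fall, rng, n=10 * FS):
    """Synthetic 10 s tri-axial trial; falls get a brief impact spike."""
    acc = rng.normal(0, 0.3, (n, 3))
    acc[:, 2] += 9.8                        # gravity on one axis
    if fall:
        t = rng.integers(FS, n - FS)
        acc[t:t + 3, 2] += 25.0             # impact transient
    return acc

rng = np.random.default_rng(0)
trials = [make_trial(fall, rng) for fall in [True] * 10 + [False] * 10]
train_X = np.array([extract_features(t) for t in trials])
train_y = np.array([1] * 10 + [0] * 10)     # 1 = fall, 0 = ADL

pred = knn_predict(train_X, train_y, extract_features(make_trial(True, rng)))
```

Centring every trial on its acceleration peak is what lets a fixed-length window (and hence a fixed-length feature vector) be applied to recordings of indeterminate length, as the abstract's closing sentence notes.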

    Unsupervised Human Activity Recognition Using the Clustering Approach: A Review

    Currently, many applications have emerged from the implementation of software development and hardware use, known as the Internet of Things. One of the most important application areas of this type of technology is health care. Various applications arise daily in order to improve the quality of life and to improve the treatment of patients at home who suffer from different pathologies. This has given rise to a line of work of great interest, focused on the study and analysis of activities of daily living and on the use of different data analysis techniques to identify and help manage this type of patient. This article presents the results of a systematic review of the literature on the use of clustering, one of the most widely used techniques in the analysis of unsupervised data applied to activities of daily living, along with a description of key variables such as year of publication, type of article, most used algorithms, types of datasets used, and metrics implemented. These data will allow the reader to locate recent results of the application of this technique to a particular area of knowledge.