
    WSense: A Robust Feature Learning Module for Lightweight Human Activity Recognition

    In recent times, various modules, such as squeeze-and-excitation, have been proposed to improve the quality of features learned from wearable sensor signals. However, these modules often greatly increase the number of parameters, making them unsuitable for building lightweight human activity recognition models that can be easily deployed on end devices. In this research, we propose a feature learning module, termed WSense, which uses two 1D CNN layers and a global max pooling layer to extract high-quality features from wearable sensor data while keeping the model size independent of the sliding-window length. Experiments were carried out using CNN and ConvLSTM feature learning pipelines on a dataset obtained with a single accelerometer (WISDM) and another obtained by fusing accelerometers, gyroscopes, and magnetometers (PAMAP2) under various sliding-window sizes. A total of nine hundred and sixty (960) experiments were conducted to validate the WSense module against baselines and existing methods on the two datasets. The results showed that the WSense module helped the pipelines learn features of comparable quality and outperform the baselines and existing models with a minimal and uniform model size across all sliding-window segmentations. The code is available at https://github.com/AOige/WSense
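    Below is a minimal sketch of such a feature-learning block; the Keras framework, filter counts, and kernel size are our assumptions for illustration, not the authors' published configuration. Because global max pooling collapses the time axis, the parameter count stays the same regardless of the sliding-window length:

        # Sketch of a WSense-style block: two 1D convolutions followed by
        # global max pooling (illustrative sizes, not the published ones).
        import tensorflow as tf
        from tensorflow.keras import layers

        def wsense_block(inputs, filters=64, kernel_size=3):
            # Two 1D convolutions extract local temporal patterns from a
            # (timesteps, channels) sensor window.
            x = layers.Conv1D(filters, kernel_size, activation="relu", padding="same")(inputs)
            x = layers.Conv1D(filters, kernel_size, activation="relu", padding="same")(x)
            # Global max pooling collapses the time axis, so the feature
            # vector size is independent of the window length.
            return layers.GlobalMaxPooling1D()(x)

        inputs = tf.keras.Input(shape=(None, 3))   # any window length, 3 accelerometer axes
        features = wsense_block(inputs)
        outputs = layers.Dense(6, activation="softmax")(features)  # e.g. 6 activity classes
        model = tf.keras.Model(inputs, outputs)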

    A quantitative comparison of Overlapping and Non-overlapping sliding windows effects for human activity recognition using inertial sensors

    The sliding window technique is widely used to segment inertial sensor signals, i.e., accelerometers and gyroscopes, for activity recognition. In this technique, the sensor signals are partitioned into fixed-size time windows of two types: (1) non-overlapping windows, in which time windows do not intersect, and (2) overlapping windows, in which they do. It is widely believed that overlapping sliding windows improve the performance of Human Activity Recognition systems. In this thesis, we analyze the impact of overlapping sliding windows on the performance of Human Activity Recognition systems under different evaluation techniques, namely subject-dependent cross validation and subject-independent cross validation. Our results show that the performance improvements attributed to overlapping windowing in the literature appear to stem from the underlying limitations of subject-dependent cross validation. Furthermore, we observe no performance gain from overlapping windows when used with subject-independent cross validation. We conclude that under subject-independent cross validation, non-overlapping sliding windows reach the same performance as overlapping sliding windows. This result has significant implications for the resources required to train human activity recognition systems.
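    As a concrete illustration, the two schemes differ only in the step between consecutive window starts. The sketch below (function name and the 50% overlap value are illustrative assumptions) shows how overlapping windows multiply the number of segments drawn from the same signal:

        # Fixed-size window segmentation with configurable overlap.
        import numpy as np

        def sliding_windows(signal, window_size, overlap=0.0):
            """Split a (timesteps, channels) signal into fixed-size windows.
            overlap=0.0 -> non-overlapping; overlap=0.5 -> 50% overlap."""
            step = max(1, int(window_size * (1.0 - overlap)))
            starts = range(0, len(signal) - window_size + 1, step)
            return np.stack([signal[s:s + window_size] for s in starts])

        acc = np.random.randn(1000, 3)                  # toy 3-axis accelerometer stream
        non_overlapping = sliding_windows(acc, 128)      # 7 windows
        overlapping = sliding_windows(acc, 128, 0.5)     # 14 windows sharing samples

    Under subject-dependent cross validation, the samples shared between overlapping windows can leak between training and test folds, which is one plausible source of the gains reported in the literature.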

    Evaluation of Wearable Sensor Tag Data Segmentation Approaches for Real Time Activity Classification in Elderly

    The development of human activity monitoring has enabled multiple applications, among them the recognition of high fall-risk activities in older people to mitigate fall occurrences. In this study, we apply a graphical-model-based classification technique (conditional random field) to evaluate various sliding-window-based techniques for the real-time prediction of activities in older subjects wearing a passive (batteryless) sensor-enabled RFID tag. The system achieved a maximum overall real-time activity prediction accuracy of 95% using a time-weighted windowing technique to aggregate contextual information with the input sensor data.
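    A passive RFID tag yields sparse, irregular readings, so one plausible form of time-weighted aggregation is to weight each reading in the window by its recency. The sketch below is our illustrative assumption (including the exponential decay and half-life), not the paper's exact scheme:

        # Time-weighted aggregation of irregular sensor readings in a window.
        import numpy as np

        def time_weighted_features(timestamps, values, now, half_life=2.0):
            """Weight each reading by recency so newer readings in the
            window contribute more to the aggregated feature vector."""
            age = now - np.asarray(timestamps)           # seconds since each reading
            weights = np.power(0.5, age / half_life)     # exponential decay with age
            weights /= weights.sum()
            return weights @ np.asarray(values)          # weighted mean per channel

        ts = [0.0, 1.2, 2.5, 3.9]                        # reading times within a 4 s window
        vals = np.random.randn(4, 3)                     # e.g. 3-axis acceleration readings
        features = time_weighted_features(ts, vals, now=4.0)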

    Action Recognition in Manufacturing Assembly using Multimodal Sensor Fusion

    Production innovations are occurring faster than ever, so manufacturing workers need to frequently learn new methods and skills. In fast-changing, largely uncertain production systems, manufacturers able to comprehend workers' behavior and assess their operational performance in near real time will outperform their peers. Action recognition can serve this purpose. Although human action recognition has been an active field of study in machine learning, limited work has been done on recognizing worker actions in manufacturing tasks that involve complex, intricate operations. Recognizing those actions from data captured by one sensor, or a single type of sensor, lacks reliability. This limitation can be overcome by sensor fusion at the data, feature, and decision levels. This paper presents a study that developed a multimodal sensor system and used sensor fusion methods to enhance the reliability of action recognition. One step in assembling a Bukito 3D printer, which comprises a sequence of seven actions, was used to illustrate and assess the proposed method. Two wearable Myo armband sensors captured both Inertial Measurement Unit (IMU) and electromyography (EMG) signals from assembly workers, while Microsoft Kinect, a vision-based sensor, simultaneously tracked their predefined skeleton joints. The collected IMU, EMG, and skeleton data were used to train five individual Convolutional Neural Network (CNN) models. Various fusion methods were then implemented to integrate the prediction results of the independent models into a final prediction, and the reasons why sensor fusion achieved better performance were identified.
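    Of the fusion levels mentioned, decision-level fusion is the simplest to sketch: the class-probability outputs of the independently trained per-modality models are combined into one prediction. The weighted-average scheme and all values below are illustrative assumptions, not the paper's reported configuration:

        # Decision-level fusion of per-modality classifier outputs.
        import numpy as np

        def fuse_decisions(prob_list, weights=None):
            """prob_list: softmax outputs, one (n_classes,) vector per model."""
            probs = np.stack(prob_list)                  # (n_models, n_classes)
            if weights is None:
                weights = np.full(len(prob_list), 1.0 / len(prob_list))
            fused = weights @ probs                      # weighted average of probabilities
            return int(np.argmax(fused))                 # index of the predicted action

        imu_p = np.array([0.10, 0.70, 0.20])             # IMU-based CNN output
        emg_p = np.array([0.25, 0.40, 0.35])             # EMG-based CNN output
        skel_p = np.array([0.05, 0.80, 0.15])            # skeleton-based CNN output
        action = fuse_decisions([imu_p, emg_p, skel_p])  # -> action 1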