
    Dog behaviour classification with movement sensors placed on the harness and the collar

    Dog owners' understanding of the daily behaviour of their dogs may be enhanced by movement measurements that can detect repeatable dog behaviour, such as levels of daily activity and rest as well as changes in them. The aim of this study was to evaluate the performance of supervised machine learning methods that use accelerometer and gyroscope data from wearable movement sensors to classify seven typical dog activities in a semi-controlled test situation. Forty-five middle- to large-sized dogs participated in the study. Two sensor devices were attached to each dog: one on the back of the dog in a harness and one on the neck collar. Altogether 54 features were extracted from the acceleration and gyroscope signals, divided into two-second segments. The performance of four classifiers was compared using features derived from both sensor modalities and from the acceleration data alone. The results were promising: the movement sensor at the back yielded up to 91% accuracy in classifying the dog activities, and the sensor placed at the collar yielded 75% accuracy at best. Including the gyroscope features improved the classification accuracy by 0.7-2.6%, depending on the classifier and the sensor location. The most distinct activity was sniffing, whereas the static postures (lying on chest, sitting and standing) were the most challenging behaviours to classify, especially from the data of the neck collar sensor. The data used in this article as well as the signal processing scripts are openly available in Mendeley Data, https://doi.org/10.17632/vxhx934tbn.1.
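
    As a rough illustration of the pipeline described above (two-second segmentation, per-window feature extraction, supervised classification), the following Python sketch uses numpy and scikit-learn. The sampling rate, feature set and classifier choice are illustrative assumptions, not the study's exact 54-feature setup.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    FS = 100           # assumed sampling rate (Hz); the study's rate may differ
    WIN = 2 * FS       # two-second segments, as in the study

    def window_features(win):
        # win: (WIN, 6) array -- acc x/y/z + gyro x/y/z
        feats = []
        for axis in range(win.shape[1]):
            x = win[:, axis]
            feats += [x.mean(), x.std(), x.min(), x.max(),
                      np.abs(np.diff(x)).mean()]   # simple per-axis statistics
        return np.array(feats)

    def segment(stream, labels):
        # cut a continuous recording into non-overlapping 2 s windows
        n = len(stream) // WIN
        X = np.stack([window_features(stream[i*WIN:(i+1)*WIN]) for i in range(n)])
        # label each window with the majority label inside it
        y = np.array([np.bincount(labels[i*WIN:(i+1)*WIN]).argmax()
                      for i in range(n)])
        return X, y

    # toy data standing in for a real recording with 7 activity classes
    rng = np.random.default_rng(0)
    stream = rng.normal(size=(60 * FS, 6))
    labels = rng.integers(0, 7, size=60 * FS)

    X, y = segment(stream, labels)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))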

    Spectrum-Guided Adversarial Disparity Learning

    Portraying intraclass disparity precisely has been a significant challenge in activity recognition, as it requires a robust representation of the subject-specific variation within each activity class. In this work, we propose a novel end-to-end knowledge-directed adversarial learning framework, which portrays the class-conditioned intraclass disparity using two competitive encoding distributions and learns purified latent codes by denoising the learned disparity. Furthermore, domain knowledge is incorporated in an unsupervised manner to guide the optimization and further boost performance. Experiments on four HAR benchmark datasets demonstrate the robustness and generalization of our proposed methods over a set of state-of-the-art approaches. We further demonstrate the effectiveness of automatic domain knowledge incorporation in enhancing performance.
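
    The PyTorch sketch below gives one plausible reading of the competitive-encoder idea: a "purified" encoder feeds the activity classifier, a "disparity" encoder absorbs the remaining variation, both jointly reconstruct the input, and a gradient-reversal adversary pushes class information out of the disparity code. All module names and loss terms are assumptions; this is not the paper's exact objective.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        # identity in the forward pass, negated gradient in the backward pass
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)
        @staticmethod
        def backward(ctx, g):
            return -g

    D_IN, D_Z, N_CLS = 64, 16, 5                 # toy sizes

    enc_pure = nn.Sequential(nn.Linear(D_IN, 64), nn.ReLU(), nn.Linear(64, D_Z))
    enc_disp = nn.Sequential(nn.Linear(D_IN, 64), nn.ReLU(), nn.Linear(64, D_Z))
    decoder  = nn.Sequential(nn.Linear(2 * D_Z, 64), nn.ReLU(), nn.Linear(64, D_IN))
    cls_head = nn.Linear(D_Z, N_CLS)             # activity classifier on purified code
    adv_head = nn.Linear(D_Z, N_CLS)             # adversary reads class from disparity code

    params = [p for m in (enc_pure, enc_disp, decoder, cls_head, adv_head)
              for p in m.parameters()]
    opt = torch.optim.Adam(params, lr=1e-3)

    x = torch.randn(32, D_IN)                    # toy feature batch
    y = torch.randint(0, N_CLS, (32,))

    z_p, z_d = enc_pure(x), enc_disp(x)
    recon = decoder(torch.cat([z_p, z_d], dim=1))
    loss_cls = F.cross_entropy(cls_head(z_p), y)     # purified code predicts activity
    loss_rec = F.mse_loss(recon, x)                  # both codes reconstruct the input
    # gradient reversal: adv_head learns to recover the class from z_d, while
    # enc_disp is trained to defeat it, stripping class content from the disparity code
    loss_adv = F.cross_entropy(adv_head(GradReverse.apply(z_d)), y)

    opt.zero_grad()
    (loss_cls + loss_rec + loss_adv).backward()
    opt.step()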

    Activity Recognition for Quality Assessment of Batting Shots in Cricket using a Hierarchical Representation

    Quality assessment in cricket is a complex task that is performed by understanding the combination of individual activities a player is able to perform and by assessing how well these activities are performed. We present a framework for inexpensive, accessible, automated recognition of cricket shots. Movements of batsmen are recorded by body-worn inertial measurement units and then analysed using a parallelised, hierarchical recognition system that automatically classifies the categories of shots relevant to assessing batting quality. Our system then generates meaningful visualisations of key performance parameters, including feet positions, attack/defence, and the distribution of shots around the ground. These visualisations form the basis for objective skill assessment, focusing attention on the specific personal improvement points identified by our system. We evaluated our framework through a deployment study in which 6 players engaged in batting exercises. Based on the recorded movement data we could automatically identify 20 classes of unique batting shot components with an average F1-score greater than 88%. These classifications underpin our detailed analysis of the study participants' skills. Our system has the potential to rival expensive vision-based systems at a fraction of the cost.
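
    A hierarchical recogniser of the kind described can be sketched in Python as a two-stage classifier: a top-level model picks the shot family, and a per-family specialist resolves the fine shot component. The family/component split, feature dimensionality and SVM choice below are illustrative assumptions, not the paper's architecture.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 30))                   # per-window IMU features (toy)
    y_coarse = rng.integers(0, 4, size=400)          # 4 assumed shot families
    y_fine = y_coarse * 5 + rng.integers(0, 5, 400)  # 20 components, 5 per family

    top = SVC().fit(X, y_coarse)                     # stage 1: shot family
    # stage 2: one specialist per family, trained only on that family's windows
    experts = {c: SVC().fit(X[y_coarse == c], y_fine[y_coarse == c])
               for c in np.unique(y_coarse)}

    def predict(x_row):
        family = top.predict(x_row[None])[0]
        return experts[family].predict(x_row[None])[0]

    print("predicted shot component:", predict(X[0]))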

    Human behavior understanding for worker-centered intelligent manufacturing

    In a worker-centered intelligent manufacturing system, sensing and understanding the worker's behavior are the primary tasks, essential for automatic performance evaluation & optimization, intelligent training & assistance, and human-robot collaboration. In this study, a worker-centered training & assistance system is proposed for intelligent manufacturing, featuring self-awareness and active guidance. To understand hand behavior, a method is proposed for complex hand gesture recognition using Convolutional Neural Networks (CNN) with multi-view augmentation and inference fusion, applied to depth images captured by a Microsoft Kinect. To sense and understand the worker more comprehensively, a multi-modal approach is proposed for worker activity recognition using Inertial Measurement Unit (IMU) signals obtained from a Myo armband together with videos from a visual camera. To automatically learn the importance of different sensors, a novel attention-based approach is proposed for human activity recognition using multiple IMU sensors worn at different body locations. To deploy the developed algorithms to the factory floor, a real-time assembly operation recognition system is proposed with fog computing and transfer learning. The proposed worker-centered training & assistance system has been validated and has demonstrated great potential for application to frontline workers in the manufacturing industry. Our developed approaches have been evaluated as follows: 1) the multi-view approach outperforms the state of the art on two public benchmark datasets; 2) the multi-modal approach achieves an accuracy of 97% on a worker activity dataset comprising 6 activities and achieves the best performance on a public dataset; 3) the attention-based method outperforms state-of-the-art methods on five publicly available datasets; and 4) the developed transfer learning model achieves a real-time recognition accuracy of 95% on a dataset of 10 worker operations.
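
    As a small illustration of the attention-based sensor-fusion idea, the PyTorch sketch below learns a softmax weight per body-worn IMU and fuses the per-sensor feature vectors by weighted sum before classification. Layer sizes and the scoring function are assumptions, not the thesis' architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    S, D, N_ACT = 4, 32, 6                    # sensors, feature dim, activities

    score = nn.Linear(D, 1)                   # learns per-sensor importance
    head = nn.Linear(D, N_ACT)                # activity classifier on fused features

    feats = torch.randn(8, S, D)              # (batch, sensor, feature) toy encodings
    alpha = F.softmax(score(feats).squeeze(-1), dim=1)   # (batch, S) attention weights
    fused = (alpha.unsqueeze(-1) * feats).sum(dim=1)     # weighted sum over sensors
    logits = head(fused)
    print(alpha[0])                           # inspect which sensors the model attends to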