10 research outputs found

    Automatic equine activity detection by convolutional neural networks using accelerometer data

    In recent years, with the widespread adoption of sensors embedded in all kinds of mobile devices, human activity analysis has become common in domains such as healthcare monitoring and fitness tracking. This trend has also entered the equestrian world, because monitoring behaviours can yield important information about the health and welfare of horses. In this research, a deep learning-based approach for equine activity detection is proposed that classifies seven activities from accelerometer data. We propose using Convolutional Neural Networks (CNNs), which extract features automatically by exploiting strong computing capabilities. Furthermore, we investigate the impact of the sampling frequency, the time series length, and the type of surface on which the data are gathered on recognition accuracy, and evaluate the model on three types of experimental datasets compiled from labelled accelerometer data gathered from six subjects performing seven activities. Afterwards, a horse-wise cross-validation is carried out to investigate the impact of the subjects themselves on model recognition accuracy. Finally, a slightly adjusted model is validated on different amounts of 50 Hz sensor data. An accuracy of 99% can be reached for detecting seven behaviours of a seen horse when the sampling rate is 25 Hz and the time interval is 2.1 s. Four behaviours of an unseen horse can be detected with the same accuracy when the sampling rate is 69 Hz and the time interval is 2.4 s. Moreover, the accuracy of the model on the three datasets decreased on average by about 4.75% when the sampling rate was reduced from 200 Hz to 25 Hz, and by 5.27% when the time interval was reduced from 3 s to 0.6 s.
In addition, the classification performance for the activity "walk" was not influenced by the type of surface on which the horse performed the movement; moreover, the model could infer from which surface the data were gathered for three out of four surfaces, with accuracies above 93% at time intervals longer than 1.2 s. This supports the evaluation of activity patterns in real-world circumstances. The performance and generalisation ability of the model are validated on 50 Hz data from different horse types using ten-fold cross-validation, reaching mean classification accuracies of 97.84% and 96.10% when validated on a lame horse and a pony, respectively. Finally, we show that using data from a single sensor costs only a 0.24% reduction in accuracy (99.42% vs. 99.66%).
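A model of this kind boils down to windowing the tri-axial stream and applying learned 1-D convolutions. The NumPy sketch below illustrates only that data flow; the window length follows the paper's best setting (25 Hz, 2.1 s), while the kernel size, channel count, and random weights are hypothetical stand-ins for a trained network:

```python
import numpy as np

def window_signal(acc, win_len, step):
    """Split a (T, 3) accelerometer stream into overlapping windows."""
    windows = []
    for start in range(0, len(acc) - win_len + 1, step):
        windows.append(acc[start:start + win_len])
    return np.stack(windows)  # (n_windows, win_len, 3)

def conv1d_valid(x, kernels):
    """Minimal 'valid' 1-D convolution with ReLU: x is (win_len, C_in),
    kernels is (C_out, k, C_in); returns (win_len - k + 1, C_out)."""
    k = kernels.shape[1]
    out_len = x.shape[0] - k + 1
    out = np.zeros((out_len, kernels.shape[0]))
    for t in range(out_len):
        patch = x[t:t + k]  # (k, C_in) slice of the window
        out[t] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)  # ReLU activation

# 25 Hz sampled for 2.1 s -> 52-sample windows, 50% overlap
fs, win_s = 25, 2.1
win_len = int(fs * win_s)
acc = np.random.randn(1000, 3)          # synthetic tri-axial stream
wins = window_signal(acc, win_len, win_len // 2)
feats = conv1d_valid(wins[0], np.random.randn(8, 5, 3))  # 8 random kernels
print(wins.shape, feats.shape)
```

In a real CNN these kernels are learned by backpropagation, which is what lets the network extract features automatically instead of relying on hand-crafted statistics.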

    Detection of tennis activities with wearable sensors

    This paper aims to design and implement a system capable of distinguishing between the different activities carried out during a tennis match. The goal is the correct classification of a set of tennis strokes. The system must be robust to variability in the height, age or sex of any subject performing the actions. A new database was developed to meet this objective. The system is based on two sensor nodes that use Bluetooth Low Energy (BLE) wireless technology to communicate with a PC, which acts as a central device collecting the information received from the sensors. The data provided by these sensors are processed to calculate their spectrograms. Through the application of deep learning techniques with semi-supervised training, it is possible to carry out feature extraction and activity classification. Preliminary results obtained with a dataset of eight players (four women and four men) show that our approach is able to handle the diversity of constitution, weight and sex of different players, providing accuracy greater than 96.5% in recognizing the tennis strokes of a new player never seen before by the system.
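The spectrogram stage mentioned above is a short-time Fourier transform of each sensor channel. A minimal sketch follows; the sampling rate, FFT length and hop size are illustrative assumptions, not the paper's values:

```python
import numpy as np

def spectrogram(x, n_fft=64, hop=32):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.
    Returns (n_fft // 2 + 1 frequency bins, n frames)."""
    win = np.hanning(n_fft)
    frames = [x[s:s + n_fft] * win
              for s in range(0, len(x) - n_fft + 1, hop)]
    stft = np.fft.rfft(np.stack(frames), axis=1)
    return np.abs(stft).T

fs = 100                              # assumed IMU sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t)         # synthetic 5 Hz swing component
spec = spectrogram(x)
print(spec.shape)
```

Stacking one such time-frequency image per sensor axis yields the 2-D input that a convolutional classifier can consume directly.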

    High-Resolution Motor State Detection in Parkinson's Disease Using Convolutional Neural Networks

    Patients with advanced Parkinson's disease regularly experience unstable motor states. Objective and reliable monitoring of these fluctuations is an unmet need. We used deep learning to classify motion data from a single wrist-worn IMU sensor recording in unscripted environments. For validation purposes, patients were accompanied by a movement disorder expert, and their motor state was passively evaluated every minute. We acquired a dataset of 8,661 minutes of IMU data from 30 patients, with annotations about the motor state (OFF, ON, DYSKINETIC) based on the MDS-UPDRS global bradykinesia item and the AIMS upper limb dyskinesia item. Using a 1-minute window size as input for a convolutional neural network trained on data from a subset of patients, we achieved a three-class balanced accuracy of 0.654 on data from previously unseen subjects. This corresponds to detecting the OFF, ON, or DYSKINETIC motor state at a sensitivity/specificity of 0.64/0.89, 0.67/0.67 and 0.64/0.89, respectively. On average, the model outputs were highly correlated with the annotation on a per-subject scale (r = 0.83/0.84; p < 0.0001), and this held even for the highly resolved time windows of 1 minute (r = 0.64/0.70; p < 0.0001). Thus, we demonstrate the feasibility of long-term motor-state detection in a free-living setting with deep learning using motion data from a single IMU.
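The reported three-class balanced accuracy and one-vs-rest sensitivity/specificity pairs can all be derived from a single confusion matrix; the matrix below is hypothetical, chosen only to show the arithmetic:

```python
import numpy as np

def balanced_accuracy(cm):
    """Mean per-class recall from a confusion matrix (rows = true class)."""
    recalls = np.diag(cm) / cm.sum(axis=1)
    return recalls.mean()

def sens_spec(cm, c):
    """One-vs-rest sensitivity and specificity for class index c."""
    tp = cm[c, c]
    fn = cm[c].sum() - tp
    fp = cm[:, c].sum() - tp
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical OFF / ON / DYSKINETIC confusion matrix (minutes counted)
cm = np.array([[64, 30,  6],
               [20, 67, 13],
               [ 6, 30, 64]])
print(round(balanced_accuracy(cm), 3))
print([tuple(round(v, 2) for v in sens_spec(cm, c)) for c in range(3)])
```

Balanced accuracy averages the per-class recalls, which is why it is the appropriate summary when the three motor states are unevenly represented in the recordings.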

    Low-power neuromorphic sensor fusion for elderly care

    Smart wearable systems have become part of daily life, with applications ranging from entertainment to healthcare. In the wearable healthcare domain, the development of embedded wearable fall-recognition bracelets is receiving considerable market attention. However, in low-power embedded scenarios, processing the sensors' signals poses challenges for machine learning algorithms: traditional machine learning methods require a large number of calculations for data classification, making real-time signal processing difficult to implement on low-power embedded systems. Ensuring low-power, real-time data classification that fuses a variety of sensor signals on an embedded system is a major challenge, and it calls for neuromorphic computing combined with a hardware/software co-design of the system. This thesis reviews various neuromorphic computing algorithms, investigates the feasibility of hardware circuits, and integrates captured sensor data to realise data classification applications. In addition, it explores a human-activity benchmark dataset in which activity classification tasks are defined at different levels. In this study, firstly, the data classification algorithm is applied to human movement sensors to validate neuromorphic computing on human activity recognition tasks. Secondly, a data fusion framework is presented that combines multiple sensing signals so that neuromorphic computing achieves sensor fusion and improved classification accuracy. Thirdly, an analog circuit module that implements a neural network algorithm is proposed to achieve low-power, real-time processing in hardware. Together, these contributions form a hardware/software co-design system.
By adopting multi-sensing signals on the embedded system, the designed software-based feature extraction method helps fuse data from various sensors as input to the neuromorphic computing hardware. Finally, the results show that the classification accuracy of the neuromorphic data fusion framework is higher than that of traditional machine learning and deep neural networks, reaching 98.9%. Moreover, the framework can flexibly combine acquired hardware signals: it is not limited to single-sensor data and can use multi-sensing information to help the algorithm achieve better stability.
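As a flavour of the spiking building blocks such neuromorphic systems are assembled from, a leaky integrate-and-fire (LIF) neuron can be simulated in a few lines. The time constants, threshold, and the idea of injecting a sensor magnitude as input current are illustrative assumptions, not the thesis's actual design:

```python
import numpy as np

def lif_spikes(current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (Euler integration): returns a
    0/1 spike train for an input current trace."""
    v, spikes = 0.0, []
    for i_t in current:
        v += dt / tau * (-v + i_t)   # leak toward 0, driven by input
        if v >= v_th:
            spikes.append(1)
            v = v_reset              # fire and reset
        else:
            spikes.append(0)
    return np.array(spikes)

# encode a sustained accelerometer magnitude as injected current
# (assumed encoding scheme): stronger input -> higher firing rate
acc_mag = np.full(500, 2.0)
spikes = lif_spikes(acc_mag)
print(spikes.sum())
```

Rate-coding sensor magnitudes into spike trains like this is what lets downstream spiking hardware classify with event-driven, low-power computation instead of dense matrix arithmetic.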

    Development of a Wearable Sensor-Based Framework for the Classification and Quantification of High Knee Flexion Exposures in Childcare

    Repetitive cyclic and prolonged joint loading in high knee flexion postures has been associated with the progression of degenerative knee joint diseases and knee osteoarthritis (OA). Despite this association, high flexion postures, where the knee angle exceeds 120°, are commonly performed within occupational settings. While work-related musculoskeletal disorders have been studied across many occupations, the risk of OA development associated with the adoption of high knee flexion postures by childcare workers had until recently been unexplored; therefore, occupational childcare has not appeared in any systematic reviews seeking to establish a causal relationship between occupational exposures and the risk of knee OA development. The overarching goal of this thesis was therefore to explore the adoption of high flexion postures in childcare settings and to develop a means by which these could be measured using non-laboratory-based technologies. The global objectives of this thesis were to (i) identify the postural demands of occupational childcare as they relate to high flexion exposures at the knee, (ii) apply, extend, and validate sensor-to-segment alignment algorithms through which lower limb flexion-extension kinematics could be measured in multiple high knee flexion postures using inertial measurement units (IMUs), and (iii) develop a machine learning-based classification model capable of identifying each childcare-inspired high knee flexion posture. In line with these global objectives, four independent studies were conducted.   Study I – Characterization of Postures of High Knee Flexion and Lifting Tasks Associated with Occupational Childcare Background: High knee flexion postures, despite their association with increased incidences of osteoarthritis, are frequently adopted in occupational childcare.
High flexion exposure thresholds (based on exposure frequency or cumulative daily exposure) that relate to increased incidences of OA have previously been proposed; yet our understanding of how the specific postural requirements of this occupation compare to these thresholds remains limited. Objectives: This study sought to define and quantify the high flexion postures typically adopted in childcare in order to evaluate any increased likelihood of knee osteoarthritis development. Methods: Video data of eighteen childcare workers caring for infant, toddler, and preschool-aged children over a period of approximately 3.25 hours were obtained for this investigation from a larger cohort study conducted across five daycares in Kingston, Ontario, Canada. Each video was segmented to identify the start and end of potential high knee flexion exposures. Each identified posture was quantified by duration and frequency. An analysis of postural adoption by occupational task was subsequently performed to determine which tasks might pose the greatest risk for cumulative joint trauma. Results: A total of ten postures involving varying degrees of knee flexion were identified, of which eight involved high knee flexion. Childcare workers caring for children of all ages adopted high knee flexion postures for durations of 1.45 ± 0.15 hours and at frequencies of 128.67 ± 21.45 exposures over the 3.25-hour observation period, exceeding proposed thresholds for increased incidences of knee osteoarthritis development. Structured activities, playing, and feeding tasks demanded the greatest adoption of high flexion postures. Conclusions: Based on these findings, it is likely that childcare workers caring for children of all ages exceed cumulative exposure- and frequency-based thresholds associated with increased incidences of knee OA development within a typical working day.
Study II – Evaluating the Robustness of Automatic IMU Calibration for Lower Extremity Motion Analysis in High Knee Flexion Postures Background: While inertial measurement units promise an out-of-the-box, minimally intrusive means of objectively measuring body segment kinematics in any setting, in practice their implementation requires complex calculations to align each sensor with the coordinate system of the segment to which it is attached. Objectives: This study sought to apply and extend previously proposed alignment algorithms to align inertial sensors with the segments to which they are attached, in order to calculate flexion-extension angles for the ankle, knee, and hip during multiple childcare-inspired postures. Methods: The Seel joint axis algorithm and the Constrained Seel Knee Axis (CSKA) algorithm were implemented for the sensor-to-segment calibration of acceleration and angular velocity data from IMUs mounted on the lower limbs and pelvis, based on a series of calibration movements about each joint. Further, the Iterative Seel spherical axis (ISSA) extension to this implementation was proposed for the calibration of sensors about the ankle and hip. The performance of these algorithms was validated across fifty participants performing ten childcare-inspired movements by comparing IMU- and gold-standard optical-based flexion-extension angle estimates. Results: Strong correlations between the IMU- and optical-based angle estimates were reported for all joints during each high flexion motion, with the exception of a moderate correlation for the ankle angle estimate during child-chair sitting. Mean RMSE between protocols was 6.61° ± 2.96° for the ankle, 7.55° ± 5.82° for the knee, and 14.64° ± 6.73° for the hip.
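The RMSE figures above compare IMU-derived and optical flexion-extension angles sample by sample; the computation itself is straightforward (the angle traces below are made up for illustration):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two joint-angle traces (degrees)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

optical = np.array([10.0, 40.0, 90.0, 130.0, 90.0, 40.0])  # gold standard
imu = optical + np.array([3.0, -4.0, 5.0, -6.0, 2.0, -1.0])  # hypothetical IMU estimate
print(round(rmse(imu, optical), 2))
```

Because squaring weights large deviations heavily, RMSE penalizes the occasional big tracking error more than a mean-absolute-error measure would, which suits validation against an optical reference.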
Conclusions: The estimation of joint kinematics through the IMU-based CSKA and ISSA algorithms presents an effective solution for the sensor-to-segment calibration of inertial sensors, allowing the calculation of lower limb flexion-extension kinematics in multiple childcare-inspired high knee flexion postures. Study III – A Multi-Dimensional Dynamic Time Warping Distance-Based Framework for the Recognition of High Knee Flexion Postures in Inertial Sensor Data Background: The interpretation of inertial measures as they relate to occupational exposures is non-trivial. To relate the continuously collected data to the activities or postures performed by the sensor wearer, pattern recognition and machine learning-based algorithms can be applied. One difficulty in applying these techniques to real-world data lies in the temporal and scale variability of human movements, which must be overcome when seeking to classify data in the time domain. Objectives: The objective of this study was to develop a sensor-based framework for the detection and measurement of isolated childcare-specific postures (identified in Study I). As a secondary objective, the classification accuracies of movements performed under loaded and unloaded conditions were compared in order to assess the sensitivity of the developed model to the postural variability that may accompany the presence of a load. Methods: IMU-based joint angle estimates for the ankle, knee, and hip were time- and scale-normalized before being input to a multi-dimensional Dynamic Time Warping (DTW) distance-based Nearest Neighbour algorithm for the identification of twelve childcare-inspired postures. Fifty participants performed each posture, when possible, under unloaded and loaded conditions. Angle estimates from thirty-five participants were divided into development and testing data, such that 80% of the trials were segmented into movement templates and the remaining 20% were left as continuous movement sequences.
These data were then included in the model building and testing phases, while the accuracy of the model was validated on novel data from fifteen participants. Results: Overall accuracies of 82.3% and 55.6% were reached when classifying postures in the testing and validation data, respectively. When adjusting for the imbalances between classification groups, mean balanced accuracies increased to 86% and 74.6% for the testing and validation data, respectively. Sensitivity and specificity values revealed that the highest rates of misclassification occurred between flatfoot squatting, heels-up squatting, and stooping. It was also found that the model was not capable of identifying sequences of walking data based on a single-step motion template. No significant differences were found between the classification of loaded and unloaded motion trials. Conclusions: A combination of DTW distances calculated between motion templates and continuous movement sequences of lower limb flexion-extension angles was found to be effective in the identification of isolated postures frequently performed in childcare. The developed model successfully classified data from participants both included in and excluded from the algorithm-building dataset, and was insensitive to the postural variability that might be caused by the presence of a load. Study IV – Evaluating the Feasibility of Applying the Developed Multi-Dimensional Dynamic Time Warping Distance-Based Framework to the Measurement and Recognition of High Knee Flexion Postures in a Simulated Childcare Environment Background: While the simulation of high knee flexion postures in isolation (in Study III) provided a basis for the development of a multi-dimensional Dynamic Time Warping based nearest neighbour algorithm for the identification of childcare-inspired postures, it is unlikely that the postures adopted in childcare settings would be performed in isolation.
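The core of the Study III framework, a multi-dimensional DTW distance between lower-limb angle sequences used inside a nearest-neighbour classifier, can be sketched as follows; the two templates are synthetic stand-ins, not the thesis's motion data:

```python
import numpy as np

def dtw_distance(a, b):
    """Multi-dimensional DTW distance between two (T, D) angle
    sequences, with Euclidean local cost across the D joints."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nn_classify(query, templates):
    """1-NN over labelled (label, sequence) movement templates."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

# hypothetical hip/knee/ankle flexion templates, (T, 3) in degrees
np.random.seed(0)
t_axis = np.linspace(0, 1, 50)
squat = np.stack([100 * np.sin(np.pi * t_axis)] * 3, axis=1)
stoop = np.stack([80 * np.sin(np.pi * t_axis), 10 * t_axis, 5 * t_axis], axis=1)
templates = [("squat", squat), ("stoop", stoop)]
query = squat[::2] + np.random.randn(25, 3)  # time-compressed, noisy squat
print(nn_classify(query, templates))
```

Because the warping path can stretch or compress time, the half-length query still matches its own template, which is exactly the temporal-variability problem the framework is designed to overcome.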
Objectives: This study sought to explore the feasibility of extending the developed classification algorithm to identify and measure postures frequently adopted when performing childcare-specific tasks within a simulated childcare environment. Methods: Lower limb inertial motion data were recorded from twelve participants as they interacted with their child during a series of tasks inspired by those identified in Study I as frequently occurring in childcare settings. To reduce the error associated with gyroscopic drift over time, joint angles for each trial were calculated over 60-second increments and concatenated across the duration of each trial. Angle estimates from ten participants were time-windowed to create the inputs for the development and testing of two model designs: (A) the model development data included all templates generated in Study III as well as the continuous motion windows collected here, or (B) the model development data included only windows of continuous motion data. The division of data into development and testing datasets for each 5-fold cross-validated classification model was performed in one of two ways: (a) through stratified randomized partitioning of windows, such that 80% were assigned to model development and the remaining 20% were reserved for testing, or (b) by partitioning all windows from a single trial of a single participant for testing, while all remaining windows were assigned to the model development dataset. When the classification of continuously collected windows was tested (using division strategy b), a logic-based correction module was introduced to eliminate erroneous predictions. Each model design (A and B) was developed and tested using both data division strategies (a and b), and their performance was subsequently evaluated based on the classification of all data windows from the two subjects reserved for validation.
Results: Classification accuracies of 42.2% and 42.5% were achieved when classifying the testing data separated through stratified random partitioning (division strategy a) using models that included (model A, 159 classes) or excluded (model B, 149 classes) the templates generated in Study III, respectively. This classification accuracy decreased to 35.4% when classifying a test partition that included all windows of a single trial (division strategy b) using model A (whose development dataset included templates from Study III); however, the same trial was classified with an accuracy of 80.8% when using model B (whose development dataset included only windows of continuous motion data). This accuracy was, however, highly dependent on the motions performed in a given trial, and logic-based corrections were not found to improve classification accuracies. When validating each model by identifying postures performed by novel subjects, classification accuracies of 24.0% and 26.6% were obtained using development data which included (model A) and excluded (model B) templates from Study III, respectively. Across all novel data, the highest classification accuracies were observed when identifying static postures, which is unsurprising given that windows of these postures were most prevalent in the model development datasets. Conclusions: While classification accuracies above chance were achieved, the models evaluated in this study could not identify the postures adopted during simulated childcare tasks accurately enough to report reliably on the postures assumed in a childcare environment. The success of the classifier was highly dependent on the number of transitions occurring between postures while in high flexion; therefore, more classifier development data are needed to create templates for these novel transition movements.
Given the high variability in postural adoption when caring for and interacting with children, additional movement templates based on continuously collected data would be required for the successful identification of postures in occupational settings. Global Conclusions: Within a typical working day, childcare workers exceed previously reported cumulative-exposure and frequency thresholds for high knee flexion postures associated with increased incidences of knee OA development. Inertial measurement units provide a unique means of objectively measuring postures frequently adopted when caring for children, which may ultimately permit the quantification of high knee flexion exposures in childcare settings and further study of the relationship between these postures and the risk of OA development in occupational childcare. While the results of this thesis demonstrate that IMU-based measures of lower limb kinematics can be used to identify these postures in isolation, further work is required to expand the classification model and enable the identification of such postures from continuously collected data.
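The logic-based correction module of Study IV is described only at a high level; one plausible reading, suppressing implausibly short runs of predicted labels, could look like this (the minimum run length and the labels are assumptions):

```python
def logic_correct(preds, min_run=3):
    """Replace any run of identical window predictions shorter than
    min_run with the preceding label; a simple stand-in for a
    logic-based correction module over windowed classifier output."""
    out = list(preds)
    i = 0
    while i < len(out):
        j = i
        while j < len(out) and out[j] == out[i]:
            j += 1                      # j marks the end of the run
        if j - i < min_run and i > 0:
            for k in range(i, j):
                out[k] = out[i - 1]     # absorb the short run
        i = j
    return out

preds = ["sit", "sit", "kneel", "sit", "sit", "squat", "squat", "squat"]
print(logic_correct(preds))
```

The rationale is physical: a posture held for a single window between two long runs of another posture is more likely a classifier error than a real transition.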

    A data fusion-based hybrid sensory system for older people’s daily activity recognition.

    The population aged 60 and over is growing rapidly. Age-related changes, such as physical or cognitive decline, can affect people's quality of life, resulting in injuries, mental health problems or a lack of physical activity. Sensor-based human activity recognition (HAR) has become one of the most promising assistive technologies for older people's daily life. The HAR literature suggests that each sensor modality has its strengths and limitations, and that single sensor modalities may not cope with complex situations in practice. This research aims to design and implement a hybrid sensory HAR system that provides more comprehensive, practical and accurate surveillance for older people and assists them in living independently. This research: 1) designs and develops a hybrid HAR system which provides a spatio-temporal surveillance system for older people by combining wrist-worn sensors and room-mounted ambient sensors (passive infrared); the wearable data are used to recognize the defined specific daily activities, and the ambient information is used to infer the occupant's room-level daily routine; 2) proposes a unique and effective data fusion method to hybridize the two-source sensory data, in which the room-level location information captured by the ambient sensors is also utilized to trigger sub-classification models pretrained with room-assigned wearable data; 3) implements augmented features extracted from the attitude angles of the wearable device and explores the contribution of the new features to HAR; 4) proposes a feature selection (FS) method based on kernel canonical correlation analysis (KCCA), named mRMJR-KCCA, which maximizes the relevance between a feature candidate and the target class labels while simultaneously minimizing the joint redundancy between the already selected features and the candidate; and 5) demonstrates all the proposed methods with ground-truth data collected from recruited participants in home settings.
The proposed system has three function modes: 1) the pure wearable sensing mode (the whole classification model), which can identify all the defined specific daily activities on its own and function alone when ambient sensing fails; 2) the pure ambient sensing mode, which can deliver the occupant's room-level daily routine without wearable sensing; and 3) the data fusion mode (room-based sub-classification mode), which provides more comprehensive and accurate HAR surveillance when both wearable and ambient sensing function properly. The research also applies mutual information (MI)-based FS methods for feature selection, and Support Vector Machine (SVM) and Random Forest (RF) classifiers for classification. The experimental results demonstrate that the proposed hybrid sensory system improves the recognition accuracy to 98.96% after applying data fusion with RF classification and mRMJR-KCCA feature selection. Furthermore, the improved results are achieved with a much smaller number of features than in the scenario of recognizing all the defined activities using wearable data alone. The work is not directly compared with others, since few similar existing works address the proposed data fusion method and the introduced feature set.
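The mRMJR-KCCA criterion follows the classic greedy pattern of maximizing relevance to the labels while penalizing redundancy with already-selected features. The sketch below substitutes absolute Pearson correlation for the KCCA-based measures, so it shows only the shape of the method, not the thesis's actual formulation:

```python
import numpy as np

def greedy_select(X, y, k):
    """Greedy max-relevance / min-redundancy feature selection.
    |Pearson r| stands in for the KCCA-based relevance and joint
    redundancy measures of mRMJR-KCCA."""
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]          # most relevant feature first
    while len(selected) < k:
        scores = {}
        for j in range(n_feat):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            scores[j] = rel[j] - red          # relevance minus redundancy
        selected.append(max(scores, key=scores.get))
    return selected

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=500), rng.normal(size=500)
y = 2 * x1 + x2                               # both x1 and x2 carry signal
X = np.column_stack([x1, x2,
                     x1 + 0.05 * rng.normal(size=500),  # near-duplicate of x1
                     rng.normal(size=500)])             # pure noise
print(greedy_select(X, y, 2))
```

The near-duplicate column is highly relevant on its own but is skipped on the second pick because its redundancy with the first selection cancels its relevance, which is precisely why such criteria reach high accuracy with far fewer features.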