143 research outputs found

    Recognition and classification of human activities using wearable sensors

    Ankara: The Department of Electrical and Electronics Engineering and the Graduate School of Engineering and Science of Bilkent University, 2012. Thesis (Master's) -- Bilkent University, 2012. Includes bibliographical references. We address the problem of detecting and classifying human activities using two different types of wearable sensors. In the first part of the thesis, a comparative study of different techniques for classifying human activities using tag-based radio-frequency (RF) localization is provided. Position data of multiple RF tags worn on the human body are acquired asynchronously and non-uniformly. Curves fitted to the data are re-sampled uniformly and then segmented. The effect of varying the relevant system parameters on the system accuracy is investigated. Various curve-fitting, segmentation, and classification techniques are compared, and the combination resulting in the best performance is presented. The classifiers are validated through the use of two different cross-validation methods. For the complete classification problem with 11 classes, the proposed system demonstrates an average classification error of 8.67% and 21.30% for 5-fold and subject-based leave-one-out (L1O) cross-validation, respectively. When the number of classes is reduced to five by omitting the transition classes, these errors become 1.12% and 6.52%. The system demonstrates acceptable classification performance even though tag-based RF localization does not provide very accurate position measurements. In the second part, data acquired from five sensory units worn on the human body, each containing a tri-axial accelerometer, a gyroscope, and a magnetometer, during 19 different human activities are used to calculate inter-subject and inter-activity variations in the data with different methods. Absolute, Euclidean, and dynamic time-warping (DTW) distances are used to assess the similarity of the signals. The comparisons are made using time-domain data and feature vectors. Different normalization methods are used and compared. The “best” subject is defined and identified according to his/her average distance to the other subjects. Based on one of the similarity criteria proposed here, an autonomous system that detects and evaluates physical therapy exercises using inertial sensors and magnetometers is developed. An algorithm that detects all occurrences of one or more template signals (exercise movements) in a long signal (a physical therapy session) while allowing some distortion is proposed based on DTW. The algorithm classifies each execution as one of the exercises and evaluates it as correct or incorrect, identifying the error type if there is one. To evaluate the performance of the algorithm in physical therapy, a dataset consisting of one template execution and ten test executions of each of the three execution types of eight exercise movements performed by five subjects is recorded, yielding a total of 120 and 1,200 exercise executions in the training and test sets, respectively, as well as many idle time intervals in the test signals. The proposed algorithm detects 1,125 executions in the whole test set; 8.58% of the executions are missed and 4.91% of the idle intervals are incorrectly detected as an execution. The accuracy is 93.46% for exercise classification and 88.65% for combined exercise and execution type classification. The proposed system may be used both to estimate the intensity of the physical therapy session and to evaluate the executions to provide feedback to the patient and the specialist. Yurtman, Aras. M.S.
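    As a rough illustration of the DTW-based similarity measure referred to above, the following minimal Python sketch computes a DTW distance between two multivariate sensor sequences with a Euclidean local cost. The signals, shapes, and function names are placeholders for illustration, not data or code from the thesis.

# Minimal sketch of a DTW distance between two sequences of sensor feature
# vectors; illustrative only, not the thesis implementation.
import numpy as np

def dtw_distance(a, b):
    """DTW distance between sequences a (n x d) and b (m x d),
    using Euclidean distance as the local cost."""
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j],      # insertion
                                   acc[i, j - 1],      # deletion
                                   acc[i - 1, j - 1])  # match
    return acc[n, m]

# Example: compare a recorded exercise execution against a template
# (random placeholder data standing in for 9-axis IMU signals).
rng = np.random.default_rng(0)
template = rng.standard_normal((100, 9))
execution = rng.standard_normal((120, 9))
print(dtw_distance(template, execution))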

    Rehabilitation Exergames: use of motion sensing and machine learning to quantify exercise performance in healthy volunteers

    Background: Performing physiotherapy exercises in front of a physiotherapist yields qualitative assessment notes and immediate feedback. However, practicing the exercises at home provides no feedback on how well or poorly patients are performing the prescribed tasks. The absence of proper feedback might result in patients doing the exercises incorrectly, which could worsen their condition. Objective: We propose the use of two machine learning algorithms, namely Dynamic Time Warping (DTW) and Hidden Markov Models (HMM), to quantitatively assess the patient’s performance with respect to a reference. Methods: Movement data were recorded using a Kinect depth sensor, capable of detecting 25 joints in the human skeleton model, and were compared to those of a reference. Sixteen participants were recruited to perform four different exercises: shoulder abduction, hip abduction, lunge, and sit-to-stand. Their performance was compared to that of a physiotherapist as a reference. Results: Both algorithms show a similar trend in assessing participants' performance, but their sensitivity levels differ: DTW was more sensitive to small changes, whereas HMM captured a general view of the performance and was less sensitive to details. Conclusions: The chosen algorithms demonstrated their capacity to objectively assess physical therapy performance. HMM may be more suitable in the early stages of a physiotherapy program to capture and report general performance, whilst DTW could be used later on to focus on the details.
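    A minimal sketch of the HMM-based scoring idea is given below (a DTW counterpart is sketched under the previous entry). It uses the hmmlearn package; the joint-angle data, number of hidden states, and per-frame log-likelihood score are illustrative assumptions rather than the study's actual implementation.

# Hypothetical sketch: fit a GaussianHMM to the reference (physiotherapist)
# joint-angle sequence and use the log-likelihood of a participant's sequence
# under that model as a coarse performance score. Requires hmmlearn.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(1)
reference = rng.standard_normal((200, 3))     # placeholder joint-angle trajectory
participant = rng.standard_normal((180, 3))   # placeholder participant trajectory

model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(reference)                           # learn the reference movement pattern

ref_score = model.score(reference) / len(reference)       # per-frame log-likelihood
part_score = model.score(participant) / len(participant)
print(f"reference: {ref_score:.2f}, participant: {part_score:.2f}")
# A participant score far below the reference indicates a larger deviation
# from the prescribed movement pattern.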

    Wearable sensor technologies applied for post-stroke rehabilitation

    Stroke is a common cerebrovascular disease that is recognized as one of the leading causes of death and ongoing disability around the globe. Stroke can lead to the loss of various body functions depending on the affected area of the brain and can significantly impact the victim’s daily life. Post-stroke rehabilitation plays an important role in improving the quality of life of stroke survivors. Properly designed rehabilitation training programs not only prevent further functional deterioration but also help patients gradually regain their body functionality. However, the delivery of rehabilitation services can be a complex and labour-intensive task. In conventional rehabilitation systems, chart-based ordinal scales are the dominant tools for impairment assessment, and the administration of these scales relies primarily on the doctor's manual observation. Measuring instruments such as strain gauges and force platforms can sometimes be used to collect quantitative evidence for some body functions, such as grip strength and balance. However, the evaluation of a patient's impairment level using ordinal scales still depends on human interpretation of the data, which can be both subjective and inefficient. The preferred scale and evaluation standard also vary among institutions across different regions, which makes the comparison of data difficult and sometimes unreliable. Furthermore, the intensive manual supervision and support required in rehabilitation training sessions limit the accessibility of the service, as regular visits to a qualified hospital can be onerous for many patients and the associated cost can impose an enormous financial burden on both governments and households. The situation can be even more challenging in developing countries due to the faster-growing stroke population and more limited medical resources. The work presented in this thesis focuses on exploring the possibilities of integrating wearable sensors and pattern recognition techniques to improve the efficiency and effectiveness of post-stroke rehabilitation by addressing the abovementioned issues. The study was initiated by a comprehensive literature review of the latest motion tracking technologies, and the non-visual Inertial Measurement Unit (IMU) was selected as the most suitable candidate for motion sensing in unsupervised training environments due to its low-cost and easy-to-operate characteristics. Following the design and construction of a 6-axis IMU-based Body Area Network (BAN), a series of stroke patient motion data collection experiments was conducted in conjunction with the Jiaxing 2nd Hospital Rehabilitation Centre in Zhejiang province, China. The collected motion samples were then investigated using various signal processing algorithms and pattern recognition techniques to achieve three major objectives: automatic impairment level classification to reduce the human effort involved in regular clinical assessment, single-index-based limb mobility evaluation to provide objective evidence in support of unified body function assessment standards, and training motion classification to enable home- or community-based rehabilitation training with reduced supervision. Finally, the study was further expanded by incorporating surface electromyography (sEMG) signals sampled during rehabilitation exercises as an alternative input to improve the accuracy of impairment level classification. The outcome of these investigations demonstrates that wearable technology can play an important role within a tele-rehabilitation system by providing objective, accurate and often real-time indications of the recovery process, as well as assistance with training management.
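    The abstract does not spell out a single pipeline, but the general idea of classifying impairment levels from IMU recordings can be sketched as follows: simple windowed time-domain features are extracted from each recording and fed to an off-the-shelf classifier. The feature set, window length, and the three illustrative impairment levels are assumptions made for the sake of the example, not details from the thesis.

# Illustrative impairment-level classification pipeline from IMU data;
# placeholder signals and labels, not the thesis's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, win=128):
    """Split a (samples x channels) recording into windows and compute
    simple per-channel time-domain features (mean, std, min, max)."""
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

rng = np.random.default_rng(2)
recordings = [rng.standard_normal((1024, 6)) for _ in range(20)]  # placeholder 6-axis IMU data
levels = np.arange(20) % 3                                        # illustrative impairment levels 0-2

X = np.vstack([window_features(r) for r in recordings])
y = np.repeat(levels, len(window_features(recordings[0])))        # one label per window

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())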

    Objective assessment of movement disabilities using wearable sensors

    The research presents a series of comprehensive analyses based on inertial measurements obtained from wearable sensors to quantitatively describe and assess human kinematic performance in tasks closely related to daily life activities. This is not only a direct application of human movement analysis but is also pivotal in assessing the progression of patients undergoing rehabilitation services. Moreover, the detailed analysis provides clinicians with greater insight for capturing movement disorders and unique ataxic features related to axial abnormalities that are not directly observable by clinicians.

    Spatiotemporal analysis of human actions using RGB-D cameras

    Markerless human motion analysis has strong potential to provide a cost-efficient solution for action recognition and body pose estimation. Many applications, including human-computer interaction, video surveillance, content-based video indexing, and automatic annotation, among others, will benefit from a robust solution to these problems. Depth sensing technologies in recent years have positively changed the climate of automated vision-based human action recognition, a problem deemed very difficult due to the various ambiguities inherent in conventional video. In this work, a large set of invariant spatiotemporal features is first extracted from skeleton joints (retrieved from a depth sensor) in motion and evaluated to establish a baseline performance. Next we introduce a discriminative Random Decision Forest-based feature selection framework capable of reaching impressive action recognition performance when combined with a linear SVM classifier. This approach improves upon the baseline obtained with the whole feature set while using significantly fewer features (one tenth of the original). The approach can also be used to provide insights into the spatiotemporal dynamics of human actions. A novel therapeutic action recognition dataset (WorkoutSU-10) is presented. We took advantage of this dataset as a benchmark in our tests to evaluate the reliability of the proposed methods. The dataset has recently been released publicly as a contribution to the action recognition community. In addition, an interactive action evaluation application is developed by utilizing the proposed methods to help with real-life problems such as fall detection in elderly people or automated therapy programs for patients with motor disabilities.
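    The feature-selection step can be approximated with a short sketch: importances from a random forest (used here as a stand-in for the Random Decision Forest framework described above) retain roughly one tenth of the spatiotemporal features, which are then classified with a linear SVM. The data, feature count, and the ten action classes are placeholders, not the WorkoutSU-10 dataset itself.

# Sketch of forest-importance feature selection followed by a linear SVM;
# random placeholder data standing in for skeleton-joint features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 300))   # e.g. 300 spatiotemporal features per clip
y = rng.integers(0, 10, size=500)     # 10 action classes (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
keep = np.argsort(forest.feature_importances_)[::-1][:X.shape[1] // 10]  # top ~10% of features

svm = LinearSVC(max_iter=5000).fit(X_tr[:, keep], y_tr)
print(accuracy_score(y_te, svm.predict(X_te[:, keep])))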

    Generalized Activity Assessment computed fully distributed within a Wireless Body Area Network

    Currently available wearables are usually based on a single sensor node with integrated capabilities for classifying different activities. The next generation of cooperative wearables could be able not only to identify activities, but also to evaluate them qualitatively using the data of several sensor nodes attached to the body, in order to provide detailed feedback for improving the execution. Especially within the application domains of sports and health care, such immediate feedback on the execution of body movements is crucial for (re-)learning and improving motor skills. To enable such systems for a broad range of activities, generalized approaches for human motion assessment within sensor networks are required. In this paper, we present a generalized trainable activity assessment chain (AAC) for the online assessment of periodic human activity within a wireless body area network. The AAC evaluates the execution of the separate movements of a previously trained activity on a fine-grained quality scale. We connect qualitative assessment with human knowledge by projecting the AAC onto the hierarchical decomposition of motion performed by the human body and by establishing the assessment on a kinematic evaluation of biomechanically distinct motion fragments. We evaluate the AAC in a real-world setting and show that it successfully distinguishes the movements of a correctly performed activity from faulty executions and provides detailed reasons for its assessment.
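    The following sketch is not the authors' AAC, but it illustrates the underlying idea of scoring individual repetitions of a trained periodic movement against statistics learned from correct executions. The synthetic signal, the equal-length alignment assumption, and the z-score-based quality measure are simplifying assumptions for illustration.

# Illustrative per-repetition quality scoring against a trained reference;
# not the authors' implementation.
import numpy as np

def train_reference(reps):
    """Per-sample mean and std over aligned training repetitions (equal length)."""
    stacked = np.stack(reps)
    return stacked.mean(axis=0), stacked.std(axis=0) + 1e-6

def quality_score(rep, mean, std):
    """Average absolute z-score deviation; smaller means closer to the trained movement."""
    return float(np.mean(np.abs((rep - mean) / std)))

rng = np.random.default_rng(4)
training = [np.sin(np.linspace(0, 2 * np.pi, 100))[:, None]
            + 0.05 * rng.standard_normal((100, 1)) for _ in range(10)]
mean, std = train_reference(training)

good = training[0]
faulty = good * 0.5                      # e.g. a repetition with reduced range of motion
print(quality_score(good, mean, std), quality_score(faulty, mean, std))
# The faulty execution receives a much larger deviation score, mirroring the
# idea of delimiting correct from faulty movements on a quality scale.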
