
    Range of motion measurements based on depth camera for clinical rehabilitation

    Dissertation for the degree of Master (Mestre) in Biomedical Engineering. In clinical rehabilitation, biofeedback increases the patient's motivation, which makes it one of the most effective motor rehabilitation mechanisms. In this field it is very helpful for the patient, and even for the therapist, to know the level of success and performance of the training process. Human motion tracking can provide relevant information for this purpose. Existing lab-based three-dimensional (3D) motion capture systems are capable of providing this information in real time. However, these systems still present some limitations when used in rehabilitation processes involving biofeedback. A new depth camera, the Microsoft Kinect™, was recently developed, overcoming the limitations associated with lab-based movement analysis systems. This depth camera is easy to use, inexpensive and portable. The aim of this work is to introduce a system into clinical practice for Range of Motion (ROM) measurements, using the Kinect™ sensor and providing real-time biofeedback. For this purpose, the ROM measurements were computed using the joint spatial coordinates provided by the official Microsoft Kinect™ Software Development Kit (SDK) and also using our own algorithm. The results were compared with triaxial accelerometer data, used as reference. The upper-limb movements studied were abduction, flexion/extension and internal/external rotation with the arm at 90 degrees of elevation. With our algorithm, the Mean Error (ME) was less than 1.5 degrees for all movements. Only in abduction did the Kinect™ Skeleton Tracking obtain comparable data; in the other movements its ME was an order of magnitude larger. Given these potential benefits, our method can be a useful tool for ROM measurements in clinics.
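    The abstract does not detail how the ROM angles are derived from the SDK's joint coordinates; a common approach is to take the angle between two body-segment vectors at the joint of interest. The sketch below is a hypothetical minimal version of that idea, not the authors' actual algorithm, and the example joint positions are invented.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by 3D points a-b-c
    (e.g. hip-shoulder-elbow for shoulder abduction)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding noise
    return math.degrees(math.acos(cos_t))

# Shoulder at the origin, hip straight below, elbow out to the side:
# the arm is abducted 90 degrees.
abduction = joint_angle((0.0, -1.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

    Applied per frame to the streamed skeleton, such an angle is what the real-time biofeedback would display.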

    Master of Science

    Thesis. Stroke is a leading cause of death and adult disability in the United States. Survivors lose abilities that were controlled by the affected area of the brain. Rehabilitation therapy is administered to help survivors regain control of lost functional abilities. The number of sessions that stroke survivors attend is limited by the availability of a clinic close to their residence and by the amount of time friends and family can devote to helping them commute, as most are incapable of driving. Home-based therapy using virtual reality and computer games has the potential to solve these issues, increasing the amount of independent therapy performed by patients. This thesis presents the design, development and testing of a low-cost system, potentially suitable for use in the home environment. The system is designed for rehabilitation of the impaired upper limb of stroke survivors. A Microsoft Kinect was used to track the position of the patient's hand, and the game requires the user to move the arm over increasingly large areas by sliding it on a support. Studies were performed with six stroke survivors and five control subjects to determine the feasibility of the system. Patients played the game for 6 to 10 days, and their game scores, range of motion and Fugl-Meyer scores were recorded for analysis. Statistically significant (p < 0.05) differences were found between the game scores of the first and last day of the study. Furthermore, acceptability surveys revealed that patients enjoyed playing the game, found this kind of therapy more enjoyable than conventional therapy, and were willing to use the system in the home environment. Future work on the system will focus on larger studies, improving the comfort of patients while playing the game, and developing new games that address cognitive issues and integrate art and therapy.

    Automatic behavior recognition in laboratory animals using Kinect

    Integrated Master's thesis (Tese de Mestrado Integrado). Bioengineering, Faculdade de Engenharia, Universidade do Porto. 201

    INCORPORATING MACHINE VISION IN PRECISION DAIRY FARMING TECHNOLOGIES

    The inclusion of precision dairy farming technologies in dairy operations is an area of increasing research and industry direction. Machine vision based systems are suitable for the dairy environment as they do not inhibit workflow, are capable of continuous operation, and can be fully automated. The research of this dissertation developed and tested three machine vision based precision dairy farming technologies tailored to the latest generation of RGB+D cameras. The first system tested various imaging approaches for the potential use of machine vision for automated dairy cow feed intake monitoring. The second system monitored the gradual change in body condition score (BCS) for 116 cows over a nearly 7-month period. Several automated BCS systems have been proposed by researchers, but none has monitored the gradual change in BCS over a duration of this magnitude. These gradual changes convey a great deal of beneficial and immediate information on the health condition of every individual cow being monitored. The third system performed automated dairy cow feature detection using Haar cascade classifiers to detect anatomical features, including the tailhead, hips, and rear regions of the cow body. These features were chosen to help machine vision applications determine if and where a cow is present in an image or video frame. Once the cow has been detected, it must then be automatically identified in order to keep the system fully automated; this was also studied with a machine vision based approach in this research as a complementary aspect to cow detection. Such systems have the potential to catch poor health conditions early, aid in balancing the diet of the individual cow, and help farm management to allocate resources, monetary and otherwise, in an appropriate and efficient manner.
    Several different applications of this research are also discussed, along with future directions, including the potential for additional automated precision dairy farming technologies, the integration of many of these technologies into a unified system, and the use of alternative, potentially more robust machine vision cameras.
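    Haar cascade classifiers of the kind mentioned above rest on one core primitive: rectangle-sum features evaluated in constant time over an integral image (summed-area table). The following is a minimal sketch of that primitive only, not tied to the dissertation's actual detector or training data.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle [x0..x1] x [y0..y1],
    in O(1) regardless of rectangle size."""
    s = ii[y1][x1]
    if x0:
        s -= ii[y1][x0 - 1]
    if y0:
        s -= ii[y0 - 1][x1]
    if x0 and y0:
        s += ii[y0 - 1][x0 - 1]
    return s

# A Haar feature is a difference of such rectangle sums,
# e.g. left column minus right column of a tiny 2x2 window.
img = [[1, 2],
       [3, 4]]
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 0, 1) - rect_sum(ii, 1, 0, 1, 1)  # (1+3)-(2+4)
```

    A trained cascade, such as those shipped with OpenCV, evaluates thousands of these features in stages, rejecting non-cow windows early.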

    Using the Microsoft Kinect to assess human bimanual coordination

    Optical marker-based systems are the gold standard for capturing three-dimensional (3D) human kinematics. However, these systems have various drawbacks: marker placement is time consuming, soft tissue movement introduces artifacts, and the systems are prohibitively expensive and non-portable. The Microsoft Kinect is an inexpensive, portable depth camera that can be used to capture 3D human movement kinematics. Numerous investigations have assessed the Kinect's ability to capture postural control and gait, but to date, no study has evaluated its capabilities for measuring spatiotemporal coordination. In order to investigate human coordination and coordination stability with the Kinect, a well-studied bimanual coordination paradigm (Kelso, 1984; Kelso, Scholz, & Schöner, 1986) was adapted. Nineteen participants performed ten trials of coordinated hand movements in either in-phase or anti-phase patterns of coordination to the beat of a metronome, which was incrementally sped up and slowed down. Continuous relative phase (CRP) and the standard deviation of CRP were used to assess coordination and coordination stability, respectively. Data from the Kinect were compared to a Vicon motion capture system using a mixed-model, repeated-measures analysis of variance and intraclass correlation coefficients (ICC(2,1)). The Kinect significantly underestimated CRP for the anti-phase coordination pattern (p < .0001) and overestimated it for the in-phase pattern (p < .0001). However, a high ICC value (r = .97) was found between the systems. For the standard deviation of CRP, the Kinect exhibited significantly higher variability than the Vicon (p < .0001) but was able to distinguish significant differences between patterns of coordination, with anti-phase variability being higher than in-phase (p < .0001). Additionally, the Kinect was unable to accurately capture the structure of coordination stability for the anti-phase pattern, and only poor agreement was found between systems on this measure (ICC r = .37). In conclusion, the Kinect was unable to accurately capture mean CRP. However, the high ICC between the two systems is promising, and the Kinect was able to distinguish between the coordination stability of in-phase and anti-phase coordination. The structure of variability as movement speed increased, though, was dissimilar to the Vicon, particularly for the anti-phase pattern. Some aspects of coordination are captured well by the Kinect while others are not. Detecting differences between bimanual coordination patterns and the stability of those patterns can be achieved using the Kinect, but researchers interested in the structure of coordination stability should exercise caution, since poor agreement was found between systems.
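    Continuous relative phase can be sketched from first principles: each limb's phase angle is derived from its position and velocity, and CRP is the wrapped difference of the two phases (near 0° for in-phase, near 180° for anti-phase movement). The version below is a simplified illustration assuming already-normalized signals, not the study's exact processing pipeline.

```python
import math

def phase_angle(pos, vel):
    """Phase of a normalized oscillator from its position and velocity."""
    return math.atan2(-vel, pos)

def continuous_relative_phase(x1, v1, x2, v2):
    """CRP time series in degrees: phase of limb 1 minus phase of limb 2,
    wrapped to [-180, 180)."""
    crp = []
    for p1, q1, p2, q2 in zip(x1, v1, x2, v2):
        d = math.degrees(phase_angle(p1, q1) - phase_angle(p2, q2))
        crp.append((d + 180.0) % 360.0 - 180.0)
    return crp

# Two hands moving perfectly in phase -> CRP stays at 0 degrees.
t = [i * 0.01 for i in range(200)]
x = [math.cos(s) for s in t]
v = [-math.sin(s) for s in t]
inphase = continuous_relative_phase(x, v, x, v)
```

    The standard deviation of this series over a trial is the coordination-stability measure the abstract refers to.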

    The use of consumer depth cameras for calculating body segment parameters.

    Body segment parameters (BSPs) are pivotal to a number of key analyses within sports and healthcare. Accuracy is paramount, as investigations have shown small errors in BSPs to have a significant impact on subsequent analyses, particularly when analysing the dynamics of high-acceleration movements. There are many techniques for estimating BSPs; however, the majority are complex, time consuming, and make large assumptions about the underlying structure of the human body, leading to considerable errors. Interest is increasingly turning towards obtaining person-specific BSPs from 3D scans; however, the majority of current scanning systems are expensive, complex, require skilled operators, and require lengthy post-processing of the captured data. The purpose of this study was to develop a low-cost 3D scanning system capable of estimating accurate and reliable person-specific segmental volume, forming a fundamental first step towards calculation of the full range of BSPs. A low-cost 3D scanning system was developed, comprising four Microsoft Kinect RGB-D sensors and capable of estimating person-specific segmental volume in a scanning operation taking less than one second. Individual sensors were calibrated prior to first use, overcoming inherent distortion of the 3D data. Scans from each of the sensors were aligned with one another via an initial extrinsic calibration process, producing 360° colour-rendered 3D scans. A scanning protocol was developed, designed to limit movement due to postural sway and breathing throughout the scanning operation. Scans were post-processed to remove discontinuities at edges, and parameters of interest were calculated using a combination of manual digitisation and automated algorithms. The scanning system was validated using a series of geometric objects representative of human body segments, showing high reliability and a systematic overestimation of scan-derived measurements.
    Scan-derived volumes of living human participants were also compared to those calculated using a typical geometric BSP model. Results showed close agreement; however, absolute differences could not be quantified owing to the lack of gold-standard data. The study suggests the scanning system would be well received by practitioners, offering many advantages over current techniques. However, future work is required to further characterise the scanning system's absolute accuracy.
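    The abstract does not say how volume is computed from the aligned scans. One common, simple approach slices the scan along the segment's long axis and models each cross section as an ellipse, which is also how classic geometric BSP models work. A hypothetical sketch of that idea (the study's actual method may differ):

```python
import math

def segment_volume(slices, dz):
    """Approximate a body segment's volume from stacked cross sections.
    Each slice is modelled as an ellipse with measured width w and
    depth d (area pi*w*d/4), extruded by the slice thickness dz."""
    return sum(math.pi * w * d / 4.0 * dz for w, d in slices)

# Sanity check against a known shape: a cylinder of radius 0.05 m and
# height 0.3 m, sampled as 30 identical slices of 0.01 m thickness.
vol = segment_volume([(0.1, 0.1)] * 30, 0.01)
```

    With width/depth extents taken directly from the point cloud, this kind of slice integration avoids fitting a full surface mesh.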

    Computational Modeling of Facial Response for Detecting Differential Traits in Autism Spectrum Disorders

    This dissertation proposes novel computational modeling and computer vision methods for the analysis and discovery of differential traits in subjects with Autism Spectrum Disorders (ASD) using video and three-dimensional (3D) images of the face and facial expressions. ASD is a neurodevelopmental disorder that impairs an individual's nonverbal communication skills. This work studies ASD through the pathophysiology of facial expressions, which may manifest as atypical responses in the face. State-of-the-art psychophysical studies mostly employ naïve human raters to visually score atypical facial responses of individuals with ASD, which may be subjective, tedious, and error prone. A few quantitative studies use intrusive sensors on the face of the subjects with ASD, which, in turn, may inhibit or bias their natural facial responses. This dissertation proposes non-intrusive computer vision methods to alleviate these limitations in the investigation of differential traits in the spontaneous facial responses of individuals with ASD. Two IRB-approved psychophysical studies are performed involving two age-matched groups of subjects: one diagnosed with ASD and the other typically developing (TD). The facial responses of the subjects are computed from their facial images using the proposed computational models and then statistically analyzed to infer the differential traits of the group with ASD. A novel computational model is proposed to represent the large volume of 3D facial data in a small, pose-invariant, Frenet frame-based feature space. The inherent pose invariance of the proposed features alleviates the need for an expensive 3D face registration in the pre-processing step. The proposed modeling framework is not only computationally efficient but also offers competitive performance in 3D face and facial expression recognition tasks when compared with state-of-the-art methods.
    This computational model is applied in the first experiment to quantify subtle facial muscle response from the geometry of 3D facial data. Results show a statistically significant asymmetry in the activation of a specific pair of facial muscles (p < 0.05) for the group with ASD, which suggests the presence of a psychophysical trait (also known as an 'oddity') in the facial expressions. For the first time in the ASD literature, the facial action coding system (FACS) is employed to classify the spontaneous facial responses based on facial action units (FAUs). Statistical analyses reveal a significantly (p < 0.01) higher prevalence of the smile expression (FAU 12) for the ASD group when compared with the TD group. The high prevalence of smiling co-occurred with significantly averted gaze (p < 0.05) in the group with ASD, which is indicative of impaired reciprocal communication. The metric associated with incongruent facial and visual responses suggests a behavioral biomarker for ASD. The second experiment shows a higher prevalence of mouth frown (FAU 15) and significantly lower correlations between the activation of several FAU pairs (p < 0.05) in the group with ASD when compared with the TD group. The proposed computational modeling offers promising biomarkers which may aid in the early detection of subtle ASD-related traits and thus enable effective intervention strategies in the future.
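    To make the "Frenet frame-based" idea concrete: a Frenet frame is a local tangent/normal/binormal triad attached to a 3D curve, and coordinates expressed in such frames are unchanged by rigid motion of the whole curve, which is the pose-invariance the abstract exploits. The discrete construction below is a generic illustration, not the dissertation's specific feature extractor.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def frenet_frames(points):
    """Discrete Frenet frames (tangent, normal, binormal) at the interior
    vertices of a 3D polyline. Assumes consecutive points are not
    collinear, so the binormal is well defined."""
    frames = []
    for i in range(1, len(points) - 1):
        p0, p1, p2 = points[i - 1], points[i], points[i + 1]
        t = normalize([p2[j] - p0[j] for j in range(3)])  # tangent
        a = [p1[j] - p0[j] for j in range(3)]
        b = [p2[j] - p1[j] for j in range(3)]
        bn = normalize(cross(a, b))                       # binormal
        n = cross(bn, t)                                  # normal
        frames.append((t, n, bn))
    return frames

# Points on a counter-clockwise unit circle in the xy-plane:
# the binormal should come out as (0, 0, 1) at every interior vertex.
pts = [(math.cos(s), math.sin(s), 0.0) for s in (0.0, 0.5, 1.0, 1.5)]
frames = frenet_frames(pts)
```

    Features built from these triads (e.g. curvature-like quantities along facial curves) need no prior registration, since rotating or translating the face leaves them unchanged.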

    Marker-less motion capture for biomechanical analysis using the Kinect sensor

    Motion capture systems are gaining more and more importance in different fields of research. In biomechanics, marker-based systems have always been used as an accurate and precise method to capture motion. However, attaching markers to the subject is a time-consuming and laborious process. This problem has given rise to a new concept of motion capture based on marker-less systems, by means of which motion can be recorded without attaching any markers to the skin of the subject, by capturing colour-depth data of the subject in movement. This thesis has investigated marker-less motion capture using the Kinect sensor and has compared the two motion capture approaches, marker-based and marker-less, by analysing the results of several captured motions. Two takes were recorded, and only the motion of the pelvis and lower-limb segments was analysed. The methodology consisted of capturing the motions using the marker-based and marker-less systems simultaneously and then processing the data with specific software. Finally, the hip flexion, hip adduction, knee and ankle angles obtained through the two systems were compared. In order to obtain the three-dimensional joint angles with the marker-less system, software named iPi Soft was introduced to process the data from the Kinect sensor. The results of the two systems were compared and thoroughly discussed, so as to assess the accuracy of the Kinect system.