ASSESSMENT OF KINEMATIC CMJ DATA USING A DEEP LEARNING ALGORITHM-BASED MARKERLESS MOTION CAPTURE SYSTEM
The purpose of this study was to compare the performance of a video-based markerless motion capture system to a conventional marker-based approach during a countermovement jump (CMJ). Twenty-three healthy participants performed CMJs while data were collected simultaneously via a marker-based (Oqus) and a 2D video-based motion capture system (Miqus, both: Qualisys). The video data were further processed into 3D data using Theia3D (Theia Markerless Inc.). Excellent agreement between systems, with ICCs >0.99, was found for jump height (mean average error of -0.27 cm) and ankle and knee sagittal plane angles (RMSD < 5°). The hip joint showed an average RMSD of 21° with a strong correlation of 0.80. As such, the markerless system is capable of detecting jump height, sagittal ankle and knee joint angles, and 3D joint positions of a CMJ to a high accuracy.
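The two agreement statistics reported above (a signed mean average error for jump height, RMSD for joint-angle curves) can be sketched in a few lines. The paired jump heights below are hypothetical values chosen for illustration, not the study's data.

```python
import math

def mean_error(a, b):
    """Mean average error between paired measurements (signed, so it
    captures a systematic offset between the two systems)."""
    return sum(x - y for x, y in zip(a, b)) / len(a)

def rmsd(a, b):
    """Root-mean-square difference between paired samples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Hypothetical paired jump heights (cm): markerless vs. marker-based.
markerless = [29.9, 28.2, 32.6, 25.5]
marker = [30.1, 28.4, 33.0, 25.7]

mae = mean_error(markerless, marker)  # negative => markerless reads low
err = rmsd(markerless, marker)
```

The signed error exposes a systematic bias (here the markerless values sit slightly low), while RMSD summarizes the total sample-by-sample disagreement.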
Using the Microsoft Kinect to assess human bimanual coordination
Optical marker-based systems are the gold standard for capturing three-dimensional (3D) human kinematics. However, these systems have various drawbacks, including time-consuming marker placement and soft tissue movement artifact, and they are prohibitively expensive and non-portable. The Microsoft Kinect is an inexpensive, portable depth camera that can be used to capture 3D human movement kinematics. Numerous investigations have assessed the Kinect's ability to capture postural control and gait, but to date, no study has evaluated its capabilities for measuring spatiotemporal coordination. In order to investigate human coordination and coordination stability with the Kinect, a well-studied bimanual coordination paradigm (Kelso, 1984; Kelso, Scholz, & Schöner, 1986) was adapted. Nineteen participants performed ten trials of coordinated hand movements in either in-phase or anti-phase patterns of coordination to the beat of a metronome, which was incrementally sped up and slowed down. Continuous relative phase (CRP) and the standard deviation of CRP were used to assess coordination and coordination stability, respectively. Data from the Kinect were compared to a Vicon motion capture system using a mixed-model, repeated-measures analysis of variance and intraclass correlation coefficients (ICC(2,1)). The Kinect significantly underestimated CRP for the anti-phase coordination pattern (p < .0001) and overestimated it for the in-phase pattern (p < .0001). However, a high ICC value (r = .97) was found between the systems. For the standard deviation of CRP, the Kinect exhibited significantly higher variability than the Vicon (p < .0001) but was able to distinguish significant differences between patterns of coordination, with anti-phase variability being higher than in-phase (p < .0001). Additionally, the Kinect was unable to accurately capture the structure of coordination stability for the anti-phase pattern.
Finally, agreement was found between systems using the ICC (r = .37). In conclusion, the Kinect was unable to accurately capture mean CRP. However, the high ICC between the two systems is promising, and the Kinect was able to distinguish between the coordination stability of in-phase and anti-phase coordination. The structure of variability as movement speed increased was nonetheless dissimilar to the Vicon, particularly for the anti-phase pattern. Some aspects of coordination are captured well by the Kinect while others are not. Detecting differences between bimanual coordination patterns and the stability of those patterns can be achieved using the Kinect. However, researchers interested in the structure of coordination stability should exercise caution, since poor agreement was found between systems.
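Continuous relative phase, as used above, is the pointwise difference between the two limbs' phase-plane angles. The sketch below assumes one common convention (atan2 of velocity over position, without amplitude normalization) and synthetic anti-phase signals; it is illustrative, not the study's actual processing pipeline.

```python
import math

def phase_angle(pos, dt):
    """Phase-plane angle of each interior sample: atan2(velocity, position),
    with velocity estimated by a central difference."""
    vel = [(pos[i + 1] - pos[i - 1]) / (2 * dt) for i in range(1, len(pos) - 1)]
    return [math.atan2(v, p) for p, v in zip(pos[1:-1], vel)]

def crp(pos_a, pos_b, dt):
    """Continuous relative phase: pointwise difference of the two limbs'
    phase angles (radians)."""
    return [a - b for a, b in zip(phase_angle(pos_a, dt), phase_angle(pos_b, dt))]

# Synthetic 1 Hz hand trajectories sampled at 100 Hz; the right hand is an
# exact mirror of the left, i.e. a perfect anti-phase pattern.
dt = 0.01
left = [math.sin(2 * math.pi * 1.0 * i * dt) for i in range(200)]
right = [-s for s in left]

rel = crp(left, right, dt)  # every sample sits at +/- pi (180 degrees)
```

For a perfect anti-phase pair the magnitude of CRP is 180° at every sample; the standard deviation of CRP over a trial then quantifies how stable the coordination pattern is.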
The Influence Of Sex And Body Size On The Validity Of The Microsoft Kinect For Measuring Knee Motion During Landing
Measuring knee motion during landing is one method to evaluate knee injury risk. Three-dimensional (3D) motion capture is often inaccessible, and the Microsoft Kinect is an alternative for measuring knee motion. The primary objective was to evaluate the influence of sex and body size on the validity of the Kinect for measuring knee motion during landing. A secondary objective was to compare knee motion between females and males with high and low body mass index (BMI). We assessed frontal plane knee kinematics of 40 participants (10 per group of females and males with high and low BMI) during landing with the Kinect and 3D motion capture. Good agreement between methods was found for the knee-ankle separation ratio across groups, but there was low agreement between methods for measuring knee abduction. The high-BMI group, regardless of sex, had more knee abduction than the low-BMI group when measured with motion capture.
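The knee-ankle separation ratio mentioned above is commonly computed as frontal-plane knee separation divided by ankle separation. A minimal sketch, with hypothetical mediolateral coordinates (the values below are not from the study):

```python
def knee_ankle_separation_ratio(knee_left, knee_right, ankle_left, ankle_right):
    """Ratio of frontal-plane knee separation to ankle separation, from the
    mediolateral coordinates (m) of each joint. Values well below 1.0
    suggest the knees collapsing inward (abduction/valgus)."""
    return abs(knee_left - knee_right) / abs(ankle_left - ankle_right)

# Hypothetical mediolateral joint positions at peak landing depth (m):
# knees 0.30 m apart while the ankles are 0.50 m apart.
kasr = knee_ankle_separation_ratio(0.18, -0.12, 0.25, -0.25)
```

Because the ratio only needs two joint-center separations rather than full 3D segment orientations, it is a comparatively forgiving measure for a depth camera, which is consistent with the better between-method agreement reported for it.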
Multi-sensor physical activity measurement in early childhood
The purpose of this dissertation was to develop, validate, and implement multi-sensor approaches for measuring physical activity and social/contextual covariates in 2-5 year-old children via wearable-, wireless communication-, and infrared-depth camera-based technologies. In Chapter 2, a three-phased study design was used to validate a method for estimating metered distances between wearable devices using accelerometer-derived Bluetooth signals. Results showed that distances, up to 20 meters, can be predicted between a single Bluetooth beacon and receiver using a Random Forest algorithm. When multiple Bluetooth beacons and receivers were used within the same environment, a moving average filter was required to recover observations lost due to noise. Overall, simulation and validation data suggest that accelerometer-derived Bluetooth signals can be used in studies of physical activity co-participation to 1) estimate metered distances between devices using a single beacon-receiver paradigm, as well as to 2) estimate the proportion of time that devices are proximal when using multiple beacons and receivers. Chapter 3 characterized the relationship between objectively measured physical activity and dyadic spatial proximities in 2 year-olds and their parents. Data revealed that the overall proportions of time that children and their parents spent in total physical activity were positively associated, and time series data revealed that this relationship remained consistent when analyzed hour-to-hour. Time spent engaged in sedentary behavior was also positively associated between children and parents; however, there was no association between child and parent moderate-vigorous physical activity volumes. Dyadic proximity results showed that girls spent more time in joint physical activity with their mothers than boys. 
Furthermore, children who engaged in >60 minutes of daily moderate-vigorous physical activity spent an additional 30 minutes in joint total physical activity with their mothers each day, on average, when compared to children who engaged in <60 minutes of daily moderate-vigorous physical activity. Girls who engaged in >60 minutes of daily moderate-vigorous physical activity participated in joint physical activity with their mothers across wider relative distances, on average, than did boys, who engaged in physical activity at closer relative distances to their mothers. In Chapter 4, an original computer vision algorithm was applied to infrared-depth camera data for the purpose of converting three-dimensional videos into triaxial physical activity signals in young children. Physical activity data were collected in 2-5 year-old children during 20-minute semi-structured, indoor child-parent dyadic play sessions. Play session video data were converted into triaxial physical activity signals using a multi-phased computer vision algorithm for each child. Computer vision-derived triaxial physical activity cut points for 2-5 year-olds were calibrated against a direct observation reference system using a machine learning algorithm. Results revealed that triaxial activity signals, as measured by a dual-sensor camera, can be used to estimate both physical activity intensities and volumes in young children without the use of wearable technology. Collectively, these studies show that multi-sensor approaches to physical activity measurement are a valid means by which to measure physical activity and social/contextual covariates in young children using either wearable sensors or computer vision.
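Chapter 2's moving-average step for recovering Bluetooth observations lost to noise can be sketched as a centered window that simply skips dropped packets. The window size and the signal trace below are illustrative assumptions, not the dissertation's actual parameters.

```python
def moving_average(series, window=3):
    """Centered moving average that skips None entries (observations lost
    to noise or packet drops), so gaps are filled from surviving neighbors."""
    half = window // 2
    out = []
    for i in range(len(series)):
        vals = [v for v in series[max(0, i - half):i + half + 1] if v is not None]
        out.append(sum(vals) / len(vals) if vals else None)
    return out

# Hypothetical received-signal trace (dBm) with two dropped observations.
rssi = [-60, None, -62, -61, None, -63]
smoothed = moving_average(rssi)  # both gaps recovered from neighbors
```

A filter like this only bridges gaps shorter than the window; longer outages would still surface as missing values, which is why recovery was needed specifically for isolated observations lost to noise.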
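Chapter 4's cut-point step — mapping a triaxial activity magnitude onto an intensity class — can be sketched as below. The thresholds are placeholders for illustration only, not the dissertation's calibrated cut points.

```python
import math

def vector_magnitude(ax, ay, az):
    """Resultant magnitude of one triaxial activity sample."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def classify_intensity(magnitude, cut_points):
    """Map an activity magnitude to an intensity label using ascending
    (upper_bound, label) cut points."""
    for upper, label in cut_points:
        if magnitude < upper:
            return label
    return cut_points[-1][1]

# Placeholder cut points -- NOT the dissertation's calibrated values.
CUTS = [(100.0, "sedentary"), (500.0, "light"), (math.inf, "moderate-vigorous")]

label = classify_intensity(vector_magnitude(30.0, 40.0, 120.0), CUTS)
```

Calibration against direct observation amounts to choosing these upper bounds so that the predicted labels agree with the observed intensities as often as possible.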