A Computer Vision Method for Estimating Velocity from Jumps
Athletes routinely undergo fitness evaluations to assess their training
progress. Typically, these evaluations require a trained professional who
utilizes specialized equipment like force plates. For the assessment, athletes
perform drop and squat jumps, and key variables such as velocity, flight
time, and time to stabilization are measured. However, amateur
athletes may not have access to professionals or equipment that can provide
these assessments. Here, we investigate the feasibility of estimating key
variables using video recordings. We focus on jump velocity as a starting point
because it is highly correlated with other key variables and is important for
determining posture and lower-limb capacity. We find that velocity can be
estimated with a high degree of precision across a range of athletes, with an
average R-value of 0.71 (SD = 0.06).
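
As a rough sketch of how such an estimate could work (an illustration under stated assumptions, not the paper's actual method), the Python snippet below derives takeoff velocity from flight time using the ballistic relation v = g * t_flight / 2, which holds when the jumper takes off and lands at the same height. It assumes a per-frame vertical foot position, e.g. an ankle keypoint from a pose estimator, already converted to metres; the function name and ground tolerance are invented for this example.

    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def takeoff_velocity(foot_y: np.ndarray, fps: float,
                         ground_tol: float = 0.02) -> float:
        """Estimate vertical takeoff velocity (m/s) from per-frame foot height.

        foot_y: vertical foot position per frame, in metres, with standing
                ground level at 0 and up positive (e.g. a pose keypoint).
        fps:    video frame rate in frames per second.
        """
        airborne = foot_y > ground_tol   # frames with the feet off the ground
        if not airborne.any():
            raise ValueError("no airborne frames detected")
        frames = np.flatnonzero(airborne)
        t_flight = (frames[-1] - frames[0] + 1) / fps  # flight time in seconds
        return G * t_flight / 2.0        # v = g * t / 2

For example, a flight time of 0.5 s corresponds to a takeoff velocity of about 2.45 m/s.
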
APE-V: athlete performance evaluation using video
2021 Fall.

Athletes typically undergo regular evaluations by trainers and coaches to assess performance and injury risk. One of the most popular movements to examine is the vertical jump, a sport-independent means of assessing both lower-extremity risk and power. Specifically, maximal-effort countermovement and drop jumps performed on bilateral force plates provide a wealth of metrics; however, detailed evaluation of this movement requires specialized equipment (force plates) and trained experts to interpret results, limiting its use. Computer vision techniques applied to videos of such movements are a less expensive alternative for extracting such metrics. Blanchard et al. collected a dataset of 89 athletes performing these movements and showcased how OpenPose could be applied to the data. However, athlete error calls into question 46.2% of the movements; in these cases, an expert assessor would have the athlete redo the movement to eliminate the error. Here, I augmented the Blanchard et al. dataset with expert labels of error and established benchmark performance on automatic error identification. In total, 14 different types of errors were identified by trained annotators. My benchmark models identified errors with an F1 score of 0.710 and a Kappa of 0.457 (Kappa measures agreement above chance).
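
To make the reported metrics concrete: F1 is the harmonic mean of precision and recall, and Cohen's Kappa is kappa = (p_o - p_e) / (1 - p_e), the observed agreement p_o corrected for the agreement p_e expected by chance. The snippet below is a minimal, hypothetical illustration of computing both with scikit-learn; the label arrays are invented for the example, not data from the thesis.

    from sklearn.metrics import cohen_kappa_score, f1_score

    # Hypothetical per-movement labels: 1 = movement contains an error, 0 = clean.
    expert_labels = [1, 0, 1, 1, 0, 0, 1, 0]  # trained-annotator ground truth
    model_labels  = [1, 0, 0, 1, 0, 1, 1, 0]  # benchmark model predictions

    f1 = f1_score(expert_labels, model_labels)              # harmonic mean of precision and recall
    kappa = cohen_kappa_score(expert_labels, model_labels)  # chance-corrected agreement

    print(f"F1 = {f1:.3f}, Kappa = {kappa:.3f}")

On the commonly used Landis and Koch scale, a Kappa of 0.457 falls in the "moderate agreement" band (0.41-0.60).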