11 research outputs found

    Multidimensional ground reaction forces and moments from wearable sensor accelerations via deep learning

    Objective: Monitoring athlete internal workload exposure, including prevention of catastrophic non-contact knee injuries, relies on the existence of a custom early-warning detection system. This system must be able to estimate accurate, reliable, and valid musculoskeletal joint loads for sporting maneuvers in near real time and during match play. However, current methods are constrained to laboratory instrumentation, are labor and cost intensive, and require highly trained specialist knowledge, thereby limiting their ecological validity and large-scale deployment. Methods: Here we show that kinematic data obtained from wearable sensor accelerometers, in lieu of embedded force platforms, can leverage recent supervised learning techniques to predict in-game, near real-time, multidimensional ground reaction forces and moments (GRF/M). Competing convolutional neural network (CNN) deep learning models were trained using laboratory-derived stance-phase GRF/M data and simulated sensor accelerations for running and sidestepping maneuvers derived from nearly half a million legacy motion trials. Predictions were then made from each model, driven by five sensor accelerations recorded during independent inter-laboratory data capture sessions. Results: Despite adversarial conditions, the proposed deep learning workbench achieved correlations to ground truth, by GRF component, of 0.9663 (vertical) and 0.9579 (anterior), both for running, and 0.8737 (lateral) for sidestepping. Conclusion: The lessons learned from this study will facilitate the use of wearable sensors in conjunction with deep learning to accurately estimate near real-time on-field GRF/M. Significance: Coaching, medical, and allied health staff can use this technology to monitor a range of joint loading indicators during game play, with the ultimate aim of minimizing the occurrence of non-contact injuries in elite and community-level sports.
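The abstract scores predictions by their correlation to ground truth per GRF component. A minimal sketch of that evaluation metric (Pearson's r between predicted and measured stance-phase waveforms) is shown below; the arrays and the sinusoidal "GRF shape" are hypothetical stand-ins, not the study's data.

```python
# Illustrative sketch (not the paper's code): scoring a predicted GRF/M
# waveform against force-platform ground truth with Pearson's r, the
# metric the abstract reports per GRF component.
import numpy as np

def pearson_r(predicted: np.ndarray, measured: np.ndarray) -> float:
    """Pearson correlation between two 1-D stance-phase waveforms."""
    p = predicted - predicted.mean()
    m = measured - measured.mean()
    return float((p @ m) / (np.linalg.norm(p) * np.linalg.norm(m)))

# Toy example: a "prediction" that tracks the truth with a small perturbation.
t = np.linspace(0.0, 1.0, 100)           # normalized stance phase
truth = np.sin(np.pi * t)                # idealized vertical GRF shape
pred = truth + 0.02 * np.cos(7 * t)      # deterministic perturbation
print(round(pearson_r(pred, truth), 3))
```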

    ESTIMATION OF LOWER LIMB KINETICS FROM LANDMARKS DURING SIDESTEPPING VIA ARTIFICIAL NEURAL NETWORKS

    The purpose of this study was to determine the validity of kinetics estimated from 3D coordinates of body landmarks during sidestepping by artificial neural networks (ANN). 71 male college professional soccer athletes performed sidestepping in two directions (left and right) and at two cutting angles (45° and 90°), three times per task, for a total of 12 trials. Coordinates of reflective markers, ground reaction forces (GRF), and lower limb joint moments were measured. The coordinates of 18 body landmarks, such as joint centers, were obtained from the reflective markers and used as inputs to estimate GRF and lower limb joint moments in an ANN of the multilayer perceptron type. Most of the kinetics estimated by the ANN showed strong correlation (r > 0.9) with the measured results; only a few estimated kinetic curves differed significantly from the measured ones, and only at a few time points. The ANN could accurately estimate kinetics from the coordinates of body landmarks during sidestepping.
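The multilayer-perceptron mapping described above (18 landmark coordinates in, GRF plus joint moments out) can be sketched as a forward pass; the hidden-layer size, output dimensionality, and random weights below are hypothetical, so only the data flow is faithful to the abstract.

```python
# Hypothetical shape sketch of an MLP mapping 18 landmarks x 3 coordinates
# to GRF (3 components) plus lower-limb joint moments (assumed 9 values).
# Weights are random; no training is performed here.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden layer with tanh activation, linear output layer."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

n_in, n_hidden, n_out = 18 * 3, 64, 12   # sizes other than n_in are assumed
W1 = rng.normal(size=(n_in, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_out)) * 0.1
b2 = np.zeros(n_out)

frame = rng.normal(size=(1, n_in))       # one motion-capture frame
kinetics = mlp_forward(frame, W1, b1, W2, b2)
print(kinetics.shape)                    # (1, 12)
```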

    PREDICTING 3D GROUND REACTION FORCE FROM 2D VIDEO VIA NEURAL NETWORKS IN SIDESTEPPING TASKS

    Sports science practitioners often measure ground reaction forces (GRFs) to assess performance, rehabilitation, and injury risk. However, recording of GRFs during dynamic tasks has historically been limited to lab settings. This work aims to use neural networks (NNs) to predict three-dimensional (3D) GRFs from pose-estimation keypoints determined from 2D video data. Two different NNs were trained on a dataset containing 1474 samples from 14 participants, and their prediction accuracy was compared with ground-truth force data. Both NNs showed correlation coefficients ranging from 0.936 to 0.954 and normalised root mean square errors from 11.05% to 13.11% for anterior-posterior and vertical GRFs, with poorer results in the medio-lateral direction. This study demonstrates the feasibility and utility of predicting GRFs from 2D video footage.
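The normalised RMSE figures quoted above can be computed as sketched below; range normalisation is a common convention, though the paper may normalise differently, and the toy GRF values are invented.

```python
# Sketch of a range-normalised RMSE, the style of error metric quoted in
# the abstract (11.05%-13.11%). The data here are toy values, not the study's.
import numpy as np

def nrmse_percent(predicted, measured):
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    return 100.0 * rmse / (measured.max() - measured.min())

measured = np.array([0.0, 0.5, 1.2, 2.4, 1.1, 0.2])   # toy vertical GRF (BW)
predicted = measured + 0.12                            # constant offset error
print(round(nrmse_percent(predicted, measured), 2))    # 5.0
```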

    CREATING VIRTUAL FORCE PLATFORMS FOR CUTTING MANEUVERS FROM KINEMATIC DATA BASED ON LSTM NEURAL NETWORKS

    The precise measurement of ground reaction forces and moments (GRF/M) usually requires stationary equipment and is, therefore, only partly feasible for field measurements. In this work we propose a method to derive GRF/M time series from motion-capture marker trajectories for cutting maneuvers (CM) using a long short-term memory (LSTM) neural network. We used a dataset containing 637 CM motion files from 70 participants and trained two-layer LSTM neural networks to predict the GRF/M signals of two force platforms. A five-fold cross-validation resulted in correlation coefficients ranging from 0.870 to 0.977 and normalized root mean square errors from 3.51% to 9.99% between predicted and measured GRF/M. In the future, this method can be used not only to simplify lab measurements but also to determine biomechanical parameters in real-world situations.
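The five-fold cross-validation mentioned above can be sketched as an index partition over the 637 motion files; this is a generic fold split for illustration, and a faithful replication would likely group folds by participant rather than by file.

```python
# Minimal sketch of five-fold cross-validation index splitting for a
# dataset of 637 motion files. Simple random splitting is shown; the
# study's exact fold assignment is not specified in the abstract.
import numpy as np

def k_fold_indices(n_samples: int, k: int = 5, seed: int = 0):
    """Yield (train_idx, test_idx) pairs covering every sample exactly once."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_samples)
    folds = np.array_split(order, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

n_trials = 637   # number of cutting-maneuver files in the dataset
sizes = [len(test) for _, test in k_fold_indices(n_trials)]
print(sizes)     # [128, 128, 127, 127, 127]
```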

    Both a single sacral marker and the whole-body center of mass accurately estimate peak vertical ground reaction force in running.

    While running, the human body absorbs repetitive shocks with every step. These shocks can be quantified by the peak vertical ground reaction force (F_v,max). A force plate is the gold standard method (GSM) for measuring it but is not always at hand; in that case, a motion capture system might be an alternative if it accurately estimates F_v,max. The purpose of this study was to estimate F_v,max from motion capture data and validate the estimates against force plate-based measures. One hundred and fifteen runners participated in this study and ran at 9, 11, and 13 km/h. Force data (1000 Hz) and whole-body kinematics (200 Hz) were acquired with an instrumented treadmill and an optoelectronic system, respectively. The vertical ground reaction force was reconstructed from either the whole-body center of mass (COM-M) or sacral marker (SACR-M) accelerations, calculated as the second derivative of their respective positions and low-pass filtered with a fourth-order Butterworth filter at several cutoff frequencies (2-20 Hz). The most accurate estimations of F_v,max were obtained using 5 and 4 Hz cutoff frequencies for the filtering of COM and sacral marker accelerations, respectively. GSM, COM-M, and SACR-M were not significantly different at 11 km/h but were at 9 and 13 km/h. The comparison between GSM and COM-M or SACR-M at each speed showed root mean square errors (RMSE) smaller than or equal to 0.17 BW (≤6.5%) and no systematic bias at 11 km/h but small systematic biases at 9 and 13 km/h (≤0.09 BW). COM-M gave systematic biases three times smaller and RMSE two times smaller than SACR-M. The findings of this study support the use of either COM-M or SACR-M, with data filtered at 5 and 4 Hz respectively, to estimate F_v,max during level treadmill runs at endurance speeds.
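The reconstruction described above follows from Newton's second law: F_v = m(g + a_COM), with a_COM the second derivative of COM height. A numerical sketch is given below; the sinusoidal COM trajectory, runner mass, and oscillation amplitude are assumed, and the Butterworth filtering step the paper applies is omitted for brevity.

```python
# Numerical sketch of vertical-GRF reconstruction from COM motion:
# F_v = m * (g + a_com), with a_com the second time derivative of COM
# height. The COM trajectory is a synthetic sinusoid; mass, amplitude,
# and step frequency are assumed, and no low-pass filtering is applied.
import numpy as np

g = 9.81                       # gravitational acceleration (m/s^2)
mass = 70.0                    # runner mass (kg), assumed
fs = 200.0                     # kinematic sampling rate from the abstract (Hz)
f_step = 1.4                   # assumed vertical-oscillation frequency (Hz)

t = np.arange(0.0, 1.0, 1.0 / fs)
com_height = 1.0 - 0.04 * np.cos(2 * np.pi * f_step * t)   # COM height (m)

a_com = np.gradient(np.gradient(com_height, t), t)   # second derivative (m/s^2)
f_v = mass * (g + a_com)                             # vertical GRF (N)
f_v_max_bw = f_v.max() / (mass * g)                  # peak force in body weights
print(round(float(f_v_max_bw), 3))
```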

    Indirect Estimation of Vertical Ground Reaction Force from a Body-Mounted INS/GPS Using Machine Learning

    Vertical ground reaction force (vGRF) can be measured by force plates or instrumented treadmills, but their application is limited to indoor environments. Insoles remove this restriction but suffer from low durability (several hundred hours). Therefore, interest in the indirect estimation of vGRF using inertial measurement units and machine learning techniques has increased. This paper presents a methodology for indirectly estimating vGRF and other features used in gait analysis from measurements of a wearable GPS-aided inertial navigation system (INS/GPS) device. A set of 27 features was extracted from the INS/GPS data. Feature analysis showed that six of these features suffice to provide precise estimates of 11 different gait parameters. Bagged ensembles of regression trees were then trained and used for predicting gait parameters for a dataset from the test subject from whom the training data were collected and for a dataset from a subject for whom no training data were available. The prediction accuracies for the latter were significantly worse than for the first subject but still sufficiently good. K-nearest neighbor (KNN) and long short-term memory (LSTM) neural networks were then used for predicting vGRF and ground contact times. The KNN yielded a lower normalized root mean square error than the neural network for vGRF predictions but cannot detect new patterns in force curves.
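The k-nearest-neighbor regression used above for vGRF prediction can be sketched in a few lines: average the targets of the k closest training feature vectors. The one-dimensional features and targets below are synthetic stand-ins for the INS/GPS features, not the paper's data.

```python
# Toy NumPy version of k-nearest-neighbour regression: predict the mean
# target of the k training points closest to the query (Euclidean distance).
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """Mean target of the k nearest training points."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    return y_train[nearest].mean()

# Synthetic 1-D relationship: target is twice the feature value.
X = np.arange(10.0).reshape(-1, 1)
y = 2.0 * X.ravel()
print(knn_predict(X, y, np.array([4.1]), k=3))   # → 8.0 (mean of 8, 10, 6)
```

Because KNN can only average targets it has already seen, it cannot produce force-curve shapes absent from the training set, which is the limitation the abstract notes.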

    Efficiency of Deep Learning for modeling the ground reaction force during gait

    The measurement of a patient's ground reaction forces during human gait is an essential tool for injury reduction, prevention, and rehabilitation. The aim of this project was to evaluate the efficiency of Deep Learning, implemented in Matlab 2021a, for modeling ground reaction forces from measurements of knee, ankle, and hip flexion angles. The measurements were provided by the biomechanics laboratory of the Universidad de Sevilla, which ran an experiment based on the Plug-in-Gait (PiG) protocol, which is limited to measurement in an indoor space on force plates. The study showed superior results for the training of multi-layer Long Short-Term Memory (LSTM) networks. Thus, Deep Learning simulation based on LSTM structures is a promising tool for modeling gait dynamics, eliminating laboratory limitations. Universidad de Sevilla. Grado en Ingeniería de Tecnologías Industriales.
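Feeding joint-angle time series into an LSTM-style model requires slicing the trials into fixed-length windows paired with force targets; a shape-only sketch of that preprocessing step is shown below. The window length, channel count, and random series are assumptions, and no network is trained.

```python
# Sketch of sequence windowing for an LSTM-style regressor: fixed-length
# windows of joint angles (knee, ankle, hip) paired with the GRF sample at
# each window's end. Shapes only; all sizes are assumed.
import numpy as np

def make_windows(angles, grf, window=40):
    """Return (n, window, channels) inputs and (n,) targets."""
    n = len(angles) - window + 1
    X = np.stack([angles[i:i + window] for i in range(n)])
    y = grf[window - 1:]
    return X, y

T = 500                                                  # samples in one trial
angles = np.random.default_rng(2).normal(size=(T, 3))    # knee, ankle, hip
grf = np.random.default_rng(3).normal(size=T)
X, y = make_windows(angles, grf)
print(X.shape, y.shape)                                  # (461, 40, 3) (461,)
```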

    Estimation of Lower Limb Joint Kinetics during Walking by Machine Learning

    In gait analysis, lower limb joint torques and joint torque powers (JT, JTP) are widely used for kinetics analysis (Neckel, 2008; Rozumalski, 2011). Calculating these kinetic variables requires ground reaction forces, which are generally measured with force platforms (FP). However, FP-based measurement is constrained by the measurement equipment, so enabling kinetics analysis of walking without an FP is important. Oh et al. (2013) and Lim et al. (2019) investigated methods for estimating JT during walking without an FP; both studies reported that sagittal-plane JT was estimated with a %RMSE of around 10% at all joints. However, these studies used very small numbers of subjects, their validation was insufficient, and the design principles of their models were unclear. This study therefore aimed to develop a method for estimating lower limb JT and JTP during walking that is applicable to a wide range of subjects, and to verify its estimation accuracy. A dataset of 2,909 trials of normal walking from 300 subjects was used to train the models. For accuracy verification, two external datasets recorded in environments different from the model data were used: external dataset 1 (74 subjects, 148 trials) and external dataset 2 (12 subjects, 95 trials). Two learning models were designed for JT estimation: an Inverse Dynamics (ID) model using segment translational and angular accelerations as inputs, and a Joint Angle (JA) model using joint angles as inputs. The JT estimated by the designed models showed correlation coefficients with the true values of 0.90 or higher for all JT except the transverse-plane ankle JT (ID: 0.94-0.98, JA: 0.93-0.99), and sagittal-plane JT had %RMSE of around 10% (ID: 7.2-11.7%, JA: 6.6-11.1%). All JTP calculated from the estimates showed correlation coefficients with the true values of 0.90 or higher (ID: 0.93-0.98, JA: 0.92-0.99) and %RMSE of around 10% (ID: 5.7-10.1%, JA: 5.5-9.9%). On the external data, sagittal-plane JT in particular could be estimated with acceptable accuracy, and no difference in accuracy was observed for age groups not represented in the model data. However, when the models were applied to trials at walking speeds other than normal walking, the estimation accuracy of hip and knee joint torques decreased. These results suggest that although a model applicable to a wide range of subjects was designed, caution is required for walking speeds that differ from normal walking. The University of Electro-Communications.
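The quantity the models above learn to estimate can be illustrated for a single joint: in a simplified 2-D inverse-dynamics view, the moment of the ground reaction force about a joint is the cross product of the lever arm and the force. The sketch below ignores segment inertia, and all positions and force values are hypothetical.

```python
# Simplified 2-D illustration (not the study's model) of a joint moment
# from the ground reaction force: tau = r x F, where r runs from the joint
# centre to the centre of pressure. Segment inertia is ignored.
import numpy as np

def grf_moment_2d(joint_pos, cop_pos, grf):
    """Out-of-plane moment (N*m) about `joint_pos` of force `grf` (N)
    applied at the centre of pressure `cop_pos`; positions in metres."""
    r = np.asarray(cop_pos) - np.asarray(joint_pos)
    fx, fy = grf
    return r[0] * fy - r[1] * fx        # 2-D cross product

ankle = (0.00, 0.10)                    # ankle joint centre (horizontal, vertical)
cop = (0.12, 0.00)                      # centre of pressure under the forefoot
grf = (0.0, 700.0)                      # mostly vertical GRF during stance
print(round(grf_moment_2d(ankle, cop, grf), 2))   # ≈ 84 N*m about the ankle
```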

    Patient Movement Monitoring Based on IMU and Deep Learning

    Osteoarthritis (OA) is the leading cause of disability among the aging population in the United States and is frequently treated by replacing deteriorated joints with metal and plastic components. Developing better quantitative measures of movement quality to track patients longitudinally in their own homes would enable personalized treatment plans and hasten the advancement of promising new interventions. Wearable sensors and machine learning used to quantify patient movement could revolutionize the diagnosis and treatment of movement disorders. The purpose of this dissertation was to overcome technical challenges associated with the use of wearable sensors, specifically Inertial Measurement Units (IMUs), as a diagnostic tool for osteoarthritic (OA) and total knee replacement (TKR) patients through a detailed biomechanical assessment and the development of machine learning algorithms. Specifically, the first study developed a relevant dataset consisting of IMU data and associated biomechanical parameters of OA and TKR patients performing various activities, created a machine learning-based framework to accurately estimate spatiotemporal movement characteristics from IMUs during level-ground walking, and defined the optimum sensor configuration for the patient population and activity. The second study designed a framework to generate synthetic kinematic and associated IMU data, and investigated the influence of adding synthetic data to measured training data on deep learning model performance. The third study investigated the kinematic variation between the two patient populations across various activities (stair ascent, stair descent, and gait) using principal component analysis (PCA). Additionally, PCA-based autoencoders were developed to generate synthetic kinematics data for each patient population and activity.
The fourth study investigated the potential use of a universal deep learning model for the estimation of lower-extremity kinematics across various activities; this model can therefore serve as a global model for transfer learning methods in future research. This line of study resulted in a machine learning framework that can be used to estimate biomechanical movements based on a stream of signals emitted from low-cost and portable IMUs. Eventually, this could lead to a simple clinical tool for tracking patients' movements in their own homes and translating those movements into diagnostic metrics that clinicians will be able to use to tailor treatment to each patient's needs in the future.
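The PCA step described above, including the "PCA-based autoencoder" framing, can be sketched with an SVD: projecting onto the top components encodes the kinematic waveforms, and projecting back reconstructs them, which is exactly a linear autoencoder. The random matrix below stands in for time-normalised joint-angle curves, and the component count is arbitrary.

```python
# Sketch of PCA on kinematic waveforms via SVD, with reconstruction from
# the top k components acting as a linear autoencoder. The data are random
# stand-ins for time-normalised joint-angle curves.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 101))             # 50 trials x 101 time points
Xc = X - X.mean(axis=0)                    # centre each time point

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10                                     # retained principal components
scores = Xc @ Vt[:k].T                     # "encode": project onto components
X_hat = scores @ Vt[:k] + X.mean(axis=0)   # "decode": reconstruct waveforms

explained = float((S[:k] ** 2).sum() / (S ** 2).sum())
print(round(explained, 3))                 # fraction of variance retained
```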