
    Co-simulation of human digital twins and wearable inertial sensors to analyse gait event estimation

    We propose a co-simulation framework comprising biomechanical human body models and wearable inertial sensor models to analyse gait events dynamically, depending on inertial sensor type, sensor positioning, and processing algorithms. A total of 960 inertial sensors were virtually attached to the lower extremities of a validated biomechanical model and shoe model. Walking of hemiparetic patients was simulated using motion capture data (kinematic simulation), and accelerations and angular velocities were synthesised according to the inertial sensor models. A comprehensive error analysis of detected gait events versus reference gait events was performed for each simulated sensor position across all segments. For gait event detection, we considered 1-, 2-, and 4-phase gait models. Results for hemiparetic patients showed superior gait event estimation for a sensor fusion of angular velocity and acceleration data, with lower nMAEs (9%) across all sensor positions compared to estimation using acceleration data only. Depending on algorithm choice and parameterisation, gait event detection performance increased by up to 65%. Our results suggest that user personalisation of IMU placement should be pursued as a first priority for gait phase detection, while sensor position variation may be a secondary adaptation target. When comparing rotatory and translatory error components per body segment, larger interquartile ranges of rotatory errors were observed for all phase models, i.e., repositioning the sensor around the body segment axis was more harmful for gait phase detection than repositioning it along the limb axis. The proposed co-simulation framework is suitable for evaluating different sensor modalities as well as gait event detection algorithms for different gait phase models. The results of our analysis open a new path for utilising biomechanical human digital twins in wearable system design and performance estimation before physical device prototypes are deployed.
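    The fusion of angular velocity and acceleration for gait event detection described above can be illustrated with a minimal sketch. The peak-detection rule, the thresholds, and the synthetic signal below are illustrative assumptions for demonstration only, not the paper's actual algorithm or parameterisation:

```python
import numpy as np

def detect_gait_events(gyro, accel, fs, threshold=0.5):
    """Sketch: locate local maxima of segment angular velocity and
    confirm them with the acceleration magnitude, a simple stand-in
    for gyro/accelerometer sensor fusion. Thresholds are assumptions."""
    events = []
    for i in range(1, len(gyro) - 1):
        # candidate event: local maximum of angular velocity above threshold
        if gyro[i] > threshold and gyro[i] > gyro[i - 1] and gyro[i] >= gyro[i + 1]:
            # accept only if the acceleration channel confirms movement
            if accel[i] > 1.0:
                events.append(i / fs)  # convert sample index to seconds
    return events

# synthetic one-second gait-like signal sampled at 100 Hz:
# one swing-phase peak at t = 0.25 s, constant "moving" acceleration
fs = 100
t = np.linspace(0, 1, fs, endpoint=False)
gyro = np.sin(2 * np.pi * t)
accel = 1.5 * np.ones_like(t)
print(detect_gait_events(gyro, accel, fs))  # → [0.25]
```

    In a simulation framework like the one proposed, such a detector would be run over the synthesised signals of every virtual sensor position and its events compared against the reference gait events.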

    Cloud point labelling in optical motion capture systems

    This thesis deals with the task of point labeling involved in the overall workflow of Optical Motion Capture Systems. Human motion capture by optical sensors produces at each frame a snapshot of the motion as a cloud of points that need to be labeled in order to carry out ensuing motion analysis. The problem of labeling is tackled as a classification problem, using machine learning techniques such as AdaBoost and genetic search to train a set of weak classifiers, which are in turn gathered into an ensemble of partial solvers. The result is used to feed an online algorithm able to provide marker labeling at a target detection accuracy with reduced computational cost. In contrast to other approaches, the use of misleading temporal correlations has been discarded, strengthening the process against failure due to occasional labeling errors. The effectiveness of the approach is demonstrated on a real dataset obtained from measurements of the gait motion of persons, for which the ground-truth labeling has been verified manually. In addition, a broad overview of the field of motion capture and its optical branch is provided to the reader: description, composition, state of the art, and related work. It shall serve as a suitable framework to highlight the importance and ease the understanding of point labeling.
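    The framing of marker labeling as classification with boosted weak learners can be sketched with a minimal from-scratch AdaBoost over decision stumps. The 1-D position feature, the toy marker data, and the stump form are illustrative assumptions, not the thesis's actual features or classifier ensemble:

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost with 1-D decision stumps (labels y in {-1, +1}),
    sketching how a weak-classifier ensemble for labeling is trained."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # uniform sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        # exhaustively pick the weighted-error-minimising stump
        for thr in np.unique(X):
            for sign in (+1, -1):
                pred = np.where(X >= thr, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = max(err, 1e-10)  # avoid division by zero for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X >= thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)  # upweight misclassified samples
        w /= w.sum()
        stumps.append((alpha, thr, sign))
    return stumps

def predict(stumps, X):
    # weighted vote of all stumps in the ensemble
    score = sum(a * np.where(X >= thr, s, -s) for a, thr, s in stumps)
    return np.sign(score)

# toy example: tell a "left" marker (x < 0) from a "right" marker (x >= 0)
X = np.array([-0.3, -0.2, -0.1, 0.1, 0.2, 0.3])
y = np.array([-1, -1, -1, 1, 1, 1])
model = train_adaboost_stumps(X, y)
print(predict(model, X))  # → [-1. -1. -1.  1.  1.  1.]
```

    A real system would use richer per-marker features than a single coordinate, and, as the abstract notes, the online labeling stage would deliberately avoid relying on temporal correlations between frames.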