72 research outputs found
FROM TRADITIONAL TO INTERDISCIPLINARY APPROACHES FOR INERTIAL BODY MOTION CAPTURE
Inertial motion capture (mocap) is a widespread technology for capturing human motion outside the lab, e.g. for applications in sports, ergonomics, rehabilitation and personal fitness. Even though mature systems are commercially available, inertial mocap is still a subject of research due to a number of limitations: in addition to measurement errors and sensor sparsity, simplified body models and calibration routines, soft tissue artefacts and varying body shapes limit precision and robustness compared to optical gold-standard systems. The goal of the research group wearHEALTH at the TU Kaiserslautern is to tackle these challenges by bringing together ideas and approaches from different disciplines, including biomechanics, sensor fusion, computer vision and (optimal control) simulation. In this talk, we will present an overview of our approaches and applications, starting from the more traditional ones
JointTracker: Real-Time Inertial Kinematic Chain Tracking With Joint Position Estimation
In-field human motion capture (HMC) is drawing increasing attention due to the multitude of application areas. Considerable research is currently invested in camera-based (markerless) HMC, with the advantages that no infrastructure is required on the body and that additional context information is available from the surroundings. However, the inherent drawbacks of camera-based approaches are the limited field of view and occlusions. In contrast, inertial HMC (IHMC) does not suffer from occlusions and is thus a promising approach for capturing human motion outside the laboratory. However, one major challenge of such methods is the necessity of spatial registration. Typically, during a predefined calibration sequence, the orientation and location of each inertial sensor are registered with respect to the underlying skeleton model. This work contributes to calibration-free IHMC, as it proposes a recursive estimator for the simultaneous online estimation of all sensor poses and joint positions of a kinematic chain model such as the human skeleton. The full derivation from an optimization objective is provided. The approach can be applied directly to a synchronized data stream from a body-mounted inertial sensor network. Successful evaluations are demonstrated on noisy simulated data from a three-link chain, real lower-body walking data from 25 young, healthy persons, and walking data captured from a humanoid robot. The estimated and derived quantities, global and relative sensor orientations, joint positions, and segment lengths can be exploited for human motion analysis and anthropometric measurements, as well as in the context of hybrid markerless visual-inertial HMC
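The kinematic chain model such estimators operate on can be illustrated with a minimal forward-kinematics sketch (all segment lengths and orientations below are invented for illustration; this is not the recursive estimator itself): given each segment's global orientation and length, the joint positions follow by accumulating segment vectors from the base.

```python
import numpy as np

def rot_z(angle_rad: float) -> np.ndarray:
    """Rotation about the global z-axis (planar chain for simplicity)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def chain_joint_positions(orientations, lengths, base=np.zeros(3)):
    """Joint positions of a kinematic chain.

    orientations: list of 3x3 global segment orientation matrices
    lengths: segment lengths in metres
    Returns the base position followed by each distal joint position.
    """
    positions = [np.asarray(base, dtype=float)]
    for R, length in zip(orientations, lengths):
        # Each segment's local x-axis points from its proximal to its distal joint.
        positions.append(positions[-1] + R @ np.array([length, 0.0, 0.0]))
    return positions

# Hypothetical three-link chain (e.g. thigh, shank, foot) with made-up angles.
Rs = [rot_z(0.0), rot_z(np.pi / 2), rot_z(np.pi)]
Ls = [0.4, 0.4, 0.2]
joints = chain_joint_positions(Rs, Ls)
```

The estimator in the paper inverts this relationship: it infers the sensor poses and joint positions from IMU data instead of assuming them.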
Survey of Motion Tracking Methods Based on Inertial Sensors: A Focus on Upper Limb Human Motion
Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years, as it is a cost-effective enabling technology for applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact on human performance assessment and human-robot interaction. IMU motion tracking systems are indeed self-contained and wearable, allowing for long-lasting tracking of the user's motion in situated environments. After a survey of IMU-based human tracking, five techniques for motion reconstruction were selected and compared for reconstructing a human arm motion. IMU-based estimation was matched against motion tracking based on the Vicon marker-based motion tracking system, considered as ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error)
Validity of inertial sensor based 3D joint kinematics of static and dynamic sport and physiotherapy specific movements
3D joint kinematics can provide important information about the quality of movements. Optical motion capture systems (OMC) are considered the gold standard in motion analysis. However, in recent years, inertial measurement units (IMUs) have become a promising alternative. The aim of this study was to validate IMU-based 3D joint kinematics of the lower extremities during different movements. Twenty-eight healthy subjects participated in this study. They performed bilateral squats (SQ), single-leg squats (SLS) and countermovement jumps (CMJ). The IMU kinematics were calculated using a recently described sensor-fusion algorithm. A marker-based OMC system served as a reference. Only the technical error based on algorithm performance was considered, incorporating OMC data for the calibration, initialization and a biomechanical model. To evaluate the validity of IMU-based 3D joint kinematics, root mean squared error (RMSE), range of motion error (ROME), Bland-Altman (BA) analysis as well as the coefficient of multiple correlation (CMC) were calculated. The evaluation was twofold: first, the IMU data were compared to OMC data based on marker clusters; second, based on skin markers attached to anatomical landmarks. The first evaluation revealed means for RMSE and ROME for all joints and tasks below 3°. The most dynamic task, CMJ, revealed error measures approximately 1° higher than the remaining tasks. Mean CMC values ranged from 0.77 to 1 over all joint angles and all tasks. The second evaluation showed an increase in the RMSE of 2.28°–2.58° on average for all joints and tasks. Hip flexion revealed the highest average RMSE in all tasks (4.87°–8.27°). The present study revealed a valid IMU-based approach for the measurement of 3D joint kinematics in functional movements of varying demands. The high validity of the results encourages further development and the extension of the present approach into clinical settings
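The two main error measures, RMSE and ROME, can be computed directly from a pair of joint-angle curves; a minimal sketch (the knee flexion values below are invented for illustration):

```python
import numpy as np

def rmse(estimate, reference) -> float:
    """Root mean squared error between two joint-angle curves (degrees)."""
    diff = np.asarray(estimate) - np.asarray(reference)
    return float(np.sqrt(np.mean(diff ** 2)))

def rome(estimate, reference) -> float:
    """Range of motion error: difference between the two curves' ROMs."""
    rom_est = np.max(estimate) - np.min(estimate)
    rom_ref = np.max(reference) - np.min(reference)
    return float(abs(rom_est - rom_ref))

# Hypothetical knee flexion curves over one squat (degrees).
ref = np.array([0.0, 30.0, 60.0, 90.0, 60.0, 30.0, 0.0])   # OMC reference
imu = np.array([1.0, 32.0, 58.0, 91.0, 61.0, 29.0, 1.0])   # IMU estimate
print(rmse(imu, ref), rome(imu, ref))
```

RMSE captures the sample-wise deviation over the whole curve, while ROME only compares the overall excursion, which is why both are reported.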
Visualization of interindividual differences in spinal dynamics in the presence of intraindividual variabilities
Surface topography systems enable the capture of spinal dynamic movement. A visualization of possible unique movement patterns appears to be difficult due to large intra-class and small inter-class variabilities. Therefore, we investigated a visualization approach using Siamese neural networks (SNN) and examined whether the identification of individuals is possible based on dynamic spinal data. The presented visualization approach seems promising for visualizing subjects in the presence of intraindividual variability between different gait cycles as well as day-to-day variability. Overall, the results indicate the possible existence of a personal spinal ‘fingerprint’. The work forms the basis for an objective comparison of subjects and the transfer of the method to clinical use cases
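The comparison step behind such an SNN can be sketched without a deep-learning framework: both inputs pass through the same embedding function, and similarity is the distance between the embeddings. Everything below, including the untrained linear map standing in for the trained SNN branches, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained SNN branch: a shared linear embedding.
W = rng.normal(size=(8, 50))

def embed(spinal_curve: np.ndarray) -> np.ndarray:
    """Map a dynamic spinal curve (50 samples) to an 8-D embedding."""
    return W @ spinal_curve

def embedding_distance(a, b) -> float:
    """Distance between two gait cycles; both pass through the SAME branch."""
    return float(np.linalg.norm(embed(a) - embed(b)))

# Two noisy gait cycles from one subject vs. a cycle from another subject.
subject_a = np.sin(np.linspace(0, 2 * np.pi, 50))
cycle_1 = subject_a + 0.01 * rng.normal(size=50)
cycle_2 = subject_a + 0.01 * rng.normal(size=50)
subject_b = np.cos(np.linspace(0, 2 * np.pi, 50))
print(embedding_distance(cycle_1, cycle_2),
      embedding_distance(cycle_1, subject_b))
```

In the trained setting, the shared weights are learned so that same-subject distances shrink and between-subject distances grow; the low-dimensional embeddings can then be plotted for visualization.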
Personalization Paradox in Behavior Change Apps: Lessons from a Social Comparison-Based Personalized App for Physical Activity
Social comparison-based features are widely used in social computing apps. However, most existing apps are not grounded in social comparison theories and do not consider individual differences in social comparison preferences and reactions. This paper is among the first to automatically personalize social comparison targets. In the context of an m-health app for physical activity, we use the artificial intelligence (AI) technique of multi-armed bandits. Results from our user study (n=53) indicate that there is some evidence that motivation can be increased using the AI-based personalization of social comparison. The detected effects achieved small-to-moderate effect sizes, illustrating the real-world implications of the intervention for enhancing motivation and physical activity. In addition to design implications for social comparison features in social apps, this paper identifies the personalization paradox, the conflict between user modeling and adaptation, as a key design challenge of personalized applications for behavior change. Additionally, we propose research directions to mitigate this personalization paradox
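How a multi-armed bandit can personalize comparison targets is easy to sketch with an epsilon-greedy learner; the arms, reward probabilities and parameter values below are invented for illustration and are not the study's implementation:

```python
import random

class EpsilonGreedyBandit:
    """Epsilon-greedy multi-armed bandit over social comparison targets."""

    def __init__(self, n_arms: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self) -> int:
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        # Incremental mean update.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(42)
# Hypothetical arms: compare the user upward, laterally, or downward.
reward_prob = [0.2, 0.5, 0.8]  # unknown to the learner
bandit = EpsilonGreedyBandit(n_arms=3)
for _ in range(2000):
    arm = bandit.select_arm()
    bandit.update(arm, 1.0 if random.random() < reward_prob[arm] else 0.0)
print(max(range(3), key=lambda a: bandit.values[a]))
```

In an app, each "pull" corresponds to showing one comparison target and the reward to an observed engagement or motivation signal, so the learner adapts the target per user over time.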
Real-time vision-based tracking and reconstruction
Many of the recent real-time markerless camera tracking systems assume the existence of a complete 3D model of the target scene. The system developed in the MATRIS project likewise assumes that a scene model is available. This can be a freeform surface model generated automatically from an image sequence using structure-from-motion techniques, or a textured CAD model built manually using commercial software. The offline model provides 3D anchors for the tracking. These are stable natural landmarks, which are not updated and thus prevent an accumulating error (drift) in the camera registration by providing an absolute reference. However, it is sometimes not feasible to model the entire target scene in advance, e.g. for parts which are not static, or one would like to employ existing CAD models, which are not complete. In order to allow camera movements beyond the parts of the environment modelled in advance, it is desirable to derive additional 3D information online. Therefore, a markerless camera tracking system for calibrated perspective cameras has been developed, which employs 3D information about the target scene and complements this knowledge online by reconstruction of 3D points. The proposed algorithm is robust and reduces drift, the most dominant problem of simultaneous localisation and mapping (SLAM), in real time by a combination of the following crucial points: (1) stable tracking of long-term features on the 2D level; (2) use of robust methods like the well-known Random Sample Consensus (RANSAC) for all 3D estimation processes; (3) consistent propagation of errors and uncertainties; (4) careful feature selection and map management; (5) incorporation of epipolar constraints into the pose estimation. Validation results on the operation of the system on synthetic and real data are presented
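RANSAC, used in point (2), is easy to illustrate outside the tracking context. A generic sketch for robust 2D line fitting on synthetic data (not the MATRIS implementation, which applies the same idea to 3D estimation):

```python
import random

def fit_line(p, q):
    """Line through two points as (slope, intercept); assumes p[0] != q[0]."""
    slope = (q[1] - p[1]) / (q[0] - p[0])
    return slope, p[1] - slope * p[0]

def ransac_line(points, n_iters=200, threshold=0.1):
    """Return the (slope, intercept) supported by the most inliers."""
    best_model, best_support = None, -1
    for _ in range(n_iters):
        p, q = random.sample(points, 2)       # minimal sample: two points
        if p[0] == q[0]:
            continue                          # skip vertical pairs in this sketch
        slope, intercept = fit_line(p, q)
        support = sum(1 for x, y in points
                      if abs(y - (slope * x + intercept)) < threshold)
        if support > best_support:
            best_model, best_support = (slope, intercept), support
    return best_model, best_support

random.seed(1)
# Synthetic data: points on y = 2x + 1 plus gross outliers.
points = [(x / 10, 2 * x / 10 + 1) for x in range(50)]
points += [(random.uniform(0, 5), random.uniform(-10, 10)) for _ in range(15)]
model, support = ransac_line(points)
```

The gross outliers barely affect the result because models are scored by inlier count rather than by a squared-error sum, which is exactly why RANSAC suits 3D estimation with mismatched features.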
On Inertial Body Tracking in the Presence of Model Calibration Errors
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments, the IMU-to-segment calibrations (subsequently called I2S calibrations), to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors, with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three-segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU-centered EKF-based method yielded comparable results with lower computational complexity
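The constant-angular-velocity motion model mentioned above can be sketched for a single planar angle; in this linear scalar case the EKF reduces to a plain Kalman filter. All noise levels and the measurement setup are invented for illustration:

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant angular velocity model
H = np.array([[1.0, 0.0]])              # we measure the angle directly
Q = np.diag([1e-6, 1e-4])               # process noise (made-up values)
R = np.array([[1e-2]])                  # measurement noise (made-up value)

x = np.array([0.0, 0.0])                # state: [angle, angular velocity]
P = np.eye(2)                           # state covariance

rng = np.random.default_rng(3)
true_omega = 1.5                        # rad/s, constant ground truth
for k in range(500):
    true_angle = true_omega * dt * (k + 1)
    z = true_angle + 0.1 * rng.normal()  # noisy angle measurement
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct with the measurement via the Kalman gain.
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
```

The filter recovers both the angle and the unmeasured angular velocity; the methods in the paper apply the same predict/update structure to full 6-DoF segment states with nonlinear orientation models, hence the "extended" Kalman filter.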
Joint bilateral mesh denoising using color information and local anti-shrinking
Recent 3D reconstruction algorithms are able to generate colored meshes with high-resolution details of given objects. However, due to several reasons the reconstructions still contain some noise. In this paper we propose the new Joint Bilateral Mesh Denoising (JBMD), an anisotropic filter for highly precise and smooth mesh denoising. Compared to state-of-the-art algorithms, it uses color information as an additional constraint for denoising, following the observation that geometry and color changes often coincide. We address the well-known mesh shrinking problem with a new local anti-shrinking, leading to precise edge preservation. In addition, we use an increasing smoothing sensitivity for higher numbers of iterations. We show in our evaluation with three different categories of test data that our contributions lead to high-precision results which outperform competing algorithms. Furthermore, our JBMD algorithm converges to a minimal error level for higher numbers of iterations
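The core of a joint bilateral filter is its weight: a spatial falloff multiplied by a range term that is evaluated on the guidance signal, here color, rather than on the noisy geometry itself. A 1D sketch with invented signals (the actual JBMD operates on mesh geometry, with the local anti-shrinking on top):

```python
import math

def joint_bilateral_1d(values, colors, sigma_s=2.0, sigma_c=0.2, radius=3):
    """Smooth `values` while preserving discontinuities marked by `colors`.

    The spatial weight uses index distance; the range weight uses the
    color difference of the guidance signal, not the noisy values.
    """
    out = []
    for i in range(len(values)):
        acc, norm = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(values), i + radius + 1)):
            w_spatial = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
            w_color = math.exp(-((colors[i] - colors[j]) ** 2) / (2 * sigma_c ** 2))
            w = w_spatial * w_color
            acc += w * values[j]
            norm += w
        out.append(acc / norm)
    return out

# Invented example: a step edge whose location is marked by a color change.
colors = [0.0] * 8 + [1.0] * 8
noisy = [0.0 + 0.1 * ((-1) ** i) for i in range(8)] + \
        [1.0 + 0.1 * ((-1) ** i) for i in range(8)]
smoothed = joint_bilateral_1d(noisy, colors)
```

Within each color region the alternating noise averages out, while the color term suppresses mixing across the edge, illustrating why coinciding geometry and color changes make color a useful denoising constraint.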