1,100 research outputs found
Markerless measurement techniques for motion analysis in sports science
Markerless motion capture systems and X-ray fluoroscopy were introduced as two markerless measurement techniques applied in sports biomechanics. An overview of the technological process, data accuracy, suggested movements, and recommended body parts is given. The markerless motion capture system consists of four parts: cameras, a body model, image features, and algorithms. Although markerless motion capture seems promising, it is not yet known whether these systems can achieve the required accuracy and whether they can be appropriately used in sports biomechanics and clinical research. The biplane fluoroscopy technique analyzes motion data through collection, image calibration, and processing, and is effective for determining small joint kinematic changes and calculating joint angles. Because of the experimental conditions, the method has been used primarily to measure walking and jumping movements, and mainly to capture data on lower-limb joints.
Noise-in, Bias-out: Balanced and Real-time MoCap Solving
Real-time optical Motion Capture (MoCap) systems have not benefited from the
advances in modern data-driven modeling. In this work we apply machine learning
to solve noisy unstructured marker estimates in real-time and deliver robust
marker-based MoCap even when using sparse affordable sensors. To achieve this
we focus on a number of challenges related to model training, namely the
sourcing of training data and their long-tailed distribution. Leveraging
representation learning we design a technique for imbalanced regression that
requires no additional data or labels and improves the performance of our model
in rare and challenging poses. By relying on a unified representation, we show
that training such a model is not bound to high-end MoCap training data
acquisition, and exploit the advances in marker-less MoCap to acquire the
necessary data. Finally, we take a step towards richer and affordable MoCap by
adapting a body model-based inverse kinematics solution to account for
measurement and inference uncertainty, further improving performance and
robustness. Project page: https://moverseai.github.io/noise-tail
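The imbalanced-regression idea described above (upweighting rare, long-tailed poses without extra data or labels) can be illustrated with a density-based reweighting in a learned embedding space. This is a hypothetical sketch, not the paper's exact formulation; the embedding and neighbour count `k` are assumptions.

```python
import numpy as np

def rarity_weights(embeddings, k=10):
    """Weight each sample by its mean distance to the k nearest
    neighbours in a learned embedding space, so rare (long-tail)
    poses receive larger loss weights. Illustrative sketch only;
    not the paper's exact formulation."""
    # pairwise Euclidean distances between all embeddings
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    # mean distance to the k nearest neighbours, skipping self (distance 0)
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    return knn / knn.mean()  # normalise so the average weight is 1

# A dense cluster of common poses plus one rare outlier pose:
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0.0, 0.1, size=(20, 3)), [[5.0, 5.0, 5.0]]])
weights = rarity_weights(emb, k=5)
```

Multiplying each sample's regression loss by its weight emphasises the rare poses during training, which is the balancing effect the abstract targets.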
Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression
We present techniques for improving performance driven facial animation,
emotion recognition, and facial key-point or landmark prediction using learned
identity invariant representations. Established approaches to these problems
can work well if sufficient examples and labels for a particular identity are
available and factors of variation are highly controlled. However, labeled
examples of facial expressions, emotions and key-points for new individuals are
difficult and costly to obtain. In this paper we improve the ability of
techniques to generalize to new and unseen individuals by explicitly modeling
previously seen variations related to identity and expression. We use a
weakly-supervised approach in which identity labels are used to learn the
different factors of variation linked to identity separately from factors
related to expression. We show how probabilistic modeling of these sources of
variation allows one to learn identity-invariant representations for
expressions which can then be used to identity-normalize various procedures for
facial expression analysis and animation control. We also show how to extend
the widely used techniques of active appearance models and constrained local
models through replacing the underlying point distribution models which are
typically constructed using principal component analysis with
identity-expression factorized representations. We present a wide variety of
experiments in which we consistently improve performance on emotion
recognition, markerless performance-driven facial animation and facial
key-point tracking. (To appear in the Image and Vision Computing Journal, IMAVIS.)
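The replacement of PCA-based point distribution models with identity-expression factorized representations can be sketched as two stacked PCA-style decompositions: one over per-identity mean shapes (identity subspace) and one over within-identity residuals (expression subspace). This is a hypothetical simplification of the paper's probabilistic model; the function names and toy landmark data are illustrative.

```python
import numpy as np

def fit_identity_expression(shapes, labels, n_id=1, n_expr=1):
    """Sketch of an identity/expression factorised point distribution
    model: PCA (via SVD) on per-identity mean shapes yields an identity
    subspace, and PCA on within-identity residuals yields an expression
    subspace. Hypothetical simplification of the paper's model."""
    mean = shapes.mean(axis=0)
    ids = np.unique(labels)
    id_means = np.stack([shapes[labels == i].mean(axis=0) for i in ids])
    # identity basis: variation between subjects' mean shapes
    B_id = np.linalg.svd(id_means - mean, full_matrices=False)[2][:n_id]
    # expression basis: variation within each subject
    resid = shapes - id_means[np.searchsorted(ids, labels)]
    B_expr = np.linalg.svd(resid, full_matrices=False)[2][:n_expr]
    return mean, B_id, B_expr

def identity_normalize(shape, mean, B_id):
    """Project out the identity component, keeping expression variation."""
    d = shape - mean
    return mean + d - B_id.T @ (B_id @ d)

# Toy landmarks: identity varies along axis 0, expression along axis 1.
labels = np.array([0, 0, 0, 1, 1, 1])
ident = np.where(labels[:, None] == 0, [2.0, 0, 0, 0], [-2.0, 0, 0, 0])
expr = np.outer([-1, 0, 1, -1, 0, 1], [0, 1.0, 0, 0])
shapes = ident + expr
mean, B_id, B_expr = fit_identity_expression(shapes, labels)
a = identity_normalize(shapes[2], mean, B_id)  # identity 0, one expression
b = identity_normalize(shapes[5], mean, B_id)  # identity 1, same expression
```

After normalization, the two subjects showing the same expression map to the same shape, which is the identity-invariance property the abstract exploits for expression analysis.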
EventCap: Monocular 3D Capture of High-Speed Human Motions using an Event Camera
The high frame rate is a critical requirement for capturing fast human motions. In this setting, existing markerless image-based methods are constrained by the lighting requirement, the high data bandwidth and the consequent high computation overhead. In this paper, we propose EventCap --- the first approach for 3D capturing of high-speed human motions using a single event camera. Our method combines model-based optimization and CNN-based human pose detection to capture high-frequency motion details and to reduce the drifting in the tracking. As a result, we can capture fast motions at millisecond resolution with significantly higher data efficiency than using high frame rate videos. Experiments on our new event-based fast human motion dataset demonstrate the effectiveness and accuracy of our method, as well as its robustness to challenging lighting conditions.
Markerless Human Motion Analysis
Measuring and understanding human motion is crucial in several domains,
ranging from neuroscience, to rehabilitation and sports biomechanics. Quantitative
information about human motion is fundamental to study how our
Central Nervous System controls and organizes movements to functionally
evaluate motor performance and deficits. In the last decades, the research in
this field has made considerable progress. State-of-the-art technologies that
provide useful and accurate quantitative measures rely on marker-based systems.
Unfortunately, markers are intrusive and their number and location must
be determined a priori. Also, marker-based systems require expensive laboratory
settings with several infrared cameras. This could modify the naturalness
of a subject's movements and induce discomfort. Last but not least, they are
computationally expensive in time and space. Recent advances in
markerless pose estimation based on computer vision and deep neural networks
are opening the possibility of adopting efficient video-based methods
for extracting movement information from RGB video data. In this context,
this thesis presents original contributions to the following objectives: (i) the
implementation of a video-based markerless pipeline to quantitatively characterize
human motion; (ii) the assessment of its accuracy if compared with
a gold standard marker-based system; (iii) the application of the pipeline to
different domains in order to verify its versatility, with a special focus on the
characterization of the motion of preterm infants and on gait analysis. With
the proposed approach we highlight that, starting only from RGB videos and
leveraging computer vision and machine learning techniques, it is possible to
extract reliable information characterizing human motion comparable to that
obtained with gold standard marker-based systems.
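A core step in such a markerless gait-analysis pipeline is turning pose-estimator keypoints into joint kinematics. A minimal sketch, assuming 2D keypoints for hip, knee and ankle (the coordinates below are illustrative, not from the thesis):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by segments b->a and b->c,
    e.g. knee angle from hip, knee and ankle keypoints returned by a
    markerless pose estimator. Coordinates here are illustrative."""
    u = np.asarray(a, float) - b
    v = np.asarray(c, float) - b
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against rounding slightly outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Straight leg: hip, knee, ankle collinear -> 180 degrees
straight = joint_angle((0, 1), (0, 0), (0, -1))
# Right-angle flexion -> 90 degrees
flexed = joint_angle((0, 1), (0, 0), (1, 0))
```

Computing this angle per video frame yields a joint-angle time series that can be compared against the marker-based gold standard, as done in the validation described above.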
- …