A survey on human performance capture and animation
With the rapid development of computing technology, three-dimensional (3D) human body
models and their dynamic motions are widely used in the digital entertainment industry. Human perfor-
mance mainly involves human body shapes and motions. Key research problems include how to capture
and analyze static geometric appearance and dynamic movement of human bodies, and how to simulate
human body motions with physical effects. In this survey, organized around the main research directions of human performance capture and animation, we summarize recent advances in its key research topics, namely
human body surface reconstruction, motion capture and synthesis, and physics-based motion simulation, and we further discuss open problems and future research directions. We hope this survey helps readers gain a comprehensive understanding of human performance capture and animation.
Collaborative Surgical Robots: Optical Tracking During Endovascular Operations
Endovascular interventions usually require meticulous handling of surgical instruments and constant monitoring of the operating-room workspace. To address these challenges, robot-assisted technologies and tracking techniques are increasingly being developed. In particular, the limited workspace and the potential for collisions between the robot and surrounding dynamic obstacles must be considered. This article presents a navigation system developed to assist clinicians with the magnetic actuation of endovascular catheters using multiple surgical robots. We demonstrate the actuation of a magnetic catheter in an experimental arterial testbed with dynamic obstacles. The motions and trajectory planning of two six-degrees-of-freedom (6-DoF) robotic arms are established through passive marker-guided motion planning. We achieve an overall 3D tracking accuracy of 2.3 ± 0.6 mm in experiments involving dynamic obstacles. We conclude that integrating multiple optical trackers with online planning of two serial-link manipulators is useful for supporting the treatment of endovascular diseases and aiding clinicians during interventions.
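The reported 2.3 ± 0.6 mm figure is the usual mean ± standard deviation of per-sample 3D Euclidean error. A minimal sketch of that computation (the numeric arrays here are hypothetical, not from the paper):

```python
import numpy as np

def tracking_accuracy(estimated, ground_truth):
    """Mean and standard deviation of 3D Euclidean tracking error (same units as input, e.g. mm)."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)  # per-sample distance
    return errors.mean(), errors.std()

# Hypothetical tracked positions vs. ground truth, in mm.
est = np.array([[10.0, 0.0, 0.0], [0.0, 12.0, 0.0], [0.0, 0.0, 9.0]])
gt  = np.array([[ 8.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 11.0]])
mean_err, std_err = tracking_accuracy(est, gt)
print(f"{mean_err:.1f} ± {std_err:.1f} mm")  # 2.0 ± 0.0 mm
```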
GRAB: A Dataset of Whole-Body Human Grasping of Objects
Training computers to understand, model, and synthesize human grasping
requires a rich dataset containing complex 3D object shapes, detailed contact
information, hand pose and shape, and the 3D body motion over time. While
"grasping" is commonly thought of as a single hand stably lifting an object, we
capture the motion of the entire body and adopt the generalized notion of
"whole-body grasps". Thus, we collect a new dataset, called GRAB (GRasping
Actions with Bodies), of whole-body grasps, containing full 3D shape and pose
sequences of 10 subjects interacting with 51 everyday objects of varying shape
and size. Given MoCap markers, we fit the full 3D body shape and pose,
including the articulated face and hands, as well as the 3D object pose. This
gives detailed 3D meshes over time, from which we compute contact between the
body and object. This is a unique dataset that goes well beyond existing ones
for modeling and understanding how humans grasp and manipulate objects, how
their full body is involved, and how interaction varies with the task. We
illustrate the practical value of GRAB with an example application; we train
GrabNet, a conditional generative network, to predict 3D hand grasps for unseen
3D object shapes. The dataset and code are available for research purposes at
https://grab.is.tue.mpg.de. (ECCV 2020)
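Contact between body and object meshes, as described above, is often derived by thresholding vertex-to-surface distances. A rough sketch of that idea (brute-force vertex-to-vertex distances with an illustrative threshold; the actual GRAB pipeline may differ):

```python
import numpy as np

def contact_vertices(body_verts, obj_verts, threshold=0.005):
    """Label body vertices in contact: within `threshold` (meters) of any object vertex.
    Brute-force pairwise distances; a real pipeline would use a KD-tree or signed distances."""
    d = np.linalg.norm(body_verts[:, None, :] - obj_verts[None, :, :], axis=-1)
    return d.min(axis=1) < threshold

# Toy vertex sets (meters).
body = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [1.0, 0.0, 0.0]])
obj  = np.array([[0.002, 0.0, 0.0], [0.5, 0.5, 0.5]])
print(contact_vertices(body, obj))  # [ True False False]
```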
Neural Body Fitting: Unifying Deep Learning and Model-Based Human Pose and Shape Estimation
Direct prediction of 3D body pose and shape remains a challenge even for
highly parameterized deep learning models. Mapping from the 2D image space to
the prediction space is difficult: perspective ambiguities make the loss
function noisy and training data is scarce. In this paper, we propose a novel
approach, Neural Body Fitting (NBF), which integrates a statistical body model
within a CNN, leveraging reliable bottom-up semantic body part segmentation and
robust top-down body model constraints. NBF is fully differentiable and can be
trained using 2D and 3D annotations. In detailed experiments, we analyze how
the components of our model affect performance, especially the use of part
segmentations as an explicit intermediate representation, and present a robust,
efficiently trainable framework for 3D human pose estimation from 2D images
with competitive results on standard benchmarks. Code will be made available at
http://github.com/mohomran/neural_body_fitting. (3DV 2018)
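The "statistical body model within a CNN" is differentiable because such models (e.g. SMPL) are largely linear in their shape parameters: the network regresses coefficients, and vertices follow by a linear map whose gradient flows back to the network. A minimal sketch of that linear shape component, with randomly generated stand-ins for the learned template and blend shapes:

```python
import numpy as np

# Illustrative stand-ins for a learned statistical body model (not NBF's actual data).
rng = np.random.default_rng(0)
n_verts, n_betas = 100, 10
template = rng.normal(size=(n_verts, 3))               # mean body shape
blend_shapes = rng.normal(size=(n_betas, n_verts, 3))  # shape basis

def body_vertices(betas):
    """Linear (hence differentiable) map from shape coefficients to 3D vertices."""
    return template + np.tensordot(betas, blend_shapes, axes=1)

verts = body_vertices(np.zeros(n_betas))  # zero coefficients recover the template
print(verts.shape)  # (100, 3)
```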
Application and Evaluation of Lighthouse Technology for Precision Motion Capture
This thesis presents the development of a system that captures and quantifies motion for biomechanical and medical applications demanding precision motion tracking, using the lighthouse technology. Commercially known as SteamVR tracking, the lighthouse technology is a motion tracking system developed for virtual reality applications that uses patterned infrared light sources to illuminate trackers (objects embedded with photodiodes) and obtain their pose, i.e., spatial position and orientation. Current motion capture systems, such as camera-based systems, are expensive and not readily available outside research labs. This thesis makes a case for low-cost motion capture systems. The technology is applied to quantify motion and draw inferences about biomechanics capture and analysis, quantification of gait, and prosthetic alignment. Possible shortcomings of data acquisition with this system for the stated applications are addressed. The repeatability of the system is established by determining the standard-deviation error over multiple trials of a motion trajectory executed by a seven-degree-of-freedom robot arm. Accuracy testing is based on cross-validation between the lighthouse-technology data and transformations derived from joint angles via a forward kinematics model of the robot's end-effector pose. The underlying principle of motion capture with this system is that multiple trackers placed on limb segments allow recording of the position and orientation of the segments relative to a fixed global frame. Joint angles between the segments can then be calculated from the recorded positions and orientations of each tracker using inverse kinematics.
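The forward kinematics model used for cross-validation maps joint angles to an end-effector pose. As a simplified illustration (a planar 2-link arm with made-up link lengths, not the thesis's 7-DoF robot):

```python
import numpy as np

def fk_planar_2link(theta1, theta2, l1=0.4, l2=0.3):
    """Forward kinematics of a planar 2-link arm: end-effector (x, y) from joint angles.
    Link lengths l1, l2 are illustrative values in meters."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

x, y = fk_planar_2link(0.0, np.pi / 2)  # first link along x, second bent 90 degrees
print(round(x, 3), round(y, 3))  # 0.4 0.3
```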
In this work, inverse kinematics for rigid bodies was based on computing homogeneous transforms to the individual trackers in the model's reference frame to find the respective Euler angles, as well as on an analytical approach that solves for joint variables in terms of known geometric parameters. This work was carried out on a phantom prosthetic limb. A custom, application-specific motion tracker was also developed using a hardware development kit, to be further optimized in subsequent studies involving biomechanics motion capture.
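Extracting Euler angles from a homogeneous transform is a standard step in this kind of pipeline. A minimal sketch for the Z-Y-X convention (one common choice; the thesis's exact convention is not stated here):

```python
import numpy as np

def euler_zyx_from_matrix(T):
    """Extract Z-Y-X (yaw, pitch, roll) Euler angles from a 4x4 homogeneous
    transform, assuming |pitch| < 90 degrees (no gimbal lock)."""
    R = T[:3, :3]
    yaw   = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-R[2, 0])
    roll  = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

# 90-degree rotation about Z, with an arbitrary translation.
T = np.eye(4)
T[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
T[:3, 3] = [0.1, 0.2, 0.3]
yaw, pitch, roll = euler_zyx_from_matrix(T)
print(np.degrees([yaw, pitch, roll]))  # ≈ [90., 0., 0.]
```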
Articulation estimation and real-time tracking of human hand motions
Schröder M. Articulation estimation and real-time tracking of human hand motions. Bielefeld: Universität Bielefeld; 2015.
This thesis deals with the problem of estimating and tracking the full articulation of
human hands. Algorithmically recovering hand articulations is a challenging problem
due to the hand’s high number of degrees of freedom and the complexity of its
motions. Besides the accuracy and efficiency of the hand posture estimation, hand
tracking methods are faced with issues such as invasiveness, ease of deployment
and sensor artifacts. In this thesis, several different hand tracking approaches are examined,
including marker-based optical motion capture, data-driven discriminative
visual tracking and generative tracking based on articulated registration, and various
contributions to these areas are presented. The problem of optimally placing reduced
marker sets on a performer’s hand for optical hand motion capture is explored. A
method is proposed that automatically generates functional reduced marker layouts
by optimizing for their numerical stability and geometric feasibility. A data-driven
discriminative tracking approach based on matching the hand’s appearance in the
sensor data with an image database is investigated. In addition to an efficient nearest
neighbor search for images, a combination of discriminative initialization and
generative refinement is employed. The method’s applicability is demonstrated in
interactive robot teleoperation. Various real human hand motions are captured and
statistically analyzed to derive low-dimensional representations of hand articulations.
An adaptive hand posture subspace concept is developed and integrated into a generative
real-time hand tracking approach that aligns a virtual hand model with sensor
point clouds based on constrained inverse kinematics. Generative hand tracking is
formulated as a regularized articulated registration process, in which geometrical
model fitting is combined with statistical, kinematic and temporal regularization
priors. A registration concept that combines 2D and 3D alignment and explicitly accounts
for occlusions and visibility constraints is devised. High-quality, non-invasive,
real-time hand tracking is achieved based on this regularized articulated registration
formulation.
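The regularized articulated registration described above combines a geometric data term with statistical priors in one energy. A toy sketch of the idea, reduced to a single joint angle and a quadratic prior toward a rest pose (all names and values are illustrative; the thesis uses a full kinematic hand model and a gradient-based solver):

```python
import numpy as np

target = np.array([0.0, 1.0])   # observed 2D point the link endpoint should reach
rest, lam, L = 0.0, 0.1, 1.0    # prior mean, prior weight, link length (illustrative)

def endpoint(theta):
    """Endpoint of a unit-length link rotated by theta about the origin."""
    return L * np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    data = np.sum((endpoint(theta) - target) ** 2)   # geometric model fitting
    prior = lam * (theta - rest) ** 2                # statistical regularization
    return data + prior

# Coarse grid search stands in for the iterative solver used in practice.
thetas = np.linspace(-np.pi, np.pi, 10001)
best = thetas[np.argmin([energy(t) for t in thetas])]
# The prior pulls the optimum slightly below the unregularized solution pi/2.
```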