Video normals from colored lights
We present an algorithm and the associated single-view capture methodology to acquire the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain by methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data, which in turn allows us to demonstrate the strengths and limitations of our simple frame-to-frame registration over time. Experiments were performed on monocular video sequences of untextured cloth and faces, with and without white makeup, filmed under spatially separated red, green, and blue lights. Our first finding is that the color photometric stereo setup produces smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D tracking results, one can both register the surfaces and relax the homogeneous-color restriction on the single-hue subject. Quantitative and qualitative experiments explore both the practicality and the limitations of this simple multispectral capture system.
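The core of multispectral photometric stereo can be sketched in a few lines: under a Lambertian model, the red, green, and blue channels of each pixel act as three simultaneous light measurements, so the surface normal is recovered by inverting a 3x3 light matrix. The light directions below are hypothetical stand-ins for a calibrated rig, not the paper's actual setup.

```python
import numpy as np

# Hypothetical unit directions toward the red, green, and blue lights;
# in practice these come from calibration of the capture rig.
L = np.array([
    [ 0.0,  0.5, 0.866],   # red light
    [ 0.5, -0.3, 0.812],   # green light
    [-0.5, -0.3, 0.812],   # blue light
])

def normals_from_rgb(image):
    """Recover per-pixel unit normals from an (H, W, 3) RGB image of a
    Lambertian, single-albedo surface lit by three colored lights.

    Per pixel, intensity I = L @ n, so n is found by solving the 3x3
    system and normalizing.
    """
    H, W, _ = image.shape
    n = np.linalg.solve(L, image.reshape(-1, 3).T).T   # (H*W, 3)
    norms = np.linalg.norm(n, axis=1, keepdims=True)
    n = np.where(norms > 1e-8, n / norms, 0.0)
    return n.reshape(H, W, 3)
```

Because all pixels share the same light matrix, one vectorized solve produces a dense normal field per video frame, which is what makes the per-frame reconstructions in the abstract cheap to compute.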
Multigranularity Representations for Human Inter-Actions: Pose, Motion and Intention
Tracking people and their body pose in videos is a central problem in computer vision. Standard tracking representations reason about temporal coherence of detected people and body parts. They have difficulty tracking targets under partial occlusions or rare body poses, where detectors often fail, since the number of training examples is often too small to deal with the exponential variability of such configurations.
We propose tracking representations that track and segment people and their body pose in videos by exploiting information at multiple detection and segmentation granularities when available: whole body, parts, or point trajectories.
Detections and motion estimates provide contradictory information in case of false alarm detections or leaking motion affinities. We consolidate contradictory information via graph steering, an algorithm for simultaneous detection and co-clustering in a two-granularity graph of motion trajectories and detections, that corrects motion leakage between correctly detected objects, while being robust to false alarms or spatially inaccurate detections.
We first present a motion segmentation framework that exploits long range motion of point trajectories and large spatial support of image regions.
We show resulting video segments adapt to targets under partial occlusions and deformations.
Second, we augment motion-based representations with object detection for dealing with motion leakage. We demonstrate how to combine dense optical flow trajectory affinities with repulsions from confident detections to reach a global consensus of detection and tracking in crowded scenes.
Third, we study human motion and pose estimation.
We segment hard to detect, fast moving body limbs from their surrounding clutter and match them against pose exemplars to detect body pose under fast motion. We employ on-the-fly human body kinematics to improve tracking of body joints under wide deformations.
We use motion segmentability of body parts for re-ranking a set of body joint candidate trajectories and jointly infer multi-frame body pose and video segmentation.
We show empirically that such a multi-granularity tracking representation is worthwhile, obtaining significantly more accurate multi-object tracking and detailed body pose estimation on popular datasets.
An Auto-Calibrating Knee Flexion-Extension Axis Estimator Using Principal Component Analysis with Inertial Sensors
Inertial measurement units (IMUs) have been demonstrated to reliably measure human joint angles—an essential quantity in the study of biomechanics. However, most previous literature proposed IMU-based joint angle measurement systems that required manual alignment or prescribed calibration motions. This paper presents a simple, physically intuitive method for IMU-based measurement of the knee flexion/extension angle in gait without requiring alignment or discrete calibration, based on computationally efficient and easy-to-implement Principal Component Analysis (PCA). The method is compared against an optical motion capture knee flexion/extension angle modeled through OpenSim. The method is evaluated using both measured and simulated IMU data in an observational study (n = 15), with an absolute root-mean-square error (RMSE) of 9.24∘ and a zero-mean RMSE of 3.49∘. Variation in error across subjects was found, revealed by a subject population larger than previous literature considers. Finally, the paper presents an explanatory model of RMSE as a function of IMU mounting location. The observational data suggest that the RMSE of the method depends on thigh IMU perturbation and axis-estimation quality. However, the effect size of these parameters is small in comparison to potential gains from improved IMU orientation estimation. Results also highlight the need to set relevant datums from which to interpret joint angles, for both truth references and estimated data.
National Science Foundation (U.S.) (GRFP); National Science Foundation (U.S.) (IIS-1453141)
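The PCA idea behind such an axis estimator can be illustrated compactly: during walking, a leg-mounted gyroscope's angular velocity is dominated by rotation about the flexion/extension axis, so the first principal component of the gyroscope samples approximates that axis in the sensor frame. This is a minimal sketch of the principle, not the paper's full pipeline (which also handles thigh/shank pairing and angle computation).

```python
import numpy as np

def estimate_flexion_axis(gyro):
    """Estimate the dominant rotation axis in the IMU's own frame.

    gyro: (N, 3) array of angular-velocity samples (rad/s) from a
    thigh- or shank-mounted IMU recorded during gait. The eigenvector
    of the sample covariance with the largest eigenvalue is the first
    principal component, i.e. the estimated flexion/extension axis
    (up to sign, which must be disambiguated separately).
    """
    centered = gyro - gyro.mean(axis=0)
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    return axis / np.linalg.norm(axis)
```

No calibration motion is needed because ordinary walking already excites the axis of interest; this is what makes the estimator "auto-calibrating."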
Using Distributed Wearable Sensors to Measure and Evaluate Human Lower Limb Motions
This paper presents a wearable sensor approach to motion measurement of human lower limbs, in which subjects perform specified walking trials at self-administered speeds so that their level-walking and stair-ascent capacity can be effectively evaluated. After an initial sensor alignment to reduce error, quaternions are used to represent 3-D orientation, and an optimized gradient descent algorithm is deployed to calculate the quaternion derivative. Sensors on the shank offer additional information to accurately determine the instances of both swing and stance phases. The Denavit-Hartenberg convention is used to set up the kinematic chains when the foot stays stationary on the ground, producing state constraints that minimize the estimation error of knee position. The reliability of this system, from the measurement point of view, has been validated against the results obtained from a commercial motion tracking system, namely Vicon, on healthy subjects. Step-size error and changes in position-estimation accuracy are studied. The experimental results demonstrate that the commonly encountered problems of sensor misplacement and sensor drift can be effectively solved. The proposed self-contained and environment-independent system is capable of providing consistent tracking of human lower limbs without significant drift.
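The quaternion-based orientation tracking described above rests on a standard kinematic relation: the quaternion derivative is half the Hamilton product of the current orientation with the body-frame angular velocity, q̇ = ½ q ⊗ (0, ω). A minimal integration sketch of that relation (without the paper's gradient-descent correction step or its D-H chain constraints) looks like this:

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def integrate_gyro(q, omega, dt):
    """Advance orientation q by one gyroscope sample omega (rad/s)
    using q_dot = 0.5 * q (x) (0, omega), then renormalize."""
    q_dot = 0.5 * quat_mult(q, np.concatenate(([0.0], omega)))
    q = q + q_dot * dt
    return q / np.linalg.norm(q)
```

Integrating the gyroscope alone drifts over time, which is precisely why the paper adds the gradient-descent correction and the stance-phase kinematic constraints on top of this update.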
Robotic alignment of femoral cutting mask during total knee arthroplasty
De Momi E.; Cerveri P.; Gambaretto E.; Marchente M.; Effretti O.; Barbariga S.; Gini G.; Ferrigno G.
Dynamic Measurement of Three-Dimensional Motion from Single-Perspective Two-Dimensional Radiographic Projections
The digital evolution of the x-ray imaging modality has spurred the development of numerous clinical and research tools. This work focuses on the design, development, and validation of dynamic radiographic imaging and registration techniques to address two distinct medical applications: tracking during image-guided interventions, and the measurement of musculoskeletal joint kinematics.
Fluoroscopy is widely employed to provide intra-procedural image-guidance. However, its planar images provide limited information about the location of surgical tools and targets in three-dimensional space. To address this limitation, registration techniques, which extract three-dimensional tracking and image-guidance information from planar images, were developed and validated in vitro.
The ability to accurately measure joint kinematics in vivo is an important tool in studying both normal joint function and pathologies associated with injury and disease; however, it remains a clinical challenge. A technique to measure joint kinematics from single-perspective x-ray projections was developed and validated in vitro, using clinically available radiography equipment.
{Mo2Cap2}: Real-time Mobile {3D} Motion Capture with a Cap-mounted Fisheye Camera
We propose the first real-time approach for the egocentric estimation of 3D human body pose in a wide range of unconstrained everyday activities. This setting has a unique set of challenges, such as mobility of the hardware setup and robustness to long capture sessions with fast recovery from tracking failures. We tackle these challenges based on a novel lightweight setup that converts a standard baseball cap into a device for high-quality pose estimation based on a single cap-mounted fisheye camera. From the captured egocentric live stream, our CNN-based 3D pose estimation approach runs at 60 Hz on a consumer-level GPU. In addition to the novel hardware setup, our other main contributions are: 1) a large ground-truth training corpus of top-down fisheye images and 2) a novel disentangled 3D pose estimation approach that takes the unique properties of the egocentric viewpoint into account. As shown by our evaluation, we achieve lower 3D joint error as well as better 2D overlay than the existing baselines.
A Multi-body Tracking Framework -- From Rigid Objects to Kinematic Structures
Kinematic structures are very common in the real world. They range from simple articulated objects to complex mechanical systems. However, despite their relevance, most model-based 3D tracking methods only consider rigid objects. To overcome this limitation, we propose a flexible framework that allows the extension of existing 6DoF algorithms to kinematic structures. Our approach focuses on methods that employ Newton-like optimization techniques, which are widely used in object tracking. The framework considers both tree-like and closed kinematic structures and allows a flexible configuration of joints and constraints. To project equations from individual rigid bodies to a multi-body system, Jacobians are used. For closed kinematic chains, a novel formulation that features Lagrange multipliers is developed. In a detailed mathematical proof, we show that our constraint formulation leads to an exact kinematic solution and converges in a single iteration. Based on the proposed framework, we extend ICG, which is a state-of-the-art rigid object tracking algorithm, to multi-body tracking. For the evaluation, we create a highly realistic synthetic dataset that features a large number of sequences and various robots. Based on this dataset, we conduct a wide variety of experiments that demonstrate the excellent performance of the developed framework and our multi-body tracker.
Comment: Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence
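The Jacobian projection mentioned in the abstract—mapping per-body equations into a multi-body parameterization and solving with Newton-like updates—can be illustrated on the simplest possible kinematic structure. The planar two-link chain below is a toy stand-in, not ICG's actual formulation: the analytic Jacobian plays the role of the projection from rigid-body velocities to joint space, and the Gauss-Newton loop plays the role of the tracker's optimization.

```python
import numpy as np

def fk(theta, lengths=(1.0, 1.0)):
    """Forward kinematics of a planar 2-link chain: link-end positions."""
    t1, t12 = theta[0], theta[0] + theta[1]
    p1 = lengths[0] * np.array([np.cos(t1), np.sin(t1)])
    p2 = p1 + lengths[1] * np.array([np.cos(t12), np.sin(t12)])
    return p1, p2

def jacobian(theta, lengths=(1.0, 1.0)):
    """Jacobian of the end-effector position w.r.t. joint angles:
    each column projects one joint's rigid-body motion into task space."""
    t1, t12 = theta[0], theta[0] + theta[1]
    j1 = lengths[0] * np.array([-np.sin(t1), np.cos(t1)])
    j2 = lengths[1] * np.array([-np.sin(t12), np.cos(t12)])
    return np.column_stack([j1 + j2, j2])

def gauss_newton_track(theta, target, iters=20):
    """Newton-like update of joint angles to track an observed point."""
    for _ in range(iters):
        _, p2 = fk(theta)
        residual = target - p2
        J = jacobian(theta)
        theta = theta + np.linalg.lstsq(J, residual, rcond=None)[0]
    return theta
```

Because the optimization runs over joint angles rather than independent 6DoF poses, the bone lengths and joint attachments are satisfied by construction—the same motivation the framework gives for projecting rigid-body equations through Jacobians.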
Tracking and modelling motion for biomechanical analysis
This thesis focuses on the problem of determining appropriate skeletal configurations for which a virtual animated character moves to desired positions as smoothly, rapidly, and accurately as possible. During the last decades, several methods and techniques, sophisticated or heuristic, have been presented to produce smooth and natural solutions to the Inverse Kinematics (IK) problem. However, many of the currently available methods suffer from high computational cost and the production of unrealistic poses. In this study, a novel heuristic method, called Forward And Backward Reaching Inverse Kinematics (FABRIK), is proposed, which returns visually natural poses in real time, comparable with highly sophisticated approaches. It is capable of supporting constraints for most of the known joint types and can be extended to solve problems with multiple end effectors, multiple targets, and closed loops. FABRIK was compared against the most popular IK approaches and evaluated in terms of its robustness and performance limitations.
This thesis also includes a robust methodology for marker prediction under multiple-marker occlusion for extended time periods, in order to drive real-time centre of rotation (CoR) estimation. Information inferred from neighbouring markers is utilised, assuming that the inter-marker distances remain constant over time. This is the first time that information about missing markers which remain partially visible to a single camera is exploited. Experiments demonstrate that the proposed methodology can effectively track the occluded markers with high accuracy, even if the occlusion persists for extended periods of time, recovering good estimates of the true joint positions in real time. In addition, the predicted joint positions were further improved by employing FABRIK to relocate them and ensure a fixed bone length over time. Our methodology is tested against some of the most popular methods for marker prediction, and the results confirm that our approach outperforms these methods in estimating both marker and CoR positions.
Finally, an efficient model for real-time hand tracking and reconstruction that requires a minimum number of available markers, one on each finger, is presented. The proposed hand model is highly constrained with joint rotational and orientational constraints, restricting the finger and palm movements to an appropriate feasible set. FABRIK is then incorporated to estimate the remaining joint positions and to fit them to the hand model. Physiological constraints, such as inertia, abduction, and flexion, are also incorporated to correct the final hand posture. A mesh deformation algorithm is then applied to visualise the movements of the underlying hand skeleton for comparison with the true hand poses. The mathematical framework used for describing and implementing the techniques discussed within this thesis is Conformal Geometric Algebra (CGA).
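The FABRIK algorithm described above is simple enough to sketch directly: it alternates a backward pass (pin the end effector to the target and walk toward the base) with a forward pass (re-anchor the base and walk back out), repositioning each joint along the line to its neighbour so that bone lengths are preserved. This is a minimal unconstrained version operating on points; the thesis's joint-type constraints and multi-effector extensions are omitted.

```python
import numpy as np

def fabrik(joints, target, tol=1e-4, max_iter=100):
    """FABRIK inverse kinematics for a single unconstrained chain.

    joints: (N, d) joint positions; joints[0] is the fixed base.
    Returns positions whose end effector reaches target (if reachable)
    while preserving the original bone lengths.
    """
    joints = np.asarray(joints, dtype=float).copy()
    lengths = np.linalg.norm(np.diff(joints, axis=0), axis=1)
    base = joints[0].copy()
    if np.linalg.norm(target - base) > lengths.sum():
        # Target unreachable: fully stretch the chain toward it.
        for i in range(len(joints) - 1):
            lam = lengths[i] / np.linalg.norm(target - joints[i])
            joints[i + 1] = (1 - lam) * joints[i] + lam * target
        return joints
    for _ in range(max_iter):
        # Backward pass: place the end effector on the target, then
        # slide each earlier joint onto the line to its successor.
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            lam = lengths[i] / np.linalg.norm(joints[i + 1] - joints[i])
            joints[i] = (1 - lam) * joints[i + 1] + lam * joints[i]
        # Forward pass: re-anchor the base and repeat outward.
        joints[0] = base
        for i in range(len(joints) - 1):
            lam = lengths[i] / np.linalg.norm(joints[i + 1] - joints[i])
            joints[i + 1] = (1 - lam) * joints[i] + lam * joints[i + 1]
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints
```

Each iteration involves only vector interpolation—no matrix inversion or trigonometry—which is the source of the real-time performance claimed in the abstract.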