Learning to Reconstruct People in Clothing from a Single RGB Camera
We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input, and is able to reconstruct shapes even from a single image with an accuracy of 6mm. Results on three different datasets demonstrate the efficacy and accuracy of our approach.
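As a rough illustration of the design described above (not the authors' released code), the sketch below encodes each frame into a pose-invariant latent code, fuses a variable number of codes by averaging (an assumption), and regresses statistical-body-model shape coefficients plus per-vertex displacements in the canonical space. The class name, layer sizes, and the 6890-vertex count (SMPL-style) are illustrative.

```python
# Minimal sketch of the two-stage idea: per-frame encoding to pose-invariant
# latent codes, fusion across frames, and regression of shape + displacements.
import torch
import torch.nn as nn

class ShapeEstimator(nn.Module):  # hypothetical name, illustrative architecture
    def __init__(self, latent_dim=256, num_betas=10, num_vertices=6890):
        super().__init__()
        # Per-frame encoder producing a pose-invariant latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Heads regressing body-model shape coefficients and per-vertex
        # displacements (clothing/hair) in the canonical T-pose space.
        self.betas_head = nn.Linear(latent_dim, num_betas)
        self.displacement_head = nn.Linear(latent_dim, num_vertices * 3)
        self.num_vertices = num_vertices

    def forward(self, frames):            # frames: (num_frames, 3, H, W)
        codes = self.encoder(frames)      # one latent code per frame
        fused = codes.mean(dim=0)         # fuse a variable number of frames
        betas = self.betas_head(fused)
        displacements = self.displacement_head(fused).view(self.num_vertices, 3)
        return betas, displacements

# Usage: 1 to 8 frames of the same person; the fusion step handles either case.
model = ShapeEstimator()
betas, disp = model(torch.randn(4, 3, 128, 128))
print(betas.shape, disp.shape)  # torch.Size([10]) torch.Size([6890, 3])
```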
Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies
In motion analysis and understanding it is important to be able to fit a
suitable model or structure to the temporal series of observed data, in order
to describe motion patterns in a compact way, and to discriminate between them.
In an unsupervised context, i.e., no prior model of the moving object(s) is
available, such a structure has to be learned from the data in a bottom-up
fashion. In recent times, volumetric approaches, in which the motion is captured
from a number of cameras and a voxel-set representation of the body is built
from the camera views, have gained ground due to attractive features such as
inherent view-invariance and robustness to occlusions. Automatic, unsupervised
segmentation of moving bodies along entire sequences, in a temporally-coherent
and robust way, has the potential to provide a means of constructing a
bottom-up model of the moving body, and track motion cues that may be later
exploited for motion classification. Spectral methods such as locally linear
embedding (LLE) can be useful in this context, as they preserve "protrusions",
i.e., high-curvature regions of the 3D volume, of articulated shapes, while
improving their separation in a lower dimensional space, making them in this
way easier to cluster. In this paper we therefore propose a spectral approach
to unsupervised and temporally-coherent body-protrusion segmentation along time
sequences. Volumetric shapes are clustered in an embedding space, clusters are
propagated in time to ensure coherence, and merged or split to accommodate
changes in the body's topology. Experiments on both synthetic and real
sequences of dense voxel-set data support the ability of the proposed method to
cluster body parts consistently over time in a totally unsupervised fashion, its
robustness to sampling density and shape quality, and its potential for
bottom-up model construction.
Comment: 31 pages, 26 figures
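The pipeline lends itself to a compact sketch. The code below is a minimal, assumption-laden illustration of the idea rather than the paper's method: each frame's voxel centres are embedded with LLE, clustered in the embedding space, and labels are carried to the next frame by nearest-neighbour matching. The cluster count, neighbourhood size, and the propagation rule are placeholders, and the merge/split handling of topology changes is omitted.

```python
# Sketch: LLE embedding + clustering per frame, with temporal label propagation.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans
from scipy.spatial import cKDTree

def segment_frame(voxels, n_parts=5, n_neighbors=12):
    """Cluster one frame's voxel centres (N x 3) in an LLE embedding space."""
    embedding = LocallyLinearEmbedding(
        n_neighbors=n_neighbors, n_components=3).fit_transform(voxels)
    return KMeans(n_clusters=n_parts, n_init=10).fit_predict(embedding)

def propagate_labels(prev_voxels, prev_labels, curr_voxels):
    """Carry part labels forward in time via nearest-neighbour matching."""
    _, idx = cKDTree(prev_voxels).query(curr_voxels)
    return prev_labels[idx]

# Toy sequence of two random voxel clouds standing in for a moving body.
frames = [np.random.rand(400, 3), np.random.rand(400, 3)]
labels = segment_frame(frames[0])
labels = propagate_labels(frames[0], labels, frames[1])
```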
Human-activity-centered measurement system: challenges from laboratory to the real environment in assistive gait wearable robotics
Assistive gait wearable robots (AGWR) have shown great advancement in developing intelligent devices to assist humans in their activities of daily living (ADLs). Rapid technological advances in sensing, actuators, materials, and computational intelligence have sped up this development towards more practical and smart AGWR. However, most assistive gait wearable robots are still controlled and assessed indoors, within laboratory environments, limiting their potential to provide the real assistance and rehabilitation that humans require in real environments. Gait assessment parameters play an important role not only in evaluating patient progress and assistive-device performance but also in controlling smart, self-adaptable AGWR in real time. Self-adaptable wearable robots must interactively conform to changing environments and to different users to provide optimal functionality and comfort. This paper discusses the performance parameters, such as comfort, safety, adaptability, and energy consumption, that are required for the development of an intelligent AGWR for outdoor environments. The challenges of measuring these parameters with current systems for data collection and analysis using vision capture and wearable sensors are presented and discussed.
Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft Kinect
The Microsoft Kinect camera and its skeletal tracking capabilities have been
embraced by many researchers and commercial developers in various applications
of real-time human movement analysis. In this paper, we evaluate the accuracy
of the human kinematic motion data in the first and second generation of the
Kinect system, and compare the results with an optical motion capture system.
We collected motion data in 12 exercises for 10 different subjects and from
three different viewpoints. We report on the accuracy of the joint localization
and bone length estimation of Kinect skeletons in comparison to the motion
capture. We also analyze the distribution of the joint localization offsets by
fitting a mixture of Gaussian and uniform distribution models to determine the
outliers in the Kinect motion data. Our analysis shows that overall Kinect 2
has more robust and more accurate tracking of human pose as compared to Kinect
1.
Comment: 10 pages, IEEE International Conference on Healthcare Informatics 2015 (ICHI 2015)
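To make the outlier-analysis step concrete, here is a hedged sketch (not the paper's implementation) of fitting a Gaussian-plus-uniform mixture to one-dimensional joint-localization offsets with EM and flagging samples that the uniform component explains better; the initialization and the 0.5 responsibility threshold are illustrative assumptions.

```python
# Sketch: EM for a Gaussian + uniform mixture over joint localisation offsets.
import numpy as np

def fit_gaussian_uniform(offsets, n_iter=100):
    lo, hi = offsets.min(), offsets.max()
    uniform_pdf = 1.0 / (hi - lo)                    # flat outlier component
    mu, var, w = offsets.mean(), offsets.var(), 0.9  # w = Gaussian weight
    for _ in range(n_iter):
        # E-step: responsibility of the Gaussian component for each offset.
        gauss = np.exp(-(offsets - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * gauss / (w * gauss + (1 - w) * uniform_pdf)
        # M-step: re-estimate the Gaussian parameters and its mixing weight.
        mu = np.sum(resp * offsets) / resp.sum()
        var = np.sum(resp * (offsets - mu) ** 2) / resp.sum()
        w = resp.mean()
    return mu, var, w, resp

# Offsets in metres: mostly small Gaussian errors plus a few gross failures.
offsets = np.concatenate([np.random.normal(0.02, 0.01, 500),
                          np.random.uniform(-0.5, 0.5, 20)])
mu, var, w, resp = fit_gaussian_uniform(offsets)
outliers = resp < 0.5   # samples better explained by the uniform component
print(f"Gaussian weight {w:.2f}, flagged {outliers.sum()} outliers")
```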