Learning to Reconstruct People in Clothing from a Single RGB Camera
We present a learning-based model to infer the personalized 3D shape of people from a few frames (1-8) of a monocular video in which the person is moving, in less than 10 seconds and with a reconstruction accuracy of 5mm. Our model learns to predict the parameters of a statistical body model and instance displacements that add clothing and hair to the shape. The model achieves fast and accurate predictions based on two key design choices. First, by predicting shape in a canonical T-pose space, the network learns to encode the images of the person into pose-invariant latent codes, where the information is fused. Second, based on the observation that feed-forward predictions are fast but do not always align with the input images, we predict using both bottom-up and top-down streams (one per view), allowing information to flow in both directions. Learning relies only on synthetic 3D data. Once learned, the model can take a variable number of frames as input, and can reconstruct shapes even from a single image with an accuracy of 6mm. Results on 3 different datasets demonstrate the efficacy and accuracy of our approach.
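The two-stage prediction described above, fusing pose-invariant per-frame codes and decoding them into body-model parameters plus per-vertex displacements, can be sketched roughly as follows. Everything here is illustrative: the latent dimension, the averaging fusion, the linear decoders, and the random stand-in weights are assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sizes; the abstract does not specify the network dimensions.
LATENT_DIM = 128   # pose-invariant latent code per frame (assumed)
NUM_BETAS = 10     # statistical body-model shape parameters (typical choice)
NUM_VERTS = 6890   # vertex count of a common statistical body model

rng = np.random.default_rng(0)
# Stand-ins for learned decoder weights (illustration only, not trained values).
W_shape = rng.standard_normal((LATENT_DIM, NUM_BETAS)) * 0.01
W_disp = rng.standard_normal((LATENT_DIM, NUM_VERTS * 3)) * 0.001

def fuse_and_predict(frame_codes: np.ndarray):
    """Fuse a variable number of per-frame latent codes (F, LATENT_DIM)
    into one canonical T-pose-space code, then predict body-model shape
    parameters and per-vertex displacements that add clothing and hair."""
    fused = frame_codes.mean(axis=0)                 # simple average fusion (assumed)
    betas = fused @ W_shape                          # shape parameters
    displacements = (fused @ W_disp).reshape(NUM_VERTS, 3)
    return betas, displacements

# Works with any number of frames, mirroring the 1-8 frame setting.
codes = rng.standard_normal((5, LATENT_DIM))
betas, disp = fuse_and_predict(codes)
```

Averaging in a pose-invariant space is what makes the variable frame count trivial to support: a single frame is just the degenerate case of the same fusion.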
Deep Haptic Model Predictive Control for Robot-Assisted Dressing
Robot-assisted dressing offers an opportunity to benefit the lives of many
people with disabilities, such as some older adults. However, robots currently
lack common sense about the physical implications of their actions on people.
The physical implications of dressing are complicated by non-rigid garments,
which can result in a robot indirectly applying high forces to a person's body.
We present a deep recurrent model that, when given a proposed action by the
robot, predicts the forces a garment will apply to a person's body. We also
show that a robot can provide better dressing assistance by using this model
with model predictive control. The predictions made by our model only use
haptic and kinematic observations from the robot's end effector, which are
readily attainable. Collecting training data from real world physical
human-robot interaction can be time consuming, costly, and put people at risk.
Instead, we train our predictive model using data collected in an entirely
self-supervised fashion from a physics-based simulation. We evaluated our
approach with a PR2 robot that attempted to pull a hospital gown onto the arms
of 10 human participants. With a 0.2s prediction horizon, our controller
succeeded at high rates and lowered applied force while navigating the garment
around a person's fist and elbow without getting caught. Shorter prediction
horizons resulted in significantly reduced performance with the sleeve catching
on the participants' fists and elbows, demonstrating the value of our model's
predictions. These behaviors of mitigating catches emerged from our deep
predictive model and the controller objective function, which primarily
penalizes high forces.
Comment: 8 pages, 12 figures, 1 table, 2018 IEEE International Conference on
Robotics and Automation (ICRA)
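The control scheme this abstract describes, a learned force predictor queried inside model predictive control, can be sketched as a simple random-shooting MPC loop. The timestep, candidate count, cost weights, and the toy force surrogate below are all assumptions standing in for the paper's trained recurrent network.

```python
import numpy as np

rng = np.random.default_rng(0)
DT = 0.01                  # control timestep (assumed)
HORIZON = int(0.2 / DT)    # 0.2 s prediction horizon, as in the paper
NUM_CANDIDATES = 64        # sampled action sequences per step (assumed)
FORCE_WEIGHT = 10.0        # the objective primarily penalizes high forces

def predict_forces(state: float, actions: np.ndarray) -> np.ndarray:
    """Stand-in for the learned recurrent model: given haptic/kinematic
    state and a (HORIZON, 3) sequence of end-effector displacements,
    return predicted garment forces per step. This toy surrogate simply
    accumulates force when moving against the dressing direction (+x)."""
    return np.maximum(0.0, -actions[:, 0]).cumsum() + state

def mpc_step(state: float, goal_dir: np.ndarray) -> np.ndarray:
    """One MPC step: sample candidate action sequences, score predicted
    forces plus progress toward the goal, return the best first action."""
    best_cost, best_action = np.inf, None
    for _ in range(NUM_CANDIDATES):
        actions = rng.normal(scale=0.01, size=(HORIZON, 3))
        forces = predict_forces(state, actions)
        progress = actions.sum(axis=0) @ goal_dir
        cost = FORCE_WEIGHT * forces.sum() - progress
        if cost < best_cost:
            best_cost, best_action = cost, actions[0]
    return best_action

action = mpc_step(state=0.0, goal_dir=np.array([1.0, 0.0, 0.0]))
```

Because the cost is dominated by predicted force, sequences that would drive the garment into an obstacle (a fist or elbow) score badly and are discarded, which is how catch-mitigating behavior can emerge without being explicitly programmed.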
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Current learning machines have successfully solved hard application problems,
reaching high accuracy and displaying seemingly "intelligent" behavior. Here we
apply recent techniques for explaining decisions of state-of-the-art learning
machines and analyze various tasks from computer vision and arcade games. This
showcases a spectrum of problem-solving behaviors ranging from naive and
short-sighted, to well-informed and strategic. We observe that standard
performance evaluation metrics can fail to distinguish between these diverse
problem-solving behaviors. Furthermore, we propose our semi-automated Spectral
Relevance Analysis that provides a practically effective way of characterizing
and validating the behavior of nonlinear learning machines. This helps to
assess whether a learned model indeed delivers reliably for the problem that it
was conceived for. Furthermore, our work intends to add a voice of caution to
the ongoing excitement about machine intelligence and pledges to evaluate and
judge some of these recent successes in a more nuanced manner.
Comment: Accepted for publication in Nature Communications
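The spectral step of the proposed Spectral Relevance Analysis can be sketched in a few lines: treat each sample's relevance heatmap as a point, build a similarity graph over the samples, and read cluster structure, i.e. groups of distinct prediction strategies, off the graph Laplacian's eigenvalue gaps. The Gaussian affinity, bandwidth, and eigengap heuristic below are generic spectral-clustering choices assumed for illustration; the heatmaps themselves would come from an explanation method such as Layer-wise Relevance Propagation.

```python
import numpy as np

def spray_eigengap(heatmaps: np.ndarray, sigma: float = 1.0, k_max: int = 5):
    """Sketch of the spectral step: given per-sample relevance heatmaps
    (N, H, W), build a Gaussian-affinity graph, form the unnormalized
    graph Laplacian, and use the largest eigenvalue gap to estimate how
    many distinct problem-solving strategies the model exhibits."""
    X = heatmaps.reshape(len(heatmaps), -1)
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)   # pairwise squared distances
    A = np.exp(-d2 / (2 * sigma ** 2))              # Gaussian affinity
    D = np.diag(A.sum(axis=1))
    L = D - A                                       # unnormalized graph Laplacian
    evals = np.linalg.eigvalsh(L)[:k_max]           # smallest eigenvalues, ascending
    gaps = np.diff(evals)
    # A large gap after the k-th eigenvalue suggests k strategy clusters.
    return evals, int(np.argmax(gaps)) + 1
```

On two well-separated groups of heatmaps the Laplacian has two near-zero eigenvalues followed by a jump, so the eigengap estimate recovers two clusters; in practice each cluster would then be inspected to see whether it reflects a valid strategy or a "Clever Hans" artifact.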