Deep Forward and Inverse Perceptual Models for Tracking and Prediction
We consider the problems of learning forward models that map state to
high-dimensional images and inverse models that map high-dimensional images to
state in robotics. Specifically, we present a perceptual model for generating
video frames from state with deep networks, and provide a framework for its use
in tracking and prediction tasks. We show that our proposed model greatly
outperforms standard deconvolutional methods and GANs for image generation,
producing clear, photo-realistic images. We also develop a convolutional neural
network model for state estimation and compare its results to those of an Extended
Kalman Filter when estimating robot trajectories. We validate all models on a real robotic
system.
Comment: 8 pages, International Conference on Robotics and Automation (ICRA) 201
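The Extended Kalman Filter used as a baseline for trajectory estimation can be sketched as a single predict/update step. This is a generic textbook EKF, not the paper's implementation; all function and variable names are illustrative:

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One Extended Kalman Filter predict/update step.

    x: state estimate, P: state covariance, z: measurement,
    f/h: motion/measurement functions, F/H: their Jacobians at x,
    Q/R: process/measurement noise covariances.
    """
    # Predict: propagate the state through the motion model.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement.
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With a linear constant-velocity model the Jacobians coincide with the model matrices themselves, which makes the step easy to sanity-check.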
A Distance-Geometric Method for Recovering Robot Joint Angles From an RGB Image
Autonomous manipulation systems operating in domains where human intervention
is difficult or impossible (e.g., underwater, extraterrestrial or hazardous
environments) require a high degree of robustness to sensing and communication
failures. Crucially, motion planning and control algorithms require a stream of
accurate joint angle data provided by joint encoders, the failure of which may
result in an unrecoverable loss of functionality. In this paper, we present a
novel method for retrieving the joint angles of a robot manipulator using only
a single RGB image of its current configuration, opening up an avenue for
recovering system functionality when conventional proprioceptive sensing is
unavailable. Our approach, based on a distance-geometric representation of the
configuration space, exploits the knowledge of a robot's kinematic model with
the goal of training a shallow neural network that performs a 2D-to-3D
regression of distances associated with detected structural keypoints. It is
shown that the resulting Euclidean distance matrix uniquely corresponds to the
observed configuration, where joint angles can be recovered via
multidimensional scaling and a simple inverse kinematics procedure. We evaluate
the performance of our approach on real RGB images of a Franka Emika Panda
manipulator, showing that the proposed method is efficient and exhibits solid
generalization ability. Furthermore, we show that our method can be easily
combined with a dense refinement technique to obtain superior results.
Comment: IFAC 202
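The multidimensional scaling step described above, recovering point coordinates from a Euclidean distance matrix, can be illustrated with classical MDS. This is a minimal textbook sketch, not the authors' implementation:

```python
import numpy as np

def classical_mds(D, dim=3):
    """Recover point coordinates (up to a rigid transform) from a
    Euclidean distance matrix D via classical multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of centered points
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # keep the top-`dim` eigenpairs
    L = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * L                  # one point per row
```

For an exact Euclidean distance matrix this recovers the configuration exactly up to rotation, reflection, and translation; the paper's inverse kinematics step would then map the recovered keypoint positions to joint angles.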
Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from Depth Maps
Knowing the exact 3D location of workers and robots in a collaborative environment enables several real applications, such as the detection of unsafe situations or the study of mutual interactions for statistical and social purposes. In this paper, we propose a non-invasive and light-invariant framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera. The method can be applied to any robot without requiring hardware access to its internal states. We introduce a novel representation of the predicted pose, namely Semi-Perspective Decoupled Heatmaps (SPDH), to accurately compute 3D joint locations in world coordinates by adapting efficient deep networks designed for 2D Human Pose Estimation. The proposed approach, which takes as input a depth representation based on XYZ coordinates, can be trained on synthetic depth data and applied to real-world settings without the need for domain adaptation techniques. To this end, we present the SimBa dataset, based on both synthetic and real depth images, and use it for the experimental evaluation. Results show that the proposed approach, consisting of a specific depth map representation and the SPDH, outperforms the current state of the art.
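An XYZ-based depth representation of the kind the abstract mentions is commonly obtained by back-projecting each depth pixel through the pinhole camera model. A minimal sketch under that assumption, with illustrative intrinsics (the paper's exact input encoding may differ):

```python
import numpy as np

def depth_to_xyz(depth, fx, fy, cx, cy):
    """Back-project a depth map of shape (H, W), in metres, to an
    (H, W, 3) XYZ image using pinhole intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    # Pixel coordinate grids: u indexes columns, v indexes rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```

The pixel at the principal point maps onto the optical axis, so its X and Y components are zero regardless of depth.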