A Factor Graph Approach to Multi-Camera Extrinsic Calibration on Legged Robots
Legged robots are becoming popular not only in research, but also in
industry, where they can demonstrate their superiority over wheeled machines in
a variety of applications. Whether acting as mobile manipulators or simply as
all-terrain ground vehicles, these machines must precisely track the desired
base and end-effector trajectories, perform Simultaneous Localization and
Mapping (SLAM), and move in challenging environments, all while keeping
balance. A crucial aspect for these tasks is that all onboard sensors must be
properly calibrated and synchronized to provide consistent signals for all the
software modules they feed. In this paper, we focus on the problem of
calibrating the relative pose between a set of cameras and the base link of a
quadruped robot. This pose is fundamental to successfully perform sensor
fusion, state estimation, mapping, and any other task requiring visual
feedback. To solve this problem, we propose an approach based on factor graphs
that jointly optimizes the mutual position of the cameras and the robot base
using kinematics and fiducial markers. We also quantitatively compare its
performance with other state-of-the-art methods on the hydraulic quadruped
robot HyQ. The proposed approach is simple, modular, and independent of
external devices other than the fiducial marker.
Comment: To appear in "The Third IEEE International Conference on Robotic
Computing (IEEE IRC 2019)"
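The core alignment problem the abstract describes can be illustrated in a much-simplified form: given fiducial-marker points expressed in the robot base frame (via forward kinematics) and the same points observed in a camera frame, a single camera's extrinsics follow from a closed-form least-squares (Kabsch/Procrustes) fit. This is a hedged sketch of the underlying geometry, not the paper's factor-graph optimizer; the function name and data layout are illustrative.

```python
import numpy as np

def estimate_extrinsics(p_base, p_cam):
    """Least-squares rigid transform (R, t) with p_base ~ R @ p_cam + t,
    via the Kabsch/Procrustes SVD solution. p_base holds marker points in
    the robot base frame (e.g. from kinematics), p_cam the same points in
    the camera frame; both are (N, 3) arrays of N >= 3 non-collinear points."""
    cb, cc = p_base.mean(axis=0), p_cam.mean(axis=0)
    H = (p_cam - cc).T @ (p_base - cb)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ cc
    return R, t
```

The factor-graph formulation in the paper generalizes this to several cameras jointly and to noisy, time-distributed measurements; the closed-form fit above only handles one camera and one batch of correspondences.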
Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB
We propose a new single-shot method for multi-person 3D pose estimation in
general scenes from a monocular RGB camera. Our approach uses novel
occlusion-robust pose-maps (ORPM) which enable full body pose inference even
under strong partial occlusions by other people and objects in the scene. ORPM
outputs a fixed number of maps which encode the 3D joint locations of all
people in the scene. Body part associations allow us to infer 3D pose for an
arbitrary number of people without explicit bounding box prediction. To train
our approach we introduce MuCo-3DHP, the first large scale training data set
showing real images of sophisticated multi-person interactions and occlusions.
We synthesize a large corpus of multi-person images by compositing images of
individual people (with ground truth from multi-view performance capture). We
evaluate our method on our new challenging 3D annotated multi-person test set
MuPoTs-3D where we achieve state-of-the-art performance. To further stimulate
research in multi-person 3D pose estimation, we will make our new datasets and
associated code publicly available for research purposes.
Comment: International Conference on 3D Vision (3DV), 201
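The idea of maps that "encode the 3D joint locations of all people in the scene" can be sketched generically: in location-map-style readout, each joint has a 2D confidence heatmap plus three maps storing its x, y, z coordinates, and the 3D position is read out at the heatmap's maximum. This is a simplified, hypothetical illustration of that family of readouts, not the ORPM architecture itself.

```python
import numpy as np

def read_location_maps(heatmap, loc_maps):
    """Read one joint's 3D position from a 2D confidence heatmap and three
    location maps (x, y, z): sample each location map at the pixel where
    the heatmap is maximal."""
    idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return np.array([m[idx] for m in loc_maps])
```

A fixed number of such map sets is what lets the method handle an arbitrary number of people without per-person bounding boxes: body-part association decides which person's coordinates each readout location belongs to.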
Learning Articulated Motions From Visual Demonstration
Many functional elements of human homes and workplaces consist of rigid
components which are connected through one or more sliding or rotating
linkages. Examples include doors and drawers of cabinets and appliances;
laptops; and swivel office chairs. A robotic mobile manipulator would benefit
from the ability to acquire kinematic models of such objects from observation.
This paper describes a method by which a robot can acquire an object model by
capturing depth imagery of the object as a human moves it through its range of
motion. We envision that in the future, a machine newly introduced to an
environment could be shown by its human user the articulated objects particular
to that environment, inferring from these "visual demonstrations" enough
information to actuate each object independently of the user.
Our method employs sparse (markerless) feature tracking, motion segmentation,
component pose estimation, and articulation learning; it does not require prior
object models. Using the method, a robot can observe an object being exercised,
infer a kinematic model incorporating rigid, prismatic and revolute joints,
then use the model to predict the object's motion from a novel vantage point.
We evaluate the method's performance, and compare it to that of a previously
published technique, for a variety of household objects.
Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN:
978-0-9923747-0-
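The last stage of the pipeline, articulation learning, must decide which joint model (rigid, prismatic, or revolute) best explains the observed relative motion between two segmented parts. A crude, hypothetical classifier over relative part poses (not the paper's model-selection procedure) might look like:

```python
import numpy as np

def classify_joint(transforms, rot_tol=1e-3, trans_tol=1e-3):
    """Crude joint-type classification from a sequence of 4x4 homogeneous
    matrices giving part B's pose relative to part A at each observation:
    'rigid' if neither rotation nor translation is seen, 'prismatic' if
    only translation, 'revolute' if rotation is present."""
    max_angle, max_trans = 0.0, 0.0
    for T in transforms:
        R, t = T[:3, :3], T[:3, 3]
        # rotation angle recovered from the trace of R
        c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
        max_angle = max(max_angle, np.arccos(c))
        max_trans = max(max_trans, np.linalg.norm(t))
    if max_angle < rot_tol and max_trans < trans_tol:
        return "rigid"
    if max_angle < rot_tol:
        return "prismatic"
    return "revolute"
```

A real system would fit each candidate model to noisy pose estimates and pick the best by a model-selection criterion rather than hard thresholds, and would additionally estimate the axis and origin of the winning joint.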