8,375 research outputs found
Sim2Real View Invariant Visual Servoing by Recurrent Control
Humans are remarkably proficient at controlling their limbs and tools from a
wide range of viewpoints and angles, even in the presence of optical
distortions. In robotics, this ability is referred to as visual servoing:
moving a tool or end-point to a desired location using primarily visual
feedback. In this paper, we study how viewpoint-invariant visual servoing
skills can be learned automatically in a robotic manipulation scenario. To this
end, we train a deep recurrent controller that can automatically determine
which actions move the end-point of a robotic arm to a desired object. The
problem that must be solved by this controller is fundamentally ambiguous:
under severe variation in viewpoint, it may be impossible to determine the
actions in a single feedforward operation. Instead, our visual servoing system
must use its memory of past movements to understand how the actions affect the
robot motion from the current viewpoint, correcting mistakes and gradually
moving closer to the target. This ability is in stark contrast to most visual
servoing methods, which either assume known dynamics or require a calibration
phase. We show how we can learn this recurrent controller using simulated data
and a reinforcement learning objective. We then describe how the resulting
model can be transferred to a real-world robot by disentangling perception from
control and only adapting the visual layers. The adapted model can servo to
previously unseen objects from novel viewpoints on a real-world Kuka IIWA
robotic arm. For supplementary videos, see:
https://fsadeghi.github.io/Sim2RealViewInvariantServo
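A minimal sketch, assuming a PyTorch-style implementation rather than the authors' released code, of the kind of recurrent controller the abstract describes: a small CNN encodes the current camera frame, an LSTM integrates frames and past actions over the episode so the policy can infer how its actions move the arm under the unknown viewpoint, and a linear head scores candidate end-effector actions. The class name, layer sizes, and action parameterization are illustrative assumptions, not the paper's architecture.

# Illustrative sketch only; layer sizes and the action space are assumptions.
import torch
import torch.nn as nn


class RecurrentServoController(nn.Module):
    def __init__(self, num_actions=8, hidden_size=256):
        super().__init__()
        # Perception layers: per the abstract, only the visual layers are
        # adapted when transferring from simulation to the real robot.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Memory over the interaction: lets the controller correct earlier
        # mistakes instead of committing to a single feedforward guess.
        self.rnn = nn.LSTM(input_size=64 + num_actions,
                           hidden_size=hidden_size, batch_first=True)
        self.action_head = nn.Linear(hidden_size, num_actions)

    def forward(self, frames, prev_actions, state=None):
        # frames: (B, T, 3, H, W); prev_actions: (B, T, num_actions) one-hot.
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        rnn_in = torch.cat([feats, prev_actions], dim=-1)
        out, state = self.rnn(rnn_in, state)
        return self.action_head(out), state  # per-step action scores


if __name__ == "__main__":
    ctrl = RecurrentServoController()
    frames = torch.randn(2, 4, 3, 64, 64)    # two episodes, four frames each
    prev = torch.zeros(2, 4, 8)              # previously executed actions
    scores, _ = ctrl(frames, prev)
    print(scores.shape)                      # torch.Size([2, 4, 8])

In the paper, a controller of this recurrent form is trained on simulated data with a reinforcement learning objective; the sketch above only shows the forward pass.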
Bi-Mapper: Holistic BEV Semantic Mapping for Autonomous Driving
A semantic map of the road scene, covering fundamental road elements, is an
essential ingredient in autonomous driving systems. It provides important
perception foundations for positioning and planning when rendered in the
Bird's-Eye-View (BEV). Currently, prior knowledge of hypothetical depth,
together with calibration parameters, can guide the learning of translating
front perspective views directly into BEV. However, this approach suffers from
geometric distortions in the representation of distant objects. Another stream
of methods, which uses no prior knowledge, learns the transformation between
front perspective views and BEV implicitly from a global view. Since these two
learning paradigms can complement each other, we propose a Bi-Mapper framework
for top-down road-scene semantic understanding, which incorporates a global
view and local prior knowledge. To
enhance reliable interaction between them, an asynchronous mutual learning
strategy is proposed. At the same time, an Across-Space Loss (ASL) is designed
to mitigate the negative impact of geometric distortions. Extensive
experiments on the nuScenes and Cam2BEV datasets verify the effectiveness of
each module in the proposed Bi-Mapper framework. Compared with existing road mapping
networks, the proposed Bi-Mapper achieves 2.1% higher IoU on the nuScenes
dataset. Moreover, we verify the generalization performance of Bi-Mapper in a
real-world driving scenario. The source code is publicly available at
https://github.com/lynn-yu/Bi-Mapper.
Comment: Accepted to IEEE Robotics and Automation Letters (RA-L).
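The abstract does not spell out the asynchronous mutual-learning rule or the Across-Space Loss, so the following is only a generic sketch of two-stream mutual learning under stated assumptions: a prior-guided BEV stream and a global-view BEV stream are each supervised by the BEV ground truth, each additionally matches the other's detached prediction through a soft consistency term, and per-stream warm-up steps stand in for the asynchronous schedule. All names, weights, and the schedule are assumptions, not the Bi-Mapper code.

# Generic two-stream mutual-learning sketch; not the Bi-Mapper implementation.
import torch
import torch.nn.functional as F


def mutual_learning_loss(prior_logits, global_logits, labels,
                         step, prior_warmup=2000, global_warmup=500):
    """prior_logits, global_logits: (B, C, H, W) BEV segmentation logits.
    labels: (B, H, W) ground-truth BEV semantic map (class indices)."""
    # Each stream is always supervised by the BEV ground truth.
    ce = F.cross_entropy(prior_logits, labels) + \
         F.cross_entropy(global_logits, labels)

    # Soft mutual supervision: each stream matches the other's (detached)
    # prediction; the warm-up steps make the exchange asynchronous.
    kl_prior_from_global = F.kl_div(F.log_softmax(prior_logits, dim=1),
                                    F.softmax(global_logits.detach(), dim=1),
                                    reduction="batchmean")
    kl_global_from_prior = F.kl_div(F.log_softmax(global_logits, dim=1),
                                    F.softmax(prior_logits.detach(), dim=1),
                                    reduction="batchmean")
    w_prior = float(step >= prior_warmup)    # assumed schedule
    w_global = float(step >= global_warmup)  # assumed schedule
    return ce + w_prior * kl_prior_from_global + w_global * kl_global_from_prior


if __name__ == "__main__":
    p = torch.randn(2, 4, 50, 50, requires_grad=True)   # prior-guided stream
    g = torch.randn(2, 4, 50, 50, requires_grad=True)   # global-view stream
    y = torch.randint(0, 4, (2, 50, 50))                 # BEV labels
    loss = mutual_learning_loss(p, g, y, step=1000)
    loss.backward()
    print(loss.item())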