Data-driven robotic manipulation of cloth-like deformable objects: the present, challenges and future prospects
Manipulating cloth-like deformable objects (CDOs) is a long-standing problem in the robotics community. CDOs are flexible (non-rigid) objects that show no detectable compression strength when two points on the object are pushed towards each other; they include ropes (1D), fabrics (2D) and bags (3D). In general, the many degrees of freedom (DoF) of CDOs introduce severe self-occlusion and complex state–action dynamics, which are significant obstacles for perception and manipulation systems. These challenges exacerbate existing issues with modern robotic control methods such as imitation learning (IL) and reinforcement learning (RL). This review focuses on the application details of data-driven control methods for four major task families in this domain: cloth shaping, knot tying/untying, dressing and bag manipulation. Furthermore, we identify specific inductive biases in these four domains that present challenges for more general IL and RL algorithms.
Reinforced Axial Refinement Network for Monocular 3D Object Detection
Monocular 3D object detection aims to extract the 3D position and properties
of objects from a 2D input image. This is an ill-posed problem, with a major
difficulty lying in the information loss caused by depth-agnostic cameras.
Conventional approaches sample 3D bounding boxes from the space and infer the
relationship between the target object and each of them; however, the
probability of drawing effective samples in 3D space is relatively small. To
improve sampling efficiency, we propose to start with an initial
prediction and refine it gradually towards the ground truth, changing only one
3D parameter at each step. This requires a policy that receives a
reward only after several steps, so we adopt reinforcement learning to
optimize it. The proposed framework, Reinforced Axial Refinement Network
(RAR-Net), serves as a post-processing stage that can be freely integrated
into existing monocular 3D detection methods, improving performance on
the KITTI dataset at a small extra computational cost.
Comment: Accepted by ECCV 202
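The one-parameter-per-step refinement idea can be sketched in a few lines. This is not the paper's learned policy: a toy greedy search stands in for it, the box parametrisation and all names (`refine`, `score_fn`, `DELTA`) are illustrative assumptions.

```python
# Hypothetical sketch of axial box refinement: a 3D box is a dict of
# parameters, and each step changes exactly one parameter by a fixed
# delta, keeping the change only if a score function improves.
PARAMS = ["x", "y", "z", "w", "h", "l", "yaw"]
DELTA = 0.1  # illustrative step size, not from the paper

def refine(box, score_fn, steps=10):
    """Greedy stand-in for the learned policy: at each step, try
    nudging every single parameter up or down and keep the best."""
    box = dict(box)
    for _ in range(steps):
        best, best_score = box, score_fn(box)
        for p in PARAMS:
            for sign in (1.0, -1.0):
                cand = dict(box)
                cand[p] += sign * DELTA
                s = score_fn(cand)
                if s > best_score:
                    best, best_score = cand, s
        box = best
    return box
```

In the paper the per-step choice is made by a trained RL policy rewarded after several steps, rather than by exhaustively scoring every candidate as this sketch does.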
Dynamic Handover: Throw and Catch with Bimanual Hands
Humans throw and catch objects all the time. However, such a seemingly common
skill introduces many challenges for robots: they need to perform such
dynamic actions at high speed, collaborate precisely, and interact
with diverse objects. In this paper, we design a system with two multi-finger
hands attached to robot arms to solve this problem. We train our system using
Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer
to deploy it on real robots. To overcome the Sim2Real gap, we provide multiple
novel algorithm designs, including learning a trajectory prediction model for
the object. This model gives the robot catcher a real-time estimate
of where the object is heading, so it can react accordingly. We conduct
experiments with multiple objects in the real-world system and show
significant improvements over multiple baselines. Our project page is available
at https://binghao-huang.github.io/dynamic_handover/.
Comment: Accepted at CoRL 2023.
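The role of the trajectory prediction model can be illustrated with a minimal sketch. The paper learns this model from data; as a labelled assumption, a constant-gravity ballistic extrapolation stands in for it here, and `predict_position` is an invented name.

```python
# Hypothetical stand-in for a learned trajectory predictor: given the
# object's current position and velocity, extrapolate where it will be
# after dt seconds, assuming point-mass ballistic flight (no drag).
G = 9.81  # gravitational acceleration, m/s^2

def predict_position(pos, vel, dt):
    """p' = p + v*dt, with gravity pulling down the z axis."""
    x, y, z = pos
    vx, vy, vz = vel
    return (x + vx * dt,
            y + vy * dt,
            z + vz * dt - 0.5 * G * dt ** 2)
```

The catcher would query such a predictor at every control step to aim its hand at the object's future position rather than its current one.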
Constrained Reinforcement Learning and Formal Verification for Safe Colonoscopy Navigation
The field of robotic Flexible Endoscopes (FEs) has progressed significantly,
offering a promising solution to reduce patient discomfort. However, the
limited autonomy of most robotic FEs results in non-intuitive and challenging
manoeuvres, constraining their application in clinical settings. While previous
studies have employed lumen tracking for autonomous navigation, they fail to
adapt to the presence of obstructions and sharp turns when the endoscope faces
the colon wall. In this work, we propose a Deep Reinforcement Learning
(DRL)-based navigation strategy that eliminates the need for lumen tracking.
However, DRL methods pose safety risks, as they do not account for
potential hazards associated with the actions taken. To ensure safety, we
employ a Constrained Reinforcement Learning (CRL) method to restrict the
policy to a predefined safety regime. Moreover, we present a model-selection
strategy that utilises Formal Verification (FV) to choose a policy that is
entirely safe before deployment. We validate our approach in a virtual
colonoscopy environment and report that, out of 300 trained policies, we
could identify three that are entirely safe. Our work demonstrates
that CRL, combined with model selection through FV, can improve the robustness
and safety of robotic behaviour in surgical applications.
Comment: Accepted at the IEEE International Conference on Intelligent Robots
and Systems (IROS), 2023. [Corsi, Marzari and Pore contributed equally
- …
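The select-only-verified-policies idea behind the last entry can be sketched as follows. This is an assumption-laden simplification: real formal verification reasons over the whole (continuous) state space, whereas this toy check enumerates a finite state set, and all names (`is_safe`, `select_safe_policies`) are illustrative.

```python
# Hypothetical sketch of model selection for safety: from a pool of
# trained policies, keep only those whose action satisfies a safety
# predicate in every state of a finite check set.
def is_safe(policy, states, safe):
    """True iff the policy's action is safe in every checked state."""
    return all(safe(s, policy(s)) for s in states)

def select_safe_policies(policies, states, safe):
    """Filter a policy pool down to the fully safe ones."""
    return [p for p in policies if is_safe(p, states, safe)]
```

In the paper this filtering role is played by Formal Verification run on the trained networks before deployment, which is how 3 entirely safe policies were identified among the 300 trained.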