Realtime State Estimation with Tactile and Visual Sensing. Application to Planar Manipulation
Accurate and robust object state estimation enables successful object
manipulation. Visual sensing is widely used to estimate object poses. However,
in a cluttered scene or in a tight workspace, the robot's end-effector often
occludes the object from the visual sensor. The robot then loses visual
feedback and must fall back on open-loop execution.
In this paper, we integrate both tactile and visual input using a framework
for solving the SLAM problem, incremental smoothing and mapping (iSAM), to
provide a fast and flexible solution. Visual sensing provides global pose
information but is noisy in general, whereas contact sensing is local, but its
measurements are more accurate relative to the end-effector. By combining them,
we aim to exploit their advantages and overcome their limitations. We explore
the technique in the context of a pusher-slider system. We adapt iSAM's
measurement cost and motion cost to the pushing scenario, and use an
instrumented setup to evaluate the estimation quality with different object
shapes, on different surface materials, and under different contact modes.
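The core fusion idea above can be illustrated with a minimal sketch (not the paper's iSAM implementation): two Gaussian measurements of the same pose, one noisy but global (vision) and one precise but local (tactile), combined by information-weighted least squares, which is the building block that smoothing frameworks like iSAM optimize over.

```python
import numpy as np

def fuse(z_vis, cov_vis, z_tac, cov_tac):
    """MAP estimate of a pose given two independent Gaussian measurements."""
    info_vis = np.linalg.inv(cov_vis)   # information = inverse covariance
    info_tac = np.linalg.inv(cov_tac)
    cov = np.linalg.inv(info_vis + info_tac)
    mean = cov @ (info_vis @ z_vis + info_tac @ z_tac)
    return mean, cov

# Illustrative numbers: vision is global but noisy, tactile is locally precise.
z_vis = np.array([0.10, 0.00])
z_tac = np.array([0.12, 0.01])
mean, cov = fuse(z_vis, np.eye(2) * 1e-2, z_tac, np.eye(2) * 1e-4)
# The fused estimate is pulled strongly toward the low-noise tactile reading.
```

The inverse-covariance weighting explains the complementarity the abstract describes: the accurate sensor dominates wherever it reports, while the noisy global sensor still anchors the estimate when contact is lost.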
Stable Prehensile Pushing: In-Hand Manipulation with Alternating Sticking Contacts
This paper presents an approach to in-hand manipulation planning that
exploits the mechanics of alternating sticking contact. Particularly, we
consider the problem of manipulating a grasped object using external pushes for
which the pusher sticks to the object. Given the physical properties of the
object, frictional coefficients at contacts and a desired regrasp on the
object, we propose a sampling-based planning framework that builds a pushing
strategy concatenating different feasible stable pushes to achieve the desired
regrasp. An efficient dynamics formulation allows us to plan in-hand
manipulations 100-1000 times faster than our previous work, which builds upon a
complementarity formulation. Experimental observations for the generated plans
show that the object precisely moves in the grasp as expected by the planner.
Video Summary -- youtu.be/qOTKRJMx6Ho
Comment: IEEE International Conference on Robotics and Automation 201
Dexterous Manipulation Graphs
We propose the Dexterous Manipulation Graph as a tool to address in-hand
manipulation and reposition an object inside a robot's end-effector. This graph
is used to plan a sequence of manipulation primitives so as to bring the object to
the desired end pose. This sequence of primitives is translated into motions of
the robot to move the object held by the end-effector. We use a dual arm robot
with parallel grippers to test our method on a real system and show successful
planning and execution of in-hand manipulation.
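Planning over such a graph reduces to graph search. The sketch below assumes a toy graph whose nodes are grasp states and whose edges are manipulation primitives (the state and primitive names are illustrative, not taken from the paper), and finds the shortest primitive sequence with breadth-first search.

```python
from collections import deque

# Hypothetical manipulation graph: node -> [(primitive, successor), ...]
graph = {
    "bottom_grasp": [("pivot", "side_grasp")],
    "side_grasp":   [("slide", "top_grasp"), ("pivot", "bottom_grasp")],
    "top_grasp":    [],
}

def plan(start, goal):
    """Breadth-first search for the shortest sequence of primitives."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, seq = queue.popleft()
        if node == goal:
            return seq
        for primitive, nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + [primitive]))
    return None  # goal unreachable from start

# plan("bottom_grasp", "top_grasp") yields the sequence ["pivot", "slide"],
# which the robot would then translate into arm and gripper motions.
```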
Plan-Guided Reinforcement Learning for Whole-Body Manipulation
Synthesizing complex whole-body manipulation behaviors has fundamental
challenges due to the rapidly growing combinatorics inherent to contact
interaction planning. While model-based methods have shown promising results in
solving long-horizon manipulation tasks, they often work under strict
assumptions, such as known model parameters, oracular observation of the
environment state, and simplified dynamics, resulting in plans that cannot
easily transfer to hardware. Learning-based approaches, such as imitation
learning (IL) and reinforcement learning (RL), have been shown to be robust
when operating over in-distribution states; however, they need heavy human
supervision. Specifically, model-free RL requires a tedious reward-shaping
process. IL methods, on the other hand, rely on human demonstrations that
involve advanced teleoperation methods. In this work, we propose a plan-guided
reinforcement learning (PGRL) framework to combine the advantages of
model-based planning and reinforcement learning. Our method requires minimal
human supervision because it relies on plans generated by model-based planners
to guide the exploration in RL. In exchange, RL derives a more robust policy
thanks to domain randomization. We test this approach on a whole-body
manipulation task on Punyo, an upper-body humanoid robot with compliant,
air-filled arm coverings, to pivot and lift a large box. Our preliminary
results indicate that the proposed methodology is promising to address
challenges that remain difficult for either model- or learning-based strategies
alone.
Comment: 4 pages, 4 figures
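One way plan guidance can enter the RL loop is through the reward. The sketch below is an assumed shaping form, not the paper's exact objective: the agent earns the task reward plus a bonus for staying near the model-based planner's reference trajectory, which focuses exploration without hand-crafted reward shaping.

```python
import numpy as np

def pgrl_reward(state, plan_state, task_reward, w_track=0.5):
    """Task reward plus a plan-tracking bonus (illustrative shaping term).

    state:      current observed state vector
    plan_state: reference state from the model-based plan at this timestep
    w_track:    weight trading off task progress vs. plan adherence (assumed)
    """
    tracking = -np.linalg.norm(state - plan_state)  # penalty for deviating
    return task_reward + w_track * tracking
```

Under domain randomization, the learned policy can then deviate from the nominal plan where the randomized dynamics demand it, which is the robustness benefit the abstract claims.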
DexTouch: Learning to Seek and Manipulate Objects with Tactile Dexterity
The sense of touch is an essential ability for skillfully performing a
variety of tasks, providing the capacity to search and manipulate objects
without relying on visual information. Extensive research has been conducted
over time to apply these human tactile abilities to robots. In this paper, we
introduce a multi-finger robot system designed to search for and manipulate
objects using the sense of touch without relying on visual information.
Randomly located target objects are searched using tactile sensors, and the
objects are manipulated for tasks that mimic daily life. The objective of the
study is to endow robots with human-like tactile capabilities. To achieve this,
binary tactile sensors are implemented on one side of the robot hand to
minimize the Sim2Real gap. Training the policy through reinforcement learning
in simulation and transferring the trained policy to the real environment, we
demonstrate that object search and manipulation using tactile sensors is
possible even in an environment without vision information. In addition, an
ablation study was conducted to analyze the effect of tactile information on
manipulative tasks. Our project page is available at
https://lee-kangwon.github.io/dextouch/
Comment: Project page: https://lee-kangwon.github.io/dextouch
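The binary tactile sensing described above can be sketched as a simple thresholding of raw contact readings (threshold value and interface are assumptions for illustration): only contact/no-contact must agree between simulator and hardware, which is what narrows the Sim2Real gap.

```python
import numpy as np

def binarize_tactile(forces, threshold=0.1):
    """Map raw per-taxel contact forces to binary contact signals (0/1).

    `threshold` is an illustrative calibration constant; in practice it would
    be tuned per sensor so that sim and real agree on contact events.
    """
    return (np.asarray(forces) > threshold).astype(int)

# Example: four taxels, two in contact.
obs = binarize_tactile([0.0, 0.05, 0.3, 1.2])   # -> array([0, 0, 1, 1])
```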
Manipulation Planning Using Environmental Contacts to Keep Objects Stable under External Forces
This paper addresses the problem of sequential manipulation planning to keep an object stable under changing external forces. Particularly, we focus on using object-environment contacts. We present a planning algorithm that can generate robot configurations and motions to intelligently use object-environment, as well as object-robot, contacts to keep an object stable under forceful operations such as drilling and cutting. Given a sequence of external forces, the planner minimizes the number of different configurations used to keep the object stable. An important computational bottleneck in this algorithm is due to the static stability analysis of a large number of configurations. We propose a containment relationship between configurations to prune the stability checking process.
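The pruning idea can be sketched with an assumed form of the containment relationship (illustrative, not necessarily the paper's exact definition): if a configuration's contact set contains that of a configuration already verified stable for a force, it is accepted without repeating the expensive stability analysis.

```python
def prune_with_containment(configs, force, is_stable):
    """Return the stable configurations, skipping redundant stability checks.

    configs:   list of frozensets of active contacts (illustrative encoding)
    is_stable: expensive static-stability oracle, is_stable(contacts, force)
    Assumption: adding contacts never destabilizes (monotonicity), so any
    superset of a verified-stable contact set is accepted without a check.
    """
    verified = []   # contact sets already proven stable for this force
    stable = []
    for contacts in configs:
        if any(v <= contacts for v in verified):   # containment test
            stable.append(contacts)                # pruned: no oracle call
            continue
        if is_stable(contacts, force):
            verified.append(contacts)
            stable.append(contacts)
    return stable
```

With many configurations and a costly stability oracle, each containment hit replaces a full static analysis with a cheap subset test, which is exactly the bottleneck the abstract targets.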