More than a Million Ways to Be Pushed: A High-Fidelity Experimental Dataset of Planar Pushing
Pushing is a motion primitive useful to handle objects that are too large,
too heavy, or too cluttered to be grasped. It is at the core of much of robotic
manipulation, in particular when physical interaction is involved. It seems
reasonable then to wish for robots to understand how pushed objects move.
In reality, however, robots often rely on approximations which yield models
that are computable, but also restricted and inaccurate. Just how close are
those models? How reasonable are the assumptions they are based on? To help
answer these questions, and to get a better experimental understanding of
pushing, we present a comprehensive and high-fidelity dataset of planar pushing
experiments. The dataset contains timestamped poses of a circular pusher and a
pushed object, as well as forces at the interaction. We vary the push
interaction in 6 dimensions: surface material, shape of the pushed object,
contact position, pushing direction, pushing speed, and pushing acceleration.
An industrial robot automates the data capturing along precisely controlled
position-velocity-acceleration trajectories of the pusher, which give dense
samples of positions and forces of uniform quality.
We finish the paper by characterizing the variability of friction, and
evaluating the most common assumptions and simplifications made by models of
frictional pushing in robotics. Comment: 8 pages, 10 figures.
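A record in such a dataset pairs timestamped pusher/object poses with contact forces, indexed by the six varied dimensions. The sketch below is a hypothetical schema for illustration only; the field names and values are assumptions, not the dataset's actual format.

```python
from dataclasses import dataclass

# Hypothetical layout for one timestamped pushing sample; the real dataset's
# schema may differ -- all field names and values here are illustrative.
@dataclass
class PushSample:
    t: float            # timestamp [s]
    pusher_xy: tuple    # planar position of the circular pusher [m]
    object_pose: tuple  # (x, y, theta) pose of the pushed object
    force: tuple        # (fx, fy) interaction force [N]
    surface: str        # surface material (one of the 6 varied dimensions)
    shape: str          # shape of the pushed object

# Example: select the samples from one surface/shape combination.
samples = [
    PushSample(0.00, (0.000, 0.0), (0.1, 0.0, 0.0), (0.0, 0.0), "abs", "rect1"),
    PushSample(0.01, (0.001, 0.0), (0.1, 0.0, 0.0), (0.3, 0.0), "abs", "rect1"),
    PushSample(0.00, (0.000, 0.0), (0.1, 0.0, 0.0), (0.0, 0.0), "plywood", "ellip1"),
]
abs_rect = [s for s in samples if s.surface == "abs" and s.shape == "rect1"]
```

Grouping by the varied dimensions in this way is how one would evaluate, per condition, the model assumptions the paper examines.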
Exploitation of environmental constraints in human and robotic grasping
This publication is freely accessible with permission of the rights owner, due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation). We investigate the premise that robust grasping performance is enabled by exploiting constraints present in the environment. These constraints, leveraged through motion in contact, counteract uncertainty in state variables relevant to grasp success. Given this premise, grasping becomes a process of successive exploitation of environmental constraints until a successful grasp has been established. We present support for this view found through the analysis of human grasp behavior and by demonstrating robust robotic grasping based on constraint-exploiting grasp strategies. Furthermore, we show that it is possible to design robotic hands with inherent capabilities for the exploitation of environmental constraints.
Tactile-based Object Retrieval From Granular Media
We introduce GEOTACT, a robotic manipulation method capable of retrieving
objects buried in granular media. This is a challenging task due to the need to
interact with granular media, and doing so based exclusively on tactile
feedback, since a buried object can be completely hidden from vision. Tactile
feedback is in itself challenging in this context, due to ubiquitous contact
with the surrounding media, and the inherent noise level induced by the tactile
readings. To address these challenges, we use a learning method trained
end-to-end with simulated sensor noise. We show that our problem formulation
leads to the natural emergence of learned pushing behaviors that the
manipulator uses to reduce uncertainty and funnel the object to a stable grasp
despite spurious and noisy tactile readings. We also introduce a training
curriculum that enables learning these behaviors in simulation, followed by
zero-shot transfer to real hardware. To the best of our knowledge, GEOTACT is
the first method to reliably retrieve a number of different objects from a
granular environment, doing so on real hardware and with integrated tactile
sensing. Videos and additional information can be found at
https://jxu.ai/geotact
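The abstract notes that the policy is trained end-to-end with simulated sensor noise. A minimal sketch of that idea is to corrupt each simulated tactile reading with additive noise before it reaches the policy; the noise model and magnitude below are illustrative assumptions, not values from the paper.

```python
import random

def noisy_tactile(reading, sigma=0.05, rng=random):
    """Corrupt a simulated tactile reading with additive Gaussian noise.

    Training against injected noise (as the abstract describes) encourages the
    learned policy to tolerate spurious contacts after zero-shot transfer to
    real hardware.  sigma is an illustrative noise level, not the paper's.
    """
    return [x + rng.gauss(0.0, sigma) for x in reading]

rng = random.Random(0)          # fixed seed for reproducibility
clean = [0.0, 0.8, 0.2]         # toy per-taxel contact readings
noisy = noisy_tactile(clean, sigma=0.05, rng=rng)
```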
Robotic manipulation of multiple objects as a POMDP
This paper investigates manipulation of multiple unknown objects in a crowded
environment. Because of incomplete knowledge due to unknown objects and
occlusions in visual observations, object observations are imperfect and action
success is uncertain, making planning challenging. We model the problem as a
partially observable Markov decision process (POMDP), which allows a general
reward based optimization objective and takes uncertainty in temporal evolution
and partial observations into account. In addition to occlusion dependent
observation and action success probabilities, our POMDP model also
automatically adapts object specific action success probabilities. To cope with
the changing system dynamics and performance constraints, we present a new
online POMDP method based on particle filtering that produces compact policies.
The approach is validated both in simulation and in physical experiments in a
scenario of moving dirty dishes into a dishwasher. The results indicate that:
1) a greedy heuristic manipulation approach is not sufficient; multi-object
manipulation requires multi-step POMDP planning, and 2) online planning is
beneficial since it allows adapting the system dynamics model based on
actual experience.
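Online POMDP methods of this kind maintain a belief as a set of particles and reweight and resample it after each observation. The sketch below shows that core belief update on a toy visible/occluded state; the states and observation model are stand-ins, not the paper's dish-manipulation model.

```python
import random

def update_belief(particles, observation, obs_likelihood, rng):
    """Particle-filter belief update: weight, then resample.

    Toy illustration of the belief maintenance an online POMDP planner
    builds on; not the paper's actual model.
    """
    # Weight each particle by how well it explains the observation.
    weights = [obs_likelihood(s, observation) for s in particles]
    if sum(weights) == 0:
        return particles  # degenerate case: keep the prior belief
    # Resample with replacement, proportionally to the weights.
    return rng.choices(particles, weights=weights, k=len(particles))

def likelihood(state, obs):
    # Observing "seen" is much more likely when the object is visible.
    p_seen = 0.9 if state == "visible" else 0.2
    return p_seen if obs == "seen" else 1.0 - p_seen

rng = random.Random(0)
belief = ["visible"] * 50 + ["occluded"] * 50   # uniform prior
belief = update_belief(belief, "seen", likelihood, rng)
frac_visible = belief.count("visible") / len(belief)
```

After observing "seen", the belief shifts sharply toward "visible", which is exactly the occlusion-dependent updating the abstract describes.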
Affordance-Based Grasping Point Detection Using Graph Convolutional Networks for Industrial Bin-Picking Applications
Grasping point detection has traditionally been a core robotic and computer vision problem. In recent years, deep learning-based methods have been widely used to predict grasping points, and have shown strong generalization capabilities under uncertainty. In particular, approaches that aim at predicting object affordances without relying on the object identity have obtained promising results in random bin-picking applications. However, most of them rely on RGB/RGB-D images, and it is not clear to what extent 3D spatial information is used. Graph Convolutional Networks (GCNs) have been successfully used for object classification and scene segmentation in point clouds, and also to predict grasping points in simple laboratory experimentation. In the present proposal, we adapted the Deep Graph Convolutional Network model with the intuition that learning from n-dimensional point clouds would lead to a performance boost in predicting object affordances. To the best of our knowledge, this is the first time that GCNs are applied to predict affordances for suction and gripper end effectors in an industrial bin-picking environment. Additionally, we designed a bin-picking oriented data preprocessing pipeline which helps to ease the learning process and to create a flexible solution for any bin-picking application. To train our models, we created a highly accurate RGB-D/3D dataset which is openly available on demand. Finally, we benchmarked our method against a 2D Fully Convolutional Network based method, improving the top-1 precision score by 1.8% and 1.7% for suction and gripper respectively. This project received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 780488.
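A bin-picking point-cloud preprocessing pipeline typically crops the cloud to the bin volume and downsamples it before it is fed to the network. The sketch below shows those two generic steps; the bounds, voxel size, and step order are illustrative assumptions, since the abstract does not specify the paper's actual pipeline.

```python
# Generic bin-picking preprocessing sketch (illustrative, not the paper's
# pipeline): crop points to the bin volume, then voxel-downsample.
def crop_to_bin(points, lo, hi):
    """Keep only points inside the axis-aligned bin bounds."""
    return [p for p in points
            if all(lo[i] <= p[i] <= hi[i] for i in range(3))]

def voxel_downsample(points, voxel=0.05):
    """Keep one representative point per occupied voxel."""
    seen = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)
        seen.setdefault(key, p)
    return list(seen.values())

cloud = [(0.12, 0.22, 0.06),   # two nearby points inside the bin...
         (0.13, 0.23, 0.07),
         (0.90, 0.90, 0.90)]   # ...and an outlier outside it
inside = crop_to_bin(cloud, lo=(0.0, 0.0, 0.0), hi=(0.5, 0.5, 0.3))
reduced = voxel_downsample(inside, voxel=0.05)
```

Downsampling after cropping keeps the per-scene point count bounded, which eases training regardless of the specific network consuming the cloud.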
Sensorless Motion Planning for Medical Needle Insertion in Deformable Tissues
Minimally invasive medical procedures such as biopsies, anesthesia drug injections, and brachytherapy cancer treatments require inserting a needle to a specific target inside soft tissues. This is difficult because needle insertion displaces and deforms the surrounding soft tissues, causing the target to move during the procedure. To facilitate physician training and preoperative planning for these procedures, we develop a needle insertion motion planning system based on an interactive simulation of needle insertion in deformable tissues and numerical optimization to reduce placement error. We describe a 2-D physically based, dynamic simulation of needle insertion that uses a finite-element model of deformable soft tissues and models needle cutting and frictional forces along the needle shaft. The simulation offers guarantees on simulation stability for mesh modifications and achieves interactive, real-time performance on a standard PC. Using texture mapping, the simulation provides visualization comparable to ultrasound images that the physician would see during the procedure. We use the simulation as a component of a sensorless planning algorithm that uses numerical optimization to compute needle insertion offsets that compensate for tissue deformations. We apply the method to radioactive seed implantation during permanent seed prostate brachytherapy to minimize seed placement error.
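The core planning idea, simulate tissue deformation and numerically optimize an insertion offset that compensates for it, can be shown on a deliberately tiny 1-D model. The linear target-drift model and its coefficient below are illustrative assumptions standing in for the paper's FEM simulation.

```python
# Toy 1-D stand-in for sensorless insertion planning: the target drifts by a
# fraction of how far the needle is driven, and we search for an insertion
# offset that compensates.  The linear drift model and its coefficient are
# illustrative assumptions, not the paper's finite-element simulation.
def simulate_tip_error(offset, target=0.05, drift=0.3):
    commanded = target + offset                     # where we aim the tip [m]
    displaced_target = target + drift * commanded   # deformation moves the target
    return commanded - displaced_target             # placement error [m]

def plan_offset(lo=-0.05, hi=0.05, iters=60):
    # The error is monotone in the offset, so bisect for its zero crossing.
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if simulate_tip_error(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

offset = plan_offset()
residual = simulate_tip_error(offset)
```

The same pattern scales up: replace `simulate_tip_error` with a full deformable-tissue simulation and the scalar bisection with a multivariate optimizer.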
Orienting Deformable Polygonal Parts without Sensors
Parts orienting is an important part of automated manufacturing. Sensorless manipulation has proven to be a useful paradigm in addressing parts orienting, and the manipulation of deformable objects is a growing area of interest. Until now, these areas have remained separate because existing orienting approaches apply forces that, when applied to deformable parts, violate the assumptions used by existing algorithms and could potentially break the part. We introduce a new algorithm and manipulator actions that, when provided with the geometric description and a deformation model of choice for the part, exploit the deformation and generate a plan consisting of the shortest sequence of manipulator actions guaranteed to orient the part up to symmetry from any unknown initial orientation and pose. Additionally, the algorithm estimates whether a given manipulator is sufficiently precise to perform the actions which guarantee the final orientation. This is dictated by the particular part geometry, the deformation model, and the manipulator action path planner, which contains simple end-effector constraints and any standard motion planner. We illustrate the success of the algorithm through 192 experimental trials performed with low-precision robot manipulators and six parts made of four types of materials. The trials resulted in 154 successes, which shows the feasibility of deformable parts orienting. Analysis of the failures showed that the zero-friction assumption is essential for this work, that increased manipulator precision would be beneficial but not necessary, and that a simple deformation model can be sufficient. Finally, we note that the algorithm has applications to truly sensorless manipulation of non-deformable parts.
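The plan-search idea behind sensorless orienting can be sketched compactly: each action maps every possible orientation to a new one, and a plan is the shortest action sequence that collapses the set of possible orientations to a single one. The toy part and actions below are illustrative assumptions; the paper's algorithm additionally accounts for the chosen deformation model and manipulator precision.

```python
from itertools import product

# Toy part with four resting orientations (degrees) and two illustrative
# actions; these are stand-ins, not the paper's manipulator actions.
ORIENTATIONS = [0, 90, 180, 270]

def act_squeeze(o):
    return o % 180          # opposite orientations become indistinguishable

def act_tilt(o):
    return 0 if o < 135 else 180   # part settles onto one of two faces

ACTIONS = {"squeeze": act_squeeze, "tilt": act_tilt}

def shortest_plan(max_len=3):
    """Brute-force the shortest sequence guaranteed to orient the part
    from ANY unknown initial orientation (the sensorless guarantee)."""
    for n in range(1, max_len + 1):
        for seq in product(ACTIONS, repeat=n):
            states = set(ORIENTATIONS)
            for name in seq:                       # track all possibilities
                states = {ACTIONS[name](s) for s in states}
            if len(states) == 1:                   # collapsed to one pose
                return list(seq), states.pop()
    return None, None

plan, final = shortest_plan()
```

Tracking the *set* of possible orientations, rather than a single estimate, is what makes the plan's guarantee hold without any sensing.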
A configuration space toolkit for automated spatial reasoning: Technical results and LDRD project final report
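The c-space abstraction the report describes, classifying a configuration as free or in contact via fast distance computation, can be illustrated on a minimal 2-link planar arm with a point obstacle. The link lengths, obstacle, and clearance below are illustrative assumptions, not the toolkit's API.

```python
import math

# Illustrative 2-link planar arm and point obstacle (not the CSTk API).
L1, L2 = 1.0, 1.0
OBSTACLE = (1.5, 0.5)

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b (the distance-computation core)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                  # clamp to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def classify(q1, q2, clearance=0.05):
    """Classify c-space point (q1, q2) as free or in contact."""
    # Forward kinematics: endpoints of the two links.
    j1 = (L1 * math.cos(q1), L1 * math.sin(q1))
    j2 = (j1[0] + L2 * math.cos(q1 + q2), j1[1] + L2 * math.sin(q1 + q2))
    d = min(point_segment_dist(OBSTACLE, (0.0, 0.0), j1),
            point_segment_dist(OBSTACLE, j1, j2))
    return "free" if d > clearance else "in_contact"
```

Sampling `classify` over a grid of (q1, q2) values is the textbook way to visualize the c-space obstacle region that motion planners built on such a toolkit reason about.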
A robot's configuration space (c-space) is the space of its kinematic degrees of freedom, e.g., the joint-space of an arm. Sets in c-space can be defined that characterize a variety of spatial relationships, such as contact between the robot and its environment. C-space techniques have been fundamental to research progress in areas such as motion planning and physically-based reasoning. However, practical progress has been slowed by the difficulty of implementing the c-space abstraction inside each application. For this reason, we proposed a Configuration Space Toolkit of high-performance algorithms and data structures meeting these needs. Our intent was to develop this robotics software to provide enabling technology to emerging applications that apply the c-space abstraction, such as advanced motion planning, teleoperation supervision, mechanism functional analysis, and design tools. This final report presents the research results and technical achievements of this LDRD project. Key results and achievements included (1) a hybrid Common LISP/C prototype that implements the basic c-space abstraction, (2) a new, generic algorithm for constructing hierarchical geometric representations, and (3) a C++ implementation of an algorithm for fast distance computation, interference detection, and c-space point classification. Since the project's conclusion, motion planning researchers in Sandia's Intelligent Systems and Robotics Center have been using the CSTk libcstk.so C++ library. The code continues to be used, supported, and improved by projects in the ISRC.