Manipulating Highly Deformable Materials Using a Visual Feedback Dictionary
The complex physical properties of highly deformable materials such as
clothes pose significant challenges for autonomous robotic manipulation
systems. We present a novel visual feedback dictionary-based method for
manipulating deformable objects towards a desired configuration. Our
approach is based
on visual servoing and we use an efficient technique to extract key features
from the RGB sensor stream in the form of a histogram of deformable model
features. These histogram features serve as high-level representations of the
state of the deformable material. Next, we collect manipulation data and use a
visual feedback dictionary that maps the velocity in the high-dimensional
feature space to the velocity of the robotic end-effectors for manipulation. We
have evaluated our approach on a set of complex manipulation tasks and
human-robot manipulation tasks on different cloth pieces with varying material
characteristics.
Comment: The video is available at goo.gl/mDSC4
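The core idea of the abstract, mapping a velocity in histogram-feature space to an end-effector velocity via a learned dictionary, can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the class name, nearest-neighbour lookup, and all data shapes are assumptions for illustration only.

```python
import math

class VisualFeedbackDictionary:
    """Hypothetical sketch: stores (feature-space velocity,
    end-effector velocity) pairs from collected manipulation data,
    then answers queries by nearest-neighbour lookup."""

    def __init__(self):
        self.entries = []  # list of (feature_velocity, effector_velocity)

    def add(self, feature_velocity, effector_velocity):
        # Offline phase: record a pair observed during data collection.
        self.entries.append((list(feature_velocity), list(effector_velocity)))

    def query(self, feature_velocity):
        # Online phase: find the stored feature-space velocity closest
        # (Euclidean distance) to the observed one and return its
        # associated end-effector velocity.
        def dist(entry):
            return math.sqrt(sum((a - b) ** 2
                                 for a, b in zip(entry[0], feature_velocity)))
        return min(self.entries, key=dist)[1]
```

A controller would call `query` with the difference between the current and desired histogram features and command the returned end-effector velocity; the paper's actual feature extraction and mapping are more involved than this lookup.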
Autonomous clothes manipulation using a hierarchical vision architecture
This paper presents a novel robot vision architecture for perceiving generic 3-D clothes configurations. Our architecture is hierarchically structured, starting from low-level curvature features, to mid-level geometric shapes and topology descriptions, and finally to high-level semantic surface descriptions. We demonstrate our robot vision architecture on a customized dual-arm industrial robot with our in-house developed stereo vision system, carrying out autonomous grasping and dual-arm flattening. The experimental results show the effectiveness of the proposed dual-arm flattening using the stereo vision system compared with single-arm flattening using the widely cited Kinect-like sensor as the baseline. In addition, the proposed grasping approach achieves satisfactory performance when grasping various kinds of garments, verifying the capability of the proposed visual perception architecture to be adapted to more than one clothing-manipulation task.
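The three-level hierarchy this abstract describes (low-level curvature features, mid-level shapes/topology, high-level semantic surface descriptions) can be pictured as a simple pipeline in which each stage consumes the previous stage's output. All function names and data fields below are invented for illustration; the paper's actual features and classifiers are not specified here.

```python
def extract_curvature_features(depth_image):
    # Low level (stubbed): label each depth sample with a curvature class.
    return [{"sample": s, "curvature": "ridge"} for s in depth_image]

def fit_geometric_shapes(curvature_features):
    # Mid level (stubbed): group curvature features into geometric
    # shapes and topology descriptions.
    return [{"shape": "wrinkle", "support": curvature_features}]

def describe_surfaces(shapes):
    # High level (stubbed): produce a semantic surface description,
    # e.g. for selecting grasp or flattening candidates.
    return {"surface": "wrinkled cloth", "grasp_candidates": len(shapes)}

def perceive(depth_image):
    # The hierarchy: each stage feeds the next.
    return describe_surfaces(fit_geometric_shapes(
        extract_curvature_features(depth_image)))
```

The point of the sketch is the staged structure, not the (trivial) stage bodies: grasping and flattening behaviours both consume the top-level description rather than raw sensor data.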
Ground Robotic Hand Applications for the Space Program study (GRASP)
This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating any ground operations performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or of augmenting human operators performing physical tasks, include reduced costs, enhanced safety and reliability, and reduced processing turnaround time.
A Grasping-centered Analysis for Cloth Manipulation
Compliant and soft hands have gained a lot of attention in the past decade
because of their ability to adapt to the shape of the objects, increasing their
effectiveness for grasping. However, when it comes to grasping highly flexible
objects such as textiles, we face the dual problem: it is the object that will
adapt to the shape of the hand or gripper. In this context, the classic grasp
analysis or grasping taxonomies are not suitable for describing textile objects
grasps. This work proposes a novel definition of textile object grasps that
abstracts from the robotic embodiment or hand shape and recovers concepts from
the early neuroscience literature on hand prehension skills. This framework
enables us to identify which grasps have been used in the literature to date to
perform robotic cloth manipulation, and allows for a precise definition of all
the tasks that have been tackled in terms of manipulation primitives based on
regrasps. In addition, we also review what grippers have been used. Our
analysis shows how the vast majority of cloth manipulations have relied only on
one type of grasp, and at the same time we identify several tasks that
require a greater variety of grasp types to be executed successfully. Our framework is
generic, provides a classification of cloth manipulation primitives and can
inspire gripper design and benchmark construction for cloth manipulation.
Comment: 13 pages, 4 figures, 4 tables. Accepted for publication at IEEE
Transactions on Robotics.
A Framework for Designing Anthropomorphic Soft Hands through Interaction
Modeling and simulating soft robot hands can aid in design iteration for
complex and high degree-of-freedom (DoF) morphologies. This can be further
supplemented by iterating on the design based on its performance in real world
manipulation tasks. However, this requires a framework that allows us to
iterate quickly at low costs. In this paper, we present a framework that
leverages rapid prototyping of the hand using 3D-printing, and utilizes
teleoperation to evaluate the hand in real world manipulation tasks. Using this
framework, we design a 3D-printed 16-DoF dexterous anthropomorphic soft hand
(DASH) and iteratively improve its design over three iterations. Rapid
prototyping techniques such as 3D-printing allow us to directly evaluate the
fabricated hand without modeling it in simulation. We show that the design is
improved at each iteration through the hand's performance in 30 real-world
teleoperated manipulation tasks. Testing over 600 demonstrations shows that our
final version of DASH can solve 16 of the 30 tasks compared to Allegro, a
popular rigid hand in the market, which can only solve 7 tasks. We
open-source our CAD models and the teleoperated dataset for further study;
they are available on our website
(https://dash-through-interaction.github.io).