Randomized physics-based motion planning for grasping in cluttered and uncertain environments
© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Planning motions to grasp an object in cluttered and uncertain environments is a challenging task, particularly when a collision-free trajectory does not exist and objects obstructing the way must be carefully grasped and moved out of it. This letter takes a different approach and proposes to address this problem with a randomized physics-based motion planner that permits robot–object and object–object interactions. The main idea is to avoid explicit high-level reasoning about the task by providing the
motion planner with a physics engine to evaluate possible complex multibody dynamical interactions. The approach is able to solve the problem in complex scenarios, also accounting for uncertainty in the objects' poses and in the contact dynamics. The work enhances the state validity checker, the control sampler, and the tree exploration strategy of a kinodynamic motion planner called KPIECE. The enhanced algorithm, called p-KPIECE, has been validated in simulation and in real experiments. The results have been compared with an ontological physics-based motion planner and with task and motion planning approaches, showing a significant improvement in planning time, success rate, and solution path quality.
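To make the idea concrete, the core loop of such a planner can be sketched as a tree search whose expansion step is a forward physics simulation of a sampled control, rather than a purely geometric collision check. This is a minimal illustration, not the authors' p-KPIECE: the 1-D point-mass dynamics, the workspace bounds, and the uniform control sampler below are all invented stand-ins for a real physics engine and state space.

```python
import random

def physics_step(state, control, dt=0.1):
    """Stand-in for a physics engine: integrate a 1-D point mass."""
    pos, vel = state
    vel = vel + control * dt
    pos = pos + vel * dt
    return (pos, vel)

def valid(state):
    """State validity: stay inside workspace bounds; contact is allowed."""
    pos, vel = state
    return -10.0 <= pos <= 10.0 and abs(vel) <= 5.0

def plan(start, goal, goal_tol=0.3, iters=5000, seed=0):
    """Grow a tree by forward-simulating randomly sampled controls."""
    rng = random.Random(seed)
    tree = {start: None}                  # state -> parent, for path recovery
    for _ in range(iters):
        parent = rng.choice(list(tree))   # naive node selection
        control = rng.uniform(-1.0, 1.0)  # sampled control input
        child = physics_step(parent, control)
        if not valid(child):              # a real physics engine would also
            continue                      # resolve object-object contacts here
        tree[child] = parent
        if abs(child[0] - goal) <= goal_tol:
            path = [child]                # walk parents back to the start
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return list(reversed(path))
    return None

# Tiny goal so the toy sketch terminates almost immediately.
path = plan(start=(0.0, 0.0), goal=0.2)
```

Because every edge of the tree is produced by simulation, any solution path is dynamically feasible by construction; the paper's contributions lie in making the node selection, control sampling, and validity checking efficient under contact and uncertainty, which this sketch does not attempt.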
A Unified Visual-Haptic Fingertip Sensor For Advanced Robot Dexterity
The problem of robotic grasping and manipulation requires a system-level perspective aimed at solving its interlinked sub-problems simultaneously. These sub-problems consist of designing an appropriate robot hand, sensing technology, control, and planning strategy that together can increase the dexterity of a robot hand in complex environments. Existing approaches lack the proper use and integration of tactile feedback, which could enable robot hands with far superior capabilities than those found today. This thesis addresses this challenge from three aspects: hardware design, system integration, and algorithm development. On the hardware side, it traces the development of a multi- and cross-modal tactile sensor that can measure proximity, contact, and force (PCF). Three unique features of the PCF sensor are (i) the ability to measure visual as well as tactile object features, (ii) its low manufacturing cost, and (iii) the ease with which it can be integrated into different types of robot hands. This is achieved by embedding infrared proximity-sensing integrated chips in a soft elastomer to obtain a multitude of signals. On the system integration side, the thesis demonstrates the individual importance of the hand design and of the visual and tactile sensing modalities for robotic manipulation tasks through careful real-world experiments. On the algorithmic side, it presents the implementation of several algorithms spanning signal processing, computer vision, control, probabilistic theory, and machine learning for experimental evaluation.
Study to design and develop remote manipulator system
Modeling of human performance in remote manipulation tasks is reported, using automated procedures in which computers analyze and count motions during a manipulation task. Performance is monitored by an on-line computer capable of measuring the joint angles of both master and slave and, in some cases, the trajectory and velocity of the hand itself. In this way the operator's strategies under different transmission delays, displays, tasks, and manipulators can be analyzed in detail for comparison. Some progress is described toward obtaining a set of standard tasks and difficulty measures for evaluating manipulator performance.
On the Interplay between Mechanical and Computational Intelligence in Robot Hands
Researchers have made tremendous advances in robotic grasping over the past decades. On the hardware side, many robot hand designs have been proposed, covering a large spectrum of dexterity (from simple parallel grippers to anthropomorphic hands), actuation (from underactuated to fully actuated), and sensing capabilities (from bare open/close states to tactile sensing). On the software side, grasping techniques have also evolved significantly, from open-loop control and classical feedback control to learning-based policies. However, most studies and applications follow a one-way paradigm in which mechanical engineers/researchers design the hardware first and control/learning experts then write the code to use the hand. In contrast, we aim to study the interplay between the mechanical and computational aspects of robotic grasping. We believe both sides are important but cannot solve grasping problems on their own; both are tightly connected by the laws of physics and should not be developed separately. We use the term "Mechanical Intelligence" to refer to the ability, realized by mechanisms, to respond appropriately to external inputs, and we show that combining Mechanical Intelligence with Computational Intelligence is beneficial for grasping.
The first part of this thesis derives hand underactuation mechanisms from grasp data. Mechanical coordination in robot hands, one type of Mechanical Intelligence, corresponds to the concept of dimensionality reduction in machine learning. However, the resulting low-dimensional manifolds need to be realizable by underactuated mechanisms. In this project, we first collect simulated grasp data without accounting for underactuation, then apply a dimensionality reduction technique (which we term "Mechanically Realizable Manifolds") that considers both pre-contact postural synergies and post-contact joint torque coordination, and finally build robot hands based on the resulting low-dimensional models. We also demonstrate a real-world application on a free-flying robot for the International Space Station.
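The synergy-extraction step can be illustrated with plain PCA: project many grasp postures onto a low-dimensional subspace and drive all joints from a few actuator inputs along it. This is only a stand-in for intuition; the thesis's Mechanically Realizable Manifolds additionally constrain the manifold so a physical underactuated mechanism can realize it, which ordinary PCA does not. The dataset, dimensions, and `hand_posture` helper below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic grasp dataset: 200 grasps x 10 joint angles, generated from
# 2 latent synergies so low-dimensional structure actually exists.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
joints = latent @ mixing + 0.05 * rng.normal(size=(200, 10))

# PCA via SVD of the centered data; the top rows of vt are the
# dominant postural synergies.
mean = joints.mean(axis=0)
u, s, vt = np.linalg.svd(joints - mean, full_matrices=False)
synergies = vt[:2]

def hand_posture(actuation):
    """Map 2 actuator inputs to a 10-joint posture along the synergies."""
    return mean + actuation @ synergies

posture = hand_posture(np.array([1.0, -0.5]))
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

In a physical hand, the two actuation inputs would correspond to two motors whose tendon routing or linkage ratios encode the synergy directions, which is why the reduction must produce mechanically realizable subspaces rather than arbitrary ones.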
The second part concerns proprioceptive grasping of unknown objects by taking advantage of hand compliance. Mechanical compliance is intrinsically connected to force/torque sensing and control. In this work, we propose a series-elastic hand providing embodied compliance and proprioception, together with an associated grasping policy built on a network of proportional-integral controllers. We show that, without any prior model of the object and with only proprioceptive sensing, a robot hand can make stable grasps in a reactive fashion.
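A single finger of such a scheme can be sketched as a PI loop that closes the joint until the series-elastic deflection, a proxy for contact force, holds a setpoint, with no object model or vision in the loop. The object location, spring constant, and gains below are invented for illustration and are not the thesis's hand or controller parameters.

```python
# Sketch: proprioceptive grasping with a PI controller per finger.
# The finger closes until spring deflection past the (unknown) contact
# point reaches a force setpoint. All constants are illustrative only.

OBJECT_SURFACE = 0.6    # finger angle at contact (unknown to controller)
STIFFNESS = 10.0        # series-elastic spring constant
FORCE_SETPOINT = 2.0    # desired grip force for this finger

def sensed_force(angle):
    """Proprioception: spring deflection past the contact point."""
    return STIFFNESS * max(0.0, angle - OBJECT_SURFACE)

def grasp(kp=0.05, ki=0.01, steps=500):
    """Run the PI loop; the control output is a joint velocity command."""
    angle, integral = 0.0, 0.0
    for _ in range(steps):
        error = FORCE_SETPOINT - sensed_force(angle)
        integral += error
        angle += kp * error + ki * integral
    return angle, sensed_force(angle)

angle, force = grasp()
```

Before contact the force error is constant, so the loop simply closes the finger; once contact is made the same loop regulates force, which is what makes the behavior reactive without any mode switching.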
The last part develops Mechanical and Computational Intelligence jointly, co-optimizing mechanisms and control policies using deep reinforcement learning (RL). Traditional RL treats robot hardware as immutable and models it as part of the environment. In contrast, we move the robot hardware out of the environment, express its mechanics as auto-differentiable physics, and connect it with the computational policy to create a unified policy (we term this method "Hardware as Policy"), which allows RL algorithms to back-propagate gradients with respect to both hardware and computational parameters and to optimize them in the same fashion. We present a mass-spring toy problem to illustrate this idea, as well as a real-world design case of an underactuated hand.
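The flavor of joint hardware-policy optimization can be shown on a mass-spring toy: a hardware parameter (spring stiffness) and a policy parameter (feedback gain) are updated by gradient descent on one shared loss. This is a sketch under invented dynamics and constants, and central finite differences stand in for the auto-differentiable physics the thesis describes; it is not the "Hardware as Policy" implementation itself.

```python
# Toy joint optimization of a hardware parameter (spring stiffness k)
# and a policy parameter (feedback gain g) on a 1-D mass-spring system
# that must settle at a target position. All values are illustrative.

TARGET = 1.0

def rollout_loss(k, g, steps=200, dt=0.05):
    """Simulate the mass-spring under policy force g*(TARGET - pos)."""
    pos, vel, loss = 0.0, 0.0, 0.0
    for _ in range(steps):
        force = g * (TARGET - pos) - k * pos - 0.5 * vel  # policy + spring + damping
        vel += force * dt
        pos += vel * dt
        loss += (pos - TARGET) ** 2 * dt
    return loss

def optimize(k=1.0, g=0.5, lr=0.1, iters=100, eps=1e-4):
    """Descend the same loss w.r.t. hardware and policy parameters."""
    for _ in range(iters):
        dk = (rollout_loss(k + eps, g) - rollout_loss(k - eps, g)) / (2 * eps)
        dg = (rollout_loss(k, g + eps) - rollout_loss(k, g - eps)) / (2 * eps)
        k, g = k - lr * dk, g - lr * dg   # update hardware and policy together
        k = max(k, 0.0)                   # stiffness must stay physical
    return k, g

k, g = optimize()
```

The point of the exercise is that the same gradient signal flows into both parameters, so the "design" (here, the stiffness) is shaped by the task loss exactly as the controller is, rather than being fixed in advance.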
The three projects presented in this thesis are meaningful examples of the interplay between the mechanical and computational aspects of robotic grasping. In the Conclusion, we summarize some high-level philosophies and suggestions for integrating Mechanical and Computational Intelligence, as well as the high-level challenges that remain in pushing this area forward.