
    Grasping Unknown Objects Based on Gripper Workspace Spheres

    In this paper, we present a novel grasp planning algorithm for unknown objects, given a point cloud of the target registered from different views. The proposed methodology requires no prior knowledge of the object and no offline learning. In our approach, the gripper kinematic model is used to generate a point cloud of each finger's workspace, which is then filled with spheres. At run time, the object is first segmented and its major axis is computed; the main grasping action is constrained to a plane perpendicular to this axis. The object is then uniformly sampled and scanned for gripper poses that place at least one object point in the workspace of each finger. In addition, collision checks against the object and the table are performed using a computationally inexpensive approximation of the gripper shape. Our methodology is both time-efficient (taking less than 1.5 seconds on average) and versatile. Successful experiments have been conducted on a simple jaw gripper (the Franka Panda gripper) as well as a complex, high degree-of-freedom (DoF) hand (the Allegro hand).
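    The per-pose feasibility test the abstract describes can be sketched as follows. This is an illustrative reading, not the authors' implementation: the function names, the sphere data layout, and the brute-force distance check are all assumptions. A candidate gripper pose is accepted only if every finger's sphere-filled workspace contains at least one object point.

```python
import numpy as np

def pose_is_feasible(object_points, finger_sphere_sets, pose):
    """object_points: (N, 3) array in the world frame.
    finger_sphere_sets: list of (centers, radii) per finger, expressed
    in the gripper frame. pose: 4x4 gripper-to-world transform."""
    # Transform object points into the gripper frame once per pose.
    inv = np.linalg.inv(pose)
    pts = (inv[:3, :3] @ object_points.T).T + inv[:3, 3]
    for centers, radii in finger_sphere_sets:
        # Distance of every object point to every workspace sphere centre.
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        if not np.any(d <= radii[None, :]):
            return False  # this finger's workspace reaches no object point
    return True
```

    In a full planner this test would run inside the sampling loop over candidate poses, with the cheap gripper-shape collision check applied to the survivors.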

    Enhancing Grasp Pose Computation in Gripper Workspace Spheres

    In this paper, an enhancement to the grasp planning algorithm based on gripper workspace spheres is presented. Our method requires a point cloud of the target registered from different views, assuming no prior knowledge of the object or any of its properties. This work introduces a new set of metrics for evaluating grasp pose candidates and explores the impact of denser object sampling on grasp success rates. In addition to gripper position sampling, we now sample orientations about the x, y, and z axes, so the grasping algorithm no longer requires object orientation estimation. Successful experiments have been conducted on a simple jaw gripper (the Franka Panda gripper) as well as a complex, high degree-of-freedom (DoF) hand (the Allegro hand) as proof of its versatility. Real-world experiments yielded grasp success rates of 76% and 85.5%, respectively.
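    The orientation sampling described above can be sketched as a uniform grid of rotations about the three principal axes. The step size and composition order here are illustrative assumptions; the paper does not specify them.

```python
import numpy as np

def rotation(axis, angle):
    """Rotation matrix about one of the principal axes x, y, or z."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def sample_orientations(step_deg=45):
    """Yield composed rotations R = Rz @ Ry @ Rx over a uniform angle grid,
    so candidate gripper orientations need no object-orientation estimate."""
    angles = np.deg2rad(np.arange(0, 360, step_deg))
    for ax in angles:
        for ay in angles:
            for az in angles:
                yield rotation("z", az) @ rotation("y", ay) @ rotation("x", ax)
```

    Each sampled rotation would be combined with a sampled gripper position and then scored by the candidate-evaluation metrics.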

    Dynamic grasping of objects with a high-speed parallel robot

    Underactuated grippers aim to simplify the control strategies needed for stable grasps thanks to their inherent shape adaptability. While early research focused on developing human-like robotic hands for disabled people, in recent years a new field of application has appeared with the constant evolution of industry: the use of a single underactuated gripper as a replacement for diverse dedicated fully actuated grippers. However, two main issues restrain its adoption: the stability of the grasp and the speed of performance. The first is an active topic, as all underactuated grippers need to ensure the stability of the grasped object through an adequate kinematic design, while the latter is not widely treated, as there were few application fields where high speed was required and, in the end, the quasi-static analysis must also be ensured. For this reason, the present research work focuses on the speed of grasping. First, an introduction to underactuated hands is given, followed by two main stability criteria. Then, a model of an underactuated finger that allows analyzing the complete grasping sequence at high speed is developed, along with a collision model. Next, a design-based analysis to simplify the model is performed, and the grasp-state volume tool is introduced in order to inspect the impact of the design variables on the proposed criteria. In the last chapter, an optimization over the design space is performed and a design is chosen, cross-checked with the ADAMS software, and prototyped. Finally, an overview of the strengths and gaps of the research is presented in the conclusions, closing with future work that could be of interest.

    Robotic Grasping of Large Objects for Collaborative Manipulation

    In the near future, robots are envisioned to work alongside humans in professional and domestic environments without significant restructuring of the workspace. Robotic systems in such setups must be adept at observation, analysis, and rational decision making. To coexist in an environment, humans and robots will need to interact and cooperate on multiple tasks. One fundamental task is the manipulation of large objects in work environments, which requires cooperation between multiple manipulating agents for load sharing. Collaborative manipulation has been studied in the literature with a focus on multi-agent planning and control strategies. However, for a collaborative manipulation task, grasp planning also plays a pivotal role in cooperation and task completion. In this work, a novel approach is proposed for collaborative grasping and manipulation of large unknown objects. The manipulation task is defined as a sequence of poses and the expected external wrench acting on the target object. In a two-agent manipulation task, the proposed approach selects a grasp for the second agent after observing the grasp location of the first agent. The solution is computed so that it minimizes the grasp wrenches through load sharing between both agents. To verify the proposed methodology, an online system for human-robot manipulation of unknown objects was developed. The system used depth information from a fixed Kinect sensor for perception and decision making during a human-robot collaborative lift-up. Experiments with multiple objects substantiated that the proposed method results in optimal load sharing despite limited information and partial observability.
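    The load-sharing idea can be illustrated with a heavily simplified 1-D model, under assumptions the abstract does not state: the object is a rigid beam lifted vertically, the first agent's grasp x1 is already fixed, and the second grasp x2 is chosen from a candidate set so that the larger of the two support forces is minimal. Every name here is hypothetical; the actual method works with full 6-D wrenches.

```python
def load_share(x1, candidates, com, weight):
    """Pick x2 minimising the larger support force for a vertical lift.
    Static equilibrium of the beam: f1 + f2 = weight, and the moments
    of f1 and f2 about the centre of mass (com) cancel."""
    best = None
    for x2 in candidates:
        if x2 == x1:
            continue  # both agents cannot grasp the same point
        # Moment balance about com gives f2, force balance gives f1.
        f2 = weight * (com - x1) / (x2 - x1)
        f1 = weight - f2
        cost = max(abs(f1), abs(f2))
        if best is None or cost < best[0]:
            best = (cost, x2, f1, f2)
    return best[1:]  # (chosen grasp, force on agent 1, force on agent 2)
```

    For a uniform beam with the first grasp at one end, this picks the candidate farthest on the other side of the centre of mass, splitting the weight evenly.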

    Workshop on "Robotic assembly of 3D MEMS".

    Proceedings of a workshop proposed at IEEE IROS 2007. Increasing MEMS functionality often requires integrating the various technologies used for mechanical, optical, and electronic subsystems into a single system. These technologies usually have process incompatibilities, so the whole microsystem cannot be obtained monolithically and therefore requires microassembly steps. Microassembly of MEMS from micrometric components is one of the most promising approaches to achieving high-performance MEMS. Moreover, microassembly also makes it possible to develop suitable MEMS packaging as well as 3D components, although microfabrication technologies are usually only able to create 2D and "2.5D" components. The study of microassembly methods is consequently of high importance for the growth of MEMS technologies. Two approaches are currently being developed for microassembly: self-assembly and robotic microassembly. In the first, assembly is highly parallel, but efficiency and flexibility remain low. The robotic approach has the potential to achieve precise and reliable assembly with high flexibility. The proposed workshop focuses on this second approach and takes stock of the corresponding microrobotic issues. Beyond microfabrication technologies, performing MEMS microassembly requires micromanipulation strategies, microworld dynamics, and attachment technologies. The design and fabrication of the microrobot end-effectors, as well as of the assembled micro-parts, require microfabrication technologies. Moreover, new micromanipulation strategies are necessary to handle and position micro-parts with sufficiently high accuracy during assembly. The dynamic behaviour of micrometric objects also has to be studied and controlled. Finally, after positioning the micro-part, attachment technologies are necessary.

    Implementation and testing of point cloud based grasping algorithms for object picking

    Final project, Erasmus Mundus Master's in Advanced Robotics. Code: SJD024. Academic year 2016-2017. The purpose of this study is to investigate the most effective methodologies for grasping items in an environment where success, robustness, and the time of algorithmic computation and its implementation are key constraints. The study originates from the Amazon Robotics Challenge 2017 (ARC'17), which addresses the problem of automating the picking process in online shopping warehouses. In a real warehouse environment, the robot has to deal with restricted visibility and accessibility. The proposed solution to grasping was to retrieve a final position and orientation of the end effector given only sensory information, without mesh reconstruction. Two grippers were used: a two-finger gripper with a narrow opening width and a vacuum gripper. The Antipodal Grasp Identification and Learning (AGILE) and Height Accumulated Features (HAF) methods were chosen for implementation on the two-finger gripper due to their ease of applicability, identical input type, and reportedly high success rates. One major contribution of this work is the Centroid Normals Approach (CNA) method for the vacuum gripper, which chooses the most central grasp location in the point cloud on the flattest part of the object. Since it does not compute orientation, it is faster than the other approaches. It was concluded that CNA should be used on as many objects as possible with both the vacuum gripper and the two-finger gripper. A final scheme was devised to pick up the maximum number of items by combining algorithms on the two different grippers, given the hardware restrictions, to cater to the different objects in the challenge.
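    One plausible reading of the CNA idea above can be sketched as: score each point by how well its neighbourhood normals agree (a flat patch), keep the flattest fraction of the cloud, and pick the kept point closest to the centroid. The neighbourhood size, flatness score, and cutoff fraction are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def cna_grasp_point(points, normals, k=8, flat_frac=0.5):
    """points, normals: (N, 3) arrays with unit normals.
    Returns the index of the chosen suction point."""
    n = len(points)
    flatness = np.empty(n)
    for i in range(n):
        # k nearest neighbours of point i (brute force for clarity).
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = np.argsort(d)[:k]
        # Flat patch: neighbour normals nearly parallel to this normal.
        flatness[i] = np.mean(normals[nbrs] @ normals[i])
    # Keep the flattest fraction of the cloud, then take the most central.
    flat_idx = np.argsort(-flatness)[: max(1, int(flat_frac * n))]
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points[flat_idx] - centroid, axis=1)
    return flat_idx[np.argmin(dists)]
```

    Skipping orientation estimation is what makes this cheaper than AGILE or HAF: the vacuum gripper only needs an approach point and the local surface normal.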

    Innovative robot hand designs of reduced complexity for dexterous manipulation

    This thesis investigates the mechanical design of robot hands with the aim of sensibly reducing system complexity in terms of the number of actuators and sensors and the control needed for grasping and in-hand manipulation of unknown objects. Human hands are the most complex, versatile, and dexterous manipulators in nature, able to perform tasks ranging from sophisticated surgery to a wide variety of daily activities (e.g. preparing food, changing clothes, playing instruments, to name some). However, why human hands can perform such fascinating tasks still eludes complete comprehension. Since at least the end of the sixteenth century, scientists and engineers have tried to match the sensory and motor functions of the human hand. As a result, many contemporary humanoid and anthropomorphic robot hands have been developed to closely replicate the appearance and dexterity of human hands, in many cases using sophisticated designs that integrate multiple sensors and actuators, which makes them prone to error and difficult to operate and control, particularly under uncertainty. In recent years, several simplification approaches have been proposed to develop more effective and reliable dexterous robot hands. These techniques, based for example on underactuated mechanical designs, kinematic synergies, or compliant materials, have opened up new ways to integrate hardware enhancements that facilitate grasping and dexterous manipulation control and improve reliability and robustness. Following this line of thought, this thesis studies four robot hand hardware aspects for enhancing grasping and manipulation, with a particular focus on dexterous in-hand manipulation, 
    namely: i) the use of passive soft fingertips; ii) the use of rigid and soft active surfaces in robot fingers; iii) the use of robot hand topologies to create particular in-hand manipulation trajectories; and iv) the decoupling of grasping and in-hand manipulation by introducing a reconfigurable palm. In summary, the findings of this thesis provide important notions for understanding the significance of mechanical and hardware elements in the performance and control of manipulation. These findings show great potential for developing robust, easily programmable, and economically viable robot hands capable of performing dexterous manipulations under uncertainty, while exhibiting a valuable subset of the functions of the human hand.

    Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision

    We present an approach for safe and object-independent human-to-robot handovers using real-time robotic vision and manipulation. We aim for general applicability with a generic object detector, a fast grasp selection algorithm, and a single gripper-mounted RGB-D camera, hence not relying on external sensors. The robot is controlled via visual servoing towards the object of interest. Placing a high emphasis on safety, we use two perception modules: human body part segmentation and hand/finger segmentation. Pixels deemed to belong to the human are filtered out of the candidate grasp poses, ensuring that the robot safely picks the object without colliding with the human partner. The grasp selection and perception modules run concurrently in real time, which allows monitoring of progress. In experiments with 13 objects, the robot successfully took the object from the human in 81.9% of the trials. Comment: IEEE Robotics and Automation Letters (RA-L), preprint version, accepted September 2020. The code and videos can be found at https://patrosat.github.io/h2r_handovers
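    The safety filter described above reduces, in essence, to rejecting grasp candidates that land on human-labelled pixels. The sketch below is an illustrative reduction, with the candidate format, mask convention, and safety margin all assumed rather than taken from the paper.

```python
import numpy as np

def filter_grasps(candidates, human_mask, margin=0):
    """candidates: list of (row, col) grasp pixels in the RGB-D image.
    human_mask: bool (H, W), True where segmentation labels the pixel as
    human body part or hand/finger. margin: extra pixels of clearance."""
    h, w = human_mask.shape
    safe = []
    for r, c in candidates:
        # Reject if any pixel in a (2*margin+1)^2 window is human.
        window = human_mask[max(0, r - margin):r + margin + 1,
                            max(0, c - margin):c + margin + 1]
        if not window.any():
            safe.append((r, c))
    return safe
```

    Running this on every perception update, concurrently with grasp selection, is what lets the system keep the grasp set safe while the hand and object move.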