    Folding Assembly by Means of Dual-Arm Robotic Manipulation

    In this paper, we consider folding assembly as an assembly primitive suitable for dual-arm robotic assembly that can be integrated into a higher-level assembly strategy. The system composed of two pieces in contact is modelled as an articulated object, connected by a prismatic-revolute joint. Different grasping scenarios were considered in order to model the system, and a simple controller based on feedback linearisation is proposed, using force-torque measurements to compute the contact point kinematics. The folding assembly controller has been experimentally tested with two sample parts in order to showcase folding assembly as a viable assembly primitive.
    Comment: 7 pages, accepted for ICRA 201
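    The contact point kinematics mentioned above are computed from wrench measurements; the sketch below (not the paper's controller) shows one standard way to recover a point-contact location from a measured force-torque pair, assuming a single point contact with negligible contact moment.

    ```python
    import numpy as np

    def estimate_contact_point(force, torque, eps=1e-9):
        """Estimate a point-contact location r (sensor frame) from a measured
        wrench, assuming torque = r x force with no contact moment.
        The solution is defined only up to a shift along the force direction;
        the minimum-norm point is returned."""
        f = np.asarray(force, dtype=float)
        tau = np.asarray(torque, dtype=float)
        f_sq = f @ f
        if f_sq < eps:                          # no significant contact force
            return None
        return np.cross(f, tau) / f_sq          # minimum-norm solution of tau = r x f

    # Example: 10 N along z applied 0.2 m along x from the sensor.
    f = np.array([0.0, 0.0, 10.0])
    tau = np.cross(np.array([0.2, 0.0, 0.0]), f)
    print(estimate_contact_point(f, tau))       # ~ [0.2, 0.0, 0.0]
    ```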

    Asymmetric Dual-Arm Task Execution using an Extended Relative Jacobian

    Coordinated dual-arm manipulation tasks can be broadly characterized as possessing absolute and relative motion components. Relative motion tasks, in particular, are inherently redundant in the way they can be distributed between end-effectors. In this work, we analyse cooperative manipulation in terms of the asymmetric resolution of relative motion tasks. We discuss how existing approaches enable the asymmetric execution of a relative motion task, and show how an asymmetric relative motion space can be defined. We leverage this result to propose an extended relative Jacobian to model the cooperative system, which allows a user to set a concrete degree of asymmetry in the task execution. This is achieved without the need for prescribing an absolute motion target. Instead, the absolute motion remains available as a functional redundancy to the system. We illustrate the properties of our proposed Jacobian through numerical simulations of a novel differential Inverse Kinematics algorithm.
    Comment: Accepted for presentation at ISRR19. 16 Page
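    As a rough illustration of a relative-Jacobian differential IK step, the sketch below stacks the two arm Jacobians into a single relative Jacobian and resolves a relative twist error with damped least squares. The scalar weight alpha is an illustrative stand-in for the paper's degree of asymmetry, not its actual formulation.

    ```python
    import numpy as np

    def relative_ik_step(J1, J2, rel_twist_err, alpha=0.5, damping=1e-3):
        """One damped-least-squares step on the relative motion task.
        J1, J2        : 6 x n end-effector Jacobians of the two arms.
        rel_twist_err : desired relative twist of arm 2 w.r.t. arm 1.
        alpha         : illustrative asymmetry weight in [0, 1]; 0 moves only
                        arm 2, 1 moves only arm 1.
        Returns stacked joint velocities [dq1; dq2]."""
        J_rel = np.hstack((-alpha * J1, (1.0 - alpha) * J2))
        JJt = J_rel @ J_rel.T + damping * np.eye(6)
        return J_rel.T @ np.linalg.solve(JJt, rel_twist_err)

    # Toy example with two 6-DoF arms.
    rng = np.random.default_rng(0)
    J1, J2 = rng.standard_normal((6, 6)), rng.standard_normal((6, 6))
    dq = relative_ik_step(J1, J2, np.array([0.01, 0, 0, 0, 0, 0]))
    print(dq.shape)                              # (12,)
    ```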

    Hydrogen Fuel Cell Gasket Handling and Sorting With Machine Vision Integrated Dual Arm Robot

    Recently demonstrated robotic assembly technologies for fuel cell stacks used fuel cell components manually pre-arranged in stacks (presenters), all in the same orientation. Identifying the original orientation of fuel cell components and loading them into stacks for a subsequent automated assembly process is a difficult, repetitive work cycle which, if done manually, defeats the advantages offered by automated fabrication technologies for fuel cell components and by robotic assembly processes. We present an innovative robotic technology which enables the integration of automated fabrication processes of fuel cell components with robotic assembly of fuel cell stacks into a fully automated fuel cell manufacturing line. This task, which has not been addressed in the past, uses a Yaskawa Motoman SDA5F dual-arm robot with an integrated machine vision system. The process is used to identify and grasp randomly placed, slightly asymmetric fuel cell components having a total alpha-plus-beta symmetry angle of 720°, to reorient them all into the same orientation, and to stack them in presenters for a subsequent robotic assembly process. The dual-arm robot technology is selected for increased productivity and ease of gasket handling during reorientation. The initial position and orientation of the gaskets is identified by image analysis using a Cognex machine vision system with a fixed camera. The process was demonstrated as part of a larger endeavor of bringing to readiness advanced manufacturing technologies for alternative energy systems, and responds to the high-priority needs identified by the U.S. Department of Energy for fuel cell manufacturing research and development
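    The work above uses a proprietary Cognex vision setup; as a generic, hypothetical stand-in, the sketch below shows how a planar part's pick position and in-plane orientation can be estimated from image moments with OpenCV. Disambiguating a nearly symmetric gasket beyond its principal axis would need additional features and is not covered here.

    ```python
    import cv2
    import numpy as np

    def part_pose_from_mask(binary_image):
        """Centroid (cx, cy) and principal-axis angle (degrees) of the largest
        blob in a binary image -- enough to command a pick with a known
        in-plane rotation."""
        contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Orientation of the principal axis from second-order central moments.
        angle = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
        return cx, cy, np.degrees(angle)
    ```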

    FabricFolding: Learning Efficient Fabric Folding without Expert Demonstrations

    Autonomous fabric manipulation is a challenging task due to complex dynamics and potential self-occlusion during fabric handling. An intuitive method of fabric folding manipulation first involves obtaining a smooth and unfolded fabric configuration before the folding process begins. However, the combination of quasi-static actions such as pick & place and dynamic actions such as fling proves inadequate for effectively unfolding long-sleeved T-shirts with sleeves mostly tucked inside the garment. To address this limitation, this paper introduces an improved quasi-static action called pick & drag, specifically designed to handle this type of fabric configuration. Additionally, an efficient dual-arm manipulation system is designed in this paper, which combines quasi-static (including pick & place and pick & drag) and dynamic fling actions to flexibly manipulate fabrics into unfolded and smooth configurations. Subsequently, keypoints of the fabric are detected, enabling autonomous folding. To address the scarcity of publicly available keypoint detection datasets for real fabric, we gathered images of various fabric configurations and types in real scenes to create a comprehensive keypoint dataset for fabric folding. This dataset aims to enhance the success rate of keypoint detection. Moreover, we evaluate the effectiveness of our proposed system in real-world settings, where it consistently and reliably unfolds and folds various types of fabrics, including challenging situations such as long-sleeved T-shirts with most parts of the sleeves tucked inside the garment. Specifically, our method achieves a coverage rate of 0.822 and a success rate of 0.88 for long-sleeved T-shirt folding
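    The coverage rate quoted above is commonly computed as the visible fabric area in a top-down segmentation mask divided by the area of the fully flattened garment; the sketch below assumes that definition rather than reproducing the paper's exact metric.

    ```python
    import numpy as np

    def coverage_rate(current_mask, flattened_area_px):
        """Visible fabric area in the current top-down mask divided by the
        area (in pixels) of the fully smoothed garment, measured once."""
        return float(np.count_nonzero(current_mask)) / float(flattened_area_px)

    # Example: a crumpled configuration covering 82% of the flattened area.
    flat_area = 40_000
    mask = np.zeros((200, 200), dtype=bool)
    mask[:164, :] = True                        # 32,800 visible fabric pixels
    print(coverage_rate(mask, flat_area))       # -> 0.82
    ```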

    A survey of robot manipulation in contact

    In this survey, we present the current status of robots performing manipulation tasks that require varying contact with the environment, such that the robot must either implicitly or explicitly control the contact force with the environment to complete the task. Robots can perform more and more manipulation tasks that are still done by humans, and there is a growing number of publications on the topics of (1) performing tasks that always require contact and (2) mitigating uncertainty by leveraging the environment in tasks that, under perfect information, could be performed without contact. Recent trends have seen robots perform tasks previously left to humans, such as massage, while in classical tasks, such as peg-in-hole, there is more efficient generalization to other similar tasks, better error tolerance, and faster planning or learning of the tasks. Thus, in this survey we cover the current stage of robots performing such tasks, starting by surveying the different in-contact tasks robots can perform, then observing how these tasks are controlled and represented, and finally presenting the learning and planning of the skills required to complete these tasks
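    To make "explicitly controlling the contact force" concrete, the sketch below is a textbook-style one-dimensional explicit force control loop (an integral law on the force error); it is a generic illustration, not drawn from the survey.

    ```python
    import numpy as np

    def explicit_force_control(f_desired, measured_force, x0, k_f=0.05,
                               dt=0.002, steps=200):
        """The commanded position along the contact normal moves at a rate
        proportional to the force error until the sensed contact force
        matches the desired one.
        measured_force : callable x -> normal force sensed at position x."""
        x, traj = x0, []
        for _ in range(steps):
            err = f_desired - measured_force(x)
            x += k_f * err * dt                 # push further in or back off
            traj.append(x)
        return np.array(traj)

    # Example: stiff surface at x = 0 (10 kN/m); regulate a 5 N contact force.
    surface = lambda x: 10_000.0 * max(x, 0.0)
    print(explicit_force_control(5.0, surface, x0=-0.001)[-1])   # ~ 5e-4 m
    ```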

    Bimanual robotic manipulation based on potential fields

    Dual manipulation is a natural skill for humans but not so easy to achieve for a robot. The presence of two end effectors implies the need to consider the temporal and spatial constraints they generate while moving together. Consequently, synchronization between the arms is required to perform coordinated actions (e.g., lifting a box) and to avoid self-collision between the manipulators. Moreover, the challenges increase in dynamic environments, where the arms must be able to respond quickly to changes in the position of obstacles or target objects. To meet these demands, approaches like optimization-based motion planners and imitation learning can be employed, but they have limitations such as high computational costs or the need to create a large dataset. Sampling-based motion planners can be a viable solution thanks to their speed and low computational costs but, in their basic implementation, the environment is assumed to be static. An alternative approach relies on improved Artificial Potential Fields (APF). They are intuitive, have low computational cost, and, most importantly, can be used in dynamic environments. However, they do not have the precision to perform manipulation actions, and dynamic goals are not considered. This thesis proposes a system for bimanual robotic manipulation based on a combination of improved Artificial Potential Fields (APF) and the sampling-based motion planner RRTConnect. The basic idea is to use improved APF to bring the end effectors near their target goal while reacting to changes in the surrounding environment. Only then is RRTConnect triggered to perform the manipulation task. In this way, it is possible to take advantage of the strengths of both methods. To improve this system, APF have been extended to consider dynamic goals and a self-collision avoidance system has been developed. The conducted experiments demonstrate that the proposed system adeptly responds to changes in the position of obstacles and target objects. Moreover, the self-collision avoidance system enables faster dual-manipulation routines compared to sequential arm movements
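    For reference, the sketch below is one classic APF step (Khatib-style attractive and repulsive terms) for a single end effector, with the goal and obstacle positions re-read every cycle so that both can move; the thesis's improved APF will differ in its details.

    ```python
    import numpy as np

    def apf_step(ee_pos, goal_pos, obstacles, k_att=1.0, k_rep=0.5,
                 rho0=0.15, step=0.01):
        """One classic artificial-potential-field step for an end effector.
        goal_pos may change between calls (dynamic goal); obstacles can
        include points on the other arm as a crude self-collision repulsion."""
        ee, goal = np.asarray(ee_pos, float), np.asarray(goal_pos, float)
        force = k_att * (goal - ee)                       # attractive term
        for obs in obstacles:
            d = ee - np.asarray(obs, float)
            rho = np.linalg.norm(d)
            if 1e-9 < rho < rho0:                         # inside influence radius
                force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (d / rho)
        return ee + step * force                          # next commanded position

    # Goal and obstacle are re-sampled every control cycle in a dynamic scene.
    pos = apf_step([0.3, 0.0, 0.4], [0.5, 0.1, 0.4], obstacles=[[0.4, 0.05, 0.4]])
    print(pos)
    ```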

    Autonomous vision-guided bi-manual grasping and manipulation

    This paper describes the implementation, demonstration and evaluation of a variety of autonomous, vision-guided manipulation capabilities, using a dual-arm Baxter robot. Initially, symmetric coordinated bi-manual manipulation based on a kinematic tracking algorithm was implemented on the robot to enable a master-slave manipulation system. We demonstrate the efficacy of this approach with a human-robot collaboration experiment, where a human operator moves the master arm along arbitrary trajectories and the slave arm automatically follows the master arm while maintaining a constant relative pose between the two end-effectors. Next, this concept was extended to perform dual-arm manipulation without human intervention. To this end, an image-based visual servoing scheme has been developed to control the motion of the arms and position them at desired grasp locations. We then combine this with a dynamic position controller to move the grasped object using both arms along a prescribed trajectory. The presented approach has been validated by performing numerous symmetric and asymmetric bi-manual manipulations under different conditions. Our experiments demonstrated an 80% success rate in performing the symmetric dual-arm manipulation tasks, and a 73% success rate in performing asymmetric dual-arm manipulation tasks
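    The master-slave coupling described above amounts to composing the master's current pose with the relative transform captured at the start of the motion; the sketch below shows that computation with homogeneous transforms (an illustrative reconstruction, not the paper's tracking algorithm).

    ```python
    import numpy as np

    def capture_relative_transform(T_master, T_slave):
        """Slave pose expressed in the master frame, captured once at the
        start of coordinated motion (4x4 homogeneous transforms)."""
        return np.linalg.inv(T_master) @ T_slave

    def slave_target(T_master_now, T_rel):
        """Slave end-effector target that keeps the relative pose constant
        while the master moves along an arbitrary trajectory."""
        return T_master_now @ T_rel

    # Example: the master translates 5 cm in x; the slave target follows.
    T_m0, T_s0 = np.eye(4), np.eye(4)
    T_s0[:3, 3] = [0.0, 0.3, 0.0]
    T_rel = capture_relative_transform(T_m0, T_s0)
    T_m1 = np.eye(4)
    T_m1[:3, 3] = [0.05, 0.0, 0.0]
    print(slave_target(T_m1, T_rel)[:3, 3])       # -> [0.05, 0.3, 0.0]
    ```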