
    A model-based residual approach for human-robot collaboration during manual polishing operations

    A fully robotized polishing of metallic surfaces may be insufficient for parts with complex geometric shapes, where manual intervention is still preferable. Within the EU SYMPLEXITY project, we consider tasks where manual polishing operations are performed in strict physical Human-Robot Collaboration (HRC) between a robot holding the part and a human operator equipped with an abrasive tool. During the polishing task, the robot should firmly keep the workpiece in a prescribed sequence of poses, monitoring and resisting the external forces applied by the operator. However, the user may also wish to change the orientation of the part mounted on the robot, simply by pushing or pulling the robot body and thus changing its configuration. We propose a control algorithm that is able to distinguish the external torques acting at the robot joints into two components: one due to the polishing forces applied at the end-effector level, the other due to the intentional physical interaction engaged by the human. The latter component is used to reconfigure the manipulator arm and, accordingly, its end-effector orientation. The workpiece position is instead kept fixed, by exploiting the intrinsic redundancy of this subtask. The controller uses an F/T sensor mounted at the robot wrist, together with our recently developed model-based technique (the residual method), which is able to estimate online the joint torques due to contact forces/torques applied anywhere along the robot structure. In order to obtain a reliable residual, which is necessary to implement the control algorithm, an accurate robot dynamic model (including friction effects at the joints and drive gains) needs to be identified first. The complete dynamic identification and the proposed control method for the human-robot collaborative polishing task are illustrated on a 6R UR10 lightweight manipulator mounting an ATI 6D sensor.
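As an illustration of the torque decomposition described above, here is a minimal sketch (under assumed interfaces, not the authors' implementation) of a momentum-based residual combined with a wrist F/T measurement. The `model` object with its M, C, g, friction and J accessors, the gain `K_r` and all signatures are assumptions made for the example.

```python
import numpy as np

def residual_step(r, p_hat, q, dq, tau_m, dt, K_r, model):
    """One integration step of a momentum-based residual r(t).

    If the dynamic model (inertia M, Coriolis C, gravity g, friction)
    is accurately identified, r converges to the external joint torques."""
    p = model.M(q) @ dq                                  # generalized momentum
    p_hat = p_hat + (tau_m + model.C(q, dq).T @ dq
                     - model.g(q) - model.friction(dq) + r) * dt
    r = K_r @ (p - p_hat)                                # estimate of tau_ext
    return r, p_hat

def split_external_torque(r, q, wrench_ee, model):
    """Split the residual into the component explained by the wrist F/T
    measurement (polishing forces) and the remainder, attributed to the
    human intentionally pushing or pulling the robot body."""
    tau_polishing = model.J(q).T @ wrench_ee
    tau_human = r - tau_polishing
    return tau_polishing, tau_human
```

In a scheme of this kind, `tau_human` would drive the reconfiguration of the arm (and hence of the end-effector orientation), while the workpiece position is kept fixed through the redundancy of the positioning subtask.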

    Walk-through programming for robotic manipulators based on admittance control

    The present paper addresses the issues that should be covered in order to develop walk-through programming techniques (i.e., manual guidance of the robot) in an industrial scenario. First, an exact formulation of the dynamics of the tool that the human should feel when interacting with the robot is presented. Then, the paper discusses how to implement such dynamics on an industrial robot equipped with an open robot control system and a wrist force/torque sensor, as well as the safety issues related to walk-through programming. In particular, two strategies that make use of admittance control to constrain the robot motion are presented: one slows down the robot when the velocity of the tool centre point exceeds a specified safety limit, the other limits the robot workspace by means of virtual safety surfaces. Experimental results on a COMAU Smart Six robot are presented, showing the performance of the walk-through programming system endowed with the two proposed safety strategies.
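A minimal sketch of the admittance scheme with the two safety strategies described above; the desired inertia and damping `M_d` and `D_d`, the limits, and the plane representation are illustrative assumptions, not the COMAU implementation.

```python
import numpy as np

def admittance_update(v, f_meas, dt, M_d, D_d):
    """Discrete admittance M_d * dv/dt + D_d * v = f_meas:
    the measured human wrench produces a commanded TCP twist v."""
    dv = np.linalg.solve(M_d, f_meas - D_d @ v)
    return v + dv * dt

def limit_tcp_speed(v, v_max):
    """Safety strategy 1: scale the commanded twist down whenever the
    linear TCP speed exceeds the specified safety limit."""
    speed = np.linalg.norm(v[:3])
    return v * (v_max / speed) if speed > v_max else v

def apply_virtual_surfaces(v, p, dt, planes):
    """Safety strategy 2: virtual safety surfaces.  Each plane is a pair
    (n, d) with unit normal n; the allowed half-space is n . p <= d.
    Any velocity component pushing the TCP through a surface is removed."""
    p_next = p + v[:3] * dt
    for n, d in planes:
        if n @ p_next > d and n @ v[:3] > 0:
            v = v.copy()
            v[:3] -= (n @ v[:3]) * n
    return v
```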

    Collaborative and Cooperative Robotics Applications using Visual Perception

    The objective of this thesis is to develop novel integrated strategies for collaborative and cooperative robotic applications. Traditionally, industrial robots operate in structured environments and in work cells separated from human operators. Nowadays, collaborative robots are able to share the workspace and collaborate with humans or other robots to perform complex tasks. These robots often operate in an unstructured environment, so they need sensors and algorithms to obtain information about changes in the environment. Advanced vision and control techniques have been analyzed to evaluate their performance and their applicability to industrial tasks, and selected techniques have then been applied for the first time in an industrial context. A Peg-in-Hole task was chosen as the first case study: it has been extensively studied but remains challenging, since it requires accuracy both in determining the hole poses and in positioning the robot. Two solutions have been developed and tested, and the experimental results are discussed to highlight the advantages and disadvantages of each technique. Grasping partially known objects in unstructured environments is one of the most challenging problems in robotics: it requires addressing multiple subproblems, including object localization and grasp pose detection. Vision techniques have also been analyzed for this class of problems, and one of them has been adapted for use in industrial scenarios. Moreover, as a second case study, a robot-to-robot object handover task in a partially structured environment, without explicit communication between the robots, has been developed and validated. Finally, the two case studies have been integrated into two real industrial setups to demonstrate the applicability of the strategies to solving industrial problems.
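To make concrete why hole-pose accuracy matters for Peg-in-Hole, here is a small sketch of a generic vision baseline (an illustration only, not one of the two solutions developed in the thesis); the transform names and the uncertainty model are assumptions.

```python
import numpy as np

def hole_pose_in_base(T_base_cam, T_cam_hole):
    """Chain the camera extrinsics with the hole pose detected in the
    camera frame (both 4x4 homogeneous transforms) to obtain the
    insertion target in the robot base frame."""
    return T_base_cam @ T_cam_hole

def direct_insertion_feasible(peg_radius, hole_radius, sigma_vision):
    """Rough feasibility check: pure position-controlled insertion only
    works when the vision uncertainty (1-sigma radial error, same units)
    is smaller than the radial clearance; otherwise a compliant or
    search-based strategy is needed."""
    clearance = hole_radius - peg_radius
    return sigma_vision < clearance
```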

    Adaptive physical human-robot interaction (PHRI) with a robotic nursing assistant.

    Recently, more and more robots are being investigated for future applications in healthcare. In nursing assistance, for instance, seamless Human-Robot Interaction (HRI) is very important for sharing workspaces and workloads between medical staff, patients, and robots. In this thesis we introduce a novel robot, the Adaptive Robot Nursing Assistant (ARNA), and its underlying components. ARNA has been designed specifically to assist nurses with day-to-day tasks such as walking patients, pick-and-place item retrieval, and routine patient health monitoring. Adaptive HRI in nursing applications creates a positive user experience and increases nurse productivity and task completion rates, as shown by experimentation with human subjects. ARNA includes interface devices such as tablets, force sensors, pressure-sensitive robot skins, LIDAR and an RGB-D camera. These interfaces are combined with adaptive controllers and estimators within a proposed framework that contains multiple innovations. A research study was conducted on methods of deploying an ideal Human-Machine Interface (HMI), in this case a tablet-based interface. Initial results indicate that a traded-control level of autonomy is ideal for a patient teleoperating ARNA, and that the proposed use of the HMI devices makes the robot's performance similar for skilled and unskilled users. A neuro-adaptive controller (NAC), which contains several neural networks to estimate and compensate for system nonlinearities, was implemented on the ARNA robot. By linearizing the system, a cross-over usability condition is met through which humans find it more intuitive to learn to use the robot in any location of its workspace. A novel Base-Sensor Assisted Physical Interaction (BAPI) controller is introduced in this thesis, which utilizes a force-torque sensor at the base of the ARNA manipulator to detect full-body collisions and make interaction safer. Finally, a human-intent estimator (HIE) is proposed to infer the user's intent while the robot and user are physically collaborating during tasks such as adaptive walking. The NAC with the HIE module was validated on a PR2 robot through user studies; its implementation on the ARNA platform is straightforward, since the controller is model-free and learns the robot dynamics online. A new framework, Directive Observer and Lead Assistant (DOLA), is proposed for ARNA, which enables the user to interact with the robot in two modes: physically, by direct push-guiding, and remotely, through a tablet interface. In both cases, the human is “observed” by the robot, then guided and/or advised during the interaction. If the user has trouble completing the given tasks, the robot adapts its repertoire to lead the user toward completing the goals. The proposed framework incorporates interface devices as well as adaptive control systems in order to facilitate a higher-performance interaction between user and robot than was previously possible. The ARNA robot was deployed and tested in a hospital environment at the School of Nursing of the University of Louisville. The user-experience tests were conducted with the help of healthcare professionals, and several metrics, including completion time, completion rate, and level of user satisfaction, were collected to shed light on the performance of the various components of the proposed framework. The results indicate an overall positive response towards the use of such an assistive robot in the healthcare environment.
The analysis of the gathered data is included in this document. To summarize, this research study makes the following contributions: conducting user-experience studies with the ARNA robot in patient-sitter and walker scenarios to evaluate both physical and non-physical human-machine interfaces; evaluating and validating the Human Intent Estimator (HIE) and the Neuro-Adaptive Controller (NAC); proposing the novel Base-Sensor Assisted Physical Interaction (BAPI) controller; building simulation models for packaged tactile sensors and validating them against experimental data; and describing the Directive Observer and Lead Assistant (DOLA) framework for ARNA using adaptive interfaces.
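As an illustration of the BAPI idea, here is a minimal sketch that compares the wrench measured by the base force-torque sensor with the wrench predicted by a dynamic model of the platform and manipulator; a persistent discrepancy is treated as a collision somewhere on the robot body. The array shapes, threshold, window length and model source are assumptions, not the ARNA implementation.

```python
import numpy as np

def bapi_collision_check(wrench_meas, wrench_model, threshold, window=5):
    """Flag a collision when the norm of the base-wrench discrepancy
    stays above a threshold for the last `window` samples (a simple
    guard against sensor noise and model-mismatch spikes).

    wrench_meas, wrench_model: arrays of shape (N, 6), most recent last."""
    discrepancy = np.linalg.norm(wrench_meas - wrench_model, axis=1)
    return bool(np.all(discrepancy[-window:] > threshold))
```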

    Collision Detection and Reaction: A Contribution to Safe Physical Human-Robot Interaction

    In the framework of physical Human-Robot Interaction (pHRI), methodologies and experimental tests are presented for the problem of detecting and reacting to collisions between a robot manipulator and a human being. Using a lightweight robot that was especially designed for interactive and cooperative tasks, we show how reactive control strategies can significantly contribute to ensuring the safety of the human during physical interaction. Several collision tests were carried out, illustrating the feasibility and effectiveness of the proposed approach. While users experience a subjective feeling of safety when they are able to naturally stop a robot in autonomous motion, a quantitative analysis of the different reaction strategies was lacking. In order to compare these strategies on an objective basis, a mechanical verification platform has been built. The proposed collision detection and reaction methods prove to work very reliably and are effective in reducing contact forces far below any level that is dangerous to humans. Evaluations of impacts between the robot and a human arm or chest, up to a maximum robot velocity of 2.7 m/s, are presented.
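A minimal sketch of the detect-and-react pattern described above, assuming a residual-style estimate r of the external joint torques is already available; the threshold and the reflex gain are illustrative choices, not the specific strategies evaluated in the paper.

```python
import numpy as np

def collision_reaction(r, tau_nominal, threshold, K_reflex):
    """Pass the nominal control torque through while no collision is
    detected; once the residual norm exceeds the threshold, switch to a
    reflex torque that 'follows' the estimated external torque, so the
    robot yields and moves away from the contact instead of pushing on."""
    if np.linalg.norm(r) < threshold:
        return tau_nominal          # normal operation
    return K_reflex @ r             # reflex reaction after detection
```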

    Kinova modular robot arms for service robotics applications

    This article presents Kinova's modular robotic systems, including the JACO2 and MICO2 robots, actuators and grippers. Kinova designs and manufactures robotic platforms and components that are simple, sexy and safe, under two business units: Assistive Robotics empowers people living with disabilities to push beyond their current boundaries and limitations, while Service Robotics empowers people in industry to interact with their environment more efficiently and safely. Kinova is based in Boisbriand, Québec, Canada. Its technologies are used in over 25 countries and in many applications, including service robotics, physical assistance, medical applications, mobile manipulation, rehabilitation and teleoperation, as well as in research areas such as computer vision, artificial intelligence, grasping, planning and control interfaces. The article describes Kinova's hardware platforms, their different control modes (position, velocity and torque), their control features and the available control interfaces. Integration with other systems and application examples are also presented.

    Trajectory Generation for Assembly Tasks Via Bilateral Teleoperation

    For assembly tasks, knowledge of both the trajectory and the forces is usually required. Consequently, kinesthetic teaching or teleoperation may be used to record human demonstrations. In order to obtain a more natural interaction, the operator has to be provided with a sense of touch. We propose a bilateral teleoperation system that is customized for this purpose. We introduce different coordinate frames to make the design of a 6-DOF teleoperation straightforward. Moreover, we suggest using tele-admittance, which simplifies instructing the robot. The compliance introduced by the slave controller allows the robot to react quickly and reduces the risk of damaging the workpiece.
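A minimal sketch of the tele-admittance idea, under assumed gains, frames and signatures (this is not the authors' controller): the contact force measured at the slave drives an admittance that generates the slave velocity command, and a scaled copy of the same force is reflected to the operator to provide the sense of touch.

```python
import numpy as np

def tele_admittance_step(v_slave, f_contact, v_master, dt, M_d, D_d, k_force):
    """One cycle of a 6-DOF tele-admittance loop.

    The slave tracks the operator's commanded twist v_master but yields
    to the measured contact wrench, which keeps reactions fast and
    protects the workpiece; the wrench is also fed back to the master."""
    # admittance: M_d * dv/dt + D_d * (v_slave - v_master) = f_contact
    dv = np.linalg.solve(M_d, f_contact - D_d @ (v_slave - v_master))
    v_slave = v_slave + dv * dt
    f_feedback = k_force * f_contact     # wrench reflected to the operator
    return v_slave, f_feedback
```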

    Toward Future Automatic Warehouses: An Autonomous Depalletizing System Based on Mobile Manipulation and 3D Perception

    This paper presents a mobile manipulation platform designed for autonomous depalletizing tasks. The proposed solution integrates machine vision, control and mechanical components to increase flexibility and ease of deployment in industrial environments such as warehouses. A collaborative robot mounted on a mobile base is proposed, equipped with a simple manipulation tool and a 3D in-hand vision system: the vision system detects parcel boxes on a pallet, and the tool pulls them one by one onto the mobile base for transportation. This setup avoids the cumbersome implementation of pick-and-place operations, since it does not require lifting the boxes. The 3D vision system provides an initial estimate of the pose of the boxes on the top layer of the pallet and accurately detects the separation between the boxes for manipulation. Force measurements provided by the robot, together with admittance control, are exploited to verify the correct execution of the manipulation task. The proposed system was implemented and tested in a simplified laboratory scenario, and the results of the experimental trials are reported.
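A sketch of the force-based execution check mentioned above, with assumed thresholds and interfaces (not the authors' implementation): while a box is dragged onto the mobile base under admittance control, the pulling force recorded by the robot is classified; a force that never rises suggests the tool lost the box, while an excessive force suggests the box is stuck.

```python
import numpy as np

def pull_execution_ok(force_profile, f_engage_min, f_stuck_max):
    """Classify the outcome of a pull from the recorded force magnitudes.

    force_profile: pulling-force magnitudes sampled during the motion.
    Returns 'ok', 'lost' (tool never engaged the box) or 'stuck'
    (box jammed against its neighbours or the pallet)."""
    peak = float(np.max(force_profile))
    if peak < f_engage_min:
        return "lost"
    if peak > f_stuck_max:
        return "stuck"
    return "ok"
```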