
    Vision Guided Force Control in Robotics

    One way to increase the flexibility of industrial robots in manipulation tasks is to integrate additional sensors into the control system. Cameras are one example of such sensors, and in recent years there has been increased interest in vision-based control. However, most manipulation tasks cannot be solved with position control alone because of the risk of excessive contact forces. It is therefore attractive to combine vision-based position control with force feedback. In this thesis, we present a method for combining direct force control and visual servoing in the presence of unknown planar surfaces. The control algorithm consists of a force feedback control loop with a vision-based reference trajectory as a feed-forward signal. The vision system is based on a constrained image-based visual servoing algorithm that uses an explicit 3D reconstruction of the planar constraint surface. We show how calibration data obtained by a simple but efficient camera calibration method can be combined with force and position data to improve the reconstruction and the reference trajectories. The chosen task is force-controlled drawing on an unknown surface: the robot grasps a pen using visual servoing and uses it to draw lines between a number of points on a whiteboard, while the force controller keeps the contact force constant during drawing. The method is validated through experiments on a 6-degree-of-freedom ABB Industrial Robot 2000.
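    The abstract describes a hybrid structure: a force feedback loop maintains a constant contact force along the surface normal, while the vision-based reference trajectory enters as a feed-forward term in the surface tangent plane. The sketch below is a minimal illustration of that idea under assumed gains, force set-point, and interface; it is not the thesis controller.

```python
import numpy as np

F_REF = 5.0           # desired contact force [N] (assumed value)
KP, KI = 0.002, 0.01  # force-loop gains (assumed)
DT = 0.004            # control period [s] (assumed)

def hybrid_step(p_vision, f_measured, n_hat, integral):
    """One control step; returns the incremental tool displacement.
    p_vision   : vision-based reference increment (feed-forward)
    f_measured : measured contact force vector
    n_hat      : outward unit normal of the reconstructed planar surface
    """
    f_n = float(np.dot(f_measured, n_hat))   # measured normal force
    err = F_REF - f_n
    integral += err * DT
    # Too little contact force -> move against the outward normal
    # (into the surface); too much -> back off along the normal.
    p_force = -(KP * err + KI * integral) * n_hat
    # Keep the vision reference in the tangent plane so the two loops
    # act in complementary directions.
    p_tangent = p_vision - np.dot(p_vision, n_hat) * n_hat
    return p_tangent + p_force, integral
```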

    Dedicated and industrial robotic arms used as force feedback telerobots at the AREVA-La Hague recycling plant

    CEA LIST and AREVA have been developing remote-operation devices, also called telerobotics, for 15 years. These tools were designed for interventions in the hot cells of the AREVA nuclear spent-fuel facilities. From these 15 years of joint research and development, several technological building blocks have been industrialized and used at the AREVA La Hague facilities. This article presents some of these building blocks and their industrial developments. The “TAO2000” generic telerobotics software controller from CEA LIST is discussed first. This controller has been used to teleoperate dedicated slave arms such as the MT200 TAO (an evolution of the conventional through-the-wall mechanical master-slave manipulator, MSM) as well as industrial robotic arms such as the Stäubli RX robots. Both the MT200 TAO and the Stäubli RX TAO telerobotics systems provide force feedback and are now ready to be used as telemaintenance tools at the AREVA La Hague facilities. Two recent maintenance operations using these tools are detailed at the end of this paper.

    Enhancing the Command-Following Bandwidth for Transparent Bilateral Teleoperation

    © 2018 IEEE. Enhancing the transparency of a teleoperation system by increasing its command-following bandwidth has received little attention so far. The task is challenging because the command-following bandwidth of the slave robot's motion controller cannot be increased with a conventional motion controller: the desired trajectory is commanded instantaneously by the human user and therefore cannot be assumed to be pre-computed and smooth up to its second derivative. We propose a method to increase the command-following bandwidth by extending the previously introduced Successive Stiffness Increment (SSI) approach to bilateral teleoperation. The approach allows a very high motion-controller gain to be realized, which cannot be achieved with a conventional bilateral teleoperation controller, as confirmed by experimental results.
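    The key idea named in the abstract, the Successive Stiffness Increment (SSI), gradually builds up the controller stiffness instead of applying one fixed high gain. The fragment below is a loose, single-axis sketch of that idea; the increment size, the upper bound, and the reset rule on error sign change are assumptions for illustration, not the authors' algorithm.

```python
# Minimal single-axis sketch of a stiffness-increment position loop,
# loosely inspired by the SSI idea. All parameters are assumed.
K_STEP = 50.0     # stiffness added per control cycle [N/m] (assumed)
K_MAX = 20000.0   # upper bound on accumulated stiffness (assumed)

class SSISketch:
    def __init__(self):
        self.k = 0.0          # accumulated stiffness
        self.prev_err = 0.0

    def step(self, x_master, x_slave):
        err = x_master - x_slave
        # Grow stiffness while the error keeps the same sign; reset when
        # the commanded direction reverses, so the gain never winds up
        # against a new user command (assumed reset rule).
        if err * self.prev_err <= 0.0:
            self.k = 0.0
        self.k = min(self.k + K_STEP, K_MAX)
        self.prev_err = err
        return self.k * err   # force command to the slave actuator
```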

    Force Controlled Assembly of Emergency Stop Button

    Modern industrial robots are fast and have very good repeatability, which has made them indispensable in many manufacturing applications. However, they are usually programmed to follow desired trajectories and receive feedback only from position sensors. This works well as long as the environment is very well structured, but it is not robust to objects that are not positioned or gripped accurately. A solution is to add sensing, such as force sensors and vision. How to combine the data from the different sensors and use it effectively to control the robot is still an open research question. This paper describes an assembly scenario in which a switch is snapped into place in a box. Force sensing is used to resolve the uncertain positions of the parts and to detect the snap at the end of the operation. During the assembly, an uncertain distance is estimated to improve performance. By performing the assembly several times, learning is used to generate feed-forward data, which speeds up the assembly.
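    Two elements of the abstract lend themselves to a short illustration: detecting the snap from the measured force and learning a feed-forward profile from repeated executions. The sketch below shows one plausible realization; the threshold, the learning gain, and the class interface are assumptions, not the paper's method.

```python
import numpy as np

SNAP_DROP = 3.0   # force drop [N] that signals the snap (assumed threshold)

def snap_detected(force_history):
    """Detect the characteristic force drop when the switch snaps in."""
    if len(force_history) < 2:
        return False
    return force_history[-2] - force_history[-1] > SNAP_DROP

class IterativeFeedForward:
    """Blend the commands of successful runs into a feed-forward profile
    so later runs can move faster along the already-learned portion."""
    def __init__(self, n_samples, learning_gain=0.5):
        self.profile = np.zeros(n_samples)
        self.gain = learning_gain

    def update(self, executed_commands):
        # Move the stored profile toward the commands of the latest run.
        self.profile += self.gain * (np.asarray(executed_commands) - self.profile)

    def feed_forward(self, k):
        return self.profile[k]
```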

    Modelbased Visual Servoing Grasping of Objects Moving by Newtonian Dynamics

    Robot control systems are traditionally closed systems. With the aid of vision, visual feedback can be used to guide the robot manipulator to a target in much the same way humans do. This hand-to-target task is fairly easy if the target is static in Cartesian space. However, if the target is moving, a model of its dynamic behaviour is required for the robot to predict or track the target trajectory and intercept it successfully. Once the necessary modeling is done, the framework becomes one of automatic control.

    In this master's thesis, we present model-based visual servoing of a six-degree-of-freedom (DOF) industrial robot in computer simulation. The objective is to manoeuvre the robot to grasp a ball moving under Newtonian dynamics in an unattended and less structured three-dimensional environment.

    Two digital cameras are used cooperatively to capture images of the ball, from which the computer vision system generates qualitative visual information. The accuracy of this visual information is essential to the robotic servoing control. The vision system detects the ball in image space, segments it from the background, and computes its image-space position as visual information, which is then used for 3D reconstruction of the ball in Cartesian space. The trajectory of the thrown ball is modeled and predicted, and several grasp positions in Cartesian space are predicted as the ball travels towards the robot. At the same time, the inverse kinematics of the robot is computed to steer the robot to track the predicted grasp positions and grasp the ball when the error is small. In addition, the performance and robustness of this model-based prediction of the ball trajectory is verified through graphical analysis.
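    The grasp-prediction step can be illustrated with a small worked example: given 3D ball positions reconstructed from the two cameras, fit the ballistic model p(t) = p0 + v0 t + ½ g t² and evaluate it at future times to obtain candidate grasp positions. The sketch below assumes known timestamps and ignores air drag; the function names are illustrative, not the thesis code.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity [m/s^2]

def fit_ballistic(times, positions):
    """Least-squares fit of p(t) = p0 + v0*t + 0.5*g*t^2 to reconstructed
    3D ball positions (one row per observation)."""
    t = np.asarray(times, dtype=float)
    # Remove the known gravity term, then fit p0 and v0 linearly.
    y = np.asarray(positions, dtype=float) - 0.5 * np.outer(t**2, G)
    A = np.column_stack([np.ones_like(t), t])
    coeff, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeff[0], coeff[1]   # p0, v0

def predict(p0, v0, t):
    """Predicted ball position at time t."""
    return p0 + v0 * t + 0.5 * G * t**2
```

    Evaluating predict(p0, v0, t_k) at a few future times t_k yields the candidate grasp positions that the inverse kinematics then tracks.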

    Flexible collaborative transportation by a team of rotorcraft

    We propose a combined method for the collaborative transportation of a suspended payload by a team of rotorcraft. A recent distance-based formation-motion control algorithm, based on assigning distance disagreements among robots, generates the acceleration signals to be tracked by the vehicles. In particular, the proposed method requires neither global positions nor the tracking of prescribed trajectories for the motion of the team members. The acceleration signals are tracked accurately by an Incremental Nonlinear Dynamic Inversion controller designed for rotorcraft that measures and resists the tensions from the payload. Our approach allows us to analyze the accelerations and forces involved in the system, so that we can calculate worst-case conditions explicitly and guarantee nominal performance, provided that the payload starts at rest at the 2D centroid of the formation and is not subject to significant disturbances. For example, we can calculate the maximum safe deformation of the team with respect to its desired shape. We demonstrate our method with a team of four rotorcraft carrying a suspended object two times heavier than the maximum payload of an individual vehicle. Last but not least, our proposed algorithm is available to the community in the open-source autopilot Paparazzi. (ICRA 2019, 6+1 pages)
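    The distance-based formation-motion idea mentioned in the abstract can be sketched compactly: each rotorcraft accelerates along the relative positions to its neighbours, weighted by the error between the measured inter-robot distance and a desired distance that carries a small assigned disagreement, which is what induces the collective motion; only relative positions are needed. The gain, data structures, and disagreement values below are assumptions for illustration, not the paper's controller.

```python
import numpy as np

K = 1.0  # control gain (assumed)

def formation_acceleration(i, positions, neighbours, d_desired, mu):
    """Acceleration command for robot i from relative positions only.
    positions  : dict robot id -> np.array (2D or 3D position)
    neighbours : dict robot id -> list of neighbour ids
    d_desired  : dict (i, j) -> desired inter-robot distance
    mu         : dict (i, j) -> assigned disagreement that induces motion
    """
    a = np.zeros_like(positions[i])
    for j in neighbours[i]:
        z = positions[j] - positions[i]          # relative position
        dist = np.linalg.norm(z)
        err = dist - (d_desired[(i, j)] + mu[(i, j)])
        a += K * err * z / dist                  # pull/push along the edge
    return a
```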