91 research outputs found

    A generalised proportional-derivative force/vision controller for torque-driven planar robotic manipulators

    In this paper, a family of hybrid control algorithms is presented that merges a camera-calibration-free, image-based control scheme with a direct force controller, both at the same priority level. The aim of this generalised hybrid controller is to regulate robot-environment interaction in a two-dimensional task space. The design of the proposed control structure accounts for most of the dynamic effects present in robot manipulators whose inputs are torque signals. As examples of this generalised structure of hybrid force/vision controllers, a linear proportional-derivative structure and a nonlinear proportional-derivative one (based on the hyperbolic tangent function) are presented. The corresponding stability analysis, using Lyapunov's direct method and invariance theory, proves the asymptotic stability of the equilibrium vector of the closed-loop system. Experimental tests of the control scheme are presented, and suitable performance is observed in all cases. Unlike most previously presented hybrid schemes, the control structure proposed herein achieves soft contact forces without overshoot, fast convergence of force and position error signals, robustness of the controller to some uncertainties (such as camera rotation), and safe operation of the robot actuators when saturating functions (nonlinear case) are used in the mathematical structure. This is one of the first works to propose a generalised structure of hybrid force/vision control that includes a closed-loop stability analysis for torque-driven robot manipulators.
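    The actuator-safety property of the tanh-based variant can be illustrated with a minimal sketch. This is not the paper's full hybrid force/vision controller (the visual and force feedback terms are omitted); the function name, gains, and torque bound `tau_max` are assumptions for illustration only:

    ```python
    import numpy as np

    def saturated_pd_torque(pos_err, vel, kp, kd, tau_max):
        """Hypothetical tanh-saturated PD law: the proportional term is
        bounded by tau_max, keeping commanded torques within actuator limits
        even for large position errors."""
        return tau_max * np.tanh((kp / tau_max) * pos_err) - kd * vel

    # Even for a large position error, the proportional torque stays bounded.
    tau = saturated_pd_torque(np.array([10.0, -10.0]), np.zeros(2), 5.0, 1.0, 2.0)
    ```

    Near the origin, `tanh(x) ≈ x`, so the law behaves like an ordinary linear PD controller; the saturation only matters for large errors.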

    Learning Haptic-based Object Pose Estimation for In-hand Manipulation Control with Underactuated Robotic Hands

    Unlike traditional robotic hands, underactuated compliant hands are challenging to model due to inherent uncertainties. Consequently, pose estimation of a grasped object is usually performed with visual perception. However, visual perception of the hand and object can be limited in occluded or partly occluded environments. In this paper, we explore the use of haptics, i.e., kinesthetic and tactile sensing, for pose estimation and in-hand manipulation with underactuated hands. Such a haptic approach mitigates occluded environments where a line of sight is not always available. We put an emphasis on identifying a feature-state representation of the system that does not include vision and can be obtained with simple, low-cost hardware. For tactile sensing, therefore, we propose a low-cost, flexible sensor that is mostly 3D-printed along with the fingertip and provides implicit contact information. Taking a two-finger underactuated hand as a test case, we analyze the contribution of kinesthetic and tactile features, along with various regression models, to the accuracy of the predictions. Furthermore, we propose a Model Predictive Control (MPC) approach that uses the pose estimates to manipulate objects to desired states based solely on haptics. We have conducted a series of experiments that validate the ability to estimate the poses of various objects with different geometries, stiffness, and textures, and show manipulation to goals in the workspace with relatively high accuracy.
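    The core regression idea, mapping haptic features to an object pose, can be sketched on synthetic data. The feature count, pose dimension, and choice of a plain linear least-squares model are assumptions for illustration; the paper itself compares several regression models:

    ```python
    import numpy as np

    # Hypothetical setup: 6 haptic features per sample (e.g. joint angles,
    # motor loads, tactile readings) mapped to a planar pose (x, y, theta).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                      # haptic feature vectors
    W_true = rng.normal(size=(6, 3))                   # unknown true mapping
    Y = X @ W_true + 0.01 * rng.normal(size=(200, 3))  # noisy pose labels

    # Fit a linear model by least squares and measure its training error.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rmse = np.sqrt(np.mean((X @ W - Y) ** 2))
    ```

    An estimator of this kind can then serve as the state predictor inside an MPC loop, which selects actions that drive the predicted pose toward a goal.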

    Collaborative and Cooperative Robotics Applications using Visual Perception

    The objective of this thesis is to develop novel integrated strategies for collaborative and cooperative robotic applications. Commonly, industrial robots operate in structured environments and in work cells separated from human operators. Nowadays, collaborative robots can share the workspace and collaborate with humans or other robots to perform complex tasks. These robots often operate in unstructured environments, so they need sensors and algorithms to obtain information about environmental changes. Advanced vision and control techniques have been analyzed to evaluate their performance and their applicability to industrial tasks, and selected techniques have then been applied for the first time in an industrial context. A peg-in-hole task was chosen as the first case study, since it has been extensively studied but remains challenging: it requires accuracy both in determining the hole poses and in positioning the robot. Two solutions have been developed and tested, and experimental results are discussed to highlight the advantages and disadvantages of each technique. Grasping partially known objects in unstructured environments is one of the most challenging issues in robotics: it is a complex task that requires addressing multiple subproblems, including object localization and grasp pose detection. For this class of problems, too, several vision techniques have been analyzed, and one has been adapted for use in industrial scenarios. Moreover, as a second case study, a robot-to-robot object handover task in a partially structured environment, without explicit communication between the robots, has been developed and validated. Finally, the two case studies have been integrated into two real industrial setups to demonstrate the applicability of the strategies to solving industrial problems.