
    Visual servoing with safe interaction using image moments

    The problem of image-based visual servoing for robots working in a cluttered, dynamic environment is addressed in this paper. It is assumed that the environment is observed by depth sensors that measure the distance between any moving obstacle and the robot, while an eye-in-hand camera is used to extract image features. The main idea is to control suitable image moments and to relax a certain number of the robot's degrees of freedom during the interaction phase. If an obstacle approaches the robot, the main visual servoing task is relaxed partially or completely, while the image features are kept in the camera field of view by controlling the image moments. Fuzzy rules are used to set the desired values of the image moments. Besides that, the relaxed redundancy of the robot is exploited to avoid collisions. Once the risk of collision has been removed, the main visual servoing task is resumed. The effectiveness of the algorithm is shown by several case studies on a KUKA LWR 4 robot arm.
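    The relaxation scheme described above can be sketched as a standard image-based servoing law whose task is faded out near obstacles, with the freed redundancy used for avoidance. In the minimal sketch below, a linear activation stands in for the paper's fuzzy rules, and all names and gains (`d_safe`, `d_crit`, `lam`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ibvs_with_relaxation(L, e, J, d_obs, q_dot_avoid,
                         d_safe=0.5, d_crit=0.2, lam=0.8):
    """Illustrative IBVS step: the visual task is faded out as an
    obstacle gets closer, and the freed DoF are used for avoidance."""
    # Activation h goes from 1 (obstacle far) to 0 (critically close).
    h = np.clip((d_obs - d_crit) / (d_safe - d_crit), 0.0, 1.0)
    # Classic image-based servoing law: camera twist v = -lambda * L^+ e.
    v_cam = -lam * np.linalg.pinv(L) @ e
    # Map the camera twist to joint velocities via the robot Jacobian J.
    J_pinv = np.linalg.pinv(J)
    q_dot_task = J_pinv @ v_cam
    # Project the avoidance motion into the null space of the visual task.
    N = np.eye(J.shape[1]) - J_pinv @ J
    return h * q_dot_task + N @ q_dot_avoid
```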

    Motion Planning from Demonstrations and Polynomial Optimization for Visual Servoing Applications

    Vision-based feedback control techniques are desirable for a wide range of robotics applications due to their robustness to image noise and modelling errors. However, in the case of a robot-mounted camera, they encounter difficulties when the camera traverses large displacements. This scenario necessitates continuous visual feedback of the target during the robot motion, while simultaneously considering the robot's self- and external constraints. Herein, we propose to combine workspace (Cartesian-space) path planning with robot teach-by-demonstration to address the visibility constraint, joint limits and “whole arm” collision avoidance for vision-based control of a robot manipulator. User demonstration data generate safe regions for robot motion with respect to joint limits and potential “whole arm” collisions. Our algorithm uses these safe regions to generate new feasible trajectories under a visibility constraint that achieve the desired view of the target (e.g., a pre-grasping location) in new, undemonstrated locations. Experiments with a 7-DOF articulated arm validate the proposed method.
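    As a rough illustration of planning inside a demonstrated safe region, the sketch below fits one polynomial per joint through demonstration samples and rejects plans that leave a box-shaped region. The paper's actual polynomial optimization and visibility constraint are more elaborate; every name here (`plan_polynomial_path`, the box bounds) is a hypothetical stand-in.

```python
import numpy as np

def plan_polynomial_path(t_demo, q_demo, safe_lo, safe_hi, deg=5):
    """Fit a polynomial per joint through demonstrated samples and
    verify the result stays inside a box-shaped safe region."""
    # One least-squares polynomial per joint (q_demo is samples x joints).
    coeffs = [np.polyfit(t_demo, q_demo[:, j], deg)
              for j in range(q_demo.shape[1])]
    t = np.linspace(t_demo[0], t_demo[-1], 200)
    traj = np.stack([np.polyval(c, t) for c in coeffs], axis=1)
    # Reject trajectories that leave the demonstrated safe region.
    if np.any(traj < safe_lo) or np.any(traj > safe_hi):
        raise ValueError("planned path leaves the demonstrated safe region")
    return t, traj
```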

    Robust auto tool change for industrial robots using visual servoing

    This work presents an automated solution for tool changing in industrial robots using visual servoing and sliding mode control. The robustness of the proposed method stems from the visual servoing control law, which uses the information acquired by a vision system to close a feedback control loop. Furthermore, sliding mode control is simultaneously used at a prioritised level to satisfy the constraints typically present in a robot system: joint range limits, maximum joint speeds and the allowed workspace. Thus, the global control accurately places the tool in the warehouse while satisfying the robot constraints. The feasibility and effectiveness of the proposed approach are substantiated by simulation results for a complex 3D case study. Moreover, real experimentation with a 6R industrial manipulator is also presented to demonstrate the applicability of the method for tool changing. This work was supported in part by the Ministerio de Economía, Industria y Competitividad, Gobierno de España, under Grant BES-2010-038486 and Project DPI2017-87656-C2-1-R. (Author's Accepted Manuscript of: Muñoz-Benavent, P., Solanes Galbis, J. E., Gracia Calandin, L. I., & Tornero Montserrat, J. (2019). Robust auto tool change for industrial robots using visual servoing. International Journal of Systems Science, 50(2), 432-449. © Taylor & Francis. https://doi.org/10.1080/00207721.2018.1562129)
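    The idea of enforcing constraints at a higher priority than the servoing task can be sketched as below: a sliding-mode-style switching term handles joints near their limits, and the visual-servoing command acts in its null space. The switching gain and margin are placeholders, and this is not the paper's actual control law.

```python
import numpy as np

def prioritized_step(q, q_dot_vs, q_min, q_max, margin=0.1, u_max=0.5):
    """Constraint handling at top priority via a sliding-mode-style
    switching term; the visual-servoing command acts in its null space."""
    n = q.size
    # Active constraints: joints within `margin` of a limit.
    low, high = q < q_min + margin, q > q_max - margin
    active = low | high
    if not active.any():
        return q_dot_vs
    # Constraint-task Jacobian: rows of the identity selecting active joints.
    Jc = np.eye(n)[active]
    # Switching term pushes the active joints back inside their range.
    u_c = np.where(low, u_max, np.where(high, -u_max, 0.0))[active]
    Jc_pinv = np.linalg.pinv(Jc)
    N = np.eye(n) - Jc_pinv @ Jc
    return Jc_pinv @ u_c + N @ q_dot_vs
```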

    Generation of dynamic motion for anthropomorphic systems under prioritized equality and inequality constraints

    In this paper, we propose a solution to compute full-dynamic motions for a humanoid robot, accounting for various kinds of constraints such as dynamic balance or joint limits. As a first step, we propose a unification of task-based control schemes in inverse kinematics and inverse dynamics. Based on this unification, we generalize the cascade of quadratic programs that had previously been developed for inverse kinematics only. We then apply the solution to generate, in simulation, whole-body motions for a humanoid robot in unilateral contact with the ground, while ensuring dynamic balance on a non-horizontal surface.
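    For the equality-constrained case, the cascade idea can be illustrated by the classic lexicographic least-squares recursion: each task is solved in the null space of all higher-priority tasks. The sketch below omits the inequality handling that the paper's cascade of QPs provides.

```python
import numpy as np

def hierarchical_solve(tasks):
    """Lexicographic least-squares cascade: each (J, e) task, given in
    priority order, is solved in the null space of the tasks above it.
    A minimal equality-only stand-in for a cascade of QPs."""
    n = tasks[0][0].shape[1]
    q_dot = np.zeros(n)
    N = np.eye(n)                      # null-space projector so far
    for J, e in tasks:
        JN = J @ N
        # Reduce this task's residual without disturbing higher tasks.
        q_dot = q_dot + np.linalg.pinv(JN) @ (e - J @ q_dot)
        # Shrink the remaining null space.
        N = N - np.linalg.pinv(JN) @ JN
    return q_dot
```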

    Air vehicle simulator: an application for a cable array robot

    The development of autonomous air vehicles can be an expensive research pursuit. To alleviate some of the financial burden of this process, we have constructed a system consisting of four winches, each attached via cables to a central pod (the simulated air vehicle): a cable-array robot. The system is capable of precisely controlling the three-dimensional position of the pod, allowing effective testing of sensing and control strategies before experimentation on a free-flying vehicle. In this paper, we present a brief overview of the system and provide a practical control strategy for it.
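    The basic kinematics of such a cable-array robot reduce to distances between fixed winch anchors and the pod. The sketch below, with hypothetical anchor positions, computes cable lengths for a given pod position and the cable rate commands for a desired pod velocity; the paper's actual control strategy is not reproduced here.

```python
import numpy as np

# Hypothetical anchor locations of the four winches (metres).
ANCHORS = np.array([[0, 0, 5], [10, 0, 5], [10, 10, 5], [0, 10, 5]], float)

def cable_lengths(p):
    """Inverse kinematics: each cable length is the distance from its
    winch anchor to the pod position p."""
    return np.linalg.norm(ANCHORS - p, axis=1)

def winch_rates(p, p_dot):
    """Cable length rates for a desired pod velocity p_dot:
    l_dot = u . p_dot, with u the unit vector from anchor to pod."""
    u = (p - ANCHORS) / np.linalg.norm(p - ANCHORS, axis=1, keepdims=True)
    return u @ p_dot
```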

    Robot Localisation and 3D Position Estimation Using a Free-Moving Camera and Cascaded Convolutional Neural Networks

    Many works in collaborative robotics and human-robot interaction focus on identifying and predicting human behaviour while treating the information about the robot itself as given. This can be the case when the sensors and the robot are calibrated in relation to each other, so that reconfiguring the system is often impossible or requires extra manual work. We present a deep-learning-based approach that removes the need for the robot and the vision sensor to be fixed and calibrated in relation to each other. The system learns the visual cues of the robot body and is able to localise it, as well as estimate the positions of the robot joints in 3D space, using just a 2D colour image. The method uses a cascaded convolutional neural network; we present the structure of the network, describe our own collected dataset, and explain the network training and the achieved results. A fully trained system shows promising results in providing an accurate mask of where the robot is located and a good estimate of its joint positions in 3D. The accuracy is not yet good enough for visual servoing applications; however, it can be sufficient for general safety and some collaborative tasks not requiring very high precision. The main benefit of our method is that the vision sensor can move freely, which allows it to be mounted on moving objects, for example the body of a person or a mobile robot working in the same environment as the robots it observes.
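    A cascaded structure of this kind might look like the PyTorch sketch below, where a segmentation stage produces a robot mask that then conditions a joint-position regression stage. Layer sizes, names and the overall depth are illustrative assumptions, not the network from the paper.

```python
import torch
import torch.nn as nn

class CascadedRobotPose(nn.Module):
    """Two-stage sketch: a segmentation head localises the robot in the
    image, then a regression head conditioned on image + mask predicts
    3D joint positions.  Architecture sizes are illustrative only."""
    def __init__(self, n_joints=6):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())   # robot mask
        self.pose_net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_joints * 3))                    # (x, y, z) per joint

    def forward(self, img):
        mask = self.mask_net(img)                           # stage 1
        joints = self.pose_net(torch.cat([img, mask], 1))   # stage 2
        return mask, joints.view(-1, joints.shape[1] // 3, 3)
```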

    Visual servoing path planning for cameras obeying the unified model

    This paper proposes a path-planning visual servoing strategy for a class of cameras that includes conventional perspective cameras, fisheye cameras and catadioptric cameras as special cases. Specifically, these cameras are modelled by adopting a unified model recently proposed in the literature, and the strategy consists of designing image trajectories for eye-in-hand robotic systems that allow the robot to reach a desired location while satisfying typical visual servoing constraints. To this end, the proposed strategy introduces the projection of the available image features onto a virtual plane and the computation of a feasible image trajectory through polynomial programming. The computed image trajectory is then tracked by an image-based visual servoing controller. Experimental results with a fisheye camera mounted on a 6-d.o.f. robot arm are presented to illustrate the proposed strategy.
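    The unified model referred to above projects a 3D point onto the unit sphere and then perspectively from a point shifted by a parameter ξ along the optical axis (ξ = 0 recovers the conventional perspective camera). A minimal sketch, assuming a 3×3 intrinsic matrix `K`:

```python
import numpy as np

def unified_project(X, xi, K):
    """Unified camera model: perspective cameras correspond to xi = 0,
    fisheye/catadioptric cameras to xi > 0."""
    Xs = X / np.linalg.norm(X)                 # project onto the unit sphere
    m = Xs[:2] / (Xs[2] + xi)                  # normalised image coordinates
    return K @ np.array([m[0], m[1], 1.0])     # homogeneous pixel coordinates
```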

    Uncalibrated visual servo for unmanned aerial manipulation

    This paper addresses the problem of autonomously servoing an unmanned redundant aerial manipulator using computer vision. The overactuation of the system is exploited by means of a hierarchical control law, which allows several tasks to be prioritised during flight. We propose a safety-related primary task to avoid possible collisions. As a secondary task, we present an uncalibrated image-based visual servo strategy to drive the arm end-effector to a desired position and orientation by using a camera attached to it. In contrast to previous visual servo approaches, a known value of the camera focal length is not strictly required. To further improve flight behaviour, we hierarchically add one task that reduces dynamic effects by vertically aligning the arm centre of gravity with the multirotor gravitational vector, and another that keeps the arm close to a configuration of high manipulability while avoiding the arm joint limits. The performance of the hierarchical control law, with and without activation of each of the tasks, is shown in simulations and in real experiments, confirming the viability of such a prioritised control scheme for aerial manipulation.
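    One common route to visual servoing without a calibrated focal length is to estimate the feature Jacobian online, e.g. with a Broyden update, instead of deriving it from camera intrinsics. The sketch below shows that generic technique; the paper's uncalibrated strategy is not necessarily this one, and the gain `alpha` is a placeholder.

```python
import numpy as np

def broyden_update(L_hat, ds, dq, alpha=0.1):
    """Rank-one Broyden update of the estimated feature Jacobian L_hat,
    given the observed feature change ds for a joint motion dq."""
    denom = dq @ dq
    if denom > 1e-9:
        # Correct L_hat so it better explains the observed ds.
        L_hat = L_hat + alpha * np.outer(ds - L_hat @ dq, dq) / denom
    return L_hat
```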

    Haptic-Based Shared-Control Methods for a Dual-Arm System

    We propose novel haptic guidance methods for a dual-arm telerobotic manipulation system that are able to deal with several different constraints, such as collisions, joint limits, and singularities. We combine the haptic guidance with shared-control algorithms for autonomous orientation control and collision avoidance, meant to further simplify the execution of grasping tasks. The stability of the overall system in the various control modalities is analyzed via passivity arguments. In addition, a human subject study is carried out to assess the effectiveness and applicability of the proposed control approaches in both simulated and real scenarios. Results show that the proposed haptic-enabled shared-control methods significantly improve the performance of grasping tasks with respect to classic teleoperation with neither haptic guidance nor shared control.
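    Haptic guidance of this general flavour is often rendered as a spring-damper pull toward a suggested pose plus a repulsive cue near constraints. The sketch below is a generic illustration with placeholder gains and names, not the methods proposed in the paper.

```python
import numpy as np

def guidance_force(x, x_ref, x_dot, d_obs, k=200.0, b=10.0,
                   k_rep=50.0, d_act=0.15):
    """Illustrative haptic cue: a spring-damper pulls the operator's hand
    toward the suggested pose x_ref; a repulsive term pushes away from an
    obstacle once it is closer than d_act (d_obs points obstacle -> hand)."""
    f = k * (x_ref - x) - b * x_dot            # attractive virtual fixture
    dist = np.linalg.norm(d_obs)
    if 1e-9 < dist < d_act:
        # Classic potential-field repulsion, growing as the obstacle nears.
        f += k_rep * (1.0 / dist - 1.0 / d_act) * (d_obs / dist)
    return f
```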

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map-generation tasks. Yet it was only in recent years that research in aerial robotics matured enough to allow active interaction with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, using visual information, that provide such vehicles with autonomous functionalities. A key competence for controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods based on on-board sensors, exported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, which is a handicap for vehicles in which size, load, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, light-weight, high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundancy in degrees of freedom, they offer the possibility of fulfilling not only mobility requirements but also other tasks, simultaneously and hierarchically, prioritizing the tasks depending on their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, and improve platform stability or increase arm operability. The main contributions of this research are threefold: (1) a localization technique that allows autonomous navigation, designed specifically for aerial platforms with size, load and computational-burden restrictions; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) the integration of the visual servo commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight; these tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
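    In the spirit of the low-cost, high-rate estimation described above, a minimal estimator might integrate accelerometer data at sensor rate and correct drift with a slower position fix. The sketch below is a generic complementary-style filter with illustrative gains, not the thesis method.

```python
import numpy as np

def propagate_and_correct(p, v, a_meas, p_meas, dt, k_p=0.05, k_v=0.01):
    """Minimal high-rate estimator: integrate the measured acceleration
    at sensor rate, then apply a complementary correction toward a
    slower position fix p_meas.  Gains are placeholders."""
    # High-rate propagation with the measured acceleration.
    p = p + v * dt + 0.5 * a_meas * dt**2
    v = v + a_meas * dt
    # Low-rate complementary correction toward the position fix.
    err = p_meas - p
    p = p + k_p * err
    v = v + k_v * err / dt
    return p, v
```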