
    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map-generation tasks. Yet only in recent years has research in aerial robotics become mature enough to allow active interaction with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform with one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, based on visual information, that provide it with autonomous functionalities. A key competence for controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required an external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Moreover, localization methods based on on-board sensors, imported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap for vehicles in which size, payload and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, lightweight and high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they can satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve platform stability and increase arm operability.
    The main contributions of this research work are threefold: (1) a localization technique enabling autonomous navigation, specifically designed for aerial platforms with size, payload and computational restrictions; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) the integration of the visual servo commands into a hierarchical control law that exploits the robot's redundancy to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
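
    The hierarchical, prioritized composition of tasks described above is commonly realized by projecting each lower-priority task into the null space of the higher-priority Jacobians. The sketch below is an illustration of that standard recursion, not the thesis's actual control law (which also handles platform underactuation); the damped least-squares pseudoinverse is an assumption for numerical robustness:

```python
import numpy as np

def hierarchical_velocity(tasks, n_dof, damping=1e-6):
    """Compose task references hierarchically: each lower-priority task acts
    only inside the null space of all higher-priority task Jacobians.
    `tasks` is a list of (J, xdot_des) pairs, highest priority first."""
    q_dot = np.zeros(n_dof)
    P = np.eye(n_dof)                      # projector onto remaining freedom
    for J, xdot_des in tasks:
        Jp = J @ P                         # task Jacobian restricted by priority
        # damped least-squares pseudoinverse for robustness near singularities
        Jp_pinv = Jp.T @ np.linalg.inv(Jp @ Jp.T + damping * np.eye(J.shape[0]))
        q_dot = q_dot + Jp_pinv @ (xdot_des - J @ q_dot)
        P = P - Jp_pinv @ Jp               # shrink the remaining null space
    return q_dot

# primary 2-DoF positioning task plus a secondary 1-DoF posture task
J1, xd1 = np.array([[1., 0., 0.], [0., 1., 0.]]), np.array([1., 2.])
J2, xd2 = np.array([[0., 0., 1.]]), np.array([5.])
q_dot = hierarchical_velocity([(J1, xd1), (J2, xd2)], n_dof=3)
```

    Because the two example tasks act on orthogonal degrees of freedom, both are satisfied exactly; with conflicting tasks, only the higher-priority one is guaranteed.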

    Visual servoing of aerial manipulators

    The final publication is available at link.springer.com. This chapter describes the classical techniques to control an aerial manipulator by means of visual information and presents an uncalibrated image-based visual servo method to drive the aerial vehicle. The proposed technique relies only on mild assumptions about the camera's principal point and skew values and, in contrast to traditional image-based approaches, does not require prior knowledge of the focal length. Peer Reviewed. Postprint (author's final draft).

    Hierarchical task control for aerial inspection

    Paper presented at the Workshop and Summer School on Field Robotics (euRathlon/ARCAS), held in Seville (Spain), June 15-18, 2014. This paper presents a task-oriented control strategy for aerial vehicles equipped with a robotic arm and a camera attached to its end-effector. With this setting the camera can reach a new set of orientations previously not feasible for the quadrotor. The over-actuation of the whole system is exploited with a hierarchical control law to achieve a primary task consisting of visual servoing control, while secondary tasks can also be attained to minimize gravitational effects or avoid undesired arm configurations. Results are shown in a Robot Operating System (ROS) simulation. This work has been partially funded by the EU project ARCAS FP7-287617. Peer Reviewed.
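
    The primary visual-servoing task above follows the classical image-based scheme: a point-feature interaction matrix maps camera velocity to feature motion, and a proportional law on its pseudoinverse yields the command. A minimal sketch under standard textbook assumptions (point features with known depths; the paper's hierarchical formulation wraps this primary task with secondary ones):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical point-feature interaction matrix L such that s_dot = L v,
    with v = (vx, vy, vz, wx, wy, wz) the camera spatial velocity."""
    return np.array([
        [-1/Z,    0, x/Z,     x*y, -(1 + x*x),  y],
        [   0, -1/Z, y/Z, 1 + y*y,      -x*y,  -x]])

def ibvs_command(s, s_star, Z, gain=0.5):
    """v = -gain * L^+ (s - s*): drives image features toward their goals."""
    L = np.vstack([interaction_matrix(x, y, z) for (x, y), z in zip(s, Z)])
    e = (np.asarray(s) - np.asarray(s_star)).ravel()
    return -gain * np.linalg.pinv(L) @ e, L, e

# four coplanar points; the desired features sit closer to the image center,
# so the commanded motion is essentially a retreat along the optical axis
s      = [(0.2, 0.2), (-0.2, 0.2), (-0.2, -0.2), (0.2, -0.2)]
s_star = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
v, L, e = ibvs_command(s, s_star, Z=[1.0] * 4)
```

    One Euler step of the feature dynamics s_dot = L v shrinks the image error, which is the closed-loop behavior the proportional law is designed for.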

    Uncalibrated image-based visual servoing

    This paper develops a new method for uncalibrated image-based visual servoing. In contrast to traditional image-based visual servo, the proposed solution does not require a known value of camera focal length for the computation of the image Jacobian. Instead, it is estimated at run time from the observation of the tracked target. The technique is shown to outperform classical visual servoing schemes in situations with noisy calibration parameters and for unexpected changes in the camera zoom. The method's performance is demonstrated both in simulation experiments and in a ROS implementation of a quadrotor servoing task. The developed solution is tightly integrated with ROS and is made available as part of the IRI ROS stack. Peer Reviewed. Postprint (author draft version).
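
    The reason the focal length can be estimated from the tracked target is that, in pixel coordinates, the translational part of the image Jacobian scales linearly with f. The toy estimator below is not the paper's method: it assumes pure translational camera motion at known depth, ignores the principal point, and recovers f by least squares from one measured pixel velocity.

```python
import numpy as np

def pixel_velocity(u, v, Z, f, vel):
    """Pixel-feature velocity under pure camera translation vel = (vx, vy, vz),
    with u = f*x, v = f*y and the principal point at the origin."""
    vx, vy, vz = vel
    return np.array([-(f / Z) * vx + (u / Z) * vz,
                     -(f / Z) * vy + (v / Z) * vz])

def estimate_focal(u, v, Z, vel, meas):
    """Least-squares estimate of f from a measured pixel velocity: the model
    is linear in f, so a 2x1 least-squares problem suffices."""
    vx, vy, vz = vel
    A = np.array([[-vx / Z], [-vy / Z]])       # coefficient of f
    b = meas - np.array([u, v]) * vz / Z       # measurement minus f-free term
    f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(f_hat[0])

# simulate a feature seen by a camera with true focal length 800 px
u, v, Z, vel = 50.0, -30.0, 2.0, (0.2, 0.1, 0.05)
meas = pixel_velocity(u, v, Z, f=800.0, vel=vel)
f_hat = estimate_focal(u, v, Z, vel, meas)
```

    With noiseless measurements the estimate is exact; with noise, stacking several features and frames in the same least-squares system is the natural extension.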

    Task priority control for aerial manipulation

    Paper presented at the IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), held in Hokkaido (Japan), October 27-30, 2014. This paper presents a task-oriented control strategy for aerial vehicles equipped with a manipulator. A camera is attached to the end-effector of the manipulator to perform a primary task consisting of visual servoing towards a desired target. Over-actuation of the whole quadrotor-arm system is exploited to achieve secondary velocity tasks. One subtask is proposed to horizontally stabilize the platform during flight by aligning the arm's center of gravity with the quadrotor's gravitational vector. Arm singularities and manipulability are addressed by another subtask that leads the arm to a preferable configuration while taking the arm's joint limits into account. The performance of the whole visual servo and secondary-task control scheme is shown in a Robot Operating System (ROS) implementation. This work has been partially funded by the EU project ARCAS FP7-ICT-287617. Peer Reviewed.
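
    A manipulability subtask of the kind described above typically maximizes Yoshikawa's measure w(q) = sqrt(det(J J^T)) by following its gradient. A small sketch on a hypothetical 2-link planar arm with a numerical gradient (the paper's arm model, joint-limit handling and subtask weighting will differ):

```python
import numpy as np

def planar_arm_jacobian(q, l=(0.3, 0.25)):
    """Position Jacobian of a hypothetical 2-link planar arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l[0]*s1 - l[1]*s12, -l[1]*s12],
                     [ l[0]*c1 + l[1]*c12,  l[1]*c12]])

def manipulability(J):
    """Yoshikawa's measure w = sqrt(det(J J^T)); zero at singularities."""
    return np.sqrt(np.linalg.det(J @ J.T))

def manipulability_gradient(q, eps=1e-6):
    """Central-difference gradient of w(q), usable as a secondary-task
    joint velocity that pushes the arm away from singular configurations."""
    g = np.zeros(len(q))
    for i in range(len(q)):
        dq = np.zeros(len(q)); dq[i] = eps
        g[i] = (manipulability(planar_arm_jacobian(q + dq))
                - manipulability(planar_arm_jacobian(q - dq))) / (2 * eps)
    return g

q = np.array([0.3, 0.05])     # elbow almost straight: close to a singularity
q_better = q + 0.1 * manipulability_gradient(q)   # one gradient-ascent step
```

    In the full scheme this gradient would be projected into the null space of the visual-servo task rather than applied directly.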

    Planar PØP: feature-less pose estimation with applications in UAV localization

    © 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinctive features such as corners or intersecting edges. Instead of using n correspondences (e.g., extracted with a feature detector), we use the raw polygonal representation of the observed shape and directly estimate the pose in the pose space of the camera. Compared with a general PnP method, this approach requires neither n point correspondences nor a priori knowledge of the object model (except its scale), which is registered with a picture taken from a known robot pose. Moreover, we achieve higher precision because all the information in the shape contour is used to minimize the area between the projected and the observed shape contours. To emphasize the non-use of n point correspondences between the projected template and the observed contour shape, we call the method Planar PØP. The method is demonstrated both in simulation and in a real application consisting of UAV localization, where comparisons with a precise ground truth are provided. Peer Reviewed. Postprint (author's final draft).
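
    The core idea, minimizing the area between the projected and observed contours instead of matching point features, can be illustrated with a toy one-degree-of-freedom version: rasterize both contours and pick the rotation that minimizes their symmetric-difference area. Everything here (square template, grid resolution, rotation-only search) is an illustrative assumption, not the paper's pose-space optimization:

```python
import numpy as np

def raster_mask(poly, xs, ys):
    """Boolean occupancy mask of a polygon on a grid (even-odd ray casting)."""
    px, py = np.meshgrid(xs, ys)
    mask = np.zeros(px.shape, dtype=bool)
    j = len(poly) - 1
    for i in range(len(poly)):
        (xi, yi), (xj, yj) = poly[i], poly[j]
        with np.errstate(divide='ignore', invalid='ignore'):
            cross = ((yi > py) != (yj > py)) & \
                    (px < (xj - xi) * (py - yi) / (yj - yi) + xi)
        mask ^= cross
        j = i
    return mask

def rotate(poly, theta, center):
    c, s = np.cos(theta), np.sin(theta)
    return (poly - center) @ np.array([[c, -s], [s, c]]).T + center

# square template and an "observed" contour rotated by an unknown angle
template = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
center = template.mean(axis=0)
observed = rotate(template, 0.30, center)

xs = ys = np.linspace(-0.5, 1.5, 160)
obs_mask = raster_mask(observed, xs, ys)

# featureless alignment: minimize the symmetric-difference area (XOR pixels)
angles = np.linspace(0.0, np.pi / 2, 181)
mismatch = [int(np.count_nonzero(raster_mask(rotate(template, a, center), xs, ys)
                                 ^ obs_mask)) for a in angles]
best_angle = float(angles[int(np.argmin(mismatch))])
```

    No point correspondences are ever established; the whole contour contributes to the cost, which is what gives the correspondence-free method its precision.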

    A flexible hardware-in-the-loop architecture for UAVs

    As robotic technology matures, fully autonomous robots become a realistic possibility but demand very complex solutions to be rapidly engineered. In order to quickly set up a working autonomous system, and to reduce the gap between simulated and real experiments, we propose a modular, upgradeable and flexible hardware-in-the-loop (HIL) architecture, which hybridizes the simulated and real settings. We take as a use case the autonomous exploration of dense forests with UAVs, with the aim of creating useful maps for forest inspection or cataloging, or to compute other metrics such as total wood volume. As the first step in the development of the full system, in this paper we implement a fraction of this architecture, comprising assisted localization and automatic methods for mapping, planning and motion execution. Specifically, we simulate the use of a 3D LIDAR mounted below an actual UAV autonomously navigating among simulated obstacles, so that platform safety is not compromised. The full system is modular and takes advantage of components that are either publicly available or easily programmed. We highlight the flexibility of the proposed HIL architecture to rapidly configure different experimental setups with a UAV in challenging terrain. Moreover, it can be extended to other robotic fields without further design. The HIL system uses the multi-platform ROS capabilities and only needs a motion-capture system as extra external hardware, which is becoming standard equipment in research labs dealing with mobile robots. Peer Reviewed. Postprint (author's final draft).
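
    The key HIL trick, feeding the real vehicle pose (e.g. from motion capture) into a simulated sensor that ranges against virtual obstacles, can be sketched in a few lines. The function below is a hypothetical planar LIDAR against circular obstacles (the paper uses a simulated 3D LIDAR; all names and the geometry here are illustrative assumptions):

```python
import numpy as np

def simulated_lidar(pose, obstacles, n_beams=36, max_range=10.0):
    """Ranges a virtual planar LIDAR would return from the *real* vehicle
    pose (x, y, yaw), cast against *simulated* circular obstacles
    given as (cx, cy, radius) tuples."""
    x, y, yaw = pose
    angles = yaw + np.linspace(0.0, 2.0 * np.pi, n_beams, endpoint=False)
    ranges = np.full(n_beams, max_range)
    for cx, cy, r in obstacles:
        ox, oy = cx - x, cy - y
        proj = ox * np.cos(angles) + oy * np.sin(angles)   # along-beam distance
        perp2 = ox**2 + oy**2 - proj**2                    # squared offset from beam
        hit = (perp2 <= r**2) & (proj > 0.0)
        d = proj - np.sqrt(np.maximum(r**2 - perp2, 0.0))  # first intersection
        ranges = np.where(hit & (d > 0.0) & (d < ranges), d, ranges)
    return ranges

# beam 0 points along +x from the origin and should hit the obstacle at 4 m
scan = simulated_lidar((0.0, 0.0, 0.0), [(5.0, 0.0, 1.0)])
```

    In the HIL loop, `pose` would be updated from the motion-capture stream while the obstacle list stays entirely virtual, so the real platform never risks a collision.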

    Hybrid visual servoing with hierarchical task composition for aerial manipulation

    © 2016 IEEE. In this paper a hybrid visual servoing scheme with a hierarchical task-composition control framework is described for aerial manipulation, i.e. for the control of an aerial vehicle endowed with a robot arm. The proposed approach combines into a single hybrid-control framework the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle is explicitly taken into account in a general formulation, together with a dynamic smooth-activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique. Peer Reviewed. Postprint (author's final draft).
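
    A smooth activation mechanism avoids command discontinuities when a task switches on or off. The sketch below is a generic illustration, not the paper's formulation: it uses a smoothstep polynomial with arbitrary assumed thresholds to blend image-based and position-based commands as a function of the task error:

```python
import numpy as np

def smooth_activation(x, lo, hi):
    """C1-continuous transition from 0 (x <= lo) to 1 (x >= hi), so a task
    can be activated or deactivated without command discontinuities."""
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)            # smoothstep polynomial

def hybrid_command(v_ibvs, v_pbvs, err, lo=0.1, hi=0.5):
    """Blend the two servo commands: far from the target (large error) rely
    on the position-based command, near the target hand over smoothly to
    the image-based one."""
    h = smooth_activation(err, lo, hi)
    return (1.0 - h) * np.asarray(v_ibvs) + h * np.asarray(v_pbvs)
```

    Because the activation is exactly 0 below `lo` and exactly 1 above `hi`, each scheme is fully in charge outside the transition band, while the blend inside it stays continuously differentiable.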