104 research outputs found

    Image based visual servoing using bitangent points applied to planar shape alignment

    We present visual-servoing strategies based on bitangents for aligning planar shapes. To acquire the bitangents, we use the convex hull of the curve. The bitangent points are then used to construct a feature vector for visual control. Experimental results obtained on a 7-DOF Mitsubishi PA10 robot verify the proposed method.
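    The convex-hull step can be sketched as follows (an illustrative reconstruction, not the authors' code): for an ordered sampling of a closed planar curve, every hull edge whose endpoints are not consecutive samples bridges a concavity, so its two endpoints are bitangent points.

```python
# Sketch (not the paper's implementation): candidate bitangent points of a
# closed planar curve from its convex hull.  A hull edge whose endpoints
# are NOT consecutive samples of the curve bridges a concavity, so the
# two endpoints are bitangent points.
import numpy as np
from scipy.spatial import ConvexHull

def bitangent_points(curve):
    """curve: (N, 2) array of ordered samples of a closed planar curve."""
    n = len(curve)
    idx = ConvexHull(curve).vertices       # hull vertex indices, CCW order
    pairs = []
    for a, b in zip(idx, np.roll(idx, -1)):
        # hull edges between consecutive curve samples are ordinary edges
        if (b - a) % n not in (1, n - 1):
            pairs.append((curve[a], curve[b]))
    return pairs

# A square with one dented side: the hull edge spanning the dent
# yields exactly one bitangent pair.
dented = np.array([(0, 0), (2, 0), (2, 2), (1, 1.5), (0, 2)], float)
print(len(bitangent_points(dented)))   # -> 1
```

    The returned point pairs could then be stacked into the feature vector that the control law tracks.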

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map-generation tasks. Yet only in recent years has research in aerial robotics matured enough to allow active interaction with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform with one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, based on visual information, that provide such vehicles with autonomous functionality. A key competence in controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Moreover, localization methods using on-board sensors, imported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap for vehicles where size, payload and power consumption are tight constraints. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) using on-board, low-cost, light-weight, high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they can satisfy not only mobility requirements but also other tasks, simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve platform stability, and increase arm operability.
The main contributions of this research are threefold: (1) a localization technique enabling autonomous navigation, designed specifically for aerial platforms with tight size, payload and computational budgets; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) the integration of these visual-servo commands into a hierarchical control law that exploits the robot's redundancy to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also defined in this document. All the techniques presented have been validated through extensive experimentation with real robotic platforms.
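The hierarchical, redundancy-exploiting control described above is commonly realized with null-space projection: the secondary task may only use the degrees of freedom the primary task leaves free. A minimal sketch of this standard task-priority scheme (not necessarily the thesis' exact formulation):

```python
# Sketch of task-priority control via null-space projection (a standard
# redundancy-resolution scheme; the thesis' formulation may differ).
import numpy as np

def task_priority_velocity(J1, dx1, J2, dx2):
    """Joint velocity tracking the primary task (J1, dx1) exactly and the
    secondary task (J2, dx2) as well as the leftover redundancy allows."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1      # null-space projector of J1
    q1 = J1p @ dx1                           # primary-task velocity
    # secondary task, corrected for what the primary motion already does,
    # resolved inside the null space of the primary task
    q2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ q1)
    return q1 + N1 @ q2

# Two tasks on a 4-DOF system (illustrative Jacobians, not the thesis'):
J1 = np.array([[1., 0., 0., 0.],
               [0., 1., 0., 0.]])
J2 = np.array([[0., 0., 1., 1.]])
dq = task_priority_velocity(J1, np.array([1., 2.]), J2, np.array([3.]))
print(J1 @ dq)   # -> [1. 2.]  (primary task met exactly)
```

Because the secondary contribution is pre-multiplied by the projector N1, it can never disturb the primary task, which is what makes strict prioritization possible.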

    Robust visual servoing in 3D reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments show the applicability of the approach in real 3-D tasks.
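    For intuition, the core of a visual-servoing loop can be sketched with the classical image-based law v = -lambda L+(s - s*). Note this sketch assumes a calibrated interaction matrix with known feature depths, whereas the paper's contribution is precisely to avoid that calibration by working from binocular optical flow.

```python
# Classical image-based visual servoing (IBVS) step, shown for intuition
# only; the paper's controller is built on binocular optical flow instead
# of this calibrated interaction matrix.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one point feature (x, y)
    in normalized image coordinates at depth Z."""
    return np.array([
        [-1 / Z,      0, x / Z,     x * y, -(1 + x * x),  y],
        [     0, -1 / Z, y / Z, 1 + y * y,     -x * y,   -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) reducing the feature error."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

    With the features already at their desired image positions the commanded twist is zero; otherwise the twist moves the camera so that the projected feature error decreases.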

    Adaptive visual servoing in uncalibrated environments.

    Wang Hesheng. Thesis (M.Phil.), Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 70-73). Abstracts in English and Chinese.
    Contents: 1. Introduction (1.1 Visual Servoing: Position-based Visual Servoing, Image-based Visual Servoing, Camera Configurations; 1.2 Problem Definitions; 1.3 Related Work; 1.4 Contribution of This Work; 1.5 Organization of This Thesis). 2. System Modeling (2.1 The Coordinate Frames; 2.2 The System Kinematics; 2.3 The System Dynamics; 2.4 The Camera Model: Eye-in-hand System, Eye-and-hand System). 3. Adaptive Image-based Visual Servoing (3.1 Controller Design; 3.2 Estimation of the Parameters; 3.3 Stability Analysis). 4. Simulation (4.1 Simulation I; 4.2 Simulation II). 5. Experiments. 6. Conclusions (6.1 Conclusions; 6.2 Future Work). Appendix. Bibliography.

    Experimental study on visual servo control of robots.

    Lam Kin Kwan. Thesis (M.Phil.), Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 67-70). Abstracts in English and Chinese.
    Contents: 1. Introduction (1.1 Visual Servoing: System Architectures (Position-based Visual Servoing, Image-based Visual Servoing), Camera Configurations; 1.2 Problem Definition; 1.3 Related Work; 1.4 Contribution of This Work; 1.5 Organization of This Thesis). 2. System Modeling (2.1 Coordinate Frames; 2.2 System Kinematics; 2.3 System Dynamics; 2.4 Camera Model: Eye-in-hand Configuration, Eye-and-hand Configuration). 3. Adaptive Visual Servoing Control (3.1 Controller Design; 3.2 Parameter Estimation; 3.3 Stability Analysis). 4. Experimental Studies (4.1 Experimental Setup: Hardware Setup, Image Pattern Recognition, Experimental Task; 4.2 Control Performance with Different Proportional and Derivative Gains; 4.3 Control Performance with Different Adaptive Gains; 4.4 Gravity Compensator; 4.5 Control Performance with Previous Image Positions; 4.6 Kinematic Controller). 5. Conclusions (5.1 Conclusions; 5.2 Future Work). Appendix. Bibliography.

    Flexible Force-Vision Control for Surface Following using Multiple Cameras

    A flexible method for six-degree-of-freedom combined vision/force control for interaction with a stiff, uncalibrated environment is presented. An edge-based rigid-body tracker is used in an observer-based controller and combined with a six-degree-of-freedom force or impedance controller. The effects of error sources such as image-space measurement noise and calibration errors are considered. Finally, the method is validated in simulations and in a surface-following experiment using an industrial robot.
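    Combining vision and force for surface following is often realized as hybrid control with complementary selection matrices: force control along the surface normal, vision-based position control in the tangential directions. A minimal sketch of that idea (the paper's observer-based scheme is more elaborate; the axes, gains and set-points below are illustrative assumptions):

```python
# Sketch of hybrid vision/force control: a 0/1 diagonal selection matrix
# routes each Cartesian axis either to the vision-based position loop or
# to the force loop.  Axes, gains and set-points are illustrative, not
# taken from the paper.
import numpy as np

def hybrid_command(x, x_des, f, f_des, S, kp=1.0, kf=0.1):
    """Cartesian velocity command.  S is a 0/1 vector: 1 = vision/position
    controlled axis, 0 = force controlled axis (e.g. the surface normal)."""
    S = np.diag(S)
    v_vision = kp * (np.asarray(x_des) - np.asarray(x))   # position loop
    v_force = kf * (np.asarray(f_des) - np.asarray(f))    # force loop
    # complementary selection: each axis obeys exactly one loop
    return S @ v_vision + (np.eye(len(S)) - S) @ v_force

# Surface normal along z: z regulated by contact force, x/y by vision.
v = hybrid_command(x=[0., 0., 0.], x_des=[1., 1., 0.],
                   f=[0., 0., 2.], f_des=[0., 0., 5.],
                   S=[1, 1, 0])
print(v)   # x/y components from the vision error, z from the force error
```

    Because the two selection matrices sum to the identity, no axis is driven by both loops at once, which avoids the position and force objectives fighting each other on a stiff surface.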