243 research outputs found

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably. There is growing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has advanced significantly, some challenges remain before it is fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto the 2D image plane of the camera creates one source of uncertainty; another lies in the camera and robot manipulator parameters. Moreover, the camera's limited field of view (FOV) is a further issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots in which the adaptive law deals with the uncertainties of a monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method increases the system response speed and improves the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle the situation where the image features leave the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures successful servoing in the case of feature loss. Next, in order to deal with external disturbances and uncertainties in the depth of the features, a third control method is designed that combines proportional derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, while the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth, semi-off-line trajectory planning method is developed to perform IBVS tasks with a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles. The parameters of the velocity profile are then determined such that the profile takes the robot to its desired position; this is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation imposed by the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
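    All four methods build on the classical IBVS law, which maps the image-feature error to a camera velocity screw through the pseudo-inverse of the interaction matrix, v = -λ L⁺(s - s*). A minimal sketch of that textbook baseline for point features follows; the depths and feature coordinates are illustrative assumptions, and this is not the thesis's adaptive switch controller itself:

```python
import numpy as np

def interaction_matrix(points, depths):
    """Stack the 2x6 interaction (image Jacobian) matrices of normalized
    image points s = (x, y) with estimated depth Z (standard IBVS model)."""
    rows = []
    for (x, y), Z in zip(points, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(points, desired, depths, gain=0.5):
    """Classic IBVS law: camera velocity screw v = -lambda * L^+ * (s - s*)."""
    error = (np.asarray(points) - np.asarray(desired)).ravel()
    L = interaction_matrix(points, depths)
    return -gain * np.linalg.pinv(L) @ error

# Toy example: four coplanar point features with rough 0.5 m depth estimates.
s = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
s_star = [(0.12, 0.08), (-0.08, 0.12), (-0.12, -0.08), (0.08, -0.12)]
v = ibvs_velocity(s, s_star, depths=[0.5] * 4)
print("camera velocity screw (vx, vy, vz, wx, wy, wz):", v)
```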

    Robot Visual Servoing Using Discontinuous Control

    This work presents different proposals to deal with common problems in robot visual servoing based on the application of discontinuous control methods. The feasibility and effectiveness of the proposed approaches are substantiated by simulation results and real experiments using a 6R industrial manipulator. The main contributions are:
    - Geometric invariance using sliding mode control (Chapter 3): the higher-order invariance defined here is used by the proposed approaches to tackle problems in visual servoing. Proofs of the invariance condition are presented.
    - Fulfillment of constraints in visual servoing (Chapter 4): the proposal uses sliding mode methods to satisfy mechanical and visual constraints in visual servoing, while a secondary task is considered to properly track the target object. The main advantages of the proposed approach are low computational cost, robustness, and full utilization of the allowed space for the constraints.
    - Robust automatic tool change for industrial robots using visual servoing (Chapter 4): visual servoing and the proposed method for constraint fulfillment are applied to an automated solution for tool changing in industrial robots. The robustness of the proposed method comes from the visual servoing control law, which uses the information acquired by the vision system to close a feedback control loop. Furthermore, sliding mode control is simultaneously used at a prioritized level to satisfy the aforementioned constraints. Thus, the global control accurately places the tool in the warehouse while satisfying the robot constraints.
    - Sliding mode controller for reference tracking (Chapter 5): an approach based on sliding mode control is proposed for reference tracking in robot visual servoing with industrial robot manipulators. The novelty of the proposal is the introduction of a sliding mode controller that uses a higher-order discontinuous control signal, i.e., joint accelerations or joint jerks, in order to obtain smoother behavior and ensure the stability of the robot system, which is demonstrated with a theoretical proof.
    - PWM and PFM for visual servoing in fully decoupled approaches (Chapter 6): discontinuous control based on pulse width and pulse frequency modulation is proposed for fully decoupled position-based visual servoing approaches, in order to obtain the same convergence time for camera translation and rotation.
    Moreover, other results obtained in visual servoing applications are also described.
    Muñoz Benavent, P. (2017). Robot Visual Servoing Using Discontinuous Control [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90430
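    A core ingredient of the constraint-fulfillment proposal in Chapter 4 is a discontinuous, sign-based control component that activates when a constraint boundary is about to be crossed. The following sketch illustrates that generic sliding-mode ingredient on a single scalar joint-limit constraint; the function names, gains, and toy numbers are illustrative assumptions, not the thesis's actual controller:

```python
import numpy as np

def sliding_mode_constraint_correction(q, q_dot_task, constraint, grad,
                                       margin=0.0, gain=1.0):
    """Generic sliding-mode-style correction: if the constraint value c(q)
    rises above the allowed margin, add a discontinuous velocity component
    along -grad(c) that pushes the robot back into the admissible region.
    Otherwise the task velocity q_dot_task passes through unchanged."""
    c = constraint(q)
    if c > margin:
        g = grad(q)
        return q_dot_task - gain * np.sign(c) * g / (np.linalg.norm(g) + 1e-9)
    return q_dot_task

# Toy example: keep joint 0 below 1.2 rad while the task drives it upward.
constraint = lambda q: q[0] - 1.2           # c(q) <= 0 is the admissible set
grad = lambda q: np.array([1.0, 0.0, 0.0])  # gradient of c with respect to q

q = np.array([1.25, 0.0, 0.0])
q_dot_task = np.array([0.3, 0.1, 0.0])
print(sliding_mode_constraint_correction(q, q_dot_task, constraint, grad))
```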

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot depending on the visual information extracted from the images captured by one or several cameras. On the vision side, ongoing research addresses a number of topics, such as the use of different types of image features (or different types of cameras, such as RGB-D cameras), high-speed image processing, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related topics such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.

    Image space trajectory tracking of 6-DOF robot manipulator in assisting visual servoing

    As vision is a versatile sensor, vision-based control of robots is becoming increasingly important in industrial applications. The control signal generated by traditional control algorithms leads to undesirable movement of the end-effector during the positioning task; this movement may sometimes cause task failure due to visibility loss. In this paper, a sliding mode controller (SMC) is designed to track 2D image features in an image-based visual servoing task. Tracking the feature trajectory helps keep the image features in the camera's field of view at all times and thereby ensures the shortest trajectory of the end-effector. SMC is well suited to handling the depth uncertainties associated with translational motion. Stability of the closed-loop system with the proposed controller is proved by the Lyapunov method. Three feature trajectories are generated to test the efficacy of the proposed method. Simulation tests are conducted, and the superiority of the proposed method over a Proportional Derivative Sliding Mode Controller (PD-SMC) in terms of settling time and distance travelled by the end-effector is established in the presence and absence of depth uncertainties. The proposed controller is also tested in real time by integrating the visual servoing system with a 6-DOF industrial robot manipulator, an ABB IRB 1200.
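    The essence of such a scheme is to track a time-varying desired feature trajectory s*(t) rather than a fixed goal, adding a discontinuous term that rejects bounded uncertainty in the estimated interaction matrix. A minimal sketch in that spirit follows; it is a generic illustration under standard IBVS assumptions, the gains and placeholder interaction matrix are made up, and it is not the paper's exact SMC or PD-SMC design:

```python
import numpy as np

def smc_feature_tracking_velocity(s, s_des, s_des_dot, L_hat, lam=0.8, k_smc=0.05):
    """Camera velocity for tracking a moving feature reference s*(t):
    a feed-forward/proportional term plus a sliding-mode (sign) term that
    compensates bounded uncertainty in the estimated interaction matrix."""
    e = s - s_des                                 # image-space tracking error
    L_pinv = np.linalg.pinv(L_hat)
    v_nominal = L_pinv @ (s_des_dot - lam * e)    # follow the reference trajectory
    v_robust = -k_smc * L_pinv @ np.sign(e)       # discontinuous robustifying term
    return v_nominal + v_robust

# Toy usage: two point features (4 image coordinates) and a placeholder L estimate.
s = np.array([0.10, 0.10, -0.10, -0.10])
s_des = np.array([0.12, 0.08, -0.08, -0.12])
s_des_dot = np.array([0.01, 0.0, 0.01, 0.0])
L_hat = np.random.default_rng(0).normal(size=(4, 6))  # placeholder interaction matrix
print(smc_feature_tracking_velocity(s, s_des, s_des_dot, L_hat))
```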

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection, or map generation tasks. Yet it is only in recent years that research in aerial robotics has become mature enough to allow active interaction with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform with one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, using visual information, that provide such vehicles with autonomous functionalities. A key competence in controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods based on on-board sensors, imported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, which is a handicap in vehicles where size, payload, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity, and acceleration) by means of on-board, low-cost, lightweight, and high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they offer the possibility of fulfilling not only the mobility requirements but also other tasks simultaneously and hierarchically, prioritizing them depending on their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve the platform stability, and increase arm operability. The main contributions of this research work are threefold: (1) a localization technique enabling autonomous navigation, specifically designed for aerial platforms with size, payload, and computational restrictions; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) the integration of the visual servoing commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight; these tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
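    The hierarchical control laws mentioned above exploit kinematic redundancy: each secondary task is projected into the null space of the higher-priority task Jacobian so that it cannot perturb it. A minimal two-level sketch of this standard task-priority scheme follows; the Jacobian sizes and numbers are hypothetical, and it illustrates the general technique rather than the thesis's controller:

```python
import numpy as np

def two_level_task_priority(J1, x1_dot, J2, x2_dot):
    """Classic two-level task-priority resolution for a redundant robot:
    the secondary task velocity is projected into the null space of the
    primary task Jacobian so it cannot disturb the primary task."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector of task 1
    q_dot_1 = J1_pinv @ x1_dot                     # primary task (e.g., visual servo)
    q_dot_2 = np.linalg.pinv(J2 @ N1) @ (x2_dot - J2 @ q_dot_1)
    return q_dot_1 + N1 @ q_dot_2

# Toy example: 8-DOF aerial manipulator (6 platform DOF + 2 arm joints, hypothetical).
rng = np.random.default_rng(1)
J1 = rng.normal(size=(3, 8))   # primary task Jacobian (e.g., image error)
J2 = rng.normal(size=(2, 8))   # secondary task Jacobian (e.g., arm operability)
print(two_level_task_priority(J1, np.array([0.1, 0.0, -0.05]),
                              J2, np.array([0.02, 0.01])))
```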

    Hybrid Vision and Force Control in Robotic Manufacturing Systems

    The ability to provide physical interaction between an industrial robot and a workpiece in the environment is essential for a successful manipulation task. In this context, a wide range of operations such as deburring, pushing, and polishing are considered. The key to successfully accomplishing such operations with a robot is to simultaneously control the position of the tool-tip of the end-effector and the interaction force between the tool and the workpiece, which is a challenging task. This thesis aims to develop new, reliable control strategies combining vision and force feedback to track a path on the workpiece while controlling the contact force. To fulfill this task, novel robust hybrid vision and force control approaches are presented for industrial robots subject to uncertainties and interacting with unknown workpieces. The main contributions of this thesis lie in several parts. In the first part of the thesis, a robust cascade vision and force approach is suggested to control industrial robots interacting with unknown workpieces under model uncertainties. This cascade structure, consisting of an inner vision loop and an outer force loop, avoids the conflict between force and vision control found in traditional hybrid methods, without decoupling the force and vision systems. In the second part of the thesis, a novel image-based task-sequence/path planning scheme coupled with a robust vision and force control method is suggested for solving the multi-task operation problem of an eye-in-hand (EIH) industrial robot interacting with a workpiece. Each task is defined as tracking a predefined path or positioning to a single point on the workpiece's surface with a desired interaction force signal, i.e., interaction with the workpiece. The proposed method provides an optimal task-sequence planning scheme to carry out all the tasks and an optimal path planning method to generate a collision-free path between the tasks, i.e., when the robot performs free motion (pure vision control). In the third part of the project, a novel multi-stage method for robust hybrid vision and force control of industrial robots subject to model uncertainties is proposed. It aims to improve the performance of the three phases of the control process: a) free motion using image-based visual servoing (IBVS) before interaction with the workpiece; b) the moment the end-effector touches the workpiece; and c) hybrid vision and force control during interaction with the workpiece. In the fourth part of the thesis, a novel approach for hybrid vision and force control of eye-in-hand industrial robots is presented, which addresses the problem of the camera's field-of-view (FOV) limitation. The merit of the proposed method is that it enables eye-in-hand industrial robots to cope with the FOV limitation during interaction tasks on the workpiece. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
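    For context, the traditional hybrid scheme that the cascade structure improves upon splits the Cartesian space with a selection matrix: vision-controlled directions track the path on the surface, while force-controlled directions regulate the contact force. A minimal sketch of that textbook baseline follows; the gains, frames, and numbers are illustrative assumptions, not the thesis's cascade controller:

```python
import numpy as np

def hybrid_vision_force_command(v_vision, f_measured, f_desired,
                                selection_diag, k_f=0.002):
    """Traditional hybrid control: a diagonal selection matrix S picks which
    Cartesian directions are vision-controlled (S = 1) and which are
    force-controlled (S = 0, simple proportional force-error feedback)."""
    S = np.diag(selection_diag)
    I = np.eye(len(selection_diag))
    v_force = k_f * (f_desired - f_measured)   # proportional force controller
    return S @ v_vision + (I - S) @ v_force

# Toy example: vision controls x/y translation and rotation; force controls z.
v_vision = np.array([0.02, -0.01, 0.0, 0.0, 0.0, 0.05])   # from an IBVS loop
f_measured = np.array([0.0, 0.0, 8.0, 0.0, 0.0, 0.0])      # wrench from F/T sensor
f_desired = np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0])      # want 10 N along z
sel = [1, 1, 0, 1, 1, 1]
print(hybrid_vision_force_command(v_vision, f_measured, f_desired, sel))
```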