5 research outputs found

    Visual servoing path planning for cameras obeying the unified model

    This paper proposes a path planning visual servoing strategy for a class of cameras that includes conventional perspective cameras, fisheye cameras, and catadioptric cameras as special cases. Specifically, these cameras are modeled by adopting a unified model recently proposed in the literature, and the strategy consists of designing image trajectories for eye-in-hand robotic systems that allow the robot to reach a desired location while satisfying typical visual servoing constraints. To this end, the proposed strategy introduces the projection of the available image features onto a virtual plane and the computation of a feasible image trajectory through polynomial programming. Then, the computed image trajectory is tracked by using an image-based visual servoing controller. Experimental results with a fisheye camera mounted on a 6-d.o.f. robot arm are presented in order to illustrate the proposed strategy. © 2012 Copyright Taylor & Francis and The Robotics Society of Japan.
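The unified model referenced in this abstract is commonly formulated as a two-step projection: the 3-D point is first projected onto a unit sphere, then perspectively projected from a center offset by a mirror parameter ξ. A minimal sketch of that mapping, with an illustrative parameter value (the function name and example values are assumptions, not from the paper):

```python
import numpy as np

def unified_projection(X, xi):
    """Map a 3-D point X = (x, y, z) to normalized image coordinates
    under the unified camera model.

    Step 1: project X onto the unit sphere.
    Step 2: perspective projection from the point (0, 0, -xi).
    xi = 0 recovers a conventional perspective camera; xi > 0 covers
    catadioptric and fisheye-type cameras.
    """
    Xs = X / np.linalg.norm(X)          # point on the unit sphere
    x, y, z = Xs
    return np.array([x / (z + xi), y / (z + xi)])

# Perspective special case (xi = 0) reduces to the familiar x/z, y/z.
p = unified_projection(np.array([1.0, 2.0, 4.0]), xi=0.0)
```

With ξ = 0 the sphere normalization cancels and the result is the ordinary pinhole projection, which is why the model subsumes perspective cameras as a special case.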

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably, and there is growing demand for dexterous, intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has advanced significantly, some challenges remain before it is fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene to a 2D image, which occurs in the camera, creates one source of uncertainty in the system; another lies in the camera and robot manipulator parameters. Moreover, the limited field of view (FOV) of the camera is another issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots, in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method can increase the system response speed and improve the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where the image features go outside the camera's FOV.
The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures the success of servoing in the case of feature loss. Next, to deal with external disturbances and uncertainties due to the depth of the features, a third new control method is designed that combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth new semi-off-line trajectory planning method is developed to perform IBVS tasks with a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles, and the parameters of the velocity profile are determined such that the profile takes the robot to its desired position; this is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints and also considers the limitation caused by the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
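The IBVS schemes this abstract builds on all start from the classical point-feature interaction matrix: for a normalized image point (x, y) at depth Z, a 2×6 matrix L relates the camera velocity screw to the feature velocity, and the standard law commands v = −λ L⁺(s − s*). A minimal sketch of that baseline (the gain and depth values are illustrative assumptions, and this is the textbook law, not any of the thesis's extended controllers):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classical 2x6 interaction matrix of a normalized image point
    (x, y) at depth Z, mapping the camera velocity screw
    (vx, vy, vz, wx, wy, wz) to the feature velocity (xdot, ydot)."""
    return np.array([
        [-1.0/Z,    0.0, x/Z,      x*y, -(1 + x**2),   y],
        [   0.0, -1.0/Z, y/Z, 1 + y**2,        -x*y,  -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Standard IBVS law: v = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# When the features already match the desired ones, the commanded
# camera velocity is zero.
v = ibvs_velocity([(0.1, 0.0), (0.0, 0.1)], [(0.1, 0.0), (0.0, 0.1)],
                  depths=[1.0, 1.0])
```

The depth Z appearing in L is exactly the uncertain quantity the abstract's adaptive and sliding-mode extensions are designed to cope with.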

    Hybrid Vision and Force Control in Robotic Manufacturing Systems

    The ability to provide physical interaction between an industrial robot and a workpiece in the environment is essential for a successful manipulation task. In this context, a wide range of operations such as deburring, pushing, and polishing is considered. The key to successfully accomplishing such operations with a robot is to simultaneously control the position of the tool-tip of the end-effector and the interaction force between the tool and the workpiece, which is a challenging task. This thesis aims to develop new, reliable control strategies combining vision and force feedback to track a path on the workpiece while controlling the contact force. To fulfill this task, novel robust hybrid vision and force control approaches are presented for industrial robots subject to uncertainties and interacting with unknown workpieces. The main contributions of this thesis are as follows. In the first part of the thesis, a robust cascade vision and force approach is suggested to control industrial robots interacting with unknown workpieces under model uncertainties. This cascade structure, consisting of an inner vision loop and an outer force loop, avoids the conflict between force and vision control found in traditional hybrid methods without decoupling the force and vision systems. In the second part of the thesis, a novel image-based task-sequence/path planning scheme, coupled with a robust vision and force control method, is suggested for solving the multi-task operation problem of an eye-in-hand (EIH) industrial robot interacting with a workpiece. Each task is defined as tracking a predefined path or positioning to a single point on the workpiece's surface with a desired interaction force signal, i.e., interaction with the workpiece.
The proposed method suggests an optimal task-sequence planning scheme to carry out all the tasks and an optimal path planning method to generate a collision-free path between the tasks, i.e., when the robot performs free motion (pure vision control). In the third part of the project, a novel multi-stage method for robust hybrid vision and force control of industrial robots subject to model uncertainties is proposed. It aims to improve the performance of the three phases of the control process: a) free motion using image-based visual servoing (IBVS) before interaction with the workpiece; b) the moment the end-effector touches the workpiece; and c) hybrid vision and force control during interaction with the workpiece. In the fourth part of the thesis, a novel approach for hybrid vision and force control of eye-in-hand industrial robots is presented, which addresses the problem of the camera's field-of-view (FOV) limitation. The merit of the proposed method is that it expands the effective workspace of eye-in-hand industrial robots to cope with the FOV limitation during interaction tasks on the workpiece. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
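The hybrid schemes described above are variants of the classical selection-matrix idea: directions constrained by contact are force-controlled, while the remaining directions keep the vision-based command. A minimal single-point sketch of that blend (the function name, gains, surface normal, and simple proportional force law are illustrative assumptions, not the thesis's cascade design):

```python
import numpy as np

def hybrid_command(v_vision, f_measured, f_desired, normal, kf=0.01):
    """Blend a vision-based Cartesian velocity with a force loop.

    The contact-normal direction is force-controlled via a simple
    proportional law on the force error; the tangential directions
    keep the vision-based command unchanged.
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    S = np.outer(n, n)                  # selects the normal direction
    v_force = kf * (f_desired - f_measured) * n
    return (np.eye(3) - S) @ v_vision + S @ v_force

# Tangential vision motion is preserved; the normal component is
# replaced by the force correction (push harder when under-forced).
v = hybrid_command(np.array([0.1, 0.0, 0.05]),
                   f_measured=4.0, f_desired=5.0,
                   normal=[0.0, 0.0, 1.0])
```

The abstract's cascade design differs from this flat selection-matrix split precisely in that its force loop wraps the vision loop instead of partitioning directions, which is how it avoids the conflict between the two controllers.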

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined to control the motion of a robot based on the visual information extracted from the images captured by one or several cameras. On the vision side, ongoing research addresses issues such as the use of different types of image features (or different types of cameras, such as RGB-D cameras), high-speed image processing, and convergence properties. As shown in this book, new control schemes allow the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.

    Vision-based control for vertical take-off and landing UAVs (Commande référencée vision pour drones à décollages et atterrissages verticaux)

    The miniaturization of computers has paved the way for Unmanned Aerial Vehicles (UAVs): flying vehicles embedding computers that make them partially or fully autonomous, for missions such as exploring cluttered or poorly accessible environments, or replacing humanly piloted vehicles in hazardous or arduous missions. A key challenge in the design of such vehicles is the information they need in order to move, and thus the sensors to be used to obtain this information. A number of such sensors have flaws (in particular, the risk of being jammed or occluded). In this context, the use of a video camera offers interesting prospects. The goal of this PhD work was to study the use of such a camera in a minimal-sensor setting: essentially the use of visual and inertial data. The work focused on the development of control laws offering the closed-loop system stability and robustness properties. In particular, one of the major difficulties addressed came from the very limited knowledge of the environment in which the UAV operates. The thesis first studied the stabilization problem under a small-displacements assumption (linearity assumption), and a control law taking performance criteria into account was defined. Second, it was shown how the small-displacements assumption could be relaxed through nonlinear control design. The case of trajectory following was then considered, relying on a generic framework for measuring the position error with respect to an unknown reference point. Finally, an experimental validation of these results was begun during the thesis and helped validate a number of the steps and challenges associated with their implementation in real-world conditions. The thesis concludes with prospects for future work.
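The small-displacements (linear) setting studied first in this thesis can be illustrated in its simplest form: near hover, a visual error signal provides a position error relative to a reference point, and a linear PD law on that error stabilizes the vehicle. A minimal double-integrator sketch (the gains, unit-mass dynamics, and Euler discretization are illustrative assumptions, not the thesis's control law):

```python
import numpy as np

def pd_step(pos, vel, kp=2.0, kd=1.5, dt=0.02):
    """One semi-implicit Euler step of a unit-mass double integrator
    under a PD law driving the measured position error to zero --
    the linearized hover-stabilization setting."""
    acc = -kp * pos - kd * vel          # commanded acceleration
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

# Simulate 10 s: the position error decays toward zero from an
# initial offset, as expected for a stable PD loop.
pos, vel = np.array([0.5, -0.3, 0.2]), np.zeros(3)
for _ in range(500):
    pos, vel = pd_step(pos, vel)
```

Relaxing the small-displacements assumption, as the thesis does in its second part, means replacing this linear error dynamics with the full nonlinear vehicle model, at which point nonlinear control design is required.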