
    Optimal Control of Image Based Visual Servoing (IBVS) for High Precision Visual Inspection Applications

    Visual servoing is a control technique that uses image data as feedback in a motion control loop. It is useful in tasks that require robots or other automated motion systems to automatically inspect parts or structures in motion. One specific method is Image Based Visual Servoing (IBVS), which minimizes the difference between an observed image configuration and a desired one. IBVS works well when this difference is small, but when the desired configuration is more difficult to reach, the system can become unstable, either driving to infinity through a phenomenon known as camera retreat or following non-optimal, non-repeatable trajectories. This work addresses camera retreat and other non-optimal paths by applying dynamic programming, an optimal control method that determines an optimal trajectory by partitioning candidate trajectories into multiple smaller sub-trajectories. Using a cost function to penalize undesirable sub-trajectories, the optimal overall trajectory can be determined and executed. The motivation is to create a high-precision visual servoing sequence suitable for high-tolerance automated processes; specifically, quality inspection of airplane wire harnesses.
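
    As a minimal sketch of the classical IBVS velocity law this work builds on (the dynamic-programming layer that scores and selects sub-trajectories is not reproduced), with the gain and the toy interaction matrix below as illustrative assumptions rather than the thesis's implementation:

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical IBVS law: drive the image features s toward s_star.

    v = -lam * pinv(L) @ (s - s_star), where L is the interaction
    (image Jacobian) matrix mapping camera velocity to feature motion.
    A dynamic-programming layer, as in the abstract, would choose
    intermediate s_star waypoints that avoid camera retreat.
    """
    e = s - s_star                        # feature error in image space
    return -lam * np.linalg.pinv(L) @ e   # 6-DoF camera velocity twist

# Toy run: two point features (4 error terms), stacked 4x6 interaction matrix.
s = np.array([0.10, 0.05, -0.08, 0.02])
L = np.random.default_rng(0).normal(size=(4, 6))
print(ibvs_velocity(s, np.zeros(4), L))
```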

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection, or map generation tasks. Yet it was only in recent years that research in aerial robotics became mature enough to allow active interaction with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform with one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, using visual information, that provide such vehicles with autonomous functionalities. A key competence for controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real applications. Furthermore, localization methods with on-board sensors, imported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap for vehicles where size, load, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity, and acceleration) by means of on-board, low-cost, lightweight, high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they can satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve platform stability, and increase arm operability. The main contributions of this research are threefold: (1) a localization technique that enables autonomous navigation, specifically designed for aerial platforms with size, load, and computational restrictions; (2) control commands that drive the vehicle using visual information (visual servo); and (3) the integration of the visual servo commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight; these tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
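
    A minimal sketch of the null-space task-priority scheme the abstract describes, assuming the standard recursive formulation; the Jacobian shapes and values below are placeholders, not the thesis's models:

```python
import numpy as np

def task_priority_velocities(J1, dx1, J2, dx2):
    """Two-level hierarchical control exploiting redundancy: task 1
    (e.g., the visual servo) has strict priority; task 2 (e.g., arm
    operability) acts only in the null space of task 1."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # null-space projector of task 1
    q_dot = J1_pinv @ dx1                     # primary-task velocities
    # Secondary task, projected so it cannot disturb the primary task.
    q_dot = q_dot + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ q_dot)
    return q_dot

# Toy run: 10 DoF (platform + arm), 6-DoF primary task, 2-DoF secondary task.
rng = np.random.default_rng(1)
J1, J2 = rng.normal(size=(6, 10)), rng.normal(size=(2, 10))
print(task_priority_velocities(J1, rng.normal(size=6), J2, rng.normal(size=2)))
```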

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guide robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot depending on the visual information extracted from the images captured by one or several cameras. On the vision side, a number of topics are currently being addressed by ongoing research, such as the use of different types of image features (or different types of cameras, such as RGBD cameras), high-speed image processing, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.

    Visual Servoing

    This chapter introduces visual servo control, the use of computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem, and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking and controlling motion directly in the joint space, as well as extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.
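
    The general formulation the chapter refers to is the standard one: with $s$ the measured visual features, $s^*$ their desired values, $v_c$ the camera velocity, and $\widehat{L_e^{+}}$ an approximation of the interaction-matrix pseudoinverse, a typical scheme is

```latex
e(t) = s\bigl(m(t), a\bigr) - s^{*}, \qquad
\dot{e} = L_e \, v_c, \qquad
v_c = -\lambda \, \widehat{L_e^{+}} \, e
```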

    Robust Model Predictive Control for Linear Parameter Varying Systems along with Exploration of its Application in Medical Mobile Robots

    This thesis develops a robust model predictive controller (MPC) for Linear Parameter Varying (LPV) systems, employing LPV models in input-output representation. We aim to improve robust MPC methods for such systems from two perspectives. First, the system must remain stable under uncertainty (in the scheduling signal or due to disturbance) and perform well in both tracking and regulation problems. Second, the proposed method should be practical, i.e., it should have a reasonable computational load and avoid conservativeness. Firstly, an interpolation approach is utilized to reduce the conservativeness of the MPC: the controller is computed as a linear combination of a set of predefined offline control laws, with the coefficients of these offline controllers derived from a real-time optimization problem. The control gains are determined to ensure stability and enlarge the terminal set. Secondly, to improve the system's robustness to external disturbances, a free control move is added to the control law. A Recurrent Neural Network (RNN) algorithm is also applied for the online optimization, and this optimization method is shown to have better speed and accuracy than traditional algorithms. The proposed controller was compared with two methods (robust MPC and MPC with an input-output LPV model) in reference tracking and disturbance rejection scenarios; it performs well in both, whereas the two other methods could not reject the disturbance. Thirdly, a support vector machine is introduced to identify the input-output LPV model and estimate the output. The estimated model was compared with the outputs of the actual nonlinear system, and the identification was shown to be effective; as a consequence, the controller can accurately follow the reference. Finally, an interpolation-based MPC with free control moves is implemented for a wheeled mobile robot in a hospital setting, where an RNN solves the online optimization problem. The controller was compared with a robust MPC and an MPC-LPV scheme in terms of reference tracking, disturbance rejection, online computational load, and region of attraction. The results indicate that the proposed method outperforms the alternatives and can navigate quickly and reliably while avoiding obstacles.
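
    A toy sketch of the interpolation idea only, assuming a known one-step model (A, B); a brute-force search over the two-gain simplex stands in for the thesis's real-time optimizer (RNN-solved QP with a free control move), which is not reproduced here:

```python
import numpy as np

def interpolated_control(x, gains, A, B, R=0.1, grid=101):
    """Interpolation-based control step: the input is a convex combination
    of offline state-feedback laws u_i = K_i x. A coarse line search over
    the convex weights stands in for the online optimization."""
    u1, u2 = gains[0] @ x, gains[1] @ x
    best_u, best_cost = None, np.inf
    for lam in np.linspace(0.0, 1.0, grid):      # candidate convex weights
        u = lam * u1 + (1.0 - lam) * u2          # interpolated control input
        x_next = A @ x + B @ u                   # one-step prediction
        cost = x_next @ x_next + R * (u @ u)     # quadratic stage cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Toy run: double-integrator-like model with two offline gains.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K1, K2 = np.array([[-0.5, -0.1]]), np.array([[-1.2, -0.4]])
print(interpolated_control(np.array([1.0, -0.3]), [K1, K2], A, B))
```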

    Design and Development of Robotic Part Assembly System under Vision Guidance

    Robots are widely used for part assembly across manufacturing industries to attain high productivity through automation. Automated mechanical part assembly contributes a major share of the production process, and an appropriate vision-guided robotic assembly system further minimizes lead time and improves the quality of the end product through suitable object detection methods and robot control strategies. This work develops a robotic part assembly system with the aid of an industrial vision system, in three phases. The first phase focuses on feature extraction and object detection techniques. A hybrid edge detection method is developed by combining fuzzy inference rules and the wavelet transform. The performance of this edge detector is quantitatively analysed and compared with widely used edge detectors such as Canny, Sobel, Prewitt, Robert, Laplacian of Gaussian, and mathematical-morphology- and wavelet-transform-based detectors. A comparative study is performed to choose a suitable corner detection method; the corner detection techniques considered are curvature scale space, Wang-Brady, and the Harris method. The successful implementation of a vision-guided robotic system depends on the system configuration, i.e., eye-in-hand or eye-to-hand. In these configurations, the captured images of the parts may be corrupted by geometric transformations such as scaling, rotation, translation, and blurring due to camera or robot motion. Considering this issue, an image reconstruction method is proposed using orthogonal Zernike moment invariants; the suggested method uses a selection process over moment orders to reconstruct the affected image, which makes the object detection method more efficient. In the second phase, the proposed system is developed by integrating the vision system and the robot system, and the proposed feature extraction and object detection methods are tested and found effective for the purpose. In the third phase, robot navigation based on visual feedback is proposed. In the control scheme, general moment invariants, Legendre moments, and Zernike moment invariants are used. The best combination of visual features is selected by measuring the Hamming distance between all possible combinations of visual features, which makes the image-based visual servoing control efficient. An indirect method is employed to determine the Legendre and Zernike moment invariants, which are used because they are robust to noise. The control laws based on these three global image features perform efficiently in navigating the robot in the desired environment.
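
    As an illustrative stand-in for the global moment features used in the third phase, a minimal OpenCV sketch using Hu's seven invariants (not the thesis's Legendre/Zernike invariants, which require more machinery):

```python
import cv2
import numpy as np

def global_moment_features(gray):
    """Global image features for moment-based visual servoing: Hu's seven
    moment invariants (standing in for the general/Legendre/Zernike
    invariants of the thesis), log-scaled to tame their dynamic range."""
    m = cv2.moments(gray)                  # raw spatial/central moments
    hu = cv2.HuMoments(m).flatten()        # translation/rotation/scale invariants
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Toy run: a synthetic image containing a bright square.
img = np.zeros((128, 128), dtype=np.uint8)
img[40:90, 30:80] = 255
print(global_moment_features(img))
```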

    High-Speed Vision and Force Feedback for Motion-Controlled Industrial Manipulators

    Over the last decades, both force sensors and cameras have emerged as useful sensors for different applications in robotics. This thesis considers a number of dynamic visual tracking and control problems, as well as the integration of these techniques with contact force control. Different topics ranging from basic theory to system implementation and applications are treated. A new interface developed for external sensor control is presented, designed by making non-intrusive extensions to a standard industrial robot control system. The structure of these extensions is described, the system properties are modeled and experimentally verified, and results from force-controlled stub grinding and deburring experiments are presented. A novel system for force-controlled drilling using a standard industrial robot is also demonstrated; the solution is based on the use of force feedback to control the contact forces and the sliding motions of the pressure foot, which would otherwise occur during the drilling phase. Basic methods for feature-based tracking and servoing are presented, together with an extension for constrained motion estimation based on a dual quaternion pose parametrization. A method for multi-camera real-time rigid body tracking with time constraints is also presented, based on an optimal selection of the measured features. The developed tracking methods are used as the basis for two different approaches to vision/force control, which are illustrated in experiments. Intensity-based techniques for tracking and vision-based control are also developed. A dynamic visual tracking technique based directly on the image intensity measurements is presented, together with new stability-based methods suitable for dynamic tracking and feedback problems. The stability-based methods outperform the previous methods in many situations, as shown in simulations and experiments.
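
    A minimal sketch of the dual quaternion pose parametrization mentioned above (pure NumPy; the constrained motion estimation built on top of it is not reproduced):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def pose_to_dual_quat(q_rot, t):
    """Encode a rigid-body pose as a unit dual quaternion (q_r, q_d):
    q_r is the rotation and q_d = 0.5 * (0, t) * q_r carries translation."""
    t_quat = np.concatenate(([0.0], t))
    return q_rot, 0.5 * quat_mul(t_quat, q_rot)

# Toy run: 90-degree rotation about z plus a small translation.
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(pose_to_dual_quat(q, np.array([0.1, 0.2, 0.0])))
```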

    Direct Visual Servoing for Grasping Using Depth Maps

    Visual servoing is extremely helpful for many applications, such as tracking objects, controlling the position of end-effectors, and grasping, and has proven useful in industrial sites, academic projects, and research. It is also a very challenging task in robotics, and research has been devoted to improving the methods used for servoing, and for the grasping application in particular. Our goal is to use visual servoing to control the end-effector of a robotic arm, bringing it to a grasping position for the object of interest. Gaining knowledge about depth has always been a major challenge for visual servoing, yet it is necessary. Depth was either assumed to be available from a 3D model or estimated using stereo vision or other methods; this process is computationally expensive, and the results can be inaccurate because of its sensitivity to environmental conditions. Depth maps have recently become more commonly used by researchers, as they are an easy, fast, and cheap way to capture depth information. This solves the problem of estimating the needed 3D information, but the algorithms developed so far were only successful when starting from small initial errors; an effective position controller capable of reaching the target location from large initial errors is needed. The thesis presented here uses Kinect depth maps to directly control a robotic arm to reach a grasping location specified by a target image. The algorithm consists of a two-phase controller: the first phase is a feature-based approach that provides a coarse alignment with the target image, resulting in relatively small errors; the second phase minimizes the difference in depth maps between the current and target images, allowing the system to achieve minimal steady-state errors in translation and rotation starting from a relatively small initial error. To test the system's effectiveness, several experiments were conducted. The experimental setup consists of a Barrett WAM robotic arm with a Microsoft Kinect camera mounted on it in an eye-in-hand configuration. A goal scene taken from the grasping position is input to the system, whose controller drives it to the target position from any initial condition. Our system outperforms previous work on this subject: it functions successfully even with large initial errors, an operation achieved by preceding the main control algorithm with a coarse image alignment via feature-based control. Automating the system further, by automatically detecting the best grasping position and making that location the robot's target, would be a logical extension to improve and complete this work.
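
    A hedged skeleton of one step of the two-phase controller described above; the Jacobians, gain, and switching threshold are illustrative assumptions, not the thesis's calibrated values:

```python
import numpy as np

def two_phase_step(depth_cur, depth_goal, feat_err, J_feat, J_depth,
                   lam=0.4, switch_thresh=5.0):
    """One step of the two-phase scheme: a feature-based law gives coarse
    alignment; once the feature error is small, control switches to
    minimizing the depth-map difference directly."""
    if np.linalg.norm(feat_err) > switch_thresh:
        return -lam * np.linalg.pinv(J_feat) @ feat_err    # phase 1: coarse
    e_depth = (depth_cur - depth_goal).ravel()             # per-pixel depth error
    return -lam * np.linalg.pinv(J_depth) @ e_depth        # phase 2: fine

# Toy run: 4x4 depth maps (16 pixels), already past the feature phase.
rng = np.random.default_rng(2)
print(two_phase_step(rng.random((4, 4)), rng.random((4, 4)),
                     feat_err=np.zeros(8),
                     J_feat=rng.normal(size=(8, 6)),
                     J_depth=rng.normal(size=(16, 6))))
```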

    Visual Servoing

    The goal of this book is to introduce current applications of machine vision by excellent researchers around the world and to offer knowledge that can also be applied widely to other fields. The book collects the main current studies on machine vision and demonstrates persuasively how machine vision theory is realized in different fields. For beginners, it offers an accessible view of developments in visual servoing; engineers, professors, and researchers can study the chapters and then apply the methods to other applications.

    Remote minimally invasive surgery - haptic feedback and selective automation in medical robotics

    Abstract. The automation of recurrent tasks and force feedback are complex problems in medical robotics. We present a novel approach that extends human-machine skill transfer by a scaffolding framework. It assumes a consolidated working environment for both the trainee and the trainer, in which the trainer provides hints and cues within a basic structure that is already understood by the learner. In this work, the scaffolding is constituted by abstract patterns, which facilitate the structuring and segmentation of information during Learning by Demonstration (LbD). With this concept, the concrete example of knot-tying for suturing is exemplified and evaluated. During the evaluation, most problems and failures arose from intrinsic imprecisions of the medical robot system; these inaccuracies were then reduced by visual guidance of the surgical instruments. While the benefits of force feedback in telesurgery have already been demonstrated, and measured forces are also used during task learning, the transmission of signals between the operator console and the robot system over long-distance or across-network remote connections is still a challenge due to time delay. Especially during incision processes with a scalpel into tissue, delayed force feedback leads to unpredictable force perception at the operator side and can harm the tissue with which the robot is interacting. We propose an XFEM-based incision force prediction algorithm that simulates the incision contact forces in real time and compensates for the delayed force sensor readings. A realistic four-arm system for minimally invasive robotic heart surgery is used as the research platform.
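
    The XFEM force simulation itself is far beyond a snippet; the following sketch only illustrates the compensation idea under a Smith-predictor-style assumption (a real-time model prediction bridges the network delay and is corrected by the delayed sensor readings), and is not the authors' algorithm:

```python
from collections import deque

def compensated_force(sensor_delayed, model_force, model_history, delay_steps):
    """Smith-predictor-style delay compensation: feed back the real-time
    model prediction, corrected by the mismatch between the delayed sensor
    reading and what the model predicted delay_steps ago."""
    model_history.append(model_force)          # store the current prediction
    if len(model_history) <= delay_steps:
        return model_force                     # no delayed pair available yet
    model_delayed = model_history.popleft()    # prediction from delay_steps ago
    return model_force + (sensor_delayed - model_delayed)

# Toy run: a constant model bias is corrected once delayed readings arrive.
history = deque()
for step in range(6):
    f = compensated_force(sensor_delayed=1.0, model_force=0.8,
                          model_history=history, delay_steps=2)
    print(step, round(f, 3))
```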