
    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined to control the motion of a robot based on visual information extracted from images captured by one or several cameras. On the vision side, ongoing research addresses topics such as the use of different types of image features (or different types of cameras, such as RGB-D cameras), high-speed image processing, and convergence properties. As shown in this book, new control schemes allow the system to behave more robustly, efficiently, or compliantly, and with fewer delays. Related topics such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control in new areas.
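    As a concrete illustration of the control law underlying most image-based schemes of this kind, the camera velocity is commonly computed as v = -λ L⁺ (s − s*), where s are the measured image features, s* the desired ones, and L the interaction matrix. The sketch below is a minimal, generic version for point features with assumed known depths; all function names, gains, and feature values are illustrative, not taken from the book.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one point feature
    in normalized image coordinates, at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law: v = -lambda * L^+ (s - s*).
    features, desired: (N, 2) arrays of normalized image points."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (features - desired).reshape(-1)
    # 6-DOF camera velocity screw [vx, vy, vz, wx, wy, wz]
    return -gain * np.linalg.pinv(L) @ error

# Example: four point features, current vs. desired positions
s = np.array([[0.12, 0.10], [-0.11, 0.09], [-0.10, -0.12], [0.11, -0.10]])
s_star = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
print(ibvs_velocity(s, s_star, depths=[1.0] * 4))
```

    In a real system the resulting velocity screw would be mapped through the robot Jacobian and the depths re-estimated online.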

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably. There is growing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has developed significantly, some challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto the 2D image in the camera creates one source of uncertainty; another lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method increases the system response speed and improves the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle the situation where image features leave the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS but also ensures the success of servoing in the case of feature loss. Next, in order to deal with external disturbances and uncertainties due to the depth of the features, a third control method is designed that combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth, semi-off-line trajectory planning method is developed to perform IBVS tasks with a 6-DOF robotic manipulator. In this method, the camera's velocity screw is parametrized using time-based profiles, and the parameters of the velocity profile are determined such that the profile takes the robot to its desired position; this is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and thus a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation caused by the camera's FOV. All the algorithms developed in the thesis are validated through tests on a 6-DOF Denso robot in an eye-in-hand configuration.
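    The feature-reconstruction idea in the second method can be illustrated with a per-feature constant-velocity Kalman filter: while a feature is visible, its prediction is corrected by the measurement, and once it leaves the FOV the controller keeps servoing on the prediction alone. The sketch below is only a generic illustration of that idea, not the thesis's exact filter, model, or tuning; all class names and noise levels are made up.

```python
import numpy as np

class FeatureKF:
    """Constant-velocity Kalman filter for one image feature (u, v).
    State: [u, v, du, dv]. Noise levels are illustrative, not tuned."""
    def __init__(self, u, v, dt=0.033):
        self.x = np.array([u, v, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)
        self.Q = np.eye(4) * 0.5
        self.R = np.eye(2) * 2.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]            # predicted pixel coordinates

    def update(self, z):
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def track(kf, measurement, in_fov):
    """Use the measurement when the feature is visible; otherwise keep
    servoing on the Kalman prediction (the reconstruction idea)."""
    pred = kf.predict()
    if in_fov:
        kf.update(np.asarray(measurement, dtype=float))
        return kf.x[:2]
    return pred

kf = FeatureKF(320.0, 240.0)
print(track(kf, (322.0, 243.0), in_fov=True))
print(track(kf, None, in_fov=False))   # feature lost: use predicted position
```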

    Brain–Machine Interface and Visual Compressive Sensing-Based Teleoperation Control of an Exoskeleton Robot

    This paper presents a teleoperation control for an exoskeleton robotic system based on a brain-machine interface and vision feedback. Visual compressive sensing, brain-machine reference commands, and adaptive fuzzy controllers in joint space have been effectively integrated to enable the robot to perform manipulation tasks guided by the human operator's mind. First, a visual feedback link is implemented by video captured by a camera, allowing the operator to visualize the manipulator's workspace and the movements being executed. Then, the compressed images are used as feedback errors in a non-vector space for producing steady-state visual evoked potential (SSVEP) electroencephalography (EEG) signals, which, in contrast to traditional visual servoing, requires no prior information on features. The proposed EEG decoding algorithm generates control signals for the exoskeleton robot using features extracted from the neural activity. Considering the coupled dynamics and actuator input constraints during manipulation, a local adaptive fuzzy controller has been designed to drive the exoskeleton to track the trajectories intended by the human operator and to provide a convenient way of compensating the dynamics with minimal knowledge of the dynamic parameters of the exoskeleton robot. Extensive experimental studies involving three subjects have been performed to verify the validity of the proposed method.
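    For context, SSVEP-based interfaces typically infer the attended flicker frequency from the spectral content of the EEG, and each detected frequency is then mapped to a robot command. The sketch below is a generic power-at-harmonics detector for illustration only; it is not the decoding algorithm proposed in this paper, and the sampling rate, frequencies, and synthetic signal are assumptions.

```python
import numpy as np

def detect_ssvep(eeg, fs, candidate_freqs, harmonics=2):
    """Pick the flicker frequency whose fundamental and harmonics carry the
    most spectral power in a single-channel EEG window (generic illustration)."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f0 in candidate_freqs:
        score = 0.0
        for h in range(1, harmonics + 1):
            idx = np.argmin(np.abs(freqs - h * f0))   # nearest FFT bin
            score += spectrum[idx]
        scores.append(score)
    return candidate_freqs[int(np.argmax(scores))]

# Synthetic 2-second window dominated by a 10 Hz flicker response plus noise
fs = 250
t = np.arange(0, 2, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
command_freq = detect_ssvep(eeg, fs, candidate_freqs=[8.0, 10.0, 12.0, 15.0])
print(command_freq)   # each detected frequency maps to one robot command
```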

    Visual Servoing

    The goal of this book is to present current vision applications by leading researchers from around the world and to offer knowledge that can also be applied broadly to other fields. The book collects the main current studies on machine vision worldwide and makes a persuasive case for the applications in which machine vision is employed. The contents demonstrate how machine vision theory is realized in different fields. For beginners, it provides an accessible view of developments in visual servoing; engineers, professors, and researchers can study the chapters and then apply the methods to other applications.

    I-AUV Docking and Panel Intervention at Sea

    The use of commercially available autonomous underwater vehicles (AUVs) has increased during the last fifteen years. While they are mainly used for routine survey missions, there is a set of applications that nowadays can only be addressed by manned submersibles or work-class remotely operated vehicles (ROVs) equipped with teleoperated arms: intervention applications. To allow these heavy, human-operated vehicles to perform intervention tasks, underwater structures such as observatory facilities, subsea panels, and oil-well Christmas trees have been adapted, making them more robust and easier to operate. The Spanish-funded TRITON project proposes the use of a lightweight intervention AUV (I-AUV) to carry out intervention applications, simplifying the adaptation of these underwater structures and drastically reducing the operational cost. To prove this concept, the Girona 500 I-AUV is used to autonomously dock with an adapted subsea panel and, once docked, perform an intervention consisting of turning a valve and plugging in/unplugging a connector. The techniques used for the autonomous docking and manipulation, as well as the design of an adapted subsea panel with a funnel-based docking system, are presented in this article together with the results achieved in a water tank and at sea. This work was supported by the Spanish project DPI2014-57746-C3 (MERBOTS Project) and by Generalitat Valenciana under Grant GVA-PROMETEO/2016/066. The University of Girona wants to thank the SARTI group for their collaboration with the TRITON project.

    Robust Position-based Visual Servoing of Industrial Robots

    Recently, researchers have tried to use dynamic pose correction methods to improve the accuracy of industrial robots. Dynamic path tracking aims at adjusting the end-effector's pose by using a photogrammetry sensor and an eye-to-hand PBVS scheme. This research aims to enhance the accuracy of industrial robots by designing a chattering-free digital sliding mode controller integrated with a novel adaptive robust Kalman filter (ARKF), validated on a Puma 560 model in simulation. The study includes Gaussian noise generation, pose estimation, design of the adaptive robust Kalman filter, and design of the chattering-free sliding mode controller. The designed control strategy has been validated and compared with other control strategies in Matlab 2018a Simulink on a 64-bit PC. The main contributions of the work are summarized as follows. First, noise removal in the pose estimation is carried out by the novel ARKF. The proposed ARKF deals with the experimental noise generated by the photogrammetry observation sensor, a C-Track 780. It exploits the advantages of an adaptive estimation method for the state noise covariance (Q), least-squares identification for the measurement noise covariance (R), and a robust mechanism for the state error covariance (P). The Gaussian noise generation is based on data collected from the C-Track while the robot is stationary. A novel method for estimating the covariance matrix R that considers the effects of both velocity and pose is suggested. Next, a robust PBVS approach for industrial robots based on a fast discrete sliding mode controller (FDSMC) and the ARKF is proposed. The FDSMC takes advantage of a nonlinear reaching law, which results in faster and more accurate trajectory tracking compared to standard DSMC. Substituting the switching function with a continuous nonlinear reaching law leads to a continuous output and thus eliminates chattering. Additionally, the sliding surface dynamics are taken to be nonlinear, which increases the convergence speed and accuracy. Finally, analysis techniques for various types of sliding mode controllers are used for comparison, and kinematic and dynamic models of the Puma 560 with revolute joints are built for simulation validation. Based on the computed performance indicators, it is shown that, after tuning the controller parameters, the chattering-free FDSMC integrated with the ARKF can substantially reduce the effect of uncertainties in the robot dynamic model and improve the tracking accuracy of the 6-degree-of-freedom (DOF) robot.
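    The chattering-suppression idea, replacing the discontinuous switching term with a continuous nonlinear reaching law, can be illustrated on a toy double integrator. The sketch below is a generic sliding mode example with made-up gains; it is not the FDSMC law, the ARKF, or the Puma 560 model used in this work.

```python
import numpy as np

def smooth_reaching(s, eps=5.0, q=10.0, alpha=0.5, phi=0.05):
    """Continuous nonlinear reaching term: replacing sign(s) with a
    power-rate tanh term keeps the control continuous, which is the
    basic idea behind chattering-free sliding mode. Illustrative gains."""
    return -q * s - eps * np.abs(s) ** alpha * np.tanh(s / phi)

def smc_control(e, e_dot, lam=8.0):
    """Sliding surface s = e_dot + lam*e for the toy plant x_ddot = u;
    the -lam*e_dot term cancels the surface drift so s_dot = reaching(s)."""
    s = e_dot + lam * e
    return -lam * e_dot + smooth_reaching(s)

# Toy 1-DOF plant x_ddot = u, regulating x -> 0 from x0 = 1 (Euler steps)
dt, x, x_dot = 0.002, 1.0, 0.0
for _ in range(3000):
    u = smc_control(x, x_dot)
    x_dot += u * dt
    x += x_dot * dt
print(round(float(x), 4))   # near zero, with a continuous (non-chattering) u
```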

    Risk-aware Path and Motion Planning for a Tethered Aerial Visual Assistant in Unstructured or Confined Environments

    This research aims at developing path and motion planning algorithms for a tethered unmanned aerial vehicle (UAV) to visually assist a teleoperated primary robot in unstructured or confined environments. The emerging state of practice for nuclear operations, bomb squads, disaster robots, and other domains with novel tasks or highly occluded environments is to use two robots: a primary, and a secondary that acts as a visual assistant, overcoming the perceptual limitations of the sensors by providing an external viewpoint. However, the benefits of using an assistant have been limited for at least three reasons: (1) users tend to choose suboptimal viewpoints; (2) only ground robot assistants have been considered, ignoring the rapid evolution of small unmanned aerial systems for indoor flight; and (3) introducing a whole crew for the second teleoperated robot is not cost effective, may introduce further teamwork demands, and could therefore lead to miscommunication. This dissertation proposes an autonomous tethered aerial visual assistant to replace the secondary robot and its operating crew. Along with a pre-established theory of viewpoint quality based on affordances, the dissertation defines and represents robot motion risk in unstructured or confined environments. Based on those theories, a novel high-level path planning algorithm is developed to enable risk-aware planning, which balances the trade-off between viewpoint quality and motion risk in order to provide safe and trustworthy visual assistance flight. The planned flight trajectory is then realized on a tethered UAV platform. The perception and actuation are tailored to the tethered agent in the form of a low-level motion suite, including a novel tether-based localization model with negligible computational overhead, motion primitives for the tethered airframe based on position and velocity control, and two different …
    Comment: Ph.D. dissertation.
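    The risk-aware planning trade-off can be illustrated as a graph search whose step cost blends motion risk with lost viewpoint quality. The weights, graph, and scores below are made up for illustration and do not reproduce the dissertation's risk representation or planner.

```python
import heapq

def risk_aware_path(neighbors, risk, view_quality, start, goal,
                    w_risk=0.7, w_view=0.3):
    """Dijkstra search where each step's cost trades off motion risk against
    lost viewpoint quality at the next node (illustrative weighting only)."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors[node]:
            step = w_risk * risk[nxt] + w_view * (1.0 - view_quality[nxt])
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

# Tiny hand-made graph: nodes A..D with per-node risk and viewpoint quality
neighbors = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
risk = {"A": 0.0, "B": 0.8, "C": 0.2, "D": 0.1}
view_quality = {"A": 0.5, "B": 0.9, "C": 0.6, "D": 0.7}
print(risk_aware_path(neighbors, risk, view_quality, "A", "D"))
```

    With these made-up numbers the planner prefers the lower-risk route through C even though B offers the better viewpoint, which is exactly the kind of trade-off the weights control.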