8 research outputs found

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably. There is growing demand for dexterous, intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has seen significant development, challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto a 2D image, which occurs in the camera, creates one source of uncertainty; another lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing the control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method can increase the system response speed and improve the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle the situation where the image features leave the camera's FOV.
    The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures the success of servoing in the case of feature loss. Next, to deal with external disturbances and uncertainties due to the depth of the features, a third control method is designed that combines proportional derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC handles the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth method, a semi-off-line trajectory planning approach, is developed to perform IBVS tasks on a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles. The parameters of the velocity profile are then determined such that the velocity profile takes the robot to its desired position. This is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation imposed by the camera's FOV. All the algorithms developed in the thesis are validated through tests on a 6-DOF Denso robot in an eye-in-hand configuration.
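    IBVS controllers such as those above build on the classic image-Jacobian control law v = -λ L⁺ (s - s*). The following is a minimal, hypothetical sketch of that baseline law (not the thesis's adaptive switch controller), assuming normalized point-feature coordinates and known feature depths:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one normalized point
    feature (x, y) at depth Z, relating image-plane motion to the
    camera's 6-DOF velocity screw [vx, vy, vz, wx, wy, wz]."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS law v = -gain * L^+ (s - s*).
    `features`/`desired` are lists of (x, y); `depths` pairs with features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features, dtype=float) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

    With four coplanar points the stacked L is 8x6 and the pseudoinverse gives a least-squares camera velocity; the thesis's contributions (switching gains, adaptive laws, Kalman-based feature reconstruction) sit on top of this baseline.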

    Image-based visual servoing using improved image moments in 6-DOF robot systems

    Visual servoing has played an important role in automated robotic manufacturing systems. This thesis focuses on this issue and proposes an improved method comprising an enhanced image pre-processing (IP) algorithm and an improved IBVS algorithm. As the first contribution, an improved IP algorithm based on morphological theory is presented for removing unexpected speckles and balancing the illumination during image processing. After this enhancement, the useful information in the image becomes prominent and can be used to extract accurate image features. An improved IBVS algorithm is then introduced for an eye-in-hand system as the second contribution. This eye-in-hand system consists of a 6 Degree of Freedom (DOF) robot and a camera. The improved IBVS algorithm uses image moments as the image features instead of detecting special points for feature extraction as in traditional IBVS. Compared with traditional IBVS, choosing image moments as the image features increases the stability of the system and extends the range of objects to which it applies. The obtained image features are then used to generate the control signals for the robot to track the target object. The Jacobian matrix describing the relationship between the motion of the camera and the velocity of the image features is also discussed, and a new, simple method is proposed for estimating the depth involved in the Jacobian matrix. To decouple the obtained Jacobian matrix so that the camera motion can be controlled with individual image features, a four-stage sequential control scheme is also introduced to improve the control performance.
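    As an illustration of moment-based features, here is a minimal sketch (not the thesis's exact feature set) that computes a centroid, area, and orientation from the raw and central moments of a binary blob:

```python
import numpy as np

def moment_features(binary_img):
    """Centroid (xg, yg), area, and orientation alpha from raw and
    central image moments of a binary blob - the kind of moment-based
    features used in place of point features in IBVS."""
    ys, xs = np.nonzero(binary_img)
    m00 = xs.size                      # area (zeroth-order moment)
    xg, yg = xs.mean(), ys.mean()      # centroid from first-order moments
    mu20 = ((xs - xg) ** 2).sum()      # second-order central moments
    mu02 = ((ys - yg) ** 2).sum()
    mu11 = ((xs - xg) * (ys - yg)).sum()
    alpha = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # principal axis
    return xg, yg, m00, alpha
```

    Unlike individual corner points, these features are averages over the whole blob, which is why they are less sensitive to pixel noise and apply to objects without distinctive point features.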

    Robust and Multi-Objective Model Predictive Control Design for Nonlinear Systems

    The multi-objective trade-off paradigm has become a very valuable design tool in engineering problems with conflicting objectives. Recently, many control designers have worked on design methods that satisfy multiple design specifications, called multi-objective control design. However, the main challenge for MPC design lies in the high computational load, which prevents its application to fast dynamic system control in real time. To meet this challenge, this thesis proposes several methods covering nonlinear system modeling, on-line MPC design, and multi-objective optimization. First, the thesis proposes a robust MPC to control the shimmy vibration of a landing gear with probabilistic uncertainty. Then, an on-line MPC method is proposed for image-based visual servoing control of a 6-DOF Denso robot. Finally, a multi-objective MPC is introduced to allow designers to consider multiple objectives in MPC design. In this thesis, Tensor Product (TP) model transformation, a powerful tool for modeling complex nonlinear systems, is used to find linear parameter-varying (LPV) models of the nonlinear systems. The higher-order singular value decomposition (HOSVD) technique is used to obtain a minimal-order model tensor. Furthermore, to design a robust MPC for nonlinear systems in the presence of uncertainties, which degrade the system performance and can lead to instability, the parameters of the nonlinear systems are modeled with probabilistic uncertainties using the TP transformation. A computationally efficient method is then proposed for the MPC design of image-based visual servoing, i.e., a fast dynamic system. The controller is designed considering the robotic visual servoing system's input and output constraints, such as the robot's physical limitations and visibility constraints.
    The main contributions of this thesis are: (i) designing an MPC for nonlinear systems with probabilistic uncertainties that guarantees robust stability and performance; (ii) developing a real-time MPC method for a fast dynamical system; and (iii) proposing a new multi-objective MPC for nonlinear systems using game theory. A diverse range of systems with nonlinearities and uncertainties, including a landing gear system and a 6-DOF Denso robot, is studied. Simulation and real-time experimental results are presented and discussed to verify the effectiveness of the proposed methods.
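    To make the MPC machinery concrete, here is a minimal unconstrained linear-MPC sketch in the batch (condensed) formulation; the robust, constrained TP/LPV designs in the thesis are far richer, and the example system below is illustrative only:

```python
import numpy as np

def mpc_gain(A, B, Q, R, N):
    """Unconstrained linear MPC via the batch formulation: stack the
    predictions x_{k+1..k+N} = F x_k + G u, minimize the quadratic
    state/input cost, and return the feedback gain for the FIRST input
    (only u_0 is applied each step, receding-horizon style)."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qb = np.kron(np.eye(N), Q)          # block-diagonal state weights
    Rb = np.kron(np.eye(N), R)          # block-diagonal input weights
    H = G.T @ Qb @ G + Rb               # Hessian of the batch cost
    K = np.linalg.solve(H, G.T @ Qb @ F)  # u* = -K x0 (all N inputs)
    return K[:m]                        # gain for u_0 only
```

    For a discrete double integrator (a crude stand-in for one fast axis of the servoing system), the receding-horizon gain stabilizes the loop; constrained MPC replaces the closed-form solve with a QP at each step.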

    Image Based Visual Servoing Using Trajectory Planning and Augmented Visual Servoing Controller

    Robots and automated manufacturing machinery have become an inseparable part of modern industry. However, robotic systems are generally limited to operating in highly structured environments. Although sensors such as laser trackers, indoor GPS, 3D metrology, and tracking systems are used for positioning and tracking in manufacturing and assembly tasks, these devices are highly constrained by the working environment and operating speed, and they are generally very expensive. Integrating vision sensors with robotic systems, and visual servoing in general, allows robots to work in unstructured spaces by producing non-contact measurements of the working area. However, projecting a 3D space onto a 2D image, which happens in the camera, causes the loss of one dimension of data. This gives rise to the challenges in vision-based control. Moreover, the nonlinearities and complex structure of a manipulator robot make the problem more challenging. This project aims to develop new, reliable visual servoing methods suitable for real robotic tasks. The main contributions of this project are in two parts: the visual servoing controller and the trajectory planning algorithm. In the first part, a new image-based visual servoing controller called Augmented Image Based Visual Servoing (AIBVS) is presented. A proportional derivative (PD) controller is developed to generate acceleration as the controlling command of the robot. The stability analysis of the controller is conducted using Lyapunov theory. The developed controller has been tested on a 6-DOF Denso robot. The experimental results on point features and image moment features demonstrate the performance of the proposed AIBVS. Experimental results show that a damped response can be achieved using a PD controller with acceleration output. Moreover, smoother feature and robot trajectories are observed compared to those of conventional IBVS controllers.
    This controller is then applied to a moving-object catching task. Visual servoing controllers have shown difficulty in stabilizing the system in global space. Hence, in the second part of the project, a trajectory planning algorithm is developed to achieve global stability of the system. The trajectory planning is carried out by parameterizing the camera's velocity screw using time-based profiles. The parameters of the velocity profile are then determined such that the velocity profile guides the robot to its desired position. This is done by minimizing the error between the initial and desired features. This method provides a reliable path for the robot that respects all robotic constraints. The developed algorithm is tested on a Denso robot. The results show that the trajectory planning algorithm is able to perform visual servoing tasks that are unstable when performed with visual servoing controllers alone.
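    The damped response reported for the acceleration-level PD controller can be illustrated by the second-order error dynamics such a command imposes, e_ddot = -kp*e - kd*e_dot. A hypothetical scalar sketch (gains chosen here for critical damping, kd^2 = 4*kp, not taken from the thesis):

```python
def simulate_pd_acceleration(e0, kp=1.0, kd=2.0, dt=0.01, steps=2000):
    """Integrate the error dynamics e_ddot = -kp*e - kd*e_dot that a PD
    acceleration command imposes on a feature error. With kd^2 = 4*kp
    the response is critically damped: no overshoot, smooth decay."""
    e, edot = float(e0), 0.0
    for _ in range(steps):
        eddot = -kp * e - kd * edot   # PD law at the acceleration level
        edot += eddot * dt            # semi-implicit Euler integration
        e += edot * dt
    return e
```

    A velocity-level proportional law gives first-order error decay; commanding acceleration instead is what yields the damped second-order response and the smoother trajectories the abstract describes.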

    Visual Servoing For Robotic Positioning And Tracking Systems

    Visual servoing is a robot control method in which camera sensors are used inside the control loop, introducing visual feedback into the robot control loop to enhance the robot's performance in accomplishing tasks in unstructured environments. In general, visual servoing can be categorized into image-based visual servoing (IBVS), position-based visual servoing (PBVS), and hybrid approaches. To improve the performance and robustness of visual servoing systems, this research on IBVS for robotic positioning and tracking systems focuses mainly on camera configuration, image features, pose estimation, and depth determination. In the first part of this research, two novel multiple-camera configurations of visual servoing systems are proposed for robotic manufacturing systems that position large-scale workpieces. The main advantage of these two configurations is that the depths of target objects or target features are constant or can be determined precisely using computer vision. Hence the accuracy of the interaction matrix is guaranteed, and the positioning performance of the visual servoing systems can be improved remarkably. The simulation results show that the proposed multiple-camera configurations for large-scale manufacturing systems can satisfy the demand for high-precision positioning and assembly in the aerospace industry. In the second part of this research, two improved image features for planar centrally symmetric objects are proposed based on image moment invariants, which can represent the pose of target objects with respect to the camera frame. A visual servoing controller based on the proposed image moment features is designed, and the control performance of the robotic tracking system is improved compared with the method based on the commonly used image moment features. Experimental results on a 6-DOF robot visual servoing system demonstrate the efficiency of the proposed method.
    Lastly, to address the challenge of choosing proper image features for planar objects that yield a maximally decoupled interaction matrix, a neural network (NN) is applied to estimate target object poses with respect to the camera frame from the image moment invariants. Compared with previous methods, this scheme avoids image interaction matrix singularities and image local minima in IBVS. Furthermore, an analytical form of the depth computation is given using classical geometric primitives and image moment invariants. A visual servoing controller is designed, and the tracking performance of the robotic tracking system is enhanced. Experimental results on a 6-DOF robot system illustrate the effectiveness of the proposed scheme.
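    One classical geometric depth computation of the general kind referred to: for a planar object parallel to the image plane, its apparent area scales with (f/Z)^2, so depth can be recovered from the zeroth-order image moment. This is a hypothetical sketch, not necessarily the thesis's exact formula:

```python
import math

def depth_from_area(area_px, area_m2, focal_px):
    """Depth of a planar object parallel to the image plane from its
    apparent area (in pixels^2), its known physical area (in m^2), and
    the focal length (in pixels): area_px = area_m2 * (f/Z)^2, hence
    Z = f * sqrt(area_m2 / area_px)."""
    return focal_px * math.sqrt(area_m2 / area_px)
```

    Because the area moment is already computed for the moment-based features, this kind of depth estimate comes essentially for free, which is what makes the interaction matrix accurate without a separate range sensor.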

    Generalised correlation higher order neural networks, neural network operation and Levenberg-Marquardt training on field programmable gate arrays

    Higher Order Neural Networks (HONNs) were introduced in the late 1980s as a solution to the increasing complexity within Neural Networks (NNs). Similar to NNs, HONNs excel at pattern recognition, classification, and optimisation, particularly for non-linear systems, in varied applications such as communication channel equalisation, real-time intelligent control, and intrusion detection. This research introduced new HONNs called Generalised Correlation Higher Order Neural Networks. As an extension of ordinary first-order NNs and HONNs, they are based on interlinked arrays of correlators with known relationships, giving the NN a more extensive view by introducing interactions between the data as inputs to the NN model. All studies included two data sets to generalise the applicability of the findings. The research investigated the performance of HONNs in estimating short-term returns of two financial data sets, the FTSE 100 and NASDAQ. The new models were compared against several financial models and ordinary NNs. Two new HONNs, the Correlation HONN (C-HONN) and the Horizontal HONN (Horiz-HONN), outperformed all other models tested in terms of the Akaike Information Criterion (AIC). The new work also investigated HONNs for camera calibration and image mapping. HONNs were compared against NNs and standard analytical methods in terms of mapping performance for three cases: 3D-to-2D mapping, a hybrid model combining HONNs with an analytical model, and 2D-to-3D inverse mapping. This study considered two types of data: planar data and co-planar (cube) data. To our knowledge, this is the first study comparing HONNs against NNs and analytical models for camera calibration. HONNs were able to transform the reference grid onto the correct camera coordinates and vice versa, an aspect that the standard analytical model fails to perform with the type of data used.
    HONN 3D-to-2D mapping had a calibration error lower than the parametric model by up to 24% for plane data and 43% for cube data. The hybrid model also had a lower calibration error than the parametric model, by 12% for plane data and 34% for cube data; however, it did not outperform the fully non-parametric models. Using HONNs for inverse 2D-to-3D mapping outperformed NNs by up to 47% in the case of cube data. This thesis is also concerned with the operation and training of NNs in limited precision, specifically on Field Programmable Gate Arrays (FPGAs). Our findings demonstrate the feasibility of on-line, real-time, low-latency training on limited-precision electronic hardware such as Digital Signal Processors (DSPs) and FPGAs. The thesis also investigated the effects of limited precision on the Back Propagation (BP) and Levenberg-Marquardt (LM) optimisation algorithms. Two new HONNs were compared against NNs for estimating the discrete XOR function and an optical waveguide sidewall roughness dataset, in order to find the Minimum Precision for Lowest Error (MPLE) at which training and operation are still possible. The new findings show that, compared to NNs, HONNs require more precision to reach a similar performance level, and that the second-order LM algorithm requires at least 24 bits of precision. The final investigation implemented and demonstrated the LM algorithm on FPGAs for the first time to our knowledge. It was used to train a neural network and to estimate camera calibration parameters. The LM algorithm trained an NN to model the XOR function in only 13 iterations from zero initial conditions, with a speed-up in excess of 3 x 10^6 compared to an implementation in software.
    Camera calibration was also demonstrated on FPGAs; compared to the software implementation, the FPGA implementation led to an increase in the mean squared error and standard deviation of only 17.94% and 8.04% respectively, but increased the calibration speed by a factor of 1.41 x 10^6.
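    The correlation idea behind HONNs can be sketched as a second-order input expansion: augmenting the raw inputs with their pairwise products gives the network the interaction terms that a first-order linear unit must otherwise learn. With these terms, the XOR function, the abstract's benchmark, becomes linearly separable. A hypothetical minimal example, not the thesis's exact architecture:

```python
import numpy as np
from itertools import combinations_with_replacement

def honn_inputs(x):
    """Second-order correlation expansion of an input vector: the raw
    inputs followed by all products x_i * x_j (i <= j). These product
    terms are the 'interactions between the data' that higher-order
    networks feed to the model directly."""
    x = np.asarray(x, dtype=float)
    pairs = [x[i] * x[j]
             for i, j in combinations_with_replacement(range(x.size), 2)]
    return np.concatenate([x, pairs])
```

    On the expanded inputs [x1, x2, x1^2, x1*x2, x2^2], XOR is exactly the linear form x1 + x2 - 2*x1*x2, so a single linear output unit suffices; the cost is the extra precision such product terms demand in fixed-point hardware, consistent with the MPLE findings above.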