543 research outputs found

    Positioning and trajectory following tasks in microsystems using model free visual servoing

    In this paper, we experimentally evaluate model-free visual servoing algorithms on various tasks performed on a microassembly workstation developed in our lab. Model-free, or so-called uncalibrated, visual servoing requires neither calibration of the system (microscope, camera, micromanipulator) nor a model of the observed scene, and it is robust to parameter changes and disturbances. We tested its performance in point-to-point positioning and various trajectory-following tasks. Experimental results validate the utility of model-free visual servoing in microassembly tasks.
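
The core of such model-free schemes is an online estimate of the image Jacobian rather than a calibrated model. The sketch below (not the authors' code; the gains, dimensions, and rank-one Broyden update rule are illustrative assumptions) shows one common way to do this: estimate the Jacobian from observed joint and feature increments, and servo with its pseudo-inverse.

```python
import numpy as np

def broyden_update(J, dq, de, alpha=0.5):
    """Rank-one Broyden update of the estimated image Jacobian.

    J  : current Jacobian estimate (m x n), maps joint moves to feature moves
    dq : last joint displacement (n,)
    de : observed feature displacement (m,)
    """
    dq = dq.reshape(-1, 1)
    de = de.reshape(-1, 1)
    denom = float(dq.T @ dq)
    if denom < 1e-12:            # ignore negligible motions
        return J
    return J + alpha * (de - J @ dq) @ dq.T / denom

def servo_step(J, error, gain=0.1):
    """One proportional visual-servo step: dq = -gain * pinv(J) @ error."""
    return -gain * np.linalg.pinv(J) @ error
```

In a loop, the controller applies `servo_step`, observes the resulting feature displacement, and refines the Jacobian with `broyden_update`; no camera or manipulator model enters the computation.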

    Markerless visual servoing on unknown objects for humanoid robot platforms

    To reach precisely for an object with a humanoid robot, good knowledge of the end-effector pose and of the object's pose and shape is of central importance. In this work we propose a framework for markerless visual servoing on unknown objects, divided into four main parts: I) a least-squares minimization problem is formulated to find the volume of the object graspable by the robot's hand using its stereo vision; II) a recursive Bayesian filter, based on Sequential Monte Carlo (SMC) filtering, estimates the 6D pose (position and orientation) of the robot's end-effector without the use of markers; III) a nonlinear constrained optimization problem is formulated to compute the desired graspable pose about the object; IV) an image-based visual servo controller commands the robot's end-effector toward the desired pose. We demonstrate the effectiveness and robustness of our approach with extensive experiments on the iCub humanoid robot platform, achieving real-time computation, smooth trajectories, and sub-pixel precision.
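
The recursive Bayesian filtering of part II can be illustrated, for a scalar toy state rather than the 6D pose, by one SMC (particle filter) predict-reweight-resample step. This is a hypothetical simplification; the noise levels and Gaussian likelihood are assumptions, not the paper's model.

```python
import numpy as np

def smc_step(particles, weights, z, motion_std=0.05, meas_std=0.1, rng=None):
    """One Sequential Monte Carlo step for a scalar state:
    diffuse particles, reweight by measurement likelihood, resample."""
    rng = np.random.default_rng(0) if rng is None else rng
    # predict: diffuse particles with the motion model
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # update: weight by Gaussian likelihood of the measurement z
    lik = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights = weights * lik
    weights /= weights.sum()
    # resample to avoid weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Iterating this step concentrates the particle cloud around the true state; the 6D version replaces the scalar state with position plus orientation and the likelihood with a vision-based one.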

    Vision-based control of a knuckle boom crane with online cable length estimation

    A vision-based controller for a knuckle boom crane is presented. The controller is used to control the motion of the crane tip while compensating for payload oscillations. The oscillations of the payload are measured with three cameras fixed to the crane king, which track two spherical markers attached to the payload cable. Based on color and size information, each camera identifies the image points corresponding to the markers. The payload angles are then determined by linear triangulation of the image points. An extended Kalman filter is used to estimate the payload angles and angular velocity. The length of the payload cable is also estimated, using a least-squares technique with projection. The crane is controlled by a linear cascade controller in which the inner loop damps out the pendulum oscillation and the outer loop controls the crane tip. The control variable is the commanded crane tip acceleration, which is converted to a velocity command by a velocity loop. The performance of the control system is studied experimentally on a scaled laboratory version of a knuckle boom crane.
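
The linear triangulation step can be sketched with the standard DLT construction: each pixel observation contributes two linear constraints on the homogeneous 3D point, and the point is recovered from the SVD null space. The projection matrices and coordinates below are hypothetical, not the crane setup's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) image coordinates of the same point in each view
    Returns the 3D point in inhomogeneous coordinates.
    """
    # each view contributes two rows of the homogeneous system A @ X = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # right singular vector of smallest singular value
    return X[:3] / X[3]
```

With three cameras, as in the abstract, a third view simply adds two more rows to `A`, and the least-squares solution averages out pixel noise.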

    Generalization of reference filtering control strategy for 2D/3D visual feedback control of industrial robot manipulators

    This is an Author's Accepted Manuscript of an article published as: Solanes, J. E., Muñoz-Benavent, P., Armesto, L., Gracia, L., & Tornero, J. (2022). Generalization of reference filtering control strategy for 2D/3D visual feedback control of industrial robot manipulators. International Journal of Computer Integrated Manufacturing, 35(3), 229-246. 2021 Informa UK Limited, trading as Taylor & Francis Group, available online at: https://doi.org/10.1080/0951192X.2021.1973108.

    [EN] This paper develops the application of the Dual Rate Dual Sampling Reference Filtering Control Strategy to 2D and 3D visual feedback control. This strategy makes it possible to overcome the problem of sensor latency and to address control-task failure caused by visual features leaving the camera's field of view. In particular, a Dual Rate Kalman Filter is used to generate inter-sample estimations of the visual features to deal with vision-sensor latency, whereas a Dual Rate Extended Kalman Filter Smoother is used to generate more convenient visual-feature trajectories in the image plane. Both 2D and 3D visual feedback control approaches are analyzed in depth throughout the paper, as is the overall system performance with different visual feedback controllers, providing a set of results that highlight the improvements in solution reachability, robustness, and time-domain response. The proposed control strategy has been validated on an industrial system with hard real-time constraints, consisting of a 6-DOF industrial manipulator, a 5 MP camera, and a PLC as controller.

    This work was supported in part by the Spanish Government under projects PID2020-117421RB-C21 and PID2020-116585GB-I00, and in part by the Generalitat Valenciana under project GV/2021/181.
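
The inter-sample estimation idea behind the Dual Rate Kalman Filter can be sketched as follows: the vision sensor delivers a feature measurement only every few control periods, so the filter predicts at the fast control rate and corrects only when a measurement arrives. The constant-velocity model and noise levels are illustrative assumptions, not the paper's design.

```python
import numpy as np

def dual_rate_kf(meas, n_sub, dt, q=1e-3, r=1e-2):
    """Dual-rate Kalman filter sketch for one scalar visual feature.

    meas  : slow-rate measurements (one every n_sub fast periods)
    n_sub : fast control periods per vision sample
    Returns feature estimates at the fast rate.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[meas[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in meas:
        # slow-rate correction when a vision measurement arrives
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0]))
        # fast-rate inter-sample predictions between measurements
        for _ in range(n_sub - 1):
            x = F @ x
            P = F @ P @ F.T + Q
            out.append(float(x[0]))
    return out
```

The fast-rate predictions are what the inner control loop consumes; the real system does this per feature coordinate, with the smoother shaping the reference trajectory on top.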

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably. There is a growing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although there has been significant progress in visual servoing, some challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto the 2D image, which occurs in the camera, creates one source of uncertainty in the system. Another source of uncertainty lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots, in which the adaptive law deals with the uncertainties of the monocular camera in the eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method increases the system response speed and improves the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image-feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where the image features leave the camera's FOV.
The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures successful servoing in the case of feature loss. Next, to deal with external disturbances and uncertainties in the depth of the features, a third control method is designed that combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth method, a semi-off-line trajectory planning method, is developed to perform IBVS tasks on a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles. The parameters of the velocity profile are then determined such that the profile takes the robot to its desired position. This is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning. This yields a convex optimization problem, which leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation imposed by the camera's FOV. All the algorithms developed in the thesis are validated through tests on a 6-DOF Denso robot in an eye-in-hand configuration.
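
The classic IBVS law underlying controllers like these computes the camera velocity from stacked point-feature interaction matrices, v = -λ L⁺ e. The textbook sketch below is not the thesis code; the feature depths and gain are assumptions, and it is exactly this dependence on depth Z that the thesis's SMC component is designed to robustify.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Classic IBVS law v = -lam * pinv(L) @ e for a set of image points.

    points, desired : lists of (x, y) normalized image coordinates
    depths          : estimated depth of each point
    Returns the 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
    """
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

With four or more non-degenerate points, L has full column rank and the pseudo-inverse gives a unique least-squares velocity; with fewer points it returns the minimum-norm solution.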

    A Novel Uncalibrated Visual Servoing Controller Based on Model-Free Adaptive Control Method with Neural Network

    Nowadays, with the continuous expansion of application scenarios for robotic arms, nonspecialists increasingly come into contact with them. In terms of robotic arm visual servoing, however, traditional Position-Based Visual Servoing (PBVS) requires a lot of calibration work, which is challenging for nonspecialists. To cope with this situation, Uncalibrated Image-Based Visual Servoing (UIBVS) frees people from tedious calibration work. This work applies a model-free adaptive control (MFAC) method, in which the parameters of the controller are updated in real time, giving better suppression of changes in the system and the environment. An artificial neural network is applied in the design of the controller and of the estimator of the hand-eye relationship. The neural network is updated with knowledge of the system's input and output information in the MFAC method. Inspired by the "predictive model" and "receding horizon" concepts of the Model Predictive Control (MPC) method, and introducing similar structures into our algorithm, we realize uncalibrated visual servoing for both stationary targets and moving trajectories. Simulated experiments with a robotic manipulator are carried out to validate the proposed algorithm. Comment: 16 pages, 8 figures.
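
The MFAC idea can be illustrated with the standard compact-form update for a single-input single-output loop, without the neural-network estimator the paper adds on top: a pseudo-partial-derivative phi is estimated online from input/output increments only, so no plant model is needed. The plant, gains, and reset rule below are illustrative assumptions.

```python
def mfac_controller(y_ref, plant, n_steps=200, eta=0.8, mu=1.0, rho=0.6, lam=0.1):
    """Compact-form model-free adaptive control (CFDL-MFAC) sketch.

    y_ref : setpoint
    plant : function (y, u) -> next output, treated as a black box
    Returns the closed-loop output history.
    """
    y, u, phi = 0.0, 0.0, 1.0
    y_prev, u_prev = 0.0, 0.0
    hist = []
    for _ in range(n_steps):
        du, dy = u - u_prev, y - y_prev
        # estimate the pseudo-partial-derivative from I/O increments only
        phi = phi + eta * du / (mu + du * du) * (dy - phi * du)
        if abs(phi) < 1e-5:
            phi = 1.0                      # standard reset rule
        u_prev, y_prev = u, y
        # incremental control law driven by the current tracking error
        u = u + rho * phi / (lam + phi * phi) * (y_ref - y)
        y = plant(y, u)
        hist.append(y)
    return hist
```

Because the control law is incremental, it has integral action and drives the steady-state error to zero whenever the adaptive loop stays stable.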

    Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data

    In this work, we propose a robust network-in-the-loop control system for the autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in pose estimation accuracy of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.

    Feature-based motion control for near-repetitive structures

    In many manufacturing processes, production steps are carried out on repetitive structures, which consist of identical features placed in a repetitive pattern. In the production of these repetitive structures, one or more consecutive steps are carried out on the features to create the final product. Key to obtaining high product quality is positioning the tool with respect to each feature of the repetitive structure with high accuracy. In current industrial practice, local position sensors such as motor encoders are used to separately measure the metric position of the tool and of the stage on which the repetitive structure is mounted. Here, the final alignment accuracy relies directly on assumptions such as thermal stability, infinite machine-frame stiffness, and a constant pitch between successive features. As the size of these repetitive structures grows, these assumptions are often difficult to satisfy in practice. The main goal of this thesis is to design control approaches for accurately positioning the tool with respect to the features without the need for the aforementioned assumptions. In this thesis, visual servoing, i.e., using machine vision data in the servo loop to control the motion of a system, is used to control the relative position between the tool and the features. By using vision as a measurement device, the relevant dynamics and disturbances become measurable and can be accounted for in a non-collocated control setting. In many cases, the pitch between features is subject to small imperfections, e.g., due to the finite accuracy of preceding process steps or thermal expansion. The distance between two features is therefore unknown a priori, so setpoints cannot be constructed a priori. In this thesis, a novel feature-based position measurement is proposed, with the advantage that the feature-based target position of every feature is known a priori.
Motion setpoints can be defined from feature to feature without knowing the exact absolute metric position of the features beforehand. In addition to feature-to-feature movements, process steps involving movements with respect to the features, e.g., engraving or cutting, are implemented to increase the versatility of the movements. Final positioning accuracies of 10 µm are attained. For feature-to-feature movements with varying distances between the features, a novel feedforward control strategy is developed based on iterative learning control (ILC) techniques. In this case, metric setpoints from feature to feature are constructed by scaling a nominal setpoint to handle the pitch imperfections. These scale-varying setpoints are applied during the learning process, while second-order ILC is used to relax the classical ILC requirement that the setpoint be the same every trial. The final position accuracy is within 5 µm while scale-varying setpoints are applied. The proposed control design approaches are validated in practice on an industrial application, where the task is to position a tool with respect to the discrete semiconductors on a wafer. A visual servoing setup capable of attaining a 1 kHz frame rate is realized. It consists of an xy-stage on which a wafer containing the discrete semiconductor products is clamped. A camera looks down onto the wafer and is used for position feedback. The time delay of the system is 2.5 ms and the variation of the position measurement is 0.3 µm (3σ).
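
The learning-control idea can be illustrated with a first-order P-type ILC sketch; the thesis uses a second-order variant to handle trial-varying, scaled setpoints, and the plant model and gains below are illustrative assumptions. A nominal setpoint is scaled to the measured pitch, and the feedforward input is refined from trial to trial using the previous trial's error.

```python
import numpy as np

def scale_setpoint(r_nominal, pitch, pitch_nominal):
    """Scale the nominal feature-to-feature setpoint to the measured pitch."""
    return r_nominal * (pitch / pitch_nominal)

def ilc_trial(u, r, a=0.3, b=1.0):
    """Simulate one trial of a first-order plant y[t+1] = a*y[t] + b*u[t]."""
    y = np.zeros_like(r)
    for t in range(len(r) - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

def ilc_update(u, e, gamma=0.8):
    """P-type shift-ahead ILC: u_{k+1}[t] = u_k[t] + gamma * e_k[t+1]."""
    u_next = u.copy()
    u_next[:-1] += gamma * e[1:]
    return u_next
```

Repeating trial and update drives the tracking error toward zero for this plant; the second-order variant in the thesis additionally combines the inputs of the last two trials so that convergence survives setpoints that change scale every trial.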

    Autonomous Visual Servo Robotic Capture of Non-cooperative Target

    This doctoral research develops and experimentally validates a vision-based control scheme for the autonomous capture of a non-cooperative target by robotic manipulators, for active space debris removal and on-orbit servicing. It focuses on the final capture stage by robotic manipulators, after the orbital rendezvous and proximity maneuvers have been completed. Two challenges have been identified and investigated in this stage: dynamic estimation of the non-cooperative target and autonomous visual servo robotic control. First, an integrated algorithm combining photogrammetry and an extended Kalman filter is proposed for dynamic estimation of the non-cooperative target, whose motion is unknown in advance. To improve the stability and precision of the algorithm, the extended Kalman filter is enhanced by dynamically correcting the distribution of the filter's process noise. Second, the concept of incremental kinematic control is proposed to avoid the multiple solutions that arise in solving the inverse kinematics of robotic manipulators. The proposed target motion estimation and visual servo control algorithms are validated experimentally on a custom-built visual servo manipulator-target system. Electronic hardware for the robotic manipulator and computer software for the visual servo are custom designed and developed. The experimental results demonstrate the effectiveness and advantages of the proposed vision-based robotic control for the autonomous capture of a non-cooperative target. Furthermore, a preliminary study is conducted for a future extension of the robotic control that considers flexible joints.
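
The idea of dynamically correcting the filter's process noise can be sketched with a scalar constant-velocity Kalman filter whose process-noise level is re-scaled from recent innovation statistics. This is an illustrative innovation-based adaptation, not the thesis implementation; the model, window, and clipping bounds are assumptions.

```python
import numpy as np

def adaptive_kf(meas, dt=0.1, r=0.04, q0=1e-4, window=8):
    """Constant-velocity Kalman filter with innovation-based adaptation
    of the process-noise level q."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    G = np.array([[0.5 * dt ** 2], [dt]])   # noise input shaping
    q = q0
    x = np.array([[meas[0]], [0.0]])
    P = np.eye(2)
    innovations = []
    estimates = []
    for z in meas:
        # predict with the current process-noise level
        x = F @ x
        P = F @ P @ F.T + q * (G @ G.T)
        S = float(H @ P @ H.T) + r
        nu = z - float(H @ x)
        innovations.append(nu)
        # adapt q: match predicted innovation variance to the sample variance
        if len(innovations) >= window:
            c = np.var(innovations[-window:]) / S
            q = q * float(np.clip(c, 0.1, 10.0))   # bounded correction
        # correct
        K = (P @ H.T) / S
        x = x + K * nu
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0]))
    return estimates
```

When the target maneuvers, the innovations grow and q is inflated so the filter trusts the photogrammetric measurements more; when the motion is smooth, q shrinks and the estimates are smoothed.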