
    Motion estimation of planar curves and their alignment using visual servoing

    Motion estimation and vision-based control have been steadily improving research areas in recent years. Visual motion estimation is the determination of underlying motion parameters from image data. Visual servoing, on the other hand, refers to the closed-loop control of robotic systems using vision. Solving these problems is rather easy for objects with simple geometric features, such as points and lines. However, they become challenging for curved objects that lack such features. This thesis proposes novel vision-based estimation and control techniques that use object boundary information. Object boundaries are represented by planar algebraic curves, and decompositions of these curves are used to extract features for motion estimation and visual servoing. The motion estimation algorithm uses the parameters of the line factors resulting from the decomposition of the curve, whereas the visual servoing method employs the intersections of those lines. The motion estimation algorithm is verified with several simulations and experiments. The visual servoing algorithm, developed for the arbitrary alignment of a planar object, is tested both in simulations on a 6 DOF Puma 560 robot and in experiments on a 2 DOF SCARA robot. Results are quite promising.
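
    As a minimal illustration of the geometric machinery involved (not the thesis algorithm itself), the following Python sketch computes the intersection of two line factors in homogeneous coordinates; the line coefficients are made up for the example.

        import numpy as np

        # Two lines in homogeneous form ax + by + c = 0, written as [a, b, c].
        # A degenerate conic factors into such a pair; their intersection is a
        # candidate point feature (illustrative values).
        l1 = np.array([1.0, -1.0, 0.0])   # x - y = 0
        l2 = np.array([1.0,  1.0, -2.0])  # x + y - 2 = 0

        # In projective geometry the intersection of two lines is their cross product.
        p = np.cross(l1, l2)
        p = p / p[2]                      # normalize the homogeneous point
        print(p[:2])                      # -> [1. 1.]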

    Visual servoing with nonlinear observer

    A visual servo system is a robot control system that incorporates a vision sensor in the feedback loop. Since the robot controller is also in the visual servo loop, compensation of the robot dynamics is important for high-speed tasks. Moreover, estimation of the object motion is necessary for real-time tracking because the visual information includes considerable delay. This paper proposes a nonlinear model-based controller and a nonlinear observer for visual servoing. The observer estimates the object motion, and the nonlinear controller makes the closed-loop system asymptotically stable based on the estimated object motion. The effectiveness of the observer-based controller is verified by simulations and experiments on a two-link planar direct drive robot.
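
    A minimal sketch of the observer idea, with an illustrative constant-velocity linear model standing in for the paper's nonlinear observer; the gain values here are hand-picked for the example.

        import numpy as np

        dt = 0.01
        # Constant-velocity model of the object motion: state = [position, velocity].
        A = np.array([[1.0, dt], [0.0, 1.0]])
        C = np.array([[1.0, 0.0]])          # the camera measures position only
        L = np.array([[0.5], [5.0]])        # observer gain (hand-tuned here)

        x_hat = np.zeros(2)
        for k in range(1000):
            t = k * dt
            y = np.sin(t)                   # simulated object position measurement
            # Luenberger update: predict, then correct with the measurement residual
            x_hat = A @ x_hat + (L @ (np.atleast_1d(y) - C @ x_hat)).ravel()
        print(x_hat)                        # closely tracks the target's position and velocity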

    Exploring Convolutional Networks for End-to-End Visual Servoing

    Present image-based visual servoing approaches rely on extracting hand-crafted visual features from an image. Choosing the right set of features is important, as it directly affects the performance of any approach. Motivated by recent breakthroughs in the performance of data-driven methods on recognition and localization tasks, we aim to learn visual feature representations suitable for servoing tasks in unstructured and unknown environments. In this paper, we present an end-to-end learning-based approach for visual servoing in diverse scenes where knowledge of camera parameters and scene geometry is not available a priori. This is achieved by training a convolutional neural network over color images with synchronised camera poses. Through experiments performed in simulation and on a quadrotor, we demonstrate the efficacy and robustness of our approach for a wide range of camera poses in both indoor and outdoor environments.
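
    A toy sketch of the architecture class the paper builds on (not its actual network): a small convolutional network in PyTorch that maps an RGB image to a 6-D servo command.

        import torch
        import torch.nn as nn

        class ServoNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, 6)    # 3 translation + 3 rotation components

            def forward(self, x):
                f = self.features(x).flatten(1)
                return self.head(f)

        net = ServoNet()
        cmd = net(torch.randn(1, 3, 128, 128))  # dummy image batch
        print(cmd.shape)                        # torch.Size([1, 6])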

    Markerless visual servoing on unknown objects for humanoid robot platforms

    To reach precisely for an object with a humanoid robot, it is of central importance to have good knowledge of the end-effector pose and of the object pose and shape. In this work we propose a framework for markerless visual servoing on unknown objects, which is divided into four main parts: I) a least-squares minimization problem is formulated to find the volume of the object graspable by the robot's hand using its stereo vision; II) a recursive Bayesian filtering technique, based on Sequential Monte Carlo (SMC) filtering, estimates the 6D pose (position and orientation) of the robot's end-effector without the use of markers; III) a nonlinear constrained optimization problem is formulated to compute the desired graspable pose about the object; IV) an image-based visual servo control commands the robot's end-effector toward the desired pose. We demonstrate the effectiveness and robustness of our approach with extensive experiments on the iCub humanoid robot platform, achieving real-time computation, smooth trajectories and sub-pixel precision.
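
    The SMC machinery of part II can be illustrated with a one-dimensional bootstrap particle filter; this is a hedged stand-in for the paper's 6D pose filter, with made-up noise levels.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 500                                # number of particles
        particles = rng.normal(0.0, 1.0, N)    # initial guesses of the (1-D) pose
        true_pose, meas_std = 0.7, 0.1

        for _ in range(20):
            # Predict: diffuse the particles with process noise
            particles += rng.normal(0.0, 0.05, N)
            # Update: weight each particle by the likelihood of a noisy measurement
            z = true_pose + rng.normal(0.0, meas_std)
            w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
            w /= w.sum()
            # Resample (systematic resampling would be the usual refinement)
            particles = rng.choice(particles, size=N, p=w)

        print(particles.mean())                # approximately 0.7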

    Robust visual servoing in 3D reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3-D applications.
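
    A rough sketch of the flow-measurement ingredient, using OpenCV's dense Farneback flow on a synthetic monocular image pair (the paper computes binocular flow in real time; the image and shift here are illustrative).

        import cv2
        import numpy as np

        rng = np.random.default_rng(1)
        prev = (rng.random((120, 160)) * 255).astype(np.uint8)
        prev = cv2.GaussianBlur(prev, (9, 9), 0)        # smooth texture for stable flow
        curr = np.roll(prev, shift=3, axis=1)           # pure 3-pixel horizontal shift

        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 21, 5, 5, 1.1, 0)
        # Mean flow recovers the image motion, roughly (3, 0) away from the wrapped border
        print(flow[..., 0].mean(), flow[..., 1].mean())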

    Visual Servoing from Deep Neural Networks

    We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. A positioning error of less than one millimeter is obtained in experiments with a 6 DOF robot.
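
    Once a relative pose is estimated, a classical pose-based control law can turn it into a velocity command. A minimal sketch with made-up gain and pose values (the paper's exact control scheme may differ).

        import numpy as np

        lam = 0.5                                  # control gain (illustrative)
        t = np.array([0.02, -0.01, 0.10])          # translation error, meters
        theta_u = np.array([0.0, 0.05, -0.02])     # axis-angle rotation error, radians

        v = -lam * t                               # translational velocity command
        omega = -lam * theta_u                     # rotational velocity command
        print(v, omega)                            # both errors decay exponentially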

    Image based visual servoing using algebraic curves applied to shape alignment

    Visual servoing schemes generally employ various image features (points, lines, moments, etc.) in their control formulation. This paper presents a novel method for using boundary information in visual servoing. Object boundaries are modeled by algebraic equations and decomposed as a unique sum of products of lines. We propose that these lines can be used to extract useful features for visual servoing purposes. In this paper, the intersections of these lines are used as point features in visual servoing. Simulations are performed with a 6 DOF Puma 560 robot using the Matlab Robotics Toolbox for the alignment of a free-form object. Experiments are also carried out with a 2 DOF SCARA direct drive robot. Both simulation and experimental results are quite promising and show the potential of our new method.
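
    A minimal sketch of the image-based servoing step once such point features are available: the classical point-feature interaction matrix and the pseudo-inverse control law, with illustrative feature values.

        import numpy as np

        # For a normalized image point (x, y) at depth Z, the interaction matrix L
        # relates the 6-D camera velocity to the feature velocity.
        def interaction_matrix(x, y, Z):
            return np.array([
                [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
                [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
            ])

        # Two intersection points (illustrative values) and the feature error
        pts = [(0.1, 0.2, 1.0), (-0.2, 0.1, 1.2)]
        err = np.array([0.05, -0.02, 0.01, 0.03])    # current - desired features
        L = np.vstack([interaction_matrix(*p) for p in pts])
        v = -0.5 * np.linalg.pinv(L) @ err           # 6-D camera velocity command
        print(v)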

    Generic decoupled image-based visual servoing for cameras obeying the unified projection model

    In this paper a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The proposed decoupled scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motions, which allows the control of translational motion independently from rotational motion. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6 DOF robot platform.
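
    The rotation invariance that enables the decoupling can be checked in a few lines: angles between projections on the unit sphere are unchanged by a camera rotation. A small sketch with arbitrary points.

        import numpy as np

        # Under the unified model, a 3-D point projects (after normalization) onto
        # the unit sphere; a camera rotation moves the projections rigidly.
        def to_sphere(P):
            return P / np.linalg.norm(P)

        P1, P2 = np.array([0.2, 0.1, 1.0]), np.array([-0.1, 0.3, 1.5])
        th = 0.4
        R = np.array([[np.cos(th), -np.sin(th), 0],
                      [np.sin(th),  np.cos(th), 0],
                      [0, 0, 1]])

        a_before = np.dot(to_sphere(P1), to_sphere(P2))
        a_after = np.dot(to_sphere(R @ P1), to_sphere(R @ P2))
        print(a_before, a_after)   # identical: the feature is rotation-invariant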

    Sim2Real View Invariant Visual Servoing by Recurrent Control

    Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints and angles, even in the presence of optical distortions. In robotics, this ability is referred to as visual servoing: moving a tool or end-point to a desired location using primarily visual feedback. In this paper, we study how viewpoint-invariant visual servoing skills can be learned automatically in a robotic manipulation scenario. To this end, we train a deep recurrent controller that can automatically determine which actions move the end-point of a robotic arm to a desired object. The problem that must be solved by this controller is fundamentally ambiguous: under severe variation in viewpoint, it may be impossible to determine the actions in a single feedforward operation. Instead, our visual servoing system must use its memory of past movements to understand how the actions affect the robot motion from the current viewpoint, correcting mistakes and gradually moving closer to the target. This ability is in stark contrast to most visual servoing methods, which either assume known dynamics or require a calibration phase. We show how we can learn this recurrent controller using simulated data and a reinforcement learning objective. We then describe how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The adapted model can servo to previously unseen objects from novel viewpoints on a real-world Kuka IIWA robotic arm. For supplementary videos, see: https://fsadeghi.github.io/Sim2RealViewInvariantServo
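
    A toy sketch of the controller family described here (not the paper's model): a recurrent policy in PyTorch whose hidden state accumulates memory of past observations and actions; the dimensions are made up for the example.

        import torch
        import torch.nn as nn

        class RecurrentPolicy(nn.Module):
            def __init__(self, obs_dim=64, act_dim=3, hidden=128):
                super().__init__()
                self.lstm = nn.LSTM(obs_dim + act_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, act_dim)

            def forward(self, obs, prev_act, state=None):
                # Condition each action on the history of observations and actions
                x = torch.cat([obs, prev_act], dim=-1)
                h, state = self.lstm(x, state)
                return self.head(h), state

        policy = RecurrentPolicy()
        obs = torch.randn(1, 10, 64)    # 10 steps of already-encoded image features
        acts = torch.zeros(1, 10, 3)
        out, _ = policy(obs, acts)
        print(out.shape)                # torch.Size([1, 10, 3])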