    Unfalsified visual servoing for simultaneous object recognition and pose tracking

    In a complex environment, simultaneous object recognition and tracking remains a challenging problem in computer vision and robotics. Current approaches are often fragile due to spurious feature matching and local convergence in pose determination, and once a failure occurs they lack a mechanism to recover automatically. In this paper, data-driven unfalsified control is proposed to solve this problem in visual servoing. The method recognizes a target by matching image features with a 3-D model and then tracks them through dynamic visual servoing. A supervisory mechanism falsifies or unfalsifies the features according to their tracking performance, and supervisory visual servoing is repeated until a consensus between the model and the selected features is reached, so that model recognition and object tracking are accomplished. Experiments show the effectiveness and robustness of the proposed algorithm against matching and tracking failures caused by disturbances such as fast motion, occlusion, and illumination variation.
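    The supervisory falsify/unfalsify loop described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the performance measure (`track_error`), the threshold, and the representation of candidate feature sets are all assumptions made here for clarity.

    ```python
    import numpy as np

    def track_error(features, model_points):
        # Hypothetical tracking-performance measure: mean distance (in pixels)
        # between tracked features and their model-predicted positions.
        return float(np.mean(np.linalg.norm(features - model_points, axis=1)))

    def unfalsified_selection(candidate_sets, model_points, threshold=2.0):
        """Falsify candidate feature sets whose tracking performance exceeds
        the threshold; return the first unfalsified set (the consensus)."""
        for features in candidate_sets:
            if track_error(features, model_points) <= threshold:
                return features   # unfalsified: model and features agree
        return None               # every candidate set was falsified
    ```

    In the paper this loop runs inside the servoing controller, so falsification is driven by closed-loop tracking behavior rather than a single static error as sketched here.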

    Virtual Environment for Development of Visual Servoing Control Algorithms

    We consider whether a virtual environment can be used for the development of visual servoing control algorithms, and we use one to compare several controllers. The virtual environment is written in Java and consists of two industrial robots, an IRB 2000 and an IRB 6, a stereo camera system with two cameras mounted on the end-effector of the IRB 6, a rolling ball, and a bar. The experiment consists of tracking and grasping the ball with the different controllers: the IRB 2000 must grasp the rolling ball, and its control is implemented in Matlab. Three controllers are compared, each a function of the error between the ball and the gripper. The first is a P-controller with a proportional gain. The second is image-based Jacobian control; since the robot tracks the ball with a small delay under this controller, it is augmented with feedforward. The robot grasps the ball when the error between the ball and the gripper falls below a tolerance. In these two controllers, depth is computed from the two cameras by stereovision, so the cameras must be calibrated. The third is a hybrid controller mixing image-based and position-based control: X and Y are controlled in image space and Z in Cartesian space. Here the 3-D reconstruction is done from motion, so calibrated cameras are not needed, and depth is estimated with adaptive control techniques that recover the ball's velocity on-line. Once the estimate of the ball's motion is stable, the robot starts tracking the ball.
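    The image-based Jacobian control mentioned in this abstract is the classical IBVS law: stack the interaction matrices of the point features and command a camera twist proportional to the pseudoinverse of the stack times the image error. The sketch below uses the standard point-feature interaction matrix; the function names, gain value, and normalized-coordinate convention are this sketch's assumptions, not details from the paper.

    ```python
    import numpy as np

    def interaction_matrix(x, y, Z):
        """Interaction (image Jacobian) matrix of a normalized image point
        (x, y) at depth Z, relating feature velocity to the camera twist
        (vx, vy, vz, wx, wy, wz)."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1 + y * y, -x * y, -x],
        ])

    def ibvs_p_control(features, targets, depths, gain=0.5):
        """Proportional IBVS law: v = -gain * pinv(L) @ e."""
        e = (features - targets).reshape(-1)           # stacked image error
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        return -gain * np.linalg.pinv(L) @ e           # commanded camera twist
    ```

    The feedforward variant in the paper would add a term compensating the ball's estimated velocity to this proportional command.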

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot depending on the visual information extracted from the images captured by one or several cameras. On the vision side, ongoing research addresses issues such as the use of different types of image features (or different types of cameras, such as RGB-D cameras), high-speed image processing, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.

    Trajectory Servoing: Image-Based Trajectory Tracking Using SLAM

    This paper describes an image-based visual servoing (IBVS) system for a nonholonomic robot to achieve good trajectory following without real-time robot pose information and without a known visual map of the environment. We call it trajectory servoing. The critical component is a feature-based, indirect SLAM method that provides a pool of available features with estimated depth, so that they may be propagated forward in time to generate image feature trajectories for visual servoing. Short- and long-distance experiments show the benefits of trajectory servoing for navigating unknown areas without absolute positioning. Trajectory servoing is shown to be more accurate than pose-based feedback when both rely on the same underlying SLAM system.
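    The key step of propagating mapped features forward in time can be sketched as re-projecting each SLAM feature (a 3-D point with estimated depth) through the planned camera poses to obtain its image-plane trajectory. The pinhole model below, the intrinsic values, and the (R, t) pose convention are assumptions of this sketch, not the paper's actual pipeline.

    ```python
    import numpy as np

    def project(point_cam, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
        """Pinhole projection of a 3-D point expressed in the camera frame."""
        X, Y, Z = point_cam
        return np.array([fx * X / Z + cx, fy * Y / Z + cy])

    def feature_trajectory(point_world, camera_poses):
        """Propagate one mapped feature through a sequence of planned camera
        poses (R, t pairs mapping world to camera frame) to generate the
        image feature trajectory used as the servoing reference."""
        trajectory = []
        for R, t in camera_poses:
            p_cam = R @ point_world + t
            trajectory.append(project(p_cam))
        return np.array(trajectory)
    ```

    Each such image trajectory then serves as the time-varying target of the IBVS controller, in place of a fixed goal image.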

    A Deep Neural Network Sensor for Visual Servoing in 3D Spaces


    Trajectory Servoing: Image-Based Trajectory Tracking without Absolute Positioning

    The thesis describes an image-based visual servoing (IBVS) system for a nonholonomic robot to achieve good trajectory following without real-time robot pose information and without a known visual map of the environment. We call it trajectory servoing. The critical component is a feature-based, indirect SLAM method that provides a pool of available features with estimated depth and covariance, so that they may be propagated forward in time to generate image feature trajectories with uncertainty information for visual servoing. Short- and long-distance experiments show the benefits of trajectory servoing for navigating unknown areas without absolute positioning. Trajectory servoing is shown to be more accurate than SLAM pose-based feedback and is further improved by a weighted least-squares controller using covariance from the underlying SLAM system.
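    The weighted least-squares controller mentioned at the end can be sketched as a covariance-weighted variant of the IBVS pseudoinverse: features with large SLAM depth covariance receive low weight. The diagonal-weighting form and gain below are this sketch's assumptions, not the thesis's exact formulation.

    ```python
    import numpy as np

    def weighted_ibvs(L, e, cov_diag, gain=0.5):
        """Weighted least-squares IBVS: solve min ||W^(1/2) (L v + e/gain)||
        with W = diag(1 / sigma^2), so uncertain features (large covariance
        from SLAM) contribute less to the commanded velocity."""
        W = np.diag(1.0 / np.asarray(cov_diag, dtype=float))
        return -gain * np.linalg.solve(L.T @ W @ L, L.T @ W @ e)
    ```

    With uniform weights this reduces to the ordinary pseudoinverse law, so the weighting only changes behavior when feature uncertainties differ.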

    Adaptive Shape Servoing of Elastic Rods using Parameterized Regression Features and Auto-Tuning Motion Controls

    In this paper, we present a new vision-based method to control the shape of elastic rods with robot manipulators. Our method computes parameterized regression features from online sensor measurements that automatically quantify the object's configuration and establish an explicit shape servo-loop. To automatically deform the rod into a desired shape, our adaptive controller iteratively estimates the differential transformation between the robot's motion and the resulting shape changes; this capability makes it possible to manipulate objects with unknown mechanical models. An auto-tuning algorithm is introduced to adjust the robot's shaping motion in real time based on optimal performance criteria. To validate the proposed theory, we present a detailed numerical and experimental study with vision-guided robotic manipulators.
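    Iteratively estimating the differential transformation between robot motion and shape change, when the mechanical model is unknown, is commonly done with a rank-1 (Broyden-style) Jacobian update. The sketch below shows that generic technique; it is an illustration under that assumption, not the estimator actually derived in the paper.

    ```python
    import numpy as np

    def broyden_update(J, delta_s, delta_r, alpha=1.0):
        """Rank-1 update of the estimated deformation Jacobian J mapping a
        small robot motion delta_r to the observed change delta_s in the
        shape (regression) features: J <- J + a*(ds - J dr) dr^T / (dr^T dr)."""
        denom = float(delta_r @ delta_r)
        if denom < 1e-12:
            return J                  # motion too small to be informative
        correction = np.outer(delta_s - J @ delta_r, delta_r) / denom
        return J + alpha * correction
    ```

    After each update the controller can use the pseudoinverse of the current estimate of J to choose the next shaping motion, so estimation and control run in the same loop.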

    Two solutions to the adaptive visual servoing problem
