35 research outputs found

    Visual Servoing

    The goal of this book is to introduce current visual servoing applications developed by leading researchers around the world and to offer knowledge that can also be applied widely in other fields. The book collects the main current studies on machine vision worldwide and makes a persuasive case for the applications in which machine vision is employed. Its contents demonstrate how machine vision theory is realized in different fields. Beginners will find it easy to follow the developments in visual servoing, while engineers, professors and researchers can study the chapters and go on to apply the methods elsewhere.

    Towards Practical Visual Servoing in Robotics


    Euclidean Structure from Uncalibrated Images


    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement both these mono-sensor controllers and multi-sensor controllers which combine several sensors.
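
    As a hedged sketch of how such mono-sensor strategies are often combined into a multi-sensor controller, the snippet below blends an image-based visual servoing velocity with a force correction through a selection matrix, in the spirit of classical hybrid vision/force control. The function name, gains, and the diagonal selection matrix S are illustrative assumptions, not a controller from the surveyed work.

        import numpy as np

        def hybrid_visual_force_command(e_img, L_pinv, f_meas, f_ref, S,
                                        k_vis=0.5, k_f=0.01):
            # e_img: stacked image-feature error; L_pinv: pseudo-inverse of the
            # interaction matrix (6 x 2N); f_meas, f_ref: measured and desired
            # 6-DOF wrench; S: 6x6 diagonal selection matrix, 1 on vision-
            # controlled axes and 0 on force-controlled axes.
            v_vis = -k_vis * L_pinv @ e_img        # visual-servoing velocity screw
            v_frc = -k_f * (f_meas - f_ref)        # drive wrench toward setpoint
            return S @ v_vis + (np.eye(6) - S) @ v_frc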

    Planning and Control of Mobile Robots in Image Space from Overhead Cameras

    In this work, we present a framework for the development of a planar mobile robot controller based on image-plane feedback. We show that such a motion controller can be designed in the image plane using only a subset of the parameters that relate the image plane to the ground plane, while still leveraging the simplifications offered by modeling the system as a differentially flat system. Our method relies on a waypoint-based trajectory generator, with all the waypoints specified in the image as seen by an overhead observer. We present results from simulation as well as from experiments that validate these ideas, and we discuss some directions for future work.
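
    The paper's design rests on differential flatness and the image-to-ground parameters; the sketch below shows only the underlying idea of closing the loop directly in the pixel coordinates of an overhead camera, steering a differential-drive robot toward an image-plane waypoint. The function name, gains, and the availability of a pixel-space pose estimate are assumptions for illustration.

        import numpy as np

        def image_space_step(pose_px, waypoint_px, k_v=0.5, k_w=1.5):
            # pose_px = (u, v, theta): robot pixel position and heading as
            # seen by the overhead camera; waypoint_px = (u_w, v_w).
            u, v, theta = pose_px
            du, dv = waypoint_px[0] - u, waypoint_px[1] - v
            dist = np.hypot(du, dv)                       # error, in pixels
            err = np.arctan2(dv, du) - theta
            err = (err + np.pi) % (2.0 * np.pi) - np.pi   # wrap to [-pi, pi)
            return k_v * dist, k_w * err                  # forward speed, turn rate

    Because the error is defined and driven to zero in the image itself, only the image-to-ground parameters that actually affect closed-loop behavior need to be known, which is the economy the abstract points to.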

    Image Based Visual Servoing Using Trajectory Planning and Augmented Visual Servoing Controller

    Robots and automated manufacturing machinery have become an inseparable part of industry. However, robotic systems are generally limited to operating in highly structured environments. Although sensors such as laser trackers, indoor GPS, and 3D metrology and tracking systems are used for positioning and tracking in manufacturing and assembly tasks, these devices are highly constrained by the working environment and operating speed, and they are generally very expensive. Integrating vision sensors with robotic systems, that is, visual servoing, instead allows robots to work in unstructured spaces by producing non-contact measurements of the working area. However, the projection of 3D space onto the camera's 2D image plane loses one dimension of data, and this is the root of the challenges in vision-based control. The nonlinearities and complex structure of a manipulator make the problem more challenging still. This project aims to develop new, reliable visual servoing methods suitable for real robotic tasks. Its main contributions are in two parts: a visual servoing controller and a trajectory planning algorithm.

    In the first part of the project, a new image-based visual servoing controller called Augmented Image Based Visual Servoing (AIBVS) is presented. A proportional-derivative (PD) controller is developed that generates acceleration as the controlling command of the robot, and the stability of the controller is analyzed using Lyapunov theory. The controller has been tested on a 6-DOF Denso robot. Experimental results on point features and image moment features demonstrate the performance of the proposed AIBVS and show that a damped response can be achieved using a PD controller with acceleration output; smoother feature and robot trajectories are also observed compared to those of conventional IBVS controllers. The controller is later applied to a moving-object catching task.

    Visual servoing controllers have shown difficulty in stabilizing the system in global space. Hence, in the second part of the project, a trajectory planning algorithm is developed to achieve global stability. The planning is carried out by parameterizing the camera's velocity screw with time-based profiles, whose parameters are determined so that the velocity profile guides the robot to its desired position by minimizing the error between the initial and desired features. This method provides a reliable path for the robot that respects all robotic constraints. The algorithm, tested on a Denso robot, is able to perform visual servoing tasks that are unstable when performed with visual servoing controllers alone.
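
    As a rough illustration of the machinery behind such a controller: in image-based visual servoing, each normalized point feature (x, y) at depth Z contributes the standard two rows of the interaction matrix relating feature motion to the camera velocity screw. The sketch below is a hedged approximation, not the thesis's AIBVS implementation; the function names, gains, and finite-difference error derivative are assumptions. It applies a PD law to the stacked image error and returns an acceleration-level command, as the abstract describes.

        import numpy as np

        def interaction_matrix(x, y, Z):
            # Standard 2x6 interaction (image Jacobian) matrix for one
            # normalized point feature (x, y) observed at depth Z.
            return np.array([
                [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
                [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
            ])

        def pd_acceleration_command(feat, feat_des, depths, e_prev, dt,
                                    kp=0.8, kd=0.2):
            # feat, feat_des: (N, 2) normalized image points; depths: (N,)
            # depth estimates. Returns a 6-DOF camera acceleration screw
            # and the current error (to be fed back as e_prev next step).
            L = np.vstack([interaction_matrix(x, y, Z)
                           for (x, y), Z in zip(feat, depths)])
            e = (feat - feat_des).ravel()          # stacked image error
            e_dot = (e - e_prev) / dt              # finite-difference derivative
            accel = -np.linalg.pinv(L) @ (kp * e + kd * e_dot)
            return accel, e

    In a conventional IBVS loop the same pseudo-inverse term is used directly as a velocity command; the PD/acceleration structure is what yields the damped response reported above.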

    Euclidean Calculation of Feature Points of a Rotating Satellite: A Daisy Chaining Approach

    The occlusion of feature points and/or feature points leaving the field of view of a camera is a significant practical problem that can lead to degraded performance or instability of visual servo control and vision-based estimation algorithms. By assuming that one known Euclidean distance between two feature points in an initial view is available, homography relationships and image geometry are used in this paper to determine the Euclidean coordinates of feature points in the field of view. A new daisy-chaining method is then used to relate the position and orientation of a plane defined by the feature points to other feature-point planes that are rigidly connected. Through these relationships, the Euclidean coordinates of the original feature points can be tracked even as they leave the field of view. This objective is motivated by the desire to track the Euclidean coordinates of feature points on one face of a satellite as it continually rotates and feature points become self-occluded. A numerical simulation is included to demonstrate that the Euclidean coordinates can be tracked even when they leave the field of view. However, the results indicate the need for a method to reconcile any accumulated error when the feature points return to the field of view.
    Nomenclature:
    A = intrinsic camera-calibration matrix
    d_j = distance to the jth plane along n_j
    F_j*, F_j = frames attached to the j* and j planes
    G_j = projective homography matrix of the jth frame
    H_j = Euclidean homography matrix of the jth frame
    I = fixed coordinate frame attached to the camera
    m_ji*, m_ji = normalized Euclidean coordinates of the ith feature point of the j* and j planes expressed in …
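
    As a sketch of the homography machinery the paper builds on (using OpenCV for illustration; the daisy-chaining step itself is not shown), the projective homography G between two views of a plane can be decomposed, given the intrinsic matrix (the paper's A), into candidate rotations R, scaled translations t/d, and plane normals n of the Euclidean homography H = R + (t/d) n^T:

        import cv2
        import numpy as np

        def plane_pose_from_homography(pts1, pts2, K):
            # pts1, pts2: (N, 2) pixel coordinates of matched coplanar feature
            # points in two views; K: intrinsic camera-calibration matrix (A).
            G, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)
            num, Rs, Ts, Ns = cv2.decomposeHomographyMat(G, K)
            # Each solution gives R and t/d (translation over plane depth);
            # visibility/cheirality checks prune the spurious solutions.
            return Rs, Ts, Ns

    The decomposition recovers translation only up to the unknown plane depth d; the paper's assumption of one known Euclidean distance between two feature points is what fixes this scale and yields actual Euclidean coordinates.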