240 research outputs found

    Robust visual servoing in 3d reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how the visual measures are obtained, and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3-D applications.

    Obstacle avoidance by changing running path for an autonomous running vehicle applying visual servoing

    This paper describes an improved running control algorithm, based on visual servoing, that avoids an obstacle by changing the running path, taking into account turning back along the path. It also describes an experimental autonomous running vehicle built to demonstrate the algorithm. As its vision sensor, the vehicle is equipped with a video-rate stereo rangefinder, developed in the authors' laboratory, which processes color images from stereo CCD cameras. From several basic autonomous running experiments, it is concluded that the experimental vehicle runs smoothly along any planned path composed of several teaching routes by transferring between routes. It is also concluded that the vehicle can turn back on a path that includes a turn-back during route transfer.

    Development of Multi-Robotic Arm System for Sorting System Using Computer Vision

    This paper develops a multi-robotic arm system and a stereo vision system to sort objects into the right position according to size and shape attributes. The robotic arm system consists of one master and three slave robots associated with three conveyor belts. Each robotic arm is controlled by a robot controller based on a microcontroller. A master controller is used for the vision system and for communicating with the slave robotic arms using the Modbus RTU protocol over an RS485 serial interface. The stereo vision system is built to determine the 3D coordinates of an object. Instead of rebuilding the entire disparity map, which is computationally expensive, the centroids of the object in the two images are calculated to determine the depth value. The 3D coordinates of the object are then computed using the pinhole camera model. Objects are picked up and placed on a conveyor branch according to their shape. The conveyor transports the object to the location of the slave robot. Based on the size attribute that the slave robot receives from the master, the object is picked and placed in the right position. Experimental results reveal the effectiveness of the system, which can be used in industrial processes to reduce the required time and improve the performance of the production line.
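    The centroid-based depth computation described in this abstract can be sketched as follows. The focal length, baseline and principal point below are illustrative assumptions, not values from the paper, and a rectified stereo pair is assumed so that disparity is purely horizontal.

```python
# Sketch of depth from centroid disparity in a rectified stereo pair,
# using the pinhole model Z = f * B / d. Camera parameters here are
# illustrative placeholders, not the paper's calibration values.
def centroid_depth(u_left, u_right, f_px=700.0, baseline_m=0.12):
    """Depth (m) from the horizontal disparity of the object centroid."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: check rectification")
    return f_px * baseline_m / disparity

def pixel_to_3d(u, v, z, f_px=700.0, cx=320.0, cy=240.0):
    """Back-project a pixel to camera coordinates with the pinhole model."""
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return x, y, z

# Example: centroid at u=334 in the left image, u=320 in the right.
z = centroid_depth(334.0, 320.0)          # 14 px disparity -> 6.0 m
point = pixel_to_3d(334.0, 240.0, z)
```

    Working only with the two centroids, as the paper suggests, avoids computing a dense disparity map when a single depth value per object is enough.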

    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    This paper presents a complete review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, the increase in computational capabilities, and the evolution of computer vision techniques, has enabled important advances in UAV technologies and applications. In particular, computer vision technologies integrated into UAVs make it possible to develop cutting-edge solutions to aerial perception difficulties, such as visual navigation algorithms, obstacle detection and avoidance, and aerial decision-making. These expert technologies have opened a wide spectrum of applications for UAVs beyond the classic military and defense purposes. Unmanned Aerial Vehicles and computer vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications have been developed to enhance autonomous UAV positioning or to avoid aerial collisions automatically, among others. The presented survey therefore focuses on artificial perception applications that represent important recent advances in the expert system field related to Unmanned Aerial Vehicles. The most significant advances in this field are presented, namely those able to address fundamental technical limitations such as visual odometry, obstacle detection, mapping and localization, et cetera. They have been analyzed based on their capabilities and potential utility, and the applications and UAVs are divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).

    Intention recognition for gaze controlled robotic minimally invasive laser ablation

    Eye tracking technology has shown promising results for hands-free control of robotically mounted cameras and tools. However, existing systems offer only limited capabilities for achieving the full range of camera motions in a safe, intuitive manner. This paper introduces a framework for the recognition of surgeon intention, allowing activation and control of the camera through natural gaze behaviour. The system is resistant to noise such as blinking, while allowing the surgeon to look away safely at any time. Furthermore, the paper presents a novel approach for controlling the translation of the camera along its optical axis using a combination of eye tracking and stereo reconstruction. Combining the two allows the system to determine which point in 3D space the user is fixating on, enabling a translation of the camera to achieve the optimal viewing distance. In addition, the eye tracking information is used to perform automatic targeting for laser ablation: the desired target point of the laser, mounted on a separate robotic arm, is determined from the eye tracking, removing the need to manually adjust the laser's target point before starting each new ablation. The calibration methodology used to obtain millimetre precision for laser targeting without the aid of visual servoing is described. Finally, a user study validating the system is presented, showing clear improvement, with median task times under half of those of a manually controlled robotic system.
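    The gaze-driven translation along the optical axis described in this abstract can be sketched as a two-step loop: triangulate the fixation depth from the binocular gaze disparity, then command a proportional translation toward a desired viewing distance. All parameter values below are illustrative assumptions, not the paper's calibration.

```python
# Hedged sketch of gaze-driven camera zoom: fixation depth from gaze
# disparity in a rectified stereo pair, then a proportional translation
# command along the optical axis. Parameters are illustrative only.
def fixation_depth(u_left, u_right, f_px=600.0, baseline_m=0.05):
    """Depth (m) of the fixated point from binocular gaze disparity."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: gaze rays do not converge")
    return f_px * baseline_m / disparity

def optical_axis_step(current_depth_m, desired_depth_m=0.15, gain=0.3):
    """Proportional translation command (m) along the optical axis:
    positive moves the camera toward the fixated point."""
    return gain * (current_depth_m - desired_depth_m)

# Example: gaze points at u=310 (left) and u=300 (right image).
depth = fixation_depth(310.0, 300.0)       # 10 px disparity
step = optical_axis_step(depth)            # move toward 0.15 m standoff
```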

    Vision-based grasping of unknown objects to improve disabled people autonomy.

    This paper presents our contribution to vision-based robotic assistance for people with disabilities. The rehabilitative robotic arms currently available on the market are directly controlled through adaptive devices, which places increasing strain on the user. To reduce the need for user actions, we propose several vision-based solutions to automate the grasping of unknown objects. Neither appearance databases nor object models are required; all the needed information is computed online. This paper focuses on the positioning of the camera and the approach of the gripper. For each of these two steps, two alternative solutions are provided. All the methods have been tested and validated on robotic cells, and some have already been integrated into our mobile robot SAM.

    Real-Time Stereo Visual Servoing of a 6-DOF Robot for Tracking and Grasping Moving Objects

    Get PDF
    Robotic systems have been increasingly employed in various industrial, urban, military and exploratory applications during the last decades. To enhance robot control performance, vision data are integrated into robot control systems. Using visual feedback has great potential for increasing the flexibility of conventional robotic and mechatronic systems to deal with changing and less-structured environments. How to use visual information in control systems has always been a major research area in robotics and mechatronics. Visual servoing methods, which feed image features directly back into motion control, have been proposed to handle many stability and reliability issues in vision-based control systems. This thesis introduces a stereo Image-Based Visual Servoing (IBVS) scheme (as opposed to Position-Based Visual Servoing (PBVS)) with an eye-in-hand configuration that is able to track and grasp a moving object in real time. The robustness of the control system is increased by means of accurate 3-D information extracted from binocular images. First, an image-based visual servoing approach based on stereo vision is proposed for 6-DOF robots. A classical proportional control strategy is designed, and the stereo image interaction matrix, which relates the image feature velocity to the cameras' velocity screw, is developed for the two cases of parallel and non-parallel cameras installed on the end-effector of the robot. The properties of tracking a moving target, with the corresponding time-varying feature points, in the visual servoing system are then investigated. Second, a method for position prediction and trajectory estimation of the moving target, for use in the proposed image-based stereo visual servoing in a real-time grasping task, is proposed and developed through linear and nonlinear modeling of the system dynamics. Three trajectory estimation algorithms, the Kalman Filter, Recursive Least Squares (RLS) and the Extended Kalman Filter (EKF), have been applied to predict the position of the moving object in the image planes. Finally, computer simulations and a real implementation have been carried out to verify the effectiveness of the proposed method for the task of tracking and grasping a moving object using a 6-DOF manipulator.
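    The classical proportional IBVS law this abstract refers to can be sketched as v = -λ L⁺ (s - s*), where L is the image interaction matrix and v the camera velocity screw. The single-point interaction matrix below is the standard monocular form used for illustration, not the stereo matrix derived in the thesis.

```python
import numpy as np

# Minimal sketch of a proportional image-based visual servoing law:
# camera velocity screw v = -lambda * pinv(L) @ (s - s_star).
# L and the feature values are illustrative, not from the thesis.
def ibvs_velocity(L, s, s_star, lam=0.5):
    """Return the 6-vector camera velocity screw driving s toward s_star."""
    error = s - s_star                        # image-feature error
    return -lam * np.linalg.pinv(L) @ error   # (vx, vy, vz, wx, wy, wz)

# Toy usage: one normalized image point (x, y) at depth Z, with the
# classical 2x6 interaction matrix for a single point feature.
x, y, Z = 0.1, -0.05, 1.0
L = np.array([
    [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
    [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
])
v = ibvs_velocity(L, np.array([x, y]), np.zeros(2))  # velocity screw
```

    With a stereo pair, as in the thesis, the feature vector and interaction matrix are stacked over both images, which over-constrains the least-squares solution and improves robustness to depth errors.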
