
    Optimizing plane-to-plane positioning tasks by image-based visual servoing and structured light

    This paper considers the problem of positioning an eye-in-hand system so that it becomes parallel to a planar object. Our approach is based on attaching to the camera a structured light emitter designed to produce a suitable set of visual features. The aim of using structured light is not only to simplify the image processing and allow low-textured objects to be considered, but also to produce a control scheme with desirable properties such as decoupling, convergence, and an adequate camera trajectory. This paper focuses on an image-based approach that achieves decoupling in the whole workspace and whose global convergence is ensured under perfect conditions. The behavior of the image-based approach is shown to be partially equivalent to a 3D visual servoing scheme, but with better robustness with respect to image noise. The robustness of the approach against calibration errors is demonstrated both analytically and experimentally
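
    Schemes like this one build on the classical IBVS control law, which maps the feature error to a camera velocity through the pseudo-inverse of the interaction matrix. The sketch below shows only that generic law; the paper's structured-light features and its particular interaction matrix are not reproduced here.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical IBVS law: camera twist v = -lambda * L^+ * (s - s*).

    s, s_star : current and desired feature vectors, shape (k,)
    L         : interaction matrix (image Jacobian), shape (k, 6)
    Returns the 6-vector camera twist (vx, vy, vz, wx, wy, wz).
    """
    e = s - s_star                       # feature error
    return -lam * np.linalg.pinv(L) @ e  # Moore-Penrose pseudo-inverse
```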

    Distance-based and Orientation-based Visual Servoing from Three Points

    This paper is concerned with the use of a spherical-projection model for visual servoing from three points. We propose a new set of six features to control a 6-degree-of-freedom (DOF) robotic system with good decoupling properties. The first part of the set consists of three invariants to camera rotations. These invariants are built using the Cartesian distances between the spherical projections of the three points. The second part of the set corresponds to the angle-axis representation of a rotation matrix measured from the image of two points. In a theoretical comparison with the classical perspective coordinates of points, the new set does not present more singularities. In addition, using the new set inside its nonsingular domain, a classical control law is proven to be optimal for pure rotational motions. The theoretical results and the robustness of the new control scheme to errors in the point range (depth) are validated through simulations and experiments on a 6-DOF robot arm.
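
    A minimal sketch of the rotation-invariant part of the feature set: projecting the three points onto the unit sphere and taking the pairwise Cartesian distances. A camera rotation R maps each projection p_i to R p_i, so these distances are unchanged; the angle-axis part of the set is not reproduced here.

```python
import numpy as np

def rotation_invariant_features(P):
    """Distances between the spherical projections of three 3-D points.

    P : (3, 3) array, one camera-frame point (X, Y, Z) per row.
    """
    p = P / np.linalg.norm(P, axis=1, keepdims=True)  # unit-sphere projection
    return np.array([np.linalg.norm(p[i] - p[j])      # invariant to rotations
                     for i, j in [(0, 1), (0, 2), (1, 2)]])
```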

    Visual servoing of mobile robots using non-central catadioptric cameras

    This paper presents novel contributions on image-based control of a mobile robot using a general catadioptric camera model. A catadioptric camera usually consists of a conventional camera combined with a curved mirror, resulting in an omnidirectional sensor capable of providing 360° panoramic views of a scene. Modeling such cameras has been the subject of significant research interest in the computer vision community, leading to a deeper understanding of the image properties and to different models for different types of configurations. Visual servoing applications using catadioptric cameras have essentially relied on central cameras and the corresponding unified projection model; so far, more general models have been used in only a few cases. In this paper we address the problem of visual servoing using the so-called radial model. The radial model can be applied to many camera configurations, and in particular to non-central catadioptric systems with mirrors that are symmetric around an axis coinciding with the optical axis. In this case, we show that the radial model can be used with a non-central catadioptric camera to allow effective image-based visual servoing (IBVS) of a mobile robot. Using this model, which is valid for a large set of catadioptric cameras (central or non-central), new visual features are proposed to control the degrees of freedom of a mobile robot moving on a plane. In addition to several simulation results, a set of experiments was carried out on a Robot Operating System (ROS)-based platform, validating the applicability, effectiveness, and robustness of the proposed method for image-based control of a non-holonomic robot.
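
    As an illustration of what the radial model exposes, the sketch below computes polar coordinates of tracked image points about the distortion center: for a mirror that is symmetric around the optical axis, the polar angle of a projected point does not depend on the exact mirror profile. The paper's actual visual features and control law are not reproduced; this is only the underlying coordinate parametrization.

```python
import numpy as np

def radial_coordinates(pts, center):
    """Polar coordinates of image points about the distortion center.

    pts    : (N, 2) array of pixel coordinates
    center : (2,) distortion (image) center
    """
    d = pts - center
    r = np.hypot(d[:, 0], d[:, 1])        # radial distance (mirror-dependent)
    theta = np.arctan2(d[:, 1], d[:, 0])  # polar angle (mirror-independent)
    return r, theta
```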

    High-Speed Vision and Force Feedback for Motion-Controlled Industrial Manipulators

    Over the last decades, both force sensors and cameras have emerged as useful sensors for different applications in robotics. This thesis considers a number of dynamic visual tracking and control problems, as well as the integration of these techniques with contact force control. Different topics ranging from basic theory to system implementation and applications are treated. A new interface developed for external sensor control is presented, designed by making non-intrusive extensions to a standard industrial robot control system. The structure of these extensions is presented, the system properties are modeled and experimentally verified, and results from force-controlled stub grinding and deburring experiments are presented. A novel system for force-controlled drilling using a standard industrial robot is also demonstrated. The solution is based on the use of force feedback to control the contact forces and to suppress the sliding motions of the pressure foot that would otherwise occur during the drilling phase. Basic methods for feature-based tracking and servoing are presented, together with an extension for constrained motion estimation based on a dual-quaternion pose parametrization. A method for multi-camera real-time rigid-body tracking under time constraints is also presented, based on an optimal selection of the measured features. The developed tracking methods are used as the basis for two different approaches to vision/force control, which are illustrated in experiments. Intensity-based techniques for tracking and vision-based control are also developed. A dynamic visual tracking technique based directly on the image intensity measurements is presented, together with new stability-based methods suitable for dynamic tracking and feedback problems. The stability-based methods outperform the previous methods in many situations, as shown in simulations and experiments.
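
    One standard way to combine the two sensing modalities, in the spirit of the vision/force experiments described above, is shared control with a selection matrix that assigns each task-frame axis to either the vision loop or the force loop. The sketch below is that generic scheme with an assumed proportional force law, not the thesis's specific architecture.

```python
import numpy as np

def hybrid_vision_force(v_vision, f_meas, f_des, S, kf=1e-3):
    """Shared vision/force control in the task frame.

    v_vision      : 6-vector twist from the visual servoing loop
    f_meas, f_des : measured and desired 6-vector wrench
    S             : (6, 6) diagonal selection matrix; 1 on axes controlled
                    by vision, 0 on axes controlled by force
    """
    v_force = kf * (f_des - f_meas)                 # proportional force law
    return S @ v_vision + (np.eye(6) - S) @ v_force
```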

    Robotic Micromanipulation and Microassembly using Mono-view and Multi-scale visual servoing.

    This paper investigates sequential robotic micromanipulation and microassembly for building 3-D microsystems and devices. A mono-view, multiple-scale 2-D visual control scheme is implemented for that purpose. The imaging system used is a photon video microscope equipped with an active zoom, enabling work at multiple scales. It is modeled by a non-linear projective method in which the relation between the focal length and the zoom factor is explicitly established. A distributed robotic system (xy system, z system) with a two-finger gripping system is used in conjunction with the imaging system. The results of experiments demonstrate the relevance of the proposed approaches. The tasks were performed with the following accuracy: 1.4 ÎŒm for the positioning error and 0.5° for the orientation error.
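
    The abstract's key modeling point is a projection whose focal length varies non-linearly with the zoom factor. The sketch below illustrates that idea with a pinhole projection; the function g() and all numeric constants are hypothetical stand-ins for the calibrated relation, which the paper establishes but the abstract does not give.

```python
import numpy as np

def project(P, zoom, f0=4e-3, pixel=1e-6, g=lambda z: z + 0.05 * z**2):
    """Pinhole projection with a zoom-dependent focal length f = f0 * g(zoom).

    P : 3-D point (X, Y, Z) in the microscope frame, Z > 0.
    g : hypothetical non-linear focal-length/zoom relation (assumed form).
    """
    f = f0 * g(zoom)               # effective focal length at this zoom
    u = f * P[0] / (P[2] * pixel)  # image coordinates in pixels
    v = f * P[1] / (P[2] * pixel)
    return np.array([u, v])
```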

    Image-based visual servoing using improved image moments in 6-DOF robot systems

    Visual servoing has played an important role in automated robotic manufacturing systems. This thesis focuses on this issue and proposes an improved method that comprises an enhanced image pre-processing (IP) algorithm and a modified IBVS algorithm. As the first contribution, an improved IP algorithm based on morphological theory is presented for removing unexpected speckles and balancing the illumination during image processing. After this enhancement, the useful information in the image becomes prominent and can be used to extract accurate image features. As the second contribution, an improved IBVS algorithm is then introduced for an eye-in-hand system consisting of a 6-degree-of-freedom (DOF) robot and a camera. The improved IBVS algorithm uses image moments as the image features instead of detecting special points for feature extraction, as in traditional IBVS. Compared with traditional IBVS, choosing image moments as the image features increases the stability of the system and extends the range of applicable objects. The obtained image features are then used to generate the control signals for the robot to track the target object. The Jacobian matrix describing the relationship between the motion of the camera and the velocity of the image features is also discussed, and a new, simple method is proposed for estimating the depth involved in the Jacobian matrix. In order to decouple the obtained Jacobian matrix so that the camera motion can be controlled with individual image features, a four-stage sequential control is also introduced to improve the control performance.
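
    A brief sketch of the kind of moment-based features the thesis refers to: area, centroid, and orientation computed from the raw and second-order central moments of a binary blob. These are the standard definitions (m_pq = ÎŁ x^p y^q I(x, y)); the thesis's specific feature set and depth-estimation method are not reproduced.

```python
import numpy as np

def moment_features(img):
    """Area, centroid, and orientation of a binary blob via image moments.

    img : 2-D binary array (nonzero pixels belong to the object).
    """
    y, x = np.nonzero(img)
    m00 = float(len(x))                              # area (zeroth moment)
    xc, yc = x.mean(), y.mean()                      # centroid m10/m00, m01/m00
    mu20 = ((x - xc) ** 2).mean()                    # second-order central moments
    mu02 = ((y - yc) ** 2).mean()
    mu11 = ((x - xc) * (y - yc)).mean()
    alpha = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal-axis angle
    return m00, xc, yc, alpha
```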

    Vision-based Global Path Planning and Trajectory Generation for Robotic Applications in Hazardous Environments

    The aim of this study is to find an efficient global path planning algorithm and trajectory generation method using a vision system. Path planning is part of the more generic navigation function of mobile robots and consists of establishing an obstacle-free path from the initial pose to the target pose in the robot workspace.

    In this thesis, special emphasis is placed on robotic applications in industrial and scientific infrastructure environments that are hazardous and inaccessible to humans, such as nuclear power plants, ITER, and the CERN LHC tunnel. Nuclear radiation can cause deadly damage to the human body, yet we depend on nuclear energy to meet our great demand for energy resources. Therefore, the research and development of automatic transfer robots and manipulation in nuclear environments are regarded as a key technology by many countries. Robotic applications in radiation environments minimize the danger of radiation exposure to humans. However, the robots themselves are also vulnerable to radiation. Mobility and maneuverability in such environments are essential to task success. Therefore, an efficient obstacle-free path and trajectory generation method is necessary for finding a safe path with maximum bounded velocities in radiation environments. High-degree-of-freedom manipulators and maneuverable mobile robots with steerable wheels, such as non-holonomic omni-directional mobile robots, are suitable for inspection and maintenance tasks where the camera is the only source of visual feedback.

    In this thesis, a novel vision-based path planning method is presented that utilizes the artificial potential field, visual servoing concepts, and a CAD-based recognition method to deal with the problem of path and trajectory planning. Unlike the majority of conventional trajectory planning methods, which consider a robot as a single point, the entire shape of the mobile robot is considered by taking into account all of the robot's desired points to avoid obstacles. The vision-based algorithm generates synchronized trajectories for all of the wheels of an omni-directional mobile robot. It provides the robot's kinematic variables to plan maximum allowable velocities so that at least one of the actuators is always working at maximum velocity. The advantage of the generated synchronized trajectories is that slippage and misalignment in translational and rotational movement are avoided. The proposed method is further developed into a new vision-based path coordination method for multiple mobile robots with independently steerable wheels, avoiding mutual collisions as well as stationary obstacles. The results of this research have been published, proposing a new solution for path and trajectory generation in hazardous environments that are inaccessible to humans, where a camera is the only source of visual feedback.
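
    For reference, a minimal sketch of the classical artificial potential field that the planner builds on: an attractive quadratic well at the goal plus a repulsive term active within a range rho0 of each obstacle, followed by gradient descent. Gain values are illustrative; the thesis's shape-aware, multi-robot formulation is not reproduced.

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0, dt=0.05):
    """One gradient-descent step on a classical artificial potential field.

    q, goal   : (2,) current and goal positions
    obstacles : list of (2,) obstacle positions
    """
    force = k_att * (goal - q)          # attractive: -grad of 0.5*k*||q - goal||^2
    for obs in obstacles:
        rho = np.linalg.norm(q - obs)
        if rho < rho0:                  # repulsion only within influence range
            force += k_rep * (1/rho - 1/rho0) / rho**3 * (q - obs)
    return q + dt * force
```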