665 research outputs found

    Dynamic Visual Servoing with an Uncalibrated Eye-in-Hand Camera

    Get PDF

    Efficient visual grasping alignment for cylinders

    Get PDF
    Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that must be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored for later use as the servo target; this is performed once, as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view; an initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously: rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it, and the relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with minimal reaction force and torque even when only rough global pose information is initially available.
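The radius-estimation step above exploits how apparent size scales with distance under a pinhole camera. A minimal sketch of that geometry (function names and the 1/depth scaling model are illustrative assumptions, not taken from the paper):

```python
def depth_from_approach(w1, w2, d):
    """Initial depth Z1 of a cylinder from its apparent image width
    before (w1) and after (w2) moving a known distance d toward it.
    Assumes pinhole scaling, width proportional to 1/depth:
    w1 * Z1 = w2 * (Z1 - d)  =>  Z1 = w2 * d / (w2 - w1)."""
    return w2 * d / (w2 - w1)

def radius_from_width(w, Z, f):
    """Cylinder radius from apparent half-width w (pixels), depth Z,
    and focal length f (pixels), under the same pinhole model."""
    return w * Z / f
```

For example, if the apparent width doubles after closing half the distance, the model recovers the original depth as twice the distance moved.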

    Adaptive Autonomous Navigation of Multiple Optoelectronic Microrobots in Dynamic Environments

    Get PDF
    The optoelectronic microrobot is an advanced light-controlled micromanipulation technology with particular promise for collecting and transporting sensitive microscopic objects such as biological cells. However, wider application of the technology is currently limited by a reliance on manual control and a lack of methods for simultaneous manipulation of multiple microrobotic actuators. In this article, we present a computational framework for autonomous navigation of multiple optoelectronic microrobots in dynamic environments. Combining closed-loop visual servoing, SLAM, real-time visual detection of microrobots and obstacles, dynamic path-finding, and adaptive motion behaviors, this approach allows microrobots to avoid static and moving obstacles and perform a range of tasks in real-world dynamic environments. The capabilities of the system are demonstrated through micromanipulation experiments in simulation and in real conditions using a custom-built optoelectronic tweezer system.
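The dynamic path-finding component can be pictured as shortest-path search over a frequently refreshed map of free cells, replanned whenever an obstacle moves. A minimal breadth-first sketch (the grid representation and function names are assumptions, not the paper's implementation):

```python
from collections import deque

def plan_path(free, start, goal):
    """Shortest 4-connected path through the set `free` of traversable
    (x, y) cells, or None if the goal is unreachable. Re-run this on
    each map update to adapt to moving obstacles."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            break
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free and nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    if goal not in prev:
        return None
    path = []
    cur = goal
    while cur is not None:
        path.append(cur)
        cur = prev[cur]
    return path[::-1]
```

In a real system this would run over the obstacle map produced by the visual detector, with the microrobot servoed along successive waypoints of the returned path.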

    Autonomous vision-guided bi-manual grasping and manipulation

    Get PDF
    This paper describes the implementation, demonstration and evaluation of a variety of autonomous, vision-guided manipulation capabilities using a dual-arm Baxter robot. Initially, symmetric coordinated bi-manual manipulation based on a kinematic tracking algorithm was implemented on the robot to enable a master-slave manipulation system. We demonstrate the efficacy of this approach with a human-robot collaboration experiment in which a human operator moves the master arm along arbitrary trajectories and the slave arm automatically follows while maintaining a constant relative pose between the two end-effectors. Next, this concept was extended to perform dual-arm manipulation without human intervention. To this end, an image-based visual servoing scheme was developed to control the motion of the arms and position them at desired grasp locations. We then combined this with a dynamic position controller to move the grasped object with both arms along a prescribed trajectory. The presented approach has been validated by performing numerous symmetric and asymmetric bi-manual manipulations under different conditions. Our experiments demonstrated an 80% success rate on symmetric dual-arm manipulation tasks and a 73% success rate on asymmetric dual-arm manipulation tasks.
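Maintaining a constant relative pose between the two end-effectors amounts to recording one master-to-slave transform and composing it with the live master pose. A minimal sketch with 4x4 homogeneous matrices (function names are illustrative, not from the paper):

```python
import numpy as np

def record_relative_pose(T_master0, T_slave0):
    """Fixed master-to-slave transform, captured once at the start
    of the master-slave phase (4x4 homogeneous matrices)."""
    return np.linalg.inv(T_master0) @ T_slave0

def slave_target(T_master, T_rel):
    """Target slave pose that preserves the recorded relative pose
    as the master arm moves along an arbitrary trajectory."""
    return T_master @ T_rel

def translate(x, y, z):
    """Helper: pure-translation homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T
```

With identity rotations, moving the master by (0, 1, 0) after recording a (1, 0, 0) offset yields a slave target translated by (1, 1, 0), i.e. the offset simply rides along with the master.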

    A Comparative Study between Analytic and Estimated Image Jacobian by Using a Stereoscopic System of Cameras

    Full text link
    This paper describes a comparative study of the performance of the image Jacobian estimated by exploiting the epipolar geometry of a two-camera system against the well-known analytic image Jacobian used in most visual servoing applications. An image-based visual servoing architecture is used to control a 3-DOF articulated system with two cameras in an eye-to-hand configuration. Tests in static and dynamic cases were carried out and showed that the Jacobian estimated using the properties of epipolar geometry is as accurate and as robust against noise as the analytic Jacobian. This is an advantage because, in contrast to the analytic Jacobian, the estimated Jacobian requires no laborious preparatory work before the control task.
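For reference, the analytic image Jacobian being compared against has a standard textbook form for a point feature under perspective projection (this is the classical interaction matrix in normalized image coordinates, not the paper's estimated variant):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Analytic image Jacobian of a point (x, y) in normalized image
    coordinates at depth Z: maps the camera twist
    (vx, vy, vz, wx, wy, wz) to the image velocity (x_dot, y_dot)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])
```

Note that the translational columns depend on the (usually unknown) depth Z, which is one reason estimated Jacobians that avoid explicit calibration are attractive.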

    Robotic micromanipulation for microassembly: modelling by sequential function chart and achievement by multiple-scale visual servoing.

    No full text
    The paper investigates robotic assembly, focusing on the manipulation of microparts. This task is formalized through the notion of basic tasks, organized in a logical sequence represented by a function chart and interpreted as a model of the behavior of the experimental setup. The latter includes a robotic system, a gripping system, an imaging system, and a clean environment. The imaging system is a photon videomicroscope able to work at multiple scales. It is modelled by a linear projective model in which the relation between the scale factor and the magnification (zoom) is explicitly established, and the usual visual control law is modified to take this relation into account. The manipulation of silicon microparts (400 μm × 400 μm × 100 μm) by means of a distributed robotic system (an xyθ system and a ϕz system), a two-finger gripping system and a videomicroscope with controllable zoom and focus demonstrates the relevance of the concepts. The 30% failure rate stems mainly from physical phenomena (electrostatic and capillary forces) rather than from control accuracy or occlusions of the microparts.
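The modified control law can be read as converting the image-space error through the current pixel-to-metric scale factor, which the abstract ties linearly to the magnification. A hedged sketch (the linear scale model and all names are assumptions, not the paper's equations):

```python
def scale_factor(magnification, pixel_pitch_m):
    """Metric-to-pixel scale s (pixels per metre) under a linear
    projective model: s grows proportionally with zoom magnification.
    pixel_pitch_m is the physical sensor pixel size in metres."""
    return magnification / pixel_pitch_m

def servo_velocity(error_px, magnification, pixel_pitch_m, gain):
    """Proportional visual servoing law with the pixel error converted
    to metres through the current scale factor, so the same gain works
    across zoom levels."""
    return -gain * error_px / scale_factor(magnification, pixel_pitch_m)
```

The point of the conversion is that a 100-pixel error at high magnification corresponds to a much smaller metric displacement than the same pixel error at low magnification, so the commanded velocity shrinks as the zoom increases.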

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    Full text link
    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that, in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot which repeats a previously taught path can simply 'replay' the learned velocities, using its camera information only to correct its heading relative to the intended path. To support our claim, we establish a position error model of a robot which traverses a taught path by only correcting its heading. We then outline a mathematical proof showing that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav. The paper will be presented at IROS 2018 in Madrid.
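The repeat phase described above can be sketched as replaying the taught velocities while steering by the horizontal displacement of features matched between the taught and current images (the gain, units and choice of a median statistic are illustrative assumptions, not the paper's exact scheme):

```python
import statistics

def repeat_step(v_taught, omega_taught, feature_dx_px, k=0.01):
    """One repeat-phase control step: replay the taught forward and
    angular velocities, adding a heading correction proportional to
    the median horizontal pixel displacement of matched features.
    The median gives robustness to a few bad matches."""
    if feature_dx_px:
        correction = -k * statistics.median(feature_dx_px)
    else:
        correction = 0.0  # no matches: fall back to replay alone
    return v_taught, omega_taught + correction
```

The convergence result cited in the abstract is what justifies this structure: heading-only feedback keeps the lateral error bounded, so no position estimate is ever computed.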

    Trajectory Servoing: Image-Based Trajectory Tracking without Absolute Positioning

    Get PDF
    The thesis describes an image-based visual servoing (IBVS) system for a non-holonomic robot that achieves good trajectory following without real-time robot pose information and without a known visual map of the environment; we call it trajectory servoing. The critical component is a feature-based, indirect SLAM method that provides a pool of available features with estimated depth and covariance, so that they may be propagated forward in time to generate image feature trajectories, with uncertainty information, for visual servoing. Short- and long-distance experiments show the benefits of trajectory servoing for navigating unknown areas without absolute positioning. Trajectory servoing is shown to be more accurate than SLAM pose-based feedback and is further improved by a weighted least-squares controller using covariance from the underlying SLAM system.
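The weighted least-squares controller mentioned at the end can be pictured as down-weighting feature errors by their SLAM-estimated variance when solving for the camera velocity. A minimal sketch (the symbols, weighting by inverse variance, and gain are assumptions, not the thesis's exact formulation):

```python
import numpy as np

def wls_camera_velocity(J, e, sigma2, gain=0.5):
    """Camera velocity command from a stacked feature Jacobian J
    (2n x 6), image error e (2n,), and per-row error variances
    sigma2 (2n,). Solves the weighted normal equations
    (J^T W J) v = J^T W e with W = diag(1 / sigma2), so uncertain
    features contribute less to the commanded motion."""
    W = np.diag(1.0 / np.asarray(sigma2))
    return -gain * np.linalg.solve(J.T @ W @ J, J.T @ W @ e)
```

With equal variances this reduces to the ordinary least-squares IBVS solution; the benefit appears when some propagated features carry much larger covariance than others.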

    Vision-based self-calibration and control of parallel kinematic mechanisms without proprioceptive sensing

    Get PDF
    This work is a synthesis of our experience with parallel kinematic machine control, which aims at changing the standard conceptual approach to this problem. Indeed, since the task space, the state space and the measurement space can coincide in this class of mechanisms, we have redefined the complete modelling, identification and control methodology. It is shown in this paper that, generically and with the help of sensor-based control, this methodology does not require any joint measurement, opening a path to simplified mechanical design and reducing the number of kinematic parameters to identify. This novel approach was validated on the reference parallel kinematic mechanism (the Gough-Stewart platform) with vision as the exteroceptive sensor.