
    Markerless visual servoing on unknown objects for humanoid robot platforms

    To precisely reach for an object with a humanoid robot, it is of central importance to have good knowledge of the pose and shape of both the end-effector and the object. In this work we propose a framework for markerless visual servoing on unknown objects, which is divided into four main parts: I) a least-squares minimization problem is formulated to find the volume of the object graspable by the robot's hand using its stereo vision; II) a recursive Bayesian filtering technique, based on Sequential Monte Carlo (SMC) filtering, estimates the 6D pose (position and orientation) of the robot's end-effector without the use of markers; III) a nonlinear constrained optimization problem is formulated to compute the desired graspable pose about the object; IV) an image-based visual servo control commands the robot's end-effector toward the desired pose. We demonstrate the effectiveness and robustness of our approach with extensive experiments on the iCub humanoid robot platform, achieving real-time computation, smooth trajectories and sub-pixel precision.
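
    For orientation, step IV builds on the classic image-based visual servoing (IBVS) law v = -lambda * L^+ (s - s*). The sketch below is a minimal, generic version for point features; the textbook point-feature interaction matrix and the gain lam are assumptions, not the authors' exact controller.

        import numpy as np

        def interaction_matrix(x, y, Z):
            """Interaction (image Jacobian) matrix of a normalized image point
            (x, y) at depth Z, mapping 6D camera velocity to feature velocity."""
            return np.array([
                [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
                [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
            ])

        def ibvs_velocity(features, desired, depths, lam=0.5):
            """Camera twist (vx, vy, vz, wx, wy, wz) driving the features
            toward their desired positions: v = -lam * pinv(L) @ (s - s*)."""
            L = np.vstack([interaction_matrix(x, y, Z)
                           for (x, y), Z in zip(features, depths)])
            error = (np.asarray(features) - np.asarray(desired)).ravel()
            return -lam * np.linalg.pinv(L) @ error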

    Visual Servoing from Deep Neural Networks

    We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. A positioning error of less than one millimeter is obtained in experiments with a 6 DOF robot.
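
    The dataset-creation step can be pictured with a small sketch: starting from one image, synthesize training samples with random lighting changes and occluding patches. The patch count, gain range and flat-noise occluder below are illustrative assumptions, not the paper's exact procedure.

        import numpy as np

        def perturb(image, rng, n_occlusions=3, max_patch=40):
            """Return one synthetic training sample: `image` with a global
            lighting gain and a few random occluding rectangles."""
            out = image.astype(np.float32) * rng.uniform(0.5, 1.5)  # lighting
            h, w = out.shape[:2]
            for _ in range(n_occlusions):
                ph, pw = rng.integers(5, max_patch, size=2)
                y = rng.integers(0, h - ph)
                x = rng.integers(0, w - pw)
                out[y:y + ph, x:x + pw] = rng.uniform(0, 255)  # flat occluder
            return np.clip(out, 0, 255).astype(np.uint8)

        # Example: rng = np.random.default_rng(0); sample = perturb(img, rng)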

    Effective Target Aware Visual Navigation for UAVs

    In this paper we propose an effective vision-based navigation method that allows a multirotor vehicle to simultaneously reach a desired goal pose in the environment while constantly facing a target object or landmark. Standard techniques such as Position-Based Visual Servoing (PBVS) and Image-Based Visual Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast maneuvers) do not allow the vehicle to constantly maintain the line of sight with a target of interest. Instead, we compute the optimal trajectory by solving a non-linear optimization problem that minimizes the target re-projection error while meeting the UAV's dynamic constraints. The desired trajectory is then tracked by means of a real-time Non-linear Model Predictive Controller (NMPC), which implicitly allows the multirotor to satisfy both requirements at once. We successfully evaluate the proposed approach in many real and simulated experiments, making an exhaustive comparison with a standard approach.
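
    The optimization objective described here can be sketched as a per-timestep cost trading off goal reaching against the target re-projection error under a pinhole camera model. The weights, frame conventions, and the distance-to-image-center form of the error are assumptions for illustration, not the paper's exact formulation.

        import numpy as np

        def reprojection_error(p_target, p_uav, R_wc, K):
            """Pixel distance between the projected target and the image
            center; R_wc rotates camera coordinates into the world frame."""
            p_cam = R_wc.T @ (p_target - p_uav)   # target in the camera frame
            u = K @ (p_cam / p_cam[2])            # pinhole projection (z > 0)
            center = np.array([K[0, 2], K[1, 2]])
            return np.linalg.norm(u[:2] - center)

        def stage_cost(p_uav, R_wc, p_goal, p_target, K,
                       w_goal=1.0, w_proj=0.05):
            """Cost balancing progress toward the goal pose against keeping
            the target near the optical axis."""
            return (w_goal * np.linalg.norm(p_uav - p_goal) ** 2
                    + w_proj * reprojection_error(p_target, p_uav, R_wc, K) ** 2)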

    Visual servoing by partitioning degrees of freedom

    There are many design factors and choices when mounting a vision system for robot control. Such factors may include the kinematic and dynamic characteristics of the robot's degrees of freedom (DOF), which determine what velocities and fields-of-view a camera can achieve. Another factor is that additional motion components (such as pan-tilt units) are often mounted on a robot and introduce synchronization problems. When a task does not require visually servoing every robot DOF, the designer must choose which ones to servo. Questions then arise as to what roles, if any, the remaining DOF play in the task. Without an analytical framework, the designer resorts to intuition and try-and-see implementations. This paper presents a frequency-based framework that identifies the parameters that factor into tracking. This framework gives design insight, which we then used to synthesize a control law that exploits the kinematic and dynamic attributes of each DOF. The resulting multi-input multi-output control law, which we call partitioning, defines an underlying joint coupling to servo camera motions. The net effect is that by employing both visual and kinematic feedback loops, a robot can quickly position and orient a camera in a large assembly workcell. Real-time experiments tracking people and robot hands are presented using a 5-DOF hybrid (3-DOF Cartesian gantry plus 2-DOF pan-tilt unit) robot.
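
    One way to picture the partitioning idea is a frequency split: route the low-frequency component of the desired camera motion to the slow, long-travel gantry axes and the high-frequency residual to the fast pan-tilt unit. The first-order low-pass split below and its coefficient are assumptions, not the paper's control law.

        import numpy as np

        class PartitionedCommand:
            def __init__(self, alpha=0.1):
                self.alpha = alpha        # low-pass coefficient, 0 < alpha < 1
                self.slow = np.zeros(2)   # filter state for the gantry axes

            def split(self, desired_xy):
                """Return (gantry_xy, pan_tilt_xy) whose sum tracks desired_xy;
                the gantry follows the slow trend, the pan-tilt the residual."""
                self.slow += self.alpha * (desired_xy - self.slow)
                fast = desired_xy - self.slow
                return self.slow.copy(), fast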

    Image-based Visual Servoing of a Gough-Stewart Parallel Manipulator using Leg Observations

    In this paper, a tight coupling between computer vision and parallel robotics is exhibited through projective line geometry. Indeed, contrary to the usual methodology, where the robot is modeled independently from the control law that will be implemented, we take into account from the early modeling stage that vision will be used for control. Hence, kinematic modeling and projective geometry are fused into a control-devoted projective kinematic model. Thus, a novel vision-based kinematic model of a Gough-Stewart manipulator is proposed through the image projection of its cylindrical legs. Using this model, a visual servoing scheme is presented in which the image projections of the non-rigidly linked legs are servoed, rather than the end-effector pose.
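
    A rough sketch of servoing on the projected legs: each cylindrical leg projects to image lines, parameterized here as (rho, theta), and a proportional law acts on the stacked line-feature errors. This parameterization and the stacked Jacobian L_lines are illustrative stand-ins for the paper's projective line geometry formulation.

        import numpy as np

        def line_error(observed, desired):
            """Stacked (rho, theta) errors over all observed leg edges, with
            the angular difference wrapped into [-pi, pi)."""
            err = np.asarray(observed, float) - np.asarray(desired, float)
            err[:, 1] = (err[:, 1] + np.pi) % (2 * np.pi) - np.pi
            return err.ravel()

        def servo_velocity(L_lines, observed, desired, lam=0.4):
            """Proportional control on line features: L_lines stacks one
            2x6 interaction matrix per observed edge."""
            return -lam * np.linalg.pinv(L_lines) @ line_error(observed, desired)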

    Robotic execution for everyday tasks by means of external vision/force control

    In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot which simultaneously observes its hand and the object to manipulate using an external camera (i.e. the robot head). Task-oriented grasping algorithms [1] are used in order to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach [2], based on external control, is used in order to, first, guide the robot hand towards the grasp position and, second, perform the task taking into account external forces. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with respect to the object being manipulated, independently of the camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.
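
    The external vision/force coupling can be sketched as an additive correction: a position-based visual servo term drives the hand toward the task pose while a compliance term yields when the measured external wrench deviates from a reference. The gains and the simple additive combination are assumptions, not the article's exact scheme [2].

        import numpy as np

        def coupled_command(pose_error, wrench, lam=0.5, k_force=0.02,
                            force_ref=None):
            """6D velocity command: vision aligns the hand with the object
            while the force term keeps contact forces near force_ref."""
            if force_ref is None:
                force_ref = np.zeros(6)
            v_vision = -lam * pose_error                # position-based servo
            v_force = -k_force * (wrench - force_ref)   # compliance correction
            return v_vision + v_force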

    Image space trajectory tracking of 6-DOF robot manipulator in assisting visual servoing

    As vision is a versatile sensor, vision-based control of robots is becoming more important in industrial applications. The control signal generated using traditional control algorithms leads to undesirable movement of the end-effector during the positioning task. This movement may sometimes cause task failure due to visibility loss. In this paper, a sliding mode controller (SMC) is designed to track 2D image features in an image-based visual servoing task. The feature trajectory tracking helps to keep the image features in the camera field of view at all times and thereby ensures the shortest trajectory of the end-effector. SMC is the right choice to handle the depth uncertainties associated with translational motion. Stability of the closed-loop system with the proposed controller is proved by the Lyapunov method. Three feature trajectories are generated to test the efficacy of the proposed method. Simulation tests are conducted, and the superiority of the proposed method over a Proportional Derivative-Sliding Mode Controller (PD-SMC) in terms of settling time and distance travelled by the end-effector is established in the presence and absence of depth uncertainties. The proposed controller is also tested in real time by integrating the visual servoing system with a 6-DOF industrial robot manipulator, an ABB IRB 1200.
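
    In the spirit of the paper, a sliding-mode feature-trajectory tracking law might look like the sketch below. The choice of sliding surface, the tanh smoothing of the switching term, and the estimated interaction matrix L_hat are assumptions rather than the authors' exact design.

        import numpy as np

        def smc_velocity(e, s_ref_dot, L_hat, lam=1.0, K=0.2, phi=0.05):
            """Camera velocity tracking a reference feature trajectory
            s_ref(t), with e = s - s_ref. The switching term K*tanh(e/phi)
            absorbs bounded depth/Jacobian uncertainty; tanh limits chatter."""
            reach = -lam * e - K * np.tanh(e / phi)   # desired error dynamics
            return np.linalg.pinv(L_hat) @ (s_ref_dot + reach)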