
    Robot eye-hand coordination learning by watching human demonstrations: a task function approximation approach

    We present a robot eye-hand coordination learning method that can directly learn a visual task specification by watching human demonstrations. The task specification is represented as a task function, which is learned using inverse reinforcement learning (IRL) by inferring differential rewards between state changes. The learned task function is then used as continuous feedback in an uncalibrated visual servoing (UVS) controller designed for the execution phase. Our proposed method learns directly from raw videos, which removes the need for hand-engineered task specification, and it provides task interpretability by directly approximating the task function. Moreover, because it builds on a traditional UVS controller, the training process is efficient and the learned policy is independent of any particular robot platform. Various experiments show that, for a task of a given number of degrees of freedom, our method adapts to task/environment variations in target position, background, illumination, and occlusion without retraining. Comment: Accepted in ICRA 201
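    The control idea above, a learned task function serving as continuous feedback for the servo loop, can be sketched as follows. This is a toy illustration, not the authors' implementation: the quadratic task function, goal state, and gains are all hypothetical stand-ins for the model learned via IRL.

```python
import numpy as np

# Hypothetical stand-in for a task function learned via IRL; in the paper it
# is approximated from differential rewards between demonstrated state changes.
GOAL = np.array([0.4, -0.2, 0.1])          # illustrative target state

def task_function(s):
    # Lower value = closer to task completion (illustrative quadratic).
    return 0.5 * np.sum((s - GOAL) ** 2)

def numeric_gradient(f, s, h=1e-5):
    # Central differences; a learned model could supply gradients directly.
    g = np.zeros_like(s)
    for i in range(s.size):
        d = np.zeros_like(s)
        d[i] = h
        g[i] = (f(s + d) - f(s - d)) / (2 * h)
    return g

def servo(s0, gain=0.5, steps=200):
    # Descend the task function: its value acts as continuous feedback,
    # replacing a hand-engineered image-feature error.
    s = np.asarray(s0, dtype=float)
    for _ in range(steps):
        s -= gain * numeric_gradient(task_function, s)
    return s

final_state = servo([1.0, 1.0, 1.0])
```

    The servo loop needs only the scalar task-function values, which is what makes the feedback independent of any particular robot platform.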

    Manipulating Highly Deformable Materials Using a Visual Feedback Dictionary

    The complex physical properties of highly deformable materials such as clothes pose significant challenges for autonomous robotic manipulation systems. We present a novel visual feedback dictionary-based method for manipulating deformable objects towards a desired configuration. Our approach is based on visual servoing, and we use an efficient technique to extract key features from the RGB sensor stream in the form of a histogram of deformable model features. These histogram features serve as high-level representations of the state of the deformable material. Next, we collect manipulation data and use a visual feedback dictionary that maps the velocity in the high-dimensional feature space to the velocity of the robotic end-effectors for manipulation. We have evaluated our approach on a set of complex manipulation tasks and human-robot manipulation tasks on different cloth pieces with varying material characteristics. Comment: The video is available at goo.gl/mDSC4
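    The dictionary lookup described above can be sketched as a distance-weighted nearest-neighbour query over stored (feature-velocity, end-effector-velocity) pairs. The synthetic linear data generator and all dimensions below are assumptions for illustration, not the paper's histogram features:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_dictionary(n=500, feat_dim=8, act_dim=6):
    # Synthetic demo data: a hypothetical linear map stands in for the
    # recorded manipulation pairs collected on the real robot.
    A = rng.normal(size=(feat_dim, act_dim))
    actions = rng.normal(size=(n, act_dim))   # end-effector velocities
    feats = actions @ A.T                     # resulting feature velocities
    return feats, actions

def lookup(dictionary, desired_feat_vel, k=5):
    # Distance-weighted k-nearest-neighbour average over stored pairs.
    feats, actions = dictionary
    d = np.linalg.norm(feats - desired_feat_vel, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    return (w[:, None] * actions[idx]).sum(axis=0) / w.sum()

dictionary = build_dictionary()
feats, actions = dictionary
```

    At run time, the desired feature-space velocity (toward the goal configuration) is looked up and the returned end-effector velocity is commanded.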

    Positioning and trajectory following tasks in microsystems using model free visual servoing

    In this paper, we explore model-free visual servoing algorithms by experimentally evaluating their performance on various tasks carried out on a microassembly workstation developed in our lab. Model-free, or so-called uncalibrated, visual servoing needs neither the system calibration (microscope-camera-micromanipulator) nor a model of the observed scene, and it is robust to parameter changes and disturbances. We tested its performance in point-to-point positioning and various trajectory-following tasks. Experimental results validate the utility of model-free visual servoing in microassembly tasks.
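    A common way to realize uncalibrated visual servoing of this kind is to estimate the image Jacobian online with a Broyden rank-1 update, so no microscope-camera-micromanipulator calibration is needed. The sketch below uses a fixed linear map as a stand-in plant (illustrative, not the workstation in the paper):

```python
import numpy as np

J_true = np.array([[2.0, 0.3], [-0.4, 1.5]])    # plant, unknown to controller

def observe(q):
    # Image features seen at manipulator position q (toy linear "camera").
    return J_true @ q

def servo(target, steps=80, gain=0.3, alpha=1.0):
    q = np.zeros(2)
    J_est = np.eye(2)                            # rough initial Jacobian guess
    s = observe(q)
    for _ in range(steps):
        dq = -gain * np.linalg.pinv(J_est) @ (s - target)
        s_new = observe(q + dq)
        ds = s_new - s
        # Broyden rank-1 secant update of the Jacobian estimate.
        J_est += alpha * np.outer(ds - J_est @ dq, dq) / (dq @ dq + 1e-12)
        q, s = q + dq, s_new
    return s

final_features = servo(np.array([1.0, -0.5]))
```

    The controller never sees `J_true`; it drives the feature error to zero using only observed feature changes, which is what makes the scheme robust to parameter changes.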

    Visual servo control on a humanoid robot

    Includes bibliographical references. This thesis deals with the control of a humanoid robot based on visual servoing. It seeks to confer a degree of autonomy on the robot in the achievement of tasks such as reaching a desired position, tracking and/or grasping an object. The autonomy of humanoid robots is considered crucial for the success of the numerous services that robots of this kind can render, given their ability to combine dexterity and mobility in structured, unstructured, or even hazardous environments. To achieve this objective, a humanoid robot is fully modeled and the control of its locomotion, conditioned by postural balance and gait stability, is studied. The presented approach is formulated to account for all the joints of the biped robot. To conform the reference commands from visual servoing to the discrete locomotion mode of the robot, this study exploits a reactive omnidirectional walking pattern generator and a visual task Jacobian redefined with respect to a floating base on the humanoid robot, instead of the stance foot. The redundancy problem stemming from the high number of degrees of freedom, coupled with the omnidirectional mobility of the robot, is handled within the task-priority framework, which allows configuration-dependent sub-objectives such as improving reachability and manipulability and avoiding joint limits to be achieved. Beyond a kinematic formulation of visual servoing, this thesis explores a dynamic visual approach and proposes two new visual servoing laws. Lyapunov theory is used first to prove the stability and convergence of the visual closed loop, then to derive a robust adaptive controller for the combined robot-vision dynamics, yielding a uniformly ultimately bounded solution. Finally, all proposed schemes are validated in simulation and experimentally on the humanoid robot NAO.
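    The task-priority redundancy resolution mentioned above can be sketched with the classical null-space projection: the primary visual task is satisfied exactly, and the secondary objective acts only inside its null space. The joint count and random Jacobians below are placeholders, not NAO's actual kinematics:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 24                                    # illustrative whole-body joint count
J1 = rng.normal(size=(6, n))              # primary (visual) task Jacobian
J2 = rng.normal(size=(n, n))              # secondary task Jacobian (e.g. posture)
v1 = rng.normal(size=6)                   # desired visual task velocity
v2 = rng.normal(size=n)                   # desired secondary task velocity

J1p = np.linalg.pinv(J1)
N1 = np.eye(n) - J1p @ J1                 # null-space projector of primary task
# Secondary task is resolved only within the primary task's null space:
qdot = J1p @ v1 + np.linalg.pinv(J2 @ N1) @ (v2 - J2 @ (J1p @ v1))
```

    Because the second term lies entirely in the null space of `J1`, the resulting joint velocity `qdot` still realizes `v1` exactly while pursuing the secondary objective as well as possible.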

    Learning visual docking for non-holonomic autonomous vehicles

    This paper presents a new method of learning visual docking skills for non-holonomic vehicles by direct interaction with the environment. The method is based on a reinforcement algorithm that speeds up Q-learning by applying memory-based sweeping and enforcing the "adjoining property", a filtering mechanism that only allows transitions between states lying within a fixed distance of each other. The method overcomes some limitations of reinforcement learning techniques when they are employed in applications with continuous non-linear systems, such as car-like vehicles. In particular, a good approximation to the optimal behaviour is obtained with a small look-up table. The algorithm is tested within an image-based visual servoing framework on a docking task. The training time was less than one hour on the real vehicle. In experiments, we show the satisfactory performance of the algorithm.
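    The ingredients above can be sketched in tabular form on a toy 1-D "docking" line: Q-learning with a small look-up table, replay of stored transitions as a crude stand-in for memory-based sweeping, and an adjoining-property filter that stores only transitions between states within a fixed distance of each other. Everything here (states, rewards, hyperparameters) is illustrative, not the vehicle setup from the paper:

```python
import numpy as np

N, GOAL_STATE = 10, 10
ACTIONS = (-1, 1)

def step(s, a):
    # Move along the line, clipped to [0, N]; reward 1 on reaching the dock.
    s2 = min(max(s + a, 0), N)
    return s2, (1.0 if s2 == GOAL_STATE else 0.0)

def train(episodes=200, alpha=0.5, gamma=0.9, max_jump=1):
    rng = np.random.default_rng(2)
    Q = np.zeros((N + 1, len(ACTIONS)))      # small look-up table
    memory = []
    for _ in range(episodes):
        s = 0
        for _ in range(4 * N):               # random exploration
            ai = int(rng.integers(len(ACTIONS)))
            s2, r = step(s, ACTIONS[ai])
            if abs(s2 - s) <= max_jump:      # adjoining-property filter
                memory.append((s, ai, r, s2))
            s = s2
            if s == GOAL_STATE:
                break
        for ms, mai, mr, ms2 in memory:      # sweep over stored transitions
            Q[ms, mai] += alpha * (mr + gamma * Q[ms2].max() - Q[ms, mai])
    return Q

Q = train()
greedy = [ACTIONS[int(np.argmax(Q[s]))] for s in range(N)]
```

    After training, the greedy policy drives every state toward the dock, illustrating how repeated sweeps over stored transitions propagate the goal reward back through the table far faster than single-step Q-learning.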

    Markerless visual servoing on unknown objects for humanoid robot platforms

    To reach precisely for an object with a humanoid robot, it is of central importance to have good knowledge of both the end-effector pose and the object's pose and shape. In this work we propose a framework for markerless visual servoing on unknown objects, which is divided into four main parts: I) a least-squares minimization problem is formulated to find the volume of the object graspable by the robot's hand using its stereo vision; II) a recursive Bayesian filtering technique, based on Sequential Monte Carlo (SMC) filtering, estimates the 6D pose (position and orientation) of the robot's end-effector without the use of markers; III) a nonlinear constrained optimization problem is formulated to compute the desired graspable pose about the object; IV) an image-based visual servo control commands the robot's end-effector toward the desired pose. We demonstrate the effectiveness and robustness of our approach with extensive experiments on the iCub humanoid robot platform, achieving real-time computation, smooth trajectories, and sub-pixel precision.
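    Part II above, SMC-based pose estimation, follows the standard bootstrap (SIR) predict/weight/resample loop. The sketch below tracks a 3-D position from noisy measurements; a full 6D pose uses the same loop with a larger state. The noise levels and the "true" pose are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
TRUE_POSE = np.array([0.3, -0.1, 0.5])     # hypothetical end-effector position

def measure():
    # Noisy visual measurement of the end-effector (no markers needed).
    return TRUE_POSE + rng.normal(scale=0.02, size=3)

def smc_track(n_particles=2000, steps=30, meas_std=0.02, motion_std=0.02):
    particles = rng.uniform(-1.0, 1.0, size=(n_particles, 3))  # diffuse prior
    for _ in range(steps):
        # Predict: diffuse particles with a simple random-walk motion model.
        particles += rng.normal(scale=motion_std, size=particles.shape)
        # Weight: Gaussian measurement likelihood (log-domain for stability).
        z = measure()
        d2 = np.sum((particles - z) ** 2, axis=1)
        logw = -0.5 * d2 / meas_std**2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Resample proportionally to the weights.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return particles.mean(axis=0)

estimate = smc_track()
```

    The posterior mean of the particle cloud then serves as the pose estimate fed to the image-based visual servo controller in part IV.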