
    Use of 3D vision for fine robot motion

    An integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate the vision system and the manipulator independently, and then tie them together via a common mapping into the task space. In other words, both vision and robot refer to a common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and problems encountered.
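    The "common absolute frame" calibration chain described above can be sketched as follows: the camera and the robot base are each calibrated against one shared world frame, and a target observed in the camera frame is mapped into the robot base frame by composing the two transforms. This is an illustrative sketch only; all matrix values below are made-up example calibrations, not data from the paper.

    ```python
    def mat_mul(a, b):
        """Multiply two 4x4 homogeneous transforms (row-major lists)."""
        return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    def mat_vec(m, p):
        """Apply a 4x4 homogeneous transform to a 3D point."""
        x, y, z = p
        v = (x, y, z, 1.0)
        out = [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]
        return out[0], out[1], out[2]

    def invert_rigid(m):
        """Invert a rigid transform (rotation + translation only)."""
        r = [[m[j][i] for j in range(3)] for i in range(3)]            # R^T
        t = [-sum(r[i][j] * m[j][3] for j in range(3)) for i in range(3)]
        return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0, 0, 0, 1]]

    # Hypothetical calibrations of camera and robot base against the
    # shared world frame (identity rotations for readability).
    world_T_cam = [[1, 0, 0, 0.5], [0, 1, 0, 0.0], [0, 0, 1, 1.2], [0, 0, 0, 1]]
    world_T_base = [[1, 0, 0, -0.3], [0, 1, 0, 0.1], [0, 0, 1, 0.0], [0, 0, 0, 1]]

    # Compose: base <- world <- camera, then map a camera-frame target
    # into the robot base frame.
    base_T_cam = mat_mul(invert_rigid(world_T_base), world_T_cam)
    target_in_base = mat_vec(base_T_cam, (0.1, 0.2, 0.8))
    ```

    The precision difficulty the abstract raises is visible here: any error in either `world_T_cam` or `world_T_base` propagates directly into `base_T_cam`, so both calibrations must agree on the world frame to high accuracy.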

    Multi-cue 3D Object Recognition in Knowledge-based Vision-guided Humanoid Robot System

    Abstract — A vision-based object recognition subsystem of a knowledge-based humanoid robot system is presented. A humanoid robot system for real-world service applications must integrate an object recognition subsystem and a motion planning subsystem in both mobility and manipulation tasks. These requirements call for a vision system capable of self-localization for navigation tasks and object recognition for manipulation tasks, while communicating with the motion planning subsystem. In this paper, we describe the design and implementation of a knowledge-based visual 3D object recognition system with multi-cue integration using a particle filter technique. The particle filter provides very robust object recognition performance, and the knowledge-based approach enables the robot to perform both object localization and self-localization with movable/fixed information. Since this object recognition subsystem shares knowledge with the motion planning subsystem, we are able to generate vision-guided humanoid behaviors without considering visual processing functions. Finally, in order to demonstrate the generality of the system, we demonstrated several vision-based humanoid behavior experiments in a daily-life environment.
    Fig. 1. Behavior example of the knowledge-based vision-guided humanoid system.
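    The multi-cue particle-filter idea above can be sketched in miniature: particles hypothesize an object state, each particle is weighted by the product of its per-cue likelihoods (an independent-cue assumption), and the set is resampled. The two "cues" here are synthetic scalar likelihoods standing in for, say, a color cue and an edge cue; all names and values are hypothetical, not the paper's implementation.

    ```python
    import math
    import random

    random.seed(0)

    TRUE_X = 2.0  # hidden 1-D object position both cues respond to

    def cue_color(x):
        """Synthetic likelihood from a 'color' cue."""
        return math.exp(-(x - TRUE_X) ** 2 / 0.5)

    def cue_edge(x):
        """Synthetic likelihood from an 'edge' cue (broader, noisier)."""
        return math.exp(-(x - TRUE_X) ** 2 / 1.0)

    # 1. Sample particles from a broad prior over the object state.
    particles = [random.uniform(-5, 5) for _ in range(500)]

    # 2. Multi-cue weighting: product of the individual cue likelihoods.
    weights = [cue_color(p) * cue_edge(p) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]

    # 3. Systematic resampling.
    step = 1.0 / len(particles)
    u = random.uniform(0, step)
    cum, i, resampled = weights[0], 0, []
    for _ in range(len(particles)):
        while u > cum and i < len(weights) - 1:
            i += 1
            cum += weights[i]
        resampled.append(particles[i])
        u += step

    # Posterior mean as the state estimate.
    estimate = sum(resampled) / len(resampled)
    ```

    Multiplying cue likelihoods is what makes the fusion robust: a particle must be plausible under every cue to survive resampling, so a single failing cue rarely hijacks the estimate.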

    Articulated Object Tracking from Visual Sensory Data for Robotic Manipulation

    In order for a robot to manipulate an articulated object, it needs to know the object's state (i.e. its pose); that is to say, where and in which configuration it is. The result of the object's state estimation is provided as feedback to the controller to compute appropriate robot motion and achieve the desired manipulation outcome. This is the main topic of this thesis, where articulated object state estimation is solved using visual feedback. Vision-based servoing is implemented in a Quadratic Programming task-space control framework to enable a humanoid robot to perform articulated object manipulation. We thoroughly developed our methodology for vision-based articulated object state estimation on these bases. We demonstrate its efficiency by assessing it in several real experiments involving the HRP-4 humanoid robot. We also propose to combine machine learning and edge extraction techniques to achieve markerless, real-time and robust visual feedback for articulated object manipulation.
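    The visual-servoing loop the thesis summary describes can be illustrated with a toy closed-loop step: drive the image-feature error to zero with a velocity command v = -λ L⁻¹ e. The thesis solves this inside a QP task-space controller; the sketch below replaces the QP with a plain linear solve on a hypothetical 2×2 interaction (feature Jacobian) matrix, for illustration only.

    ```python
    GAIN = 0.5
    L = [[1.0, 0.2],
         [0.0, 1.0]]   # hypothetical interaction matrix, not from the thesis

    def solve_2x2(m, b):
        """Solve m @ x = b for a 2x2 system via Cramer's rule."""
        det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
        return [(b[0] * m[1][1] - m[0][1] * b[1]) / det,
                (m[0][0] * b[1] - b[0] * m[1][0]) / det]

    def servo_step(features, desired, dt=0.1):
        """One closed-loop update: error -> velocity command -> new features."""
        err = [f - d for f, d in zip(features, desired)]
        v = [-GAIN * x for x in solve_2x2(L, err)]   # v = -gain * L^-1 * e
        # Integrate the linearized feature motion: s_dot = L @ v.
        s_dot = [sum(L[i][j] * v[j] for j in range(2)) for i in range(2)]
        return [f + dt * s for f, s in zip(features, s_dot)]

    s = [1.0, -0.5]      # current image features (made-up values)
    target = [0.0, 0.0]  # desired image features
    for _ in range(100):
        s = servo_step(s, target)
    ```

    In the real system the same error term enters the QP cost, which additionally enforces joint limits and task priorities that the raw least-squares step above ignores.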

    Deep Object-Centric Representations for Generalizable Robot Learning

    Full text link
    Robotic manipulation in complex open-world scenarios requires both reliable physical manipulation skills and effective, generalizable perception. In this paper, we propose a method where general-purpose pretrained visual models serve as an object-centric prior for the perception system of a learned policy. We devise an object-level attentional mechanism that can be used to determine relevant objects from a few trajectories or demonstrations, and then immediately incorporate those objects into a learned policy. A task-independent meta-attention locates possible objects in the scene, and a task-specific attention identifies which objects are predictive of the trajectories. The scope of the task-specific attention is easily adjusted by showing demonstrations with distractor objects or with diverse relevant objects. Our results indicate that this approach exhibits good generalization across object instances using very few samples, and can be used to learn a variety of manipulation tasks using reinforcement learning.
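    The task-specific attention described above can be sketched as a softmax over relevance scores: each detected object's feature vector is scored against a task vector, and the softmax weights pick out the task-relevant objects. The feature vectors and the task query below are invented placeholders, not the paper's learned representations.

    ```python
    import math

    def softmax(scores):
        """Numerically stable softmax over a list of scores."""
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    def attend(object_feats, task_query):
        """Weight each object by dot-product relevance to the task query."""
        scores = [sum(f * q for f, q in zip(feat, task_query))
                  for feat in object_feats]
        return softmax(scores)

    # Three "detected objects" (e.g. target mug plus two distractors),
    # each summarized by a hypothetical 2-D feature vector.
    objects = [[1.0, 0.0], [0.1, 0.9], [0.0, 1.0]]
    task_query = [5.0, 0.0]   # hypothetical task vector favoring the first cue

    weights = attend(objects, task_query)
    ```

    Showing demonstrations with distractors, as the abstract describes, amounts to shaping `task_query` so that distractor features score low while diverse relevant objects all score high.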

    DoorGym: A Scalable Door Opening Environment And Baseline Agent

    In order to practically implement the door opening task, a policy ought to be robust to a wide distribution of door types and environment settings. Reinforcement Learning (RL) with Domain Randomization (DR) is a promising technique to enforce policy generalization; however, there are only a few accessible training environments inherently designed to train agents in domain-randomized settings. We introduce DoorGym, an open-source door opening simulation framework designed to utilize domain randomization to train a stable policy. We intend for our environment to lie at the intersection of domain transfer, practical tasks, and realism. We also provide baseline Proximal Policy Optimization and Soft Actor-Critic implementations, which achieve success rates ranging from 0% up to 95% for opening various types of doors in this environment. Moreover, a real-world transfer experiment shows the trained policy is able to work in the real world. Environment kit available here: https://github.com/PSVL/DoorGym
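    The domain-randomization idea can be sketched very simply: each training episode samples a fresh door configuration, so the policy cannot overfit to any single door. The parameter names and ranges below are invented for illustration; they are not DoorGym's actual configuration API.

    ```python
    import random

    def sample_door(rng):
        """Sample one hypothetical randomized door configuration."""
        return {
            "knob_type": rng.choice(["round", "lever", "pull"]),
            "hinge_friction": rng.uniform(0.1, 2.0),
            "door_mass_kg": rng.uniform(2.0, 15.0),
            "handle_height_m": rng.uniform(0.8, 1.1),
        }

    # One randomized configuration per training episode.
    rng = random.Random(42)
    episodes = [sample_door(rng) for _ in range(1000)]
    ```

    A policy trained across such a distribution sees friction, mass, and knob geometry vary on every episode, which is what makes sim-to-real transfer of the kind the abstract reports plausible.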