    Visual Gesture-Based Robot Guidance With a Modular Neural System

    We report on the development of the modular neural system "SeeEagle" for the visual guidance of robot pick-and-place actions. Several neural networks are integrated into a single system that visually recognizes human hand pointing gestures in stereo pairs of color video images. The output of the hand-recognition stage is processed by a set of color-sensitive neural networks to determine the Cartesian location of the target object referenced by the pointing gesture. Finally, this information guides a robot to grasp the target object and place it at another location, which can be specified by a second pointing gesture. The current system identifies the location of the referenced target object to within 1 cm in a workspace area of 50x50 cm. In our current environment, this is sufficient to pick and place arbitrarily positioned target objects within the workspace. The system consists of neural networks that perform the tasks of ima..
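    The abstract describes a three-stage pipeline: hand-gesture recognition from stereo images, color-sensitive localization of the referenced object, and robot pick-and-place control. A minimal Python sketch of that data flow follows; every function name and interface here is a hypothetical stand-in, not the authors' actual implementation, and the neural stages are replaced by placeholder computations.

    ```python
    # Hypothetical sketch of a SeeEagle-style modular pipeline.
    # All names and interfaces are assumptions for illustration only.

    def recognize_pointing_hand(left_img, right_img):
        """Stand-in for the neural hand-recognition stage.

        A real system would run neural networks on the stereo pair of
        color images; here we return fixed 2D fingertip estimates
        (pixel coordinates) for each camera view.
        """
        return (120, 80), (118, 80)

    def locate_target(left_pt, right_pt):
        """Stand-in for the color-sensitive localization networks.

        A real system would combine the stereo pointing estimates to a
        Cartesian target position; here we just average and rescale the
        pixel coordinates into a 50x50 cm workspace.
        """
        x = (left_pt[0] + right_pt[0]) / 2.0
        y = (left_pt[1] + right_pt[1]) / 2.0
        return (x / 10.0, y / 10.0)  # workspace coordinates in cm

    def pick_and_place(pick_xy, place_xy):
        """Stand-in for the robot controller executing the action."""
        return f"pick at {pick_xy} cm, place at {place_xy} cm"

    # Data flow: gesture one selects the object, gesture two the drop-off.
    l_pt, r_pt = recognize_pointing_hand(None, None)
    pick = locate_target(l_pt, r_pt)
    place = locate_target((400, 300), (398, 300))
    print(pick_and_place(pick, place))
    ```

    The point of the sketch is the modular structure: each neural stage consumes the previous stage's output through a narrow interface, so individual networks can be retrained or replaced independently.
    
    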