
    Gestures for Manually Controlling a Helping Hand Robot

    Helping hand robots have been the focus of a number of studies and have high potential in modern manufacturing processes and in daily living. Because helping hand robots interact closely with users, it is important to find natural and intuitive user interfaces for interacting with them in various situations. This study describes a set of gestures for interacting with and controlling helping hand robots in situations in which users need to manually control the robot but their hands are not free, for example, when they are holding tools or objects. The gestures are derived from an experimental study that asked participants to propose gestures suitable for controlling primitive robot motions. The selected gestures can be used to control the translation and orientation of the end effector of a helping hand robot while one or both hands are engaged with a task. As an example validating the proposed gestures, we implemented a helping hand robot system to perform a soldering task.
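    To illustrate what controlling "primitive robot motions" with recognized gestures might look like in practice, the minimal sketch below maps gesture labels to incremental Cartesian translation and orientation commands for an end effector. The gesture names, step sizes, and the send_delta interface are illustrative assumptions, not the gesture set or robot API described in the study.

        # Minimal sketch: mapping recognized gesture labels to primitive end-effector
        # motions (translation and orientation steps). Gesture names, step sizes, and
        # the robot interface are assumptions for illustration only.

        from dataclasses import dataclass

        @dataclass
        class MotionDelta:
            dx: float = 0.0      # translation step (metres)
            dy: float = 0.0
            dz: float = 0.0
            droll: float = 0.0   # orientation step (radians)
            dpitch: float = 0.0
            dyaw: float = 0.0

        STEP_T = 0.01   # 1 cm per recognized gesture (assumed)
        STEP_R = 0.05   # ~3 degrees per recognized gesture (assumed)

        # Hypothetical gesture vocabulary -> primitive motion of the end effector.
        GESTURE_TO_DELTA = {
            "point_left":  MotionDelta(dx=-STEP_T),
            "point_right": MotionDelta(dx=+STEP_T),
            "push_away":   MotionDelta(dy=+STEP_T),
            "pull_back":   MotionDelta(dy=-STEP_T),
            "raise_hand":  MotionDelta(dz=+STEP_T),
            "lower_hand":  MotionDelta(dz=-STEP_T),
            "twist_cw":    MotionDelta(dyaw=-STEP_R),
            "twist_ccw":   MotionDelta(dyaw=+STEP_R),
        }

        def apply_gesture(label: str, send_delta) -> bool:
            """Translate a recognized gesture label into a motion command.

            `send_delta` is a placeholder callable standing in for whatever
            Cartesian jog command the arm controller actually exposes.
            """
            delta = GESTURE_TO_DELTA.get(label)
            if delta is None:
                return False   # unrecognized or idle gesture: do nothing
            send_delta(delta)
            return True

        if __name__ == "__main__":
            # Example: print the commands instead of driving a real robot.
            apply_gesture("raise_hand", print)
            apply_gesture("twist_ccw", print)

    One design point such a mapping makes concrete is that each gesture triggers a small, bounded increment, so a misrecognized gesture moves the end effector only slightly rather than commanding a large motion.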

    User-defined gestures for controlling primitive motions of an end effector

    In designing and developing a gesture recognition system, it is crucial to know the characteristics of the gestures selected to control, for example, an end effector of a robot arm. We conducted an experiment to collect a set of user-defined gestures and investigate their characteristics for controlling primitive motions of an end effector in human–robot collaboration. We recorded 152 gestures from 19 volunteers by presenting virtual robotic arm movements to the participants and then asking them to think of and perform gestures that would cause those motions. We found that the hands were the parts of the body used most often for gesture articulation, even when the participants were holding tools and objects with both hands; a number of participants used one- and two-handed gestures interchangeably; gestures were performed consistently across all pairs of reversible gestures; and the participants expected better recognition performance for gestures that were easy to think of and perform. These findings are expected to be useful as guidelines for creating a gesture set for controlling robotic arms according to natural user behaviors.

    SSL-Vision: The shared vision system for the RoboCup Small Size League

    The current RoboCup Small Size League rules allow every team to set up its own global vision system as the primary sensor. This option, which is used by all participating teams, imposes several organizational limitations and thus impairs the league's progress. Additionally, most teams have converged on very similar solutions and have produced few significant research results on this global vision problem in recent years. Hence the responsible committees decided to migrate to a shared vision system (including shared vision hardware) for all teams by 2010. This system, named SSL-Vision, is currently being developed by volunteers from participating teams. In this paper, we describe the current state of SSL-Vision, i.e., its software architecture and the approaches used for image processing and camera calibration, together with the intended process for its introduction and its use beyond the scope of the Small Size League.
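    Since SSL-Vision broadcasts its detection and geometry data to all teams over the network, a short client-side sketch may help readers see how the shared system is consumed. The sketch below receives raw packets over UDP multicast; the group address and port (224.5.23.2:10006) are the commonly used defaults but should be verified against the SSL-Vision configuration, and decoding the protobuf-encoded wrapper messages (omitted here) would use the classes generated from the .proto files shipped with SSL-Vision.

        # Minimal sketch of a client receiving SSL-Vision packets over UDP multicast.
        # Multicast group/port are assumed defaults; packet payloads are
        # protobuf-encoded wrapper messages whose parsing is not shown.

        import socket
        import struct

        MCAST_GROUP = "224.5.23.2"   # assumed default multicast group
        MCAST_PORT = 10006           # assumed default port

        def open_vision_socket() -> socket.socket:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", MCAST_PORT))
            # Join the multicast group on all interfaces.
            mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            return sock

        if __name__ == "__main__":
            sock = open_vision_socket()
            while True:
                data, addr = sock.recvfrom(65536)
                # 'data' holds one protobuf-encoded detection/geometry packet;
                # a real client would decode it with the generated protobuf classes.
                print(f"received {len(data)} bytes from {addr}")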