
    Tele-operation and Human-Robot Interaction


    Iconic gestures for robot avatars, recognition and integration with speech

    © 2016 Bremner and Leonards. Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through a tele-operated humanoid robot avatar. Such avatars have previously been shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and whether they are integrated with speech. Outcomes for robot-performed gestures were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot, when presented alone, are understood by participants almost as well as gestures produced by a human. More importantly, we show that gestures presented as part of a multi-modal communication are integrated with speech equally well for human and robot performances.

    Robot mediated communication: Enhancing tele-presence using an avatar

    In the past few years there has been considerable development in the field of tele-presence, making tele-presence technologies easily accessible and enhancing the experience they provide. Since tele-presence is used not only for tele-presence-assisted group meetings but also in some forms of Computer Supported Cooperative Work (CSCW), these activities have been facilitated as well. One lingering issue is how to properly transmit the presence of non-co-located members to the rest of the group. Current commercially available tele-presence technology can exhibit a limited level of social presence but no physical presence. To address this lack of presence, a system using tele-operated robots as avatars for remote team members is implemented here and its efficacy tested. The testing covers both the level of presence that robot avatars can exhibit and how their efficacy for this task changes with the morphology of the robot. Using two different types of robots as tele-presence avatars, a humanoid robot and an industrial robot arm, it is found that the humanoid robot, given an appropriate control system, is better at exhibiting a social presence. Further, both robots proved significantly better than a voice-only scenario in terms of both cooperative task solving and social presence. These results indicate that, with an appropriate control system, a humanoid robot can be better than an industrial robot in these types of tasks, and they support the validity of aiming for a humanoid design behaving in a human-like way in order to emulate social interactions that are closer to human norms. This has implications for the design of autonomous socially interactive robot systems.

    A modular approach for remote operation of humanoid robots in search and rescue scenarios

    In the present work we have designed and implemented a modular, robust and user-friendly Pilot Interface meant to control humanoid robots in rescue scenarios during dangerous missions. We follow the common approach in which the robot is semi-autonomous and remotely controlled by a human operator. In our implementation, YARP is used both as a communication channel for low-level hardware components and as an interconnecting framework between control modules. The interface can receive the status of these modules continuously and request actions when required. In addition, ROS is used to retrieve data from different types of sensors and to display relevant information about the robot's status, such as joint positions, velocities and torques, force/torque measurements and inertial data. Furthermore, the operator is immersed in a 3D reconstruction of the environment and can manipulate 3D virtual objects. The Pilot Interface allows the operator to control the robot at three different levels. The high-level control deals with human-like actions that involve the whole robot's actuation and perception: for instance, we successfully tele-operated IIT's COmpliant huMANoid (COMAN) platform to execute complex navigation tasks through the composition of elementary walking commands (e.g. [walk_forward, 1m]). The mid-level control generates tasks in Cartesian space, based on the position and orientation of objects of interest (e.g. a valve or a door handle) with respect to a reference frame on the robot. The low-level control operates in joint space and is meant as a last-resort tool to perform fine adjustments (e.g. releasing a trapped limb). Finally, our Pilot Interface is adaptable to different tasks, strategies and pilots' needs, thanks to a modular system architecture that allows single front-end components (e.g. GUI widgets) as well as back-end control modules to be added or removed on the fly.
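    The three control levels described in this abstract can be illustrated with a minimal sketch. The command names, the Command structure, and the mission list below are hypothetical illustrations of the high/mid/low split, not the actual YARP/ROS interfaces of the Pilot Interface.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Command:
        level: str            # "high", "mid", or "low" (hypothetical tagging)
        name: str
        args: dict = field(default_factory=dict)

    def walk_forward(distance_m):
        # High level: a whole-robot action composed from elementary
        # walking commands, e.g. [walk_forward, 1m]
        return Command("high", "walk_forward", {"distance_m": distance_m})

    def reach_pose(frame, xyz, rpy):
        # Mid level: a Cartesian task given the pose of an object of
        # interest relative to a reference frame on the robot
        return Command("mid", "reach", {"frame": frame, "xyz": xyz, "rpy": rpy})

    def set_joint(joint, angle_rad):
        # Low level: direct joint-space adjustment, last-resort fine tuning
        return Command("low", "set_joint", {"joint": joint, "angle_rad": angle_rad})

    # A pilot-composed mission mixing control levels
    mission = [
        walk_forward(1.0),
        reach_pose("torso", (0.4, 0.0, 0.9), (0.0, 0.0, 0.0)),
        set_joint("l_elbow", 0.1),
    ]
    ```

    In a modular architecture like the one described, each such command would be routed to the corresponding back-end control module, which can be attached or detached at runtime.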

    3D MODELLING AND DESIGNING OF DEXTO:EKA:

    The presented paper is concerned with the design of a low-cost, easy-to-use, intuitive interface for the control of a slave anthropomorphic tele-operated robot. Tele-operator "masters", which operate with the robot in real time, have ranged from simple motion-capture devices to more complex force-reflective exoskeletal masters. Our general design approach has been to begin with the definition of desired objective behaviours, rather than with available components and their predefined technical specifications. Once the technical specifications of the components necessary to achieve the desired behaviours are defined, the components are either acquired or, in most cases, developed and built. The control system, which includes feedback approaches acting in collaboration with the physical machinery, is then defined and implemented.

    Human-Machine Interface for Remote Training of Robot Tasks

    Regardless of their industrial or research application, the streamlining of robot operations is limited by the proximity of experienced users to the actual hardware. Be it massive open online robotics courses, crowd-sourcing of robot task training, or remote research on massive robot farms for machine learning, the need for an apt remote Human-Machine Interface is quite prevalent. The paper at hand proposes a novel solution to the programming/training of remote robots, employing an intuitive and accurate user interface that offers all the benefits of working with real robots without imposing delays and inefficiency. The system includes: a vision-based 3D hand-detection and gesture-recognition subsystem, a simulated digital twin of a robot as visual feedback, and the "remote" robot learning/executing trajectories using dynamic motion primitives. Our results indicate that the system is a promising solution to the problem of remote training of robot tasks.
    Comment: Accepted in IEEE International Conference on Imaging Systems and Techniques - IST201
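    The trajectory learning mentioned in this abstract relies on dynamic motion primitives (DMPs). As a rough illustration of that general technique, not the paper's actual implementation, a one-dimensional discrete DMP rollout can be sketched as follows; all parameter values and names are illustrative assumptions.

    ```python
    import numpy as np

    def dmp_rollout(y0, g, w, dt=0.001, tau=1.0,
                    alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
        """Integrate a 1-D discrete DMP for one movement period tau."""
        n_basis = len(w)
        # Gaussian basis centers spaced along the decaying phase variable x
        c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        h = 1.0 / (np.diff(c) ** 2)          # widths from center spacing
        h = np.append(h, h[-1])
        x, y, z = 1.0, float(y0), 0.0        # phase, position, scaled velocity
        traj = [y]
        for _ in range(int(tau / dt)):
            psi = np.exp(-h * (x - c) ** 2)  # basis activations
            # Learned forcing term, scaled by phase and movement amplitude
            f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
            z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
            y += dt / tau * z
            x += dt / tau * (-alpha_x * x)
            traj.append(y)
        return np.array(traj)

    # With zero weights the forcing term vanishes and the spring-damper
    # system simply converges to the goal g from the start position y0.
    traj = dmp_rollout(y0=0.0, g=1.0, w=np.zeros(10))
    ```

    Training a task remotely then amounts to fitting the weights w so that the forcing term reproduces a demonstrated trajectory, which the robot can replay toward arbitrary start and goal positions.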