
    Application of a Natural Language Interface to the Teleoperation of a Mobile Robot

    IFAC Intelligent Components for Vehicles, Seville, Spain, 1998. This paper describes the application of a natural language interface to the teleoperation of a mobile robot. Natural language communication with robots is a major goal, since it allows non-expert users to communicate with a robot in their own language. This communication has to be flexible enough to allow the user to control the robot with minimal knowledge of its details. To this end, the user must be able to perform simple operations as well as high-level tasks that involve multiple elements of the system. For the latter, an adequate representation of the knowledge about the robot and its environment will allow the creation of a plan of simple actions whose execution will result in the accomplishment of the requested task.
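
    The abstract does not spell out how a high-level request is expanded into primitive actions, so the following is only a minimal illustrative sketch of that idea: a parsed command plus a toy knowledge base of rooms yields a plan of simple move actions. All names here (ROOMS, Primitive, plan_task) are hypothetical and not from the paper.

    ```python
    # Hypothetical sketch: expanding a parsed natural-language command into a
    # plan of primitive teleoperation actions, given a toy environment model.
    from dataclasses import dataclass

    # Toy environment knowledge: adjacency between named locations.
    ROOMS = {
        "lab": ["corridor"],
        "corridor": ["lab", "office"],
        "office": ["corridor"],
    }

    @dataclass
    class Primitive:
        """A simple action the robot can execute directly."""
        name: str
        argument: str

    def shortest_path(start: str, goal: str) -> list[str]:
        """Breadth-first search over the room graph."""
        frontier, seen = [[start]], {start}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == goal:
                return path
            for nxt in ROOMS[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        raise ValueError("no path found")

    def plan_task(current: str, command: dict) -> list[Primitive]:
        """Expand a high-level request into a sequence of primitive actions."""
        if command["task"] == "go_to":
            path = shortest_path(current, command["target"])
            return [Primitive("move_to", room) for room in path[1:]]
        if command["task"] == "report_position":
            return [Primitive("say", current)]
        raise ValueError(f"unknown task: {command['task']}")

    # Example: the parsed command "go to the office" becomes two move_to actions.
    print(plan_task("lab", {"task": "go_to", "target": "office"}))
    ```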

    Shared Autonomy via Hindsight Optimization

    In shared autonomy, user input and robot autonomy are combined to control a robot to achieve a goal. Often, the robot does not know a priori which goal the user wants to achieve, and must both predict the user's intended goal and assist in achieving that goal. We formulate the problem of shared autonomy as a Partially Observable Markov Decision Process with uncertainty over the user's goal. We utilize maximum entropy inverse optimal control to estimate a distribution over the user's goal based on the history of inputs. Ideally, the robot assists the user by solving for an action which minimizes the expected cost-to-go for the (unknown) goal. As solving the POMDP to select the optimal action is intractable, we use hindsight optimization to approximate the solution. In a user study, we compare our method to a standard predict-then-blend approach. We find that our method enables users to accomplish tasks more quickly while utilizing less input. However, when asked to rate each system, users were mixed in their assessment, citing a tradeoff between maintaining control authority and accomplishing tasks quickly.
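
    To make the hindsight-optimization idea concrete, here is a minimal sketch of QMDP-style action selection: each candidate action is scored by its expected cost-to-go under the current goal posterior, treating each goal as if it were known. The goal-inference step, cost function, and all names below are illustrative stand-ins, not the authors' implementation.

    ```python
    # Minimal sketch of hindsight optimization for shared autonomy.
    import numpy as np

    def goal_posterior(input_history, goals):
        """Stand-in for the MaxEnt-IOC goal inference step: returns P(goal | inputs).
        A uniform distribution is used here purely for illustration."""
        return np.ones(len(goals)) / len(goals)

    def cost_to_go(state, action, goal):
        """Assumed per-goal cost: distance remaining to the goal after the action."""
        return np.linalg.norm((state + action) - goal)

    def select_action(state, actions, input_history, goals):
        """Pick the action minimizing expected cost-to-go under the goal posterior.
        This is the hindsight approximation: each goal is scored as if it were
        known, and the scores are averaged over the belief."""
        belief = goal_posterior(input_history, goals)
        expected_costs = [
            sum(b * cost_to_go(state, a, g) for b, g in zip(belief, goals))
            for a in actions
        ]
        return actions[int(np.argmin(expected_costs))]

    # Example: 2-D state, two candidate unit moves, two candidate goals.
    state = np.array([0.0, 0.0])
    actions = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    goals = [np.array([3.0, 0.0]), np.array([3.0, 1.0])]
    print(select_action(state, actions, [], goals))
    ```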

    Plan recognition for space telerobotics

    Current research on space telerobots has largely focused on two problem areas: executing remotely controlled actions (the tele part of telerobotics) or planning to execute them (the robot part). This work has largely ignored one of the key aspects of telerobots: the interaction between the machine and its operator. For this interaction to be felicitous, the machine must successfully understand what the operator is trying to accomplish with particular remote-controlled actions. Only with an understanding of the operator's purpose for performing these actions can the robot intelligently assist the operator, perhaps by warning of possible errors or taking over part of the task. There is a need for such an understanding in the telerobotics domain, and an intelligent interface being developed in the chemical process design domain addresses the same issues.

    Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot

    We explore new aspects of assistive living on smart human-robot interaction (HRI) that involve automatic recognition and online validation of speech and gestures in a natural interface, providing social features for HRI. We introduce a whole framework and resources of a real-life scenario for elderly subjects supported by an assistive bathing robot, addressing health and hygiene care issues. We contribute a new dataset, a suite of tools used for data acquisition, and a state-of-the-art pipeline for multimodal learning within the framework of the I-Support bathing robot, with emphasis on audio and RGB-D visual streams. We consider privacy issues by evaluating the depth visual stream along with the RGB, using Kinect sensors. The audio-gestural recognition task on this new dataset yields up to 84.5%, while the online validation of the I-Support system on elderly users accomplishes up to 84% when the two modalities are fused together. The results are promising enough to support further research in the area of multimodal recognition for assistive social HRI, considering the difficulties of the specific task. Upon acceptance of the paper, part of the data will be publicly available.
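
    The abstract mentions fusing the audio and gestural modalities but does not describe the fusion scheme, so the following is only a hedged sketch of one plausible late-fusion approach: weighted log-linear combination of per-command posteriors from each recognizer. The weights, command set, and function names are assumptions for illustration, not details of the I-Support system.

    ```python
    # Hedged sketch of late fusion of audio and gesture recognition scores.
    import numpy as np

    COMMANDS = ["wash_back", "wash_legs", "stop", "repeat"]

    def fuse_scores(audio_probs: np.ndarray,
                    gesture_probs: np.ndarray,
                    audio_weight: float = 0.6) -> str:
        """Weighted log-linear fusion of per-command posteriors from each modality."""
        log_fused = (audio_weight * np.log(audio_probs + 1e-9)
                     + (1.0 - audio_weight) * np.log(gesture_probs + 1e-9))
        return COMMANDS[int(np.argmax(log_fused))]

    # Example: speech is ambiguous between "stop" and "repeat";
    # the gesture channel disambiguates towards "stop".
    audio = np.array([0.05, 0.05, 0.45, 0.45])
    gesture = np.array([0.10, 0.10, 0.70, 0.10])
    print(fuse_scores(audio, gesture))  # -> "stop"
    ```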