41 research outputs found

    Integrating Natural Language and Gesture in a Robotics Domain

    No full text
    Human-computer interfaces facilitate communication, assist in the exchange of information, and process commands and controls, among many other interactions. For our work in the robotics domain, we have concentrated on integrating spoken natural language and natural gesture for command and control of a semiautonomous mobile robot. We assume that both spoken natural language and natural gesture are more user-friendly means of interacting with a mobile robot and that, from the human standpoint, such interactions are easier, since the human is not required to learn additional interaction methods but can rely on "natural" ways of communicating. So-called "synthetic" methods, such as data gloves, require additional learning; this is not the case with natural language and natural gesture. We therefore rely on what is natural to both: spoken language used in conjunction with natural gestures for giving commands. Furthermore, we have been integrating these interactions wit..

    Using Spatial Language in a Human-Robot Dialog

    No full text
    ... to describe their environment, e.g., "There is a desk in front of me and a doorway behind it", and to issue directives, e.g., "Go around the desk and through the doorway." In our research, we have been investigating the use of spatial relationships to establish a natural communication mechanism between people and robots, in particular for novice users. In this paper, the work on robot spatial relationships is combined with a multimodal robot interface developed at the Naval Research Lab. We show how linguistic spatial descriptions and other spatial information can be extracted from an evidence grid map, and how this information can be used in a natural human-robot dialog.
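    The core of turning map geometry into spatial language can be illustrated with a minimal sketch. The snippet below is not the NRL system (which operates over evidence grids); it only shows the underlying idea of mapping an object's bearing relative to the robot's pose onto a coarse linguistic term. All names here are illustrative assumptions.

```python
import math

def spatial_term(robot_x, robot_y, robot_heading, obj_x, obj_y):
    """Return a coarse spatial description of an object from the robot's viewpoint.

    Illustrative only: a real system would extract object locations from an
    evidence grid map rather than take coordinates directly.
    """
    # Bearing of the object relative to the robot's heading, normalized to (-pi, pi]
    bearing = math.atan2(obj_y - robot_y, obj_x - robot_x) - robot_heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    deg = math.degrees(bearing)
    if -45 <= deg <= 45:
        return "in front of"
    if 45 < deg <= 135:
        return "to the left of"
    if -135 <= deg < -45:
        return "to the right of"
    return "behind"

# An object 2 m ahead of a robot facing along the +y axis:
print(spatial_term(0.0, 0.0, math.pi / 2, 0.0, 2.0))  # in front of
```

    A dialog system could then compose such terms into utterances like "There is a desk in front of me," or invert the mapping to resolve directives such as "Go behind the desk."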

    A Natural Interface and Unified Skills for a Mobile Robot

    No full text
    interact naturally without having to explicitly state or re-state each expected or desired action when an interruption occurs. We hope to extend goal tracking so that the mobile robot can complete semantically related goals that are not initially specified or that are unknown to the human at the time the initial goal is instantiated. This natural interface is currently in use with a mobile robot. (This work was sponsored by the Office of Naval Research.) Navigation goals and locations are specified by speech and/or with natural gestures. Commands can be interrupted and subsequently completed with fragmentary utterances. To provide the basic underlying skills for navigating in previously unknown environments, we are working to create a mobile robot system that is robust and adaptive in rapidly changing environments. We view integration of these skills as a basic research issue, studying the combination of different, complementary capabilities. One principle tha
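    The goal-tracking behavior described above, where an interrupting command suspends the current goal and the robot resumes it afterward without the user re-stating it, can be sketched as a simple goal stack. This is a hypothetical illustration of the concept, not the NRL implementation; class and method names are assumptions.

```python
class GoalTracker:
    """Minimal sketch: a stack of pending goals, most recent on top."""

    def __init__(self):
        self._stack = []

    def give(self, goal):
        """A new command interrupts and suspends whatever is in progress."""
        self._stack.append(goal)

    def current(self):
        return self._stack[-1] if self._stack else None

    def complete(self):
        """Finishing the active goal resumes the goal it interrupted."""
        done = self._stack.pop()
        return done, self.current()

tracker = GoalTracker()
tracker.give("go to the doorway")
tracker.give("wait")                    # interruption suspends navigation
done, resumed = tracker.complete()      # "wait" is done; navigation resumes
print(done, "->", resumed)              # wait -> go to the doorway
```

    Extending this toward the goals the abstract mentions would mean pushing semantically related subgoals that the human never explicitly stated.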

    Using a Natural Language and Gesture Interface for Unmanned Vehicles

    No full text
    Unmanned vehicles, such as mobile robots, must exhibit adjustable autonomy. They must be able to be self-sufficient when the situation warrants; however, as they interact with each other and with humans, they must exhibit an ability to dynamically adjust their independence or dependence as co-operative agents attempting to achieve some goal. This is what we mean by adjustable autonomy. We have been investigating various modes of communication that enhance a robot's capability to work interactively with other robots and with humans. Specifically, we have been investigating how natural language and gesture can provide a user-friendly interface to mobile robots. We have extended this initial work to include semantic and pragmatic procedures that allow humans and robots to act co-operatively, based on whether or not goals have been achieved by the various agents in the interaction. By processing commands that are either spoken or initiated by clicking buttons on a Personal Digital Assistan..
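    The idea that spoken phrases and PDA button presses funnel into one internal command representation can be shown with a hedged sketch. The tables and tuple format below are assumptions for illustration only; the actual interface's command semantics are richer (including the pragmatic goal-achievement checks the abstract describes).

```python
# Two input modalities mapped to a single internal command form (verb, argument).
SPOKEN = {"go to the door": ("goto", "door"), "stop": ("stop", None)}
BUTTONS = {"BTN_DOOR": ("goto", "door"), "BTN_STOP": ("stop", None)}

def interpret(modality, token):
    """Normalize speech or button input into one shared command tuple."""
    table = SPOKEN if modality == "speech" else BUTTONS
    return table.get(token)

# Both modalities yield the same internal command:
print(interpret("speech", "go to the door"))  # ('goto', 'door')
print(interpret("pda", "BTN_DOOR"))           # ('goto', 'door')
```

    Downstream, a single command handler can then act co-operatively regardless of which agent or modality issued the command.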

    Two Ingredients for My Dinner with R2D2: Integration and Adjustable Autonomy

    No full text
    While the tone of this paper is informal and tongue-in-cheek, we believe we raise two important issues in robotics and multi-modal interface research; namely, how crucial integration of multiple modes of communication is for adjustable autonomy, which in turn is crucial for having dinner with R2D2. Furthermore, we discuss how our multimodal interface to autonomous robots addresses these issues by tracking goals, allowing for both natural and mechanical modes of input, and adjusting the robotic system to ensure that goals are achieved despite interruptions. Introduction: The following situation should sound familiar to most, if not all, of us. You receive your monthly credit card statement, and you have a question about something on the bill. So, you call Customer Service. Once connected, you are asked to press or say a number based on your request. After listening to your various options, you hear the appropriate number to press for Customer Service, and so you eit..