
    Attention-controlled acquisition of a qualitative scene model for mobile robots

    Haasch A. Attention-controlled acquisition of a qualitative scene model for mobile robots. Bielefeld (Germany): Bielefeld University; 2007.

    Robots that support humans in dangerous environments, e.g., in manufacturing facilities, have been established for decades. Now, a new generation of service robots is the focus of current research and about to be introduced. These intelligent service robots are intended to support humans in everyday life. To achieve comfortable human-robot interaction with non-expert users, it is imperative for the acceptance of such robots that they provide the interaction interfaces humans are accustomed to from human-human communication. Consequently, intuitive modalities like gestures and spontaneous speech are needed to teach the robot previously unknown objects and locations. The robot can then be entrusted with tasks like fetch-and-carry orders even without extensive training of the user. In this context, this dissertation introduces the multimodal Object Attention System, which offers a flexible integration of common interaction modalities in combination with state-of-the-art image and speech processing techniques from other research projects. To prove the feasibility of the approach, the presented Object Attention System has been successfully integrated on different robotic hardware, in particular the mobile robot BIRON and the anthropomorphic robot BARTHOC of the Applied Computer Science Group at Bielefeld University. In conclusion, the aim of this work, to acquire a qualitative Scene Model with a modular component offering object attention mechanisms, has been successfully achieved, as demonstrated on numerous occasions such as reviews for the EU Integrated Project COGNIRON and public demos.

    A Multi-Modal Object Attention System for a Mobile Robot

    Haasch A, Hofemann N, Fritsch J, Sagerer G. A Multi-Modal Object Attention System for a Mobile Robot. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems. Edmonton, Alberta, Canada: IEEE; 2005: 1499-1504.

    Robot companions are intended for operation in private homes with naive users. For this purpose, they need to be endowed with natural interaction capabilities. Additionally, such robots will need to be taught the unknown objects that are present in private homes. We present a multi-modal object attention system that is able to identify objects referenced by the user with gestures and verbal instructions. The proposed system can detect known and unknown objects and stores newly acquired object information in a scene model for later retrieval. This way, the growing knowledge base of the robot companion improves the interaction quality, as the robot can more easily focus its attention on objects it has been taught previously.
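    The abstract does not specify how the scene model is organised internally. As a minimal sketch only, the Python code below shows one plausible shape for such a store of acquired object information; the field names (label, attributes, position) and the query rule are illustrative assumptions, not the authors' implementation.

        from dataclasses import dataclass, field

        @dataclass
        class SceneObject:
            """One acquired object: symbolic attributes plus sensor-derived data."""
            label: str                                       # name taught by the user, e.g. "cup"
            attributes: dict = field(default_factory=dict)   # verbal attributes, e.g. {"color": "red"}
            position: tuple = (0.0, 0.0, 0.0)                # estimated 3-D position (assumed room frame)

        class SceneModel:
            """Growing knowledge base of objects the robot has been taught."""
            def __init__(self):
                self._objects = []

            def add(self, obj: SceneObject):
                self._objects.append(obj)

            def query(self, label=None, **attrs):
                """Retrieve previously taught objects by label and/or verbal attributes."""
                hits = self._objects
                if label is not None:
                    hits = [o for o in hits if o.label == label]
                for key, value in attrs.items():
                    hits = [o for o in hits if o.attributes.get(key) == value]
                return hits

        model = SceneModel()
        model.add(SceneObject("cup", {"color": "red"}, (1.2, 0.4, 0.8)))
        print(model.query(label="cup", color="red"))

    Storing the symbolic label next to the sensor data is what allows the robot to re-focus its attention on a previously taught object when the user mentions it again.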

    Human-like Person Tracking with an Anthropomorphic Robot

    Spexard TP, Haasch A, Fritsch J, Sagerer G. Human-like Person Tracking with an Anthropomorphic Robot. In: Proc. IEEE Int. Conf. on Robotics and Automation (ICRA). Orlando, Florida: IEEE; 2006: 1286-1292.

    A very important aspect in developing robots capable of human-robot interaction (HRI) is natural, human-like communication. Besides a flexible dialog system and speech understanding, an anthropomorphic appearance has many advantages for the intuitive usage and understanding of a robot. As a consequence of our effort to create an anthropomorphic appearance and to come as close as possible to human-human interaction, we decided to use only human-like sensors, i.e., two cameras and two microphones, for tracking persons, rather than a laser range finder or an omnidirectional camera. Despite the challenge of a limited field of perception, a robust attention system for tracking and interacting with multiple persons simultaneously in real time was created. Our approach is sufficiently generic to work on robots with varying hardware, as long as stereo audio data and images from a video camera are available. Since the architecture is modular, with an XML-based data exchange, we are able to extend the robot's abilities easily.
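    The abstract does not name the localisation algorithm, but with only two microphones the classical way to find a speaker's direction is the interaural time difference (ITD). The sketch below illustrates that standard geometry under assumed values for microphone spacing; it is not necessarily the method used on the robot described above.

        import numpy as np

        def itd_azimuth(left, right, fs, mic_distance=0.2, c=343.0):
            """Estimate the azimuth of a sound source from two microphone signals.

            left, right : 1-D sample arrays from the two microphones
            fs          : sampling rate in Hz
            mic_distance: spacing between the microphones in metres (assumed value)
            c           : speed of sound in m/s
            """
            # Cross-correlate the channels to find the delay in samples.
            corr = np.correlate(left, right, mode="full")
            lag = np.argmax(corr) - (len(right) - 1)
            itd = lag / fs                                # delay in seconds
            # Far-field geometry: itd = mic_distance * sin(theta) / c
            sin_theta = np.clip(itd * c / mic_distance, -1.0, 1.0)
            return np.degrees(np.arcsin(sin_theta))

    An estimate like this is coarse, which is why audio cues are typically fused with visual tracking, as the paper's attention system does.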

    A Flexible Infrastructure for the Development of a Robot Companion with Extensible HRI-Capabilities

    Fritsch J, Kleinehagenbrock M, Haasch A, Wrede S, Sagerer G. A Flexible Infrastructure for the Development of a Robot Companion with Extensible HRI-Capabilities. In: Proc. IEEE Int. Conf. on Robotics and Automation. Barcelona, Spain; 2005: 3419-3425.

    The development of robot companions with natural human-robot interaction (HRI) capabilities is a challenging task, as it requires incorporating various functionalities. Consequently, a flexible infrastructure for controlling module operation and data exchange between modules is proposed, taking into account insights from software system integration. This is achieved by combining a three-layer control architecture containing a flexible control component with a powerful communication framework. The use of XML throughout the whole infrastructure facilitates the ongoing evolutionary development of the robot companion's capabilities.
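    The concrete XML schemas are not given in the abstract. Purely as an illustration of the general idea of modules exchanging self-describing XML messages, the sketch below uses invented element and attribute names.

        import xml.etree.ElementTree as ET

        # A hypothetical message one module might publish to another.
        message = """
        <event module="gesture_detection" timestamp="1234567890.25">
          <pointing>
            <origin x="0.31" y="1.12" z="1.40"/>
            <direction x="0.70" y="-0.10" z="-0.70"/>
          </pointing>
        </event>
        """

        root = ET.fromstring(message)
        origin = root.find("./pointing/origin").attrib
        print(root.get("module"), origin)

    Because each message carries its own structure, a new module can start consuming or producing events without changes to the rest of the system, which is the extensibility the paper argues for.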

    Interactive Object Learning for Robot Companions using Mosaic Images

    Möller B, Posch S, Haasch A, Fritsch J, Sagerer G. Interactive Object Learning for Robot Companions using Mosaic Images. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems. Edmonton, Alberta, Canada: IEEE; 2005: 371-376.

    Natural human-robot interaction (HRI) is a key feature of mobile robot companions collaborating with humans. To achieve natural HRI, multiple communication modalities like vision, speech, and gestures have to be utilized. In addition, capabilities to emulate cognitive processes, e.g., object learning and object recognition, are essential. In this work we present a new approach to interactive object learning enabling a multi-view object representation. To overcome a robot's limitation of having only one viewpoint, we make use of an iconic memory consisting of previously acquired images. As the relevant scene area is unknown during construction of the iconic memory, a representation in the form of mosaic images is applied. The relevant image patches describing an object referenced by the user are selected through an object attention mechanism. The resulting multi-view object representations improve the robustness and flexibility of our interactive approach to object learning.
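    The abstract leaves the mosaic construction itself open. A common way to build such a mosaic is to register each new frame against the existing canvas with a homography and warp it in; the OpenCV sketch below shows that standard technique (not the authors' implementation) and assumes colour images and a pre-allocated canvas.

        import cv2
        import numpy as np

        def add_to_mosaic(mosaic, frame):
            """Warp `frame` into the coordinate frame of `mosaic` and blend it in."""
            orb = cv2.ORB_create(1000)
            k1, d1 = orb.detectAndCompute(mosaic, None)
            k2, d2 = orb.detectAndCompute(frame, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            matches = sorted(matches, key=lambda m: m.distance)[:100]
            # queryIdx indexes the mosaic keypoints, trainIdx the frame keypoints.
            src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            h, w = mosaic.shape[:2]
            warped = cv2.warpPerspective(frame, H, (w, h))
            # Keep existing mosaic pixels; fill still-empty areas from the warped frame.
            mask = mosaic.sum(axis=2) == 0
            mosaic[mask] = warped[mask]
            return mosaic

    Accumulating frames this way yields the iconic memory: object patches can later be cut from the mosaic even if the object was never fully visible in a single camera view.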

    Human-style interaction with a robot for cooperative learning of scene objects

    Li S, Haasch A, Wrede B, Fritsch J, Sagerer G. Human-style interaction with a robot for cooperative learning of scene objects. In: Proc. Int. Conf. on Multimodal Interfaces. Trento, Italy: ACM Press; 2005: 151-158.

    In research on human-robot interaction, interest is currently shifting from uni-modal dialog systems to multi-modal interaction schemes. We present a system for human-style interaction with a robot that is integrated on our mobile robot BIRON. To model the dialog, we adopt an extended grounding concept with a mechanism to handle multi-modal input and output, where object references are resolved by interaction with an object attention system (OAS). The OAS integrates multiple inputs from, e.g., the object and gesture recognition systems, and provides the information for a common representation. This representation can be accessed by both modules and combines symbolic verbal attributes with sensor-based features. We argue that such a representation is necessary to achieve robust and efficient information processing.
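    To make the idea of a common representation concrete: the sketch below, with invented attribute names and a deliberately naive scoring rule, shows how a record holding symbolic verbal attributes next to sensor features might be matched against a spoken referring expression. It is an illustrative assumption, not the paper's mechanism.

        # Hypothetical joint representation: symbolic attributes next to sensor features.
        objects = [
            {"id": 1, "type": "cup",  "color": "red",
             "hue_histogram": [0.7, 0.1, 0.2], "position": (1.2, 0.4)},
            {"id": 2, "type": "book", "color": "blue",
             "hue_histogram": [0.1, 0.2, 0.7], "position": (0.3, 1.1)},
        ]

        def resolve_reference(utterance_attrs, objects):
            """Score each object by how many spoken attributes it satisfies."""
            def score(obj):
                return sum(obj.get(k) == v for k, v in utterance_attrs.items())
            return max(objects, key=score)

        # "the red cup" -> {"type": "cup", "color": "red"}
        print(resolve_reference({"type": "cup", "color": "red"}, objects)["id"])  # 1

    Keeping both kinds of information in one record is what lets the dialog system and the OAS ground the same object from different directions.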

    Interacting with a mobile robot: Evaluating gestural object references

    Creating robots able to interact and cooperate with humans in household environments and everyday life is an emerging topic. Our goal is to facilitate human-like and intuitive interaction with such robots. Besides verbal interaction, gestures are a fundamental aspect of human-human interaction. One typical usage of interactive gestures is the referencing of objects. This paper describes a novel integrated vision system combining different algorithms for pose tracking, gesture detection, and object attention in order to enable a mobile robot to resolve gesture-based object references. Results from the evaluation of the individual algorithms as well as the overall system are presented. A total of 20 minutes of video data, collected from four subjects performing almost 500 gestures, is evaluated to demonstrate the current performance of the approach as well as the overall success rate of gestural object references. This demonstrates that our integrated vision system can serve as the gestural front end that enables an interactive mobile robot to engage in multimodal human-robot interaction.
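    Resolving a gestural object reference ultimately means comparing the detected pointing direction with candidate object positions. The sketch below picks the object with the smallest angle to the pointing ray; the geometry and the angular threshold are illustrative assumptions, not values from the paper.

        import numpy as np

        def referenced_object(hand, direction, objects, max_angle_deg=15.0):
            """Return the object closest to the pointing ray, or None.

            hand      : 3-D position of the pointing hand
            direction : 3-D pointing direction (need not be normalised)
            objects   : list of (name, 3-D position) pairs
            """
            d = np.asarray(direction, float)
            d /= np.linalg.norm(d)
            best, best_angle = None, max_angle_deg
            for name, pos in objects:
                v = np.asarray(pos, float) - np.asarray(hand, float)
                v /= np.linalg.norm(v)
                angle = np.degrees(np.arccos(np.clip(d @ v, -1.0, 1.0)))
                if angle < best_angle:
                    best, best_angle = name, angle
            return best

        objs = [("cup", (1.0, 0.2, 0.8)), ("book", (0.1, 1.5, 0.8))]
        print(referenced_object((0.0, 0.0, 1.0), (1.0, 0.2, -0.2), objs))  # cup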

    Modality Integration and Dialog Management for a Robotic Assistant

    Toptsis I, Haasch A, Hüwel S, Fritsch J, Fink GA. Modality Integration and Dialog Management for a Robotic Assistant. In: Proc. European Conf. on Speech Communication and Technology. Lisboa, Portugal; 2005.

    The communication with robotic assistants or companions is a challenging new domain for the use of dialog systems. In contrast to classical spoken-language interfaces, users interact with mobile robots mostly in a multi-modal way. In this paper we present the integration of several modalities in the dialog system of BIRON, the Bielefeld Robot Companion. Besides speech as the main modality, the system integrates deictic gestures and visual scene information in order to resolve object references in a task-oriented dialog. We present example interactions with BIRON and first qualitative results from the "home-tour" scenario defined within the COGNIRON project.

    Interacting with a Mobile Robot: Evaluating Gestural Object References

    Schmidt J, Hofemann N, Haasch A, Fritsch J, Sagerer G. Interacting with a Mobile Robot: Evaluating Gestural Object References. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS). Nice, France: IEEE; 2008.