14 research outputs found

    Active vision-based localization for robots in a home-tour scenario

    Self-localization is a crucial task for mobile robots. It is not only a requirement for autonomous navigation but also provides contextual information to support human-robot interaction (HRI). In this paper we present an active vision-based localization method for integration into a complex robot system operating in human interaction scenarios (e.g. a home-tour) in a real-world apartment. The holistic features used are robust to illumination and structural changes in the scene. The system uses only a single pan-tilt camera, shared between different vision applications running in parallel, to reduce the number of sensors. Additional information from other modalities (such as laser scanners) can be used, profiting from integration into an existing system. The camera view can be actively adapted, and the evaluation showed that different rooms can be discerned.
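The abstract above mentions holistic (global) image features that are matched to discern rooms. As a purely hypothetical sketch of the general idea — not the paper's actual method — a coarse global colour histogram per camera view can be compared by nearest neighbour against stored room signatures; all names and the histogram choice are assumptions for illustration:

```python
import numpy as np

def holistic_signature(image, bins=8):
    """Global (holistic) colour histogram of a view, normalised to sum 1.

    `image` is an HxWx3 uint8 array; a coarse histogram is one common
    choice of appearance feature that tolerates small scene changes.
    """
    hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3
    )
    hist = hist.flatten()
    return hist / hist.sum()

def classify_room(view, room_signatures):
    """Return the room whose stored signature is closest to the current view."""
    sig = holistic_signature(view)
    return min(
        room_signatures,
        key=lambda room: np.linalg.norm(room_signatures[room] - sig),
    )
```

With an actively controlled pan-tilt camera, several such views could be classified and their votes pooled per room.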

    Towards Tutoring an Interactive Robot

    Wrede B, Rohlfing K, Spexard TP, Fritsch J. Towards tutoring an interactive robot. In: Hackel M, ed. Humanoid Robots, Human-like Machines. ARS; 2007: 601-612.
    Many classical approaches developed so far for learning in a human-robot interaction setting have focussed on rather low-level motor learning by imitation. Some doubts, however, have been cast on whether higher-level functioning will be achieved with this approach. Higher-level processes include, for example, the cognitive capability to assign meaning to actions in order to learn from the tutor. Such capabilities require that an agent not only be able to mimic the motor movement of the action performed by the tutor; rather, it must understand the constraints, the means, and the goal(s) of an action in the course of its learning process. Further support for this hypothesis comes from parent-infant instruction, where it has been observed that parents are very sensitive and adaptive tutors who modify their behavior to the cognitive needs of their infant. Based on these insights, we have started our research agenda on analyzing and modeling learning in a communicative situation by analyzing parent-infant instruction scenarios with automatic methods. Results confirm the well-known observation that parents modify their behavior when interacting with their infant. We assume that these modifications do not only serve to keep the infant's attention but do indeed help the infant to understand the actual goal of an action, including relevant information such as constraints and means, by enabling it to structure the action into smaller, meaningful chunks. We were able to determine first objective measurements from video as well as audio streams that can serve as cues for this information in order to facilitate learning of actions.

    A Memory-based Software Integration for Development in Autonomous Robotics

    Spexard TP, Siepmann F, Sagerer G. A Memory-based Software Integration for Development in Autonomous Robotics. In: International Conference on Intelligent Autonomous Systems. Baden-Baden, Germany; 2008: 49-53.
    Over the last decade of development in non-industrial robotics, the growing impact of service and entertainment robots on daily life has turned them from pure science fiction into a serious scientific subject. Beginning with the first tour-guide robots with poor cognitive abilities in museums or huge office buildings, sociable robots operating in households are nowadays within sight of scientists. But many questions remain unsolved about how to handle everyday tasks such as laying the table, or even "simpler" ones such as detecting objects in unstructured areas under varying lighting conditions. The strong need to evaluate and exchange different approaches and abilities across multiple robotic demonstrators under real-world conditions is therefore also a crucial aspect in the development of system architectures. In this paper an architecture is described that provides strong support for the simple exchange and integration of new robot abilities.

    Human-oriented interaction with an anthropomorphic robot

    A very important aspect in developing robots capable of human-robot interaction (HRI) is the research in natural, human-like communication, and subsequently, the development of a research platform with multiple HRI capabilities for evaluation. Besides a flexible dialog system and speech understanding, an anthropomorphic appearance has the potential to support intuitive usage and understanding of a robot, e.g., human-like facial expressions and deictic gestures can as well be produced and also understood by the robot. As a consequence of our effort in creating an anthropomorphic appearance and to come close to a human- human interaction model for a robot, we decided to use human-like sensors, i.e., two cameras and two microphones only, in analogy to human perceptual capabilities too. Despite the challenges resulting from these limits with respect to perception, a robust attention system for tracking and interacting with multiple persons simultaneously in real time is presented. The tracking approach is sufficiently generic to work on robots with varying hardware, as long as stereo audio data and images of a video camera are available. To easily implement different interaction capabilities like deictic gestures, natural adaptive dialogs, and emotion awareness on the robot, we apply a modular integration approach utilizing XML-based data exchange. The paper focuses on our efforts to bring together different interaction concepts and perception capabilities integrated on a humanoid robot to achieve comprehending human-oriented interaction
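The XML-based data exchange between modules mentioned above can be sketched generically. The element and attribute names (`percept`, `id`, `pan`) and the `face_tracker` source are hypothetical illustrations, not the robot's actual message schema:

```python
import xml.etree.ElementTree as ET

def make_person_percept(person_id, pan_deg, source):
    """Serialise a (hypothetical) person percept as an XML message,
    as a producing module such as a tracker might emit it."""
    msg = ET.Element("percept", type="person", source=source)
    ET.SubElement(msg, "id").text = str(person_id)
    ET.SubElement(msg, "pan").text = f"{pan_deg:.1f}"
    return ET.tostring(msg, encoding="unicode")

def read_person_percept(xml_text):
    """Parse the message back into a plain dict, as a consuming module
    (e.g. the attention system) might."""
    msg = ET.fromstring(xml_text)
    return {
        "type": msg.get("type"),
        "source": msg.get("source"),
        "id": int(msg.findtext("id")),
        "pan": float(msg.findtext("pan")),
    }
```

Because producers and consumers agree only on the message format rather than on each other's internals, new capabilities can be attached or exchanged without touching existing modules — the property the abstract credits to the modular integration approach.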

    Make room for me - A spatial and situational movement concept in HRI

    Peters A, Spexard TP, Weiß P, Hanheide M. Make room for me - A spatial and situational movement concept in HRI. In: Workshop on Behavior Monitoring and Interpretation - Well Being. 2009.

    Oops, Something Is Wrong - Error Detection and Recovery for Advanced Human-Robot-Interaction

    Spexard TP, Hanheide M, Li S, Wrede B. Oops, Something Is Wrong - Error Detection and Recovery for Advanced Human-Robot-Interaction. In: Proc. of the Workshop on Social Interaction with Intelligent Indoor Robots at the Int. Conf. on Robotics and Automation. 2008.
    A matter of course for researchers and developers of state-of-the-art technology for human-computer or human-robot interaction is to create systems that not only fulfill a certain task precisely; they must also provide strong robustness against internal and external errors, as well as user-dependent application errors. Especially when creating service robots for a variety of applications, or robots accompanying humans in everyday situations, sufficient error robustness is crucial for acceptance by users. But experience shows that operating such systems under real-world conditions with inexperienced users is an extremely challenging task that is still not solved satisfactorily. In this paper we present an approach for handling both internal errors and application errors within an integrated system capable of performing extended HRI on different robotic platforms and in unspecified surroundings such as a real-world apartment. Based on the experience gathered from user studies and from evaluating integrated systems in the real world, we implemented several ways to generalize and handle unexpected situations. Adding this kind of error awareness to HRI systems, in cooperation with the interaction partner, avoids getting stuck in an unexpected situation or state and handles mode confusion. Instead of shouldering the enormous effort of accounting for all possible problems, this paper proposes a more general solution and underpins it with findings from naive user studies. This enhancement is crucial for the development of a new generation of robots since, however diligent the preparations, no one can predict how an interaction with a robotic system will develop and what kind of environment it has to cope with.
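The generic error awareness described above — recovering from unexpected failures rather than enumerating every possible problem in advance — can be sketched as a retry-with-recovery wrapper. The callables here are hypothetical stand-ins for components of an integrated system, not the paper's implementation:

```python
def run_with_recovery(action, recover, retries=1):
    """Execute a robot action; on any failure, invoke a single generic
    recovery strategy (e.g. informing the interaction partner and
    re-trying) instead of handling each error class separately.

    `action` is a zero-argument callable; `recover` receives the
    exception and the attempt index. Returns the action's result, or
    None if every attempt failed.
    """
    for attempt in range(retries + 1):
        try:
            return action()
        except Exception as err:
            recover(err, attempt)  # generic handling keeps the system unstuck
    return None
```

The point of the pattern is the single catch-all path: the interaction can continue (or the partner can be asked for help) regardless of which component failed.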

    Human-like Person Tracking with an Anthropomorphic Robot

    Spexard TP, Haasch A, Fritsch J, Sagerer G. Human-like Person Tracking with an Anthropomorphic Robot. In: Proc. IEEE Int. Conf. on Robotics and Automation (ICRA). Orlando, Florida: IEEE; 2006: 1286-1292.
    A very important aspect in developing robots capable of human-robot interaction (HRI) is natural, human-like communication. Besides a flexible dialog system and speech understanding, an anthropomorphic appearance has many advantages for intuitive usage and understanding of a robot. As a consequence of our effort to create an anthropomorphic appearance and to come as close as possible to human-human interaction, we decided to use human-like sensors only, i.e., two cameras and two microphones, not using a laser range finder or an omnidirectional camera for tracking persons. Despite the challenge of a limited field of perception, a robust attention system for tracking and interacting with multiple persons simultaneously in real time was created. Our approach is sufficiently generic to work on robots with varying hardware, as long as stereo audio data and images from a video camera are available. Since the architecture is designed modularly with XML-based data exchange, we are able to extend the robot's abilities easily.

    Hey robot, get out of my way: survey on a spatial and situational movement concept in HRI

    Mobile robots are already applied in factories and hospitals, though merely to carry out a distinct task. It is envisioned that robots will soon assist in households. Those service robots will have to cope with a variety of situations and tasks, and of course with sophisticated human-robot interactions (HRI). A robot therefore not only has to consider social rules with respect to proxemics; it must also detect which (interaction) situation it is in and act accordingly. With respect to spatial HRI, we concentrate on the use of non-verbal communication. This chapter stresses the meaning of both machine movements as signals towards a human and human body language. Considering these aspects will make interaction simpler and smoother. An observational study is presented to acquire a concept of spatial prompting by a robot and by a human. When a person and a robot meet in a narrow hallway in order to pass by, they have to make room for each other. But how can a robot make sure that both really want to pass by rather than start an interaction? This especially concerns narrow, non-artificial surroundings. Which social signals are expected by the user, and which, on the other side, can be generated or processed by a robot? The results show what an appropriate passing behaviour is and how to distinguish passage situations from others. The results shed light upon the readability of signals in spatial HRI.