
    Who am I talking with? A face memory for social robots

    In order to provide personalized services and to develop human-like interaction capabilities, robots need to recognize their human partner. Face recognition has been studied exhaustively in the past decade in the context of security systems, with significant progress on huge datasets. However, these capabilities are not the focus when it comes to social interaction situations. Humans are able to remember people seen for only a short moment and to apply this knowledge directly when engaging in conversation. In order to equip a robot with capabilities to recall human interlocutors and to provide user-aware services, we adopt human-human interaction schemes and propose a face memory based on active appearance models integrated with the active memory architecture. This paper presents the concept of the interactive face memory, the applied recognition algorithms, and their embedding into the robot's system architecture. Performance measures are discussed for general face databases as well as scenario-specific datasets.
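    The abstract describes the face memory only conceptually. The following minimal Python sketch illustrates the general enroll-and-recall idea behind such a memory; it is not the paper's AAM-based implementation, and the embedding vectors, cosine-similarity matching, and 0.7 threshold are illustrative assumptions.

```python
# A minimal sketch of a face memory: faces seen briefly are enrolled as
# embedding vectors and later sightings are recalled by nearest-neighbour
# lookup. `embed_face` (not shown) is a hypothetical stand-in for any
# face-embedding model; it is not part of the paper.
import numpy as np

class FaceMemory:
    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold  # minimum cosine similarity to count as a match
        self.embeddings: list[np.ndarray] = []
        self.labels: list[str] = []

    def recall(self, embedding: np.ndarray) -> str | None:
        """Return the remembered identity, or None if the face is unknown."""
        if not self.embeddings:
            return None
        stack = np.stack(self.embeddings)
        sims = stack @ embedding / (
            np.linalg.norm(stack, axis=1) * np.linalg.norm(embedding) + 1e-9
        )
        best = int(np.argmax(sims))
        return self.labels[best] if sims[best] >= self.threshold else None

    def enroll(self, embedding: np.ndarray, label: str) -> None:
        """Store a face seen only briefly so it can be recalled later."""
        self.embeddings.append(embedding)
        self.labels.append(label)
```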

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both for recent and for future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Investigating the influence of situations and expectations on user behavior: empirical analyses in human-robot interaction

    Lohse M. Investigating the influence of situations and expectations on user behavior: empirical analyses in human-robot interaction. Bielefeld (Germany): Bielefeld University; 2010.
    Social sciences are becoming increasingly important for robotics research as work goes on to enable service robots to interact with inexperienced users. This endeavor can only be successful if the robots learn to interpret the users' behavior reliably and, in turn, provide feedback that enables the users to understand the robot. In order to achieve this goal, the thesis introduces an approach that describes the interaction situation as a dynamic construct with different levels of specificity. The situation concept is the starting point for a model which aims to explain the users' behavior. The second important component of the model is the users' expectations with respect to the robot. Both the situation and the expectations are shown to be the main determinants of the users' behaviors. With this theoretical background in mind, the thesis examines interactions from a home tour scenario in which a human teaches a robot about rooms and the objects within them. To analyze the human expectations and behaviors in this situation, two novel methods have been developed. In particular, a quantitative method for the analysis of the users' behavior repertoires (speech, gesture, eye gaze, body orientation, etc.) is introduced. This approach focuses on the interaction level, which describes the interplay between the robot and the user. The second novel method also takes the system level into account, which comprises the robot components and their interplay. This method serves for a detailed task analysis and helps to identify problems that occur in the interaction. By applying these methods, the thesis contributes to the identification of underlying expectations that allow future behavior of the users to be predicted in particular situations. Knowledge about the users' behavior repertoires serves as a cue for the robot about the state of the interaction and the task the users aim to accomplish. It therefore enables robot developers to adapt the interaction models of the components to the situation, the actual user expectations, and the behaviors. The work provides a deeper understanding of the role of expectations in human-robot interaction and contributes to the interaction and system design of interactive robots.

    Modeling Human-Robot-Interaction based on generic Interaction Patterns

    Peltason J. Modeling Human-Robot-Interaction based on generic Interaction Patterns. Bielefeld: Bielefeld University; 2014.

    The robot's vista space: a computational 3D scene analysis

    Swadzba A. The robot's vista space: a computational 3D scene analysis. Bielefeld (Germany): Bielefeld University; 2011.
    The space that can be explored quickly from a fixed viewpoint without locomotion is known as the vista space. In indoor environments, single rooms and room parts follow this definition. The vista space plays an important role in situations with agent-agent interaction, as it is the directly surrounding environment in which the interaction takes place. A collaborative interaction of the partners in and with the environment requires that both partners know where they are, what spatial structures they are talking about, and what scene elements they are going to manipulate. This thesis focuses on the analysis of a robot's vista space. Mechanisms for extracting relevant spatial information are developed which enable the robot to recognize in which place it is, to detect the scene elements the human partner is talking about, and to segment scene structures the human is changing. These abilities are addressed by the proposed holistic, aligned, and articulated modeling approach. For a smooth human-robot interaction, the computed models should be aligned to the partner's representations. Therefore, the design of the computational models combines psychological results from studies on human scene perception with basic physical properties of the perceived scene and the perception itself. The holistic modeling realizes a categorization of room percepts based on the observed 3D spatial layout. Room layouts have room-type-specific features, and fMRI studies have shown that some of the human brain areas active in scene recognition are sensitive to the 3D geometry of a room. With the aligned modeling, the robot is able to extract the hierarchical scene representation underlying a scene description given by a human tutor. Furthermore, it is able to ground the inferred scene elements in its own visual perception of the scene. This modeling follows the assumption that cognition and language schematize the world in the same way, which is visible in the fact that a scene depiction mainly consists of relations between an object and its supporting structure, or between objects located on the same supporting structure. Finally, the articulated modeling equips the robot with a methodology for articulated scene part extraction and fast background learning under the short and disturbed observation conditions typical for human-robot interaction scenarios. Articulated scene parts are detected without a model by observing scene changes caused by their manipulation. Change detection and background learning are closely coupled because change is defined phenomenologically as a variation of structure. This means that change detection involves a comparison of the currently visible structures with a representation in memory. In range sensing, this comparison can be nicely implemented as a subtraction of these two representations. The three modeling approaches enable the robot to enrich its visual perceptions of the surrounding environment, the vista space, with semantic information about meaningful spatial structures useful for further interaction with the environment and the human partner.
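    The phenomenological definition of change as a subtraction of current range data from a remembered representation lends itself to a short illustration. The sketch below shows per-pixel depth subtraction with a slowly adapting background model; the 5 cm threshold and the update rate are illustrative assumptions, not values from the thesis.

```python
# A minimal sketch of change detection on range images: the current depth
# frame is compared with a background representation held in memory, and
# unchanged structure is gradually absorbed into that background.
import numpy as np

def detect_change(depth_now: np.ndarray,
                  depth_background: np.ndarray,
                  threshold_m: float = 0.05) -> np.ndarray:
    """Return a boolean mask of pixels whose depth changed noticeably."""
    diff = np.abs(depth_now - depth_background)
    valid = (depth_now > 0) & (depth_background > 0)  # ignore invalid range readings
    return valid & (diff > threshold_m)

def update_background(background: np.ndarray,
                      depth_now: np.ndarray,
                      change_mask: np.ndarray,
                      rate: float = 0.1) -> np.ndarray:
    """Blend unchanged pixels into the background model (background learning)."""
    updated = background.copy()
    stable = ~change_mask
    updated[stable] = (1 - rate) * background[stable] + rate * depth_now[stable]
    return updated
```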

    How People's Perception of Degree of Control Influences Human-Robot Interaction

    Automated products that seem more sophisticated every day are invading the market. Gmail provides suggestions for email responses and can even track important dates in emails and send notifications about them without the user's permission. As robot companions are only slowly starting to become available to the public, one must wonder: do people expect robots to have the same technological advancements as other tools such as smart phones? Is it really what people want? Some early research on control was done in the Human-Computer Interaction community by Shneiderman & Maes (1997) to discover how much control the user is ready to give up to an intelligent agent. This PhD thesis carries out the same type of investigation for domestic robots by focussing on perception of control in Human-Robot Interaction (HRI). To conduct such an investigation, the user's perception of control is measured through the robot's level of autonomy. As this thesis shows, little research has been done in this area for domestic robot companions. After a first exploratory study was conducted to gain a better understanding of perception of control related to the user's preferred level of robot autonomy for a simple task (cleaning), three questionnaire studies investigated what makes a task high or low critical, and physical or cognitive. The results could then be used to design a full live investigation of how the criticality of a task influences the user's preference for the robot's level of autonomy. The results of this thesis show that, in general, people want robots to be more autonomous, but they still want to have control over the robot for most tasks. People prefer to give instructions to the robot when a cognitive task is performed, regardless of the criticality of the task, and for a low-critical physical task that is entertainment-based. However, for a high-critical physical task, users prefer the robot to be fully autonomous, even if they feel they have less control over it. This is explained by the way participants perceived the performance of the task: when the robot was fully autonomous, they felt the task was done faster and more smoothly than when they had to continuously provide instructions to the robot.

    Data-driven fault detection for component based robotic systems

    Golombek R. Data-driven fault detection for component based robotic systems. Bielefeld: Universität Bielefeld; 2014.
    Advancements in the field of robotics enable the creation of systems with cognitive abilities which are capable of close interaction with humans in real-world scenarios. These systems may take over jobs previously carried out by humans, like house cleaning and cooking, or they can be supportive and act as helpers for elderly people. One consequence of this progress is the increased need for dependable and fault-tolerant behavior of today's robotic systems, because they share the same spaces with humans and operate in close proximity to them. Unreliable and faulty behavior may frustrate users or even endanger them, resulting in poor acceptance of robotic systems. The contribution of this thesis is a fault-detection approach called AuCom. Fault detection is a basic element of fault-tolerant system behavior, which is the ability of a system to autonomously cope with occurring faults while it is engaged in interaction. The approach is designed to tackle the specific needs of cognitive robotic systems, which feature a component-based hardware and software structure and are characterized by frequent changes due to research and development efforts, as well as by uncertain and variant behavior resulting from the interaction in real-world environments. The solution presented in this thesis belongs to the class of data-driven fault-detection approaches. This class of approaches assumes that fault-relevant information can be derived directly from data gathered in the robotic system. The data exploited for fault detection in this work is the communication between the system's components. This communication is represented with features which are common to all elements of the communication (i.e., they are generic). Furthermore, the approach assumes that the current element of the communication can be estimated from the history of the system's communication, and that a deviation from the expected estimate indicates a fault. This assumption is encoded in the model in terms of a novel representation of the communication as a time series of temporal dynamic features. A concrete integration of the approach into a real system is exemplified on our robotic platform BIRON. In addition, exemplary integration solutions for robotic frameworks currently prominent in the literature are discussed in this thesis. The actual capability of the approach to report faults is evaluated on several artificial systems in simulation and on BIRON, both off-line and on-line. The performance is compared to a histogram-based baseline approach.
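    The core assumption, that the current communication element can be estimated from the history and that deviations indicate faults, can be illustrated with a deliberately simplified monitor. The sketch below is not the AuCom model: it reduces the generic features to event inter-arrival times and the estimator to a running mean and standard deviation, both of which are illustrative assumptions.

```python
# A simplified illustration of data-driven fault detection on component
# communication: each event's inter-arrival time is compared against a
# prediction derived from recent history, and a large deviation is
# reported as a potential fault.
from collections import deque
import math

class CommunicationMonitor:
    def __init__(self, window: int = 50, z_threshold: float = 4.0):
        self.intervals: deque = deque(maxlen=window)  # recent inter-arrival times
        self.last_timestamp = None
        self.z_threshold = z_threshold

    def observe(self, timestamp: float) -> bool:
        """Feed one communication event; return True if it looks faulty."""
        faulty = False
        if self.last_timestamp is not None:
            interval = timestamp - self.last_timestamp
            if len(self.intervals) >= 10:  # need some history before judging
                mean = sum(self.intervals) / len(self.intervals)
                var = sum((x - mean) ** 2 for x in self.intervals) / len(self.intervals)
                std = math.sqrt(var) or 1e-9
                faulty = abs(interval - mean) / std > self.z_threshold
            self.intervals.append(interval)
        self.last_timestamp = timestamp
        return faulty
```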

    Teaching robot’s proactive behavior using human assistance

    The final publication is available at link.springer.com.
    In recent years, there has been a growing interest in enabling autonomous social robots to interact with people. However, many questions remain unresolved regarding the social capabilities robots should have in order to perform this interaction in an ever more natural manner. In this paper, we tackle this problem through a comprehensive study of various topics involved in the interaction between a mobile robot and untrained human volunteers for a variety of tasks. In particular, this work presents a framework that enables the robot to proactively approach people and establish friendly interaction. To this end, we provided the robot with several perception and action skills, such as detecting people, planning an approach, and communicating the intention to initiate a conversation while expressing an emotional status. We also introduce an interactive learning system that uses the person's volunteered assistance to incrementally improve the robot's perception skills. As a proof of concept, we focus on the particular task of online face learning and recognition. We conducted real-life experiments with our Tibi robot to validate the framework during the interaction process. Within this study, several surveys and user studies were carried out to reveal the social acceptability of the robot within the context of different tasks.
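    The volunteered-assistance idea can be sketched as a confidence-gated query loop; the sketch below is an assumption about the general mechanism, not the paper's actual system, and the recognize, enroll, and ask_person callables are hypothetical placeholders.

```python
# A minimal sketch of interactive online face learning: when the
# recognizer is unsure, the robot asks the person directly, and the
# volunteered answer is used to incrementally extend the training data.
from typing import Callable

def interactive_face_learning(recognize: Callable[[bytes], tuple],
                              enroll: Callable[[bytes, str], None],
                              ask_person: Callable[[], str],
                              face_image: bytes,
                              min_confidence: float = 0.8) -> str:
    """Return the person's identity, asking for help when unsure."""
    label, confidence = recognize(face_image)
    if confidence < min_confidence:
        label = ask_person()       # e.g., "Sorry, who am I talking with?"
        enroll(face_image, label)  # volunteered answer improves the model
    return label
```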

    Human Robot Interaction through Semantic Integration of Multiple Modalities, Dialog Management, and Contexts

    The hypothesis for this research is that applying the Human-Computer Interaction (HCI) concepts of multiple modalities, dialog management, context, and semantics to Human-Robot Interaction (HRI) will improve the performance of Instruction Based Learning (IBL) compared to using speech alone. We tested the hypothesis by simulating a domestic robot that can be taught to clean a house using a multi-modal interface. We used a method of semantically integrating the inputs from multiple modalities and contexts that multiplies a confidence score for each input by a Fusion Weight, sums the products, and then uses the input with the highest product sum. We also developed an algorithm for determining the Fusion Weights. We concluded that different modalities, contexts, and modes of dialog management do impact human-robot interaction; however, which combination is better depends on the importance of the accuracy of learning what is taught versus the succinctness of the dialog between the user and the robot.
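    The fusion scheme is described precisely enough to sketch: each input's confidence is multiplied by its modality's Fusion Weight, the products are summed per candidate interpretation, and the candidate with the highest sum wins. In the minimal Python rendering below, the modality names and weight values are made-up examples, and the study's algorithm for determining the Fusion Weights is not reproduced.

```python
# A sketch of confidence-weighted fusion across modalities: sum the
# products of confidence and Fusion Weight per candidate interpretation,
# then pick the candidate with the highest total.
FUSION_WEIGHTS = {"speech": 0.5, "gesture": 0.3, "context": 0.2}  # illustrative values

def fuse(inputs: list) -> str:
    """inputs: (modality, interpretation, confidence) triples."""
    scores: dict = {}
    for modality, interpretation, confidence in inputs:
        weight = FUSION_WEIGHTS.get(modality, 0.0)
        scores[interpretation] = scores.get(interpretation, 0.0) + confidence * weight
    return max(scores, key=scores.get)

# Example: speech weakly suggests "clean kitchen", while gesture and
# context both point at "clean living room" -> the fused result is the latter.
print(fuse([("speech", "clean kitchen", 0.6),
            ("gesture", "clean living room", 0.7),
            ("context", "clean living room", 0.8)]))
```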