    Young Researchers ’ Views on the Current and Future State of HRI ABSTRACT

    This paper presents the results of a panel discussion titled “The Future of HRI,” held during an NSF workshop for graduate students on human-robot interaction in August 2006. The panel divided the workshop into groups tasked with inventing models of the field, and then asked these groups their opinions on the future of the field. In general, the workshop participants shared the belief that HRI can and should be seen as a single scientific discipline, despite the fact that it encompasses a variety of beliefs, methods, and philosophies drawn from several “core” disciplines in traditional areas of study. HRI researchers share many interrelated goals, participants felt, and enhancing the lines of communication between different areas would help speed up progress in the field. Common concerns included the unavailability of common robust platforms, the emphasis on human perception over robot perception, and the paucity of longitudinal real-world studies. The authors point to the current lack of consensus on research paradigms and platforms to argue that the field is not yet in the phase that philosopher Thomas Kuhn would call “normal science,” but believe the field shows signs of approaching that phase.

    Comparing a Computer Agent with a Humanoid Robot

    HRI researchers interested in social robots have made large investments in humanoid robots. There is still sparse evidence that people’s responses to robots differ from their responses to computer agents, suggesting that agent studies might serve to test HRI hypotheses. To help us understand the difference between people’s social interactions with an agent and a robot, we experimentally compared people’s responses in a health interview with (a) a computer agent projected either on a computer monitor or life-size on a screen, (b) a remote robot projected life-size on a screen, or (c) a collocated robot in the same room. We found a few behavioral and large attitude differences across these conditions. Participants forgot more and disclosed least with the collocated robot, next with the projected remote robot, and then with the agent. They spent more time with the collocated robot, and their attitudes were most positive toward that robot. We discuss tradeoffs for HRI research of using collocated robots, remote robots, and computer agents as proxies of robots.

    Evaluating the effectiveness of a tutorial dialogue system for self-explanation

    Previous research has shown that self-explanation can be supported effectively in an intelligent tutoring system by simple means such as menus. We now focus on the hypothesis that natural language dialogue is an even more effective way to support self-explanation. We have developed the Geometry Explanation Tutor, which helps students to state explanations of their problem-solving steps in their own words. In a classroom study involving 71 advanced students, we found that students who explained problem-solving steps in a dialogue with the tutor did not learn better overall than students who explained by means of a menu, but did learn better to state explanations. Second, examining a subset of 700 student explanations, students who received higher-quality feedback from the system made greater progress in their dialogues and learned more, providing some measure of confidence that progress is a useful intermediate variable to guide further system development. Finally, students who tended to reference specific problem elements in their explanations, rather than state a general problem-solving principle, had lower learning gains than other students. Such explanations may be indicative of an earlier developmental level.

    Effects of adaptive robot dialogue on information exchange and social relations

    Human-robot interaction could be improved by designing robots that engage in adaptive dialogue with users. An adaptive robot could estimate the information needs of individuals and change its dialogue to suit those needs. We test the value of adaptive robot dialogue by experimentally comparing the effects of adaptation versus no adaptation on information exchange and social relations. In Experiment 1, a robot chef adapted to novices by providing detailed explanations of cooking tools; doing so improved information exchange for novice participants but did not influence experts. Experiment 2 added incentives for speed and accuracy and replicated the results from Experiment 1 with respect to information exchange. When the robot’s dialogue was adapted for expert knowledge (names of tools rather than explanations), expert participants found the robot to be more effective, more authoritative, and less patronizing. This work suggests that adaptation in human-robot interaction has consequences for both task performance and social cohesion. It also suggests that people may be more sensitive to social relations with robots when under task or time pressure.