32 research outputs found
Affect and Inference in Bayesian Knowledge Tracing with a Robot Tutor
In this paper, we present work to construct a robotic tutoring system that can assess student knowledge in real time during an educational interaction. Like a good human teacher, the robot draws on multimodal data sources to infer whether students have mastered language skills. Specifically, the model extends the standard Bayesian Knowledge Tracing algorithm to incorporate an estimate of the student's affective state (whether the student is confused, bored, engaged, smiling, etc.) in order to predict future educational performance. We propose research to answer two questions: First, does augmenting the model with affective information improve the computational quality of inference? Second, do humans display more prominent affective signals in an interaction with a robot, compared to a screen-based agent? By answering these questions, this work has the potential to provide both algorithmic and human-centered motivations for further development of robotic systems that tightly integrate affect understanding and complex models of inference with interactive, educational robots. National Science Foundation (U.S.) (Grant CCF-1138986); National Science Foundation (U.S.), Graduate Research Fellowship Program (Grant No. 1122374)
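The standard Bayesian Knowledge Tracing update the abstract builds on can be sketched as follows. The affect-augmented variant shown here is only schematic: the `engagement` hook and the way it modulates the slip parameter are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of standard Bayesian Knowledge Tracing (BKT).
# Parameters: prior mastery, guess, slip, and learning-transition probabilities.

def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One BKT step: Bayesian posterior on mastery, then a learning transition."""
    if correct:
        obs_lik = p_mastery * (1 - p_slip)          # mastered and didn't slip
        evidence = obs_lik + (1 - p_mastery) * p_guess
    else:
        obs_lik = p_mastery * p_slip                # mastered but slipped
        evidence = obs_lik + (1 - p_mastery) * (1 - p_guess)
    posterior = obs_lik / evidence
    # Chance the student learned the skill on this opportunity.
    return posterior + (1 - posterior) * p_learn

def affect_adjusted_update(p_mastery, correct, engagement):
    # Hypothetical affect hook (not the paper's model): low engagement raises
    # the slip probability, so an error by a disengaged student is discounted
    # and an error by an engaged student lowers the mastery estimate more.
    p_slip = min(0.1 + 0.2 * (1 - engagement), 0.3)
    return bkt_update(p_mastery, correct, p_slip=p_slip)

# Example: mastery estimate after a short sequence of observed responses.
p = 0.3
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
print(round(p, 3))
```

The update treats each response as evidence about a hidden binary mastery state; an affective estimate would enter as extra conditioning on the observation model, which is one plausible reading of the abstract's extension.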
Transparency, teleoperation, and children's understanding of social robots
Teleoperation or Wizard-of-Oz control of social robots is commonly used in human-robot interaction (HRI) research. This is especially true for child-robot interactions, where technologies like speech recognition (which can help create autonomous interactions for adults) work less well. We propose to study young children's understanding of teleoperation, how they conceptualize social robots in a learning context, and how this affects their interactions. Children will be told about the teleoperator's presence either before or after an interaction with a social robot. We will assess children's behavior, learning, and emotions before, during, and after the interaction. Our goal is to learn whether children's knowledge about the teleoperator matters (e.g., for their trust and for learning outcomes), and if so, how and when it matters most (e.g., at what age).
Expressive social exchange between humans and robots
Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 253-264). By Cynthia L. Breazeal. Sociable humanoid robots are natural and intuitive for people to communicate with and to teach. We present recent advances in building an autonomous humanoid robot, Kismet, that can engage humans in expressive social interaction. We outline a set of design issues and a framework that we have found to be of particular importance for sociable robots. Having a human-in-the-loop places significant social constraints on how the robot aesthetically appears, how its sensors are configured, its quality of movement, and its behavior. Inspired by infant social development, psychology, ethology, and evolutionary perspectives, this work integrates theories and concepts from these diverse viewpoints to enable Kismet to enter into natural and intuitive social interaction with a human caregiver, reminiscent of parent-infant exchanges. Kismet perceives a variety of natural social cues from visual and auditory channels, and delivers social signals to people through gaze direction, facial expression, body posture, and vocalizations. We present the implementation of Kismet's social competencies and evaluate each with respect to: 1) the ability of naive subjects to read and interpret the robot's social cues, 2) the robot's ability to perceive and appropriately respond to naturally offered social cues, 3) the robot's ability to elicit interaction scenarios that afford rich learning potential, and 4) how this produces a rich, flexible, dynamic interaction that is physical, affective, and social. Numerous studies with naive human subjects are described that provide the data upon which we base our evaluations.
Fostering parent–child dialog through automated discussion suggestions
The development of early literacy skills has been critically linked to a child’s later academic success. In particular, repeated studies have shown that reading aloud to children and providing opportunities for them to discuss the stories that they hear is of utmost importance to later academic success. CloudPrimer is a tablet-based interactive reading primer that aims to foster early literacy skills by supporting parents in shared reading with their children through user-targeted discussion topic suggestions. The tablet application records discussions between parents and children as they read a story and, in combination with a common sense knowledge base, leverages this information to produce suggestions. Because of the unique challenges presented by our application, the suggestion generation method relies on a novel topic modeling method that is based on semantic graph topology. We conducted a user study in which we compared how delivering suggestions generated by our approach compares to expert-crafted suggestions. Our results show that our system can successfully improve engagement and parent–child reading practices in the absence of a literacy expert’s tutoring. National Science Foundation (U.S.) (Award Number 1117584)
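The abstract's graph-topology topic method is not spelled out here, so the following is only a stand-in sketch of the general idea of ranking discussion-topic candidates from a transcript via graph structure: words that co-occur in utterances form a graph, and high-degree words are proposed as topics. The transcript, stopword list, and ranking criterion are all illustrative assumptions.

```python
# Toy sketch: rank candidate discussion topics from a parent-child reading
# transcript using a word co-occurrence graph and degree centrality.
# This is NOT the paper's semantic-graph method, only a simple analogue.
from collections import defaultdict
from itertools import combinations

def rank_topics(utterances, stopwords=frozenset({"the", "a", "and", "to"})):
    degree = defaultdict(int)
    for utt in utterances:
        words = {w.lower() for w in utt.split() if w.lower() not in stopwords}
        # Every pair of content words in one utterance adds a graph edge.
        for u, v in combinations(sorted(words), 2):
            degree[u] += 1
            degree[v] += 1
    # Words with the most connections are proposed as discussion topics.
    return sorted(degree, key=degree.get, reverse=True)

transcript = [
    "the little bear found a honey tree",
    "the bear climbed the tree to reach honey",
]
print(rank_topics(transcript)[:3])
```

Recurring story entities ("bear", "honey", "tree") accumulate the most edges and surface first, which mirrors the intuition of topology-based topic selection even though the real system also draws on a common-sense knowledge base.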
Young Children Treat Robots as Informants
Children ranging from 3 to 5 years were introduced to two anthropomorphic robots that provided them with information about unfamiliar animals. Children treated the robots as interlocutors. They supplied information to the robots and retained what the robots told them. Children also treated the robots as informants from whom they could seek information. Consistent with studies of children's early sensitivity to an interlocutor's non-verbal signals, children were especially attentive and receptive to whichever robot displayed the greater non-verbal contingency. Such selective information seeking is consistent with recent findings showing that although young children learn from others, they are selective with respect to the informants that they question or endorse.
Flat vs. Expressive Storytelling: Young Children’s Learning and Retention of a Social Robot’s Narrative
Prior research with preschool children has established that dialogic or active book reading is an effective method for expanding young children’s vocabulary. In this exploratory study, we asked whether similar benefits are observed when a robot engages in dialogic reading with preschoolers. Given the established effectiveness of active reading, we also asked whether this effectiveness was critically dependent on the expressive characteristics of the robot. For approximately half the children, the robot’s active reading was expressive; the robot’s voice included a wide range of intonation and emotion (Expressive). For the remaining children, the robot read and conversed with a flat voice, which sounded similar to a classic text-to-speech engine and had little dynamic range (Flat). The robot’s movements were kept constant across conditions. We performed a verification study using Amazon Mechanical Turk (AMT) to confirm that the Expressive robot was viewed as significantly more expressive, more emotional, and less passive than the Flat robot. We invited 45 preschoolers with an average age of 5 years who were either English Language Learners (ELL), bilingual, or native English speakers to engage in the reading task with the robot. The robot narrated a story from a picture book, using active reading techniques and including a set of target vocabulary words in the narration. Children were post-tested on the vocabulary words and were also asked to retell the story to a puppet. A subset of 34 children performed a second story retelling 4–6 weeks later. Children reported liking and learning from the robot a similar amount in the Expressive and Flat conditions. However, as compared to children in the Flat condition, children in the Expressive condition were more concentrated and engaged as indexed by their facial expressions; they emulated the robot’s story more in their story retells; and they told longer stories during their delayed retelling. 
Furthermore, children who responded to the robot’s active reading questions were more likely to correctly identify the target vocabulary words in the Expressive condition than in the Flat condition. Taken together, these results suggest that children may benefit more from the expressive robot than from the flat robot.
Emotional design and human-robot interaction
Recent years have seen growing attention to the role of emotions in the design field, known as emotional design. Emotional design aims to elicit certain emotions (e.g., pleasure) or prevent others (e.g., displeasure) during human-product interaction; that is, it regulates the emotional interaction between the individual and the product (e.g., a robot). Robot design has been a growing area in which robots interact directly with humans, and emotions are essential to that interaction. This paper therefore explores, through a non-systematic literature review, the application of emotional design to human-robot interaction in particular. Robot design features that affect emotional design (e.g., appearance, emotional expression, and spatial distance) are introduced. The chapter ends with a discussion and a conclusion.
Tega: A social robot
Tega is a new expressive, "squash-and-stretch," Android-based social robot platform designed to enable long-term interactions with children.
Teachable Robots: Understanding Human Teaching Behavior to Build More Effective Robot Learners
While Reinforcement Learning (RL) is not traditionally designed for interactive supervisory input from a human teacher, several works in both robot and software agents have adapted it for human input by letting a human trainer control the reward signal. In this work, we experimentally examine the assumption underlying these works, namely that the human-given reward is compatible with the traditional RL reward signal. We describe an experimental platform with a simulated RL robot and present an analysis of real-time human teaching behavior found in a study in which untrained subjects taught the robot to perform a new task. We report three main observations on how people administer feedback when teaching a Reinforcement Learning agent: (a) they use the reward channel not only for feedback, but also for future-directed guidance; (b) they have a positive bias to their feedback, possibly using the signal as a motivational channel; and (c) they change their behavior as they develop a mental model of the robotic learner. Given this, we made specific modifications to the simulated RL robot, and analyzed and evaluated its learning behavior in four follow-up experiments with human trainers. We report significant improvements on several learning measures. This work demonstrates the importance of understanding the human-teacher/robot-learner partnership in order to design algorithms that support how people want to teach and simultaneously improve the robot’s learning behavior.
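The assumption the abstract tests, that human feedback can be plugged in where the environmental reward normally goes, can be sketched with a standard tabular Q-learner. The class name and toy usage below are illustrative, not the paper's actual platform.

```python
# Sketch: a tabular Q-learning agent whose reward signal is supplied by a
# human trainer instead of the environment (the setup the study examines).
import random

class HumanRewardQLearner:
    def __init__(self, n_states, n_actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection over the learned Q-values.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def learn(self, s, a, human_reward, s_next):
        # The human-given reward is treated exactly like an environmental
        # reward -- the compatibility assumption the paper probes. The study's
        # findings (guidance use, positive bias) are precisely ways real human
        # feedback deviates from this idealized scalar signal.
        target = human_reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])

# Usage: a trainer repeatedly rewards action 1 in state 0.
agent = HumanRewardQLearner(n_states=2, n_actions=2, epsilon=0.0)
for _ in range(20):
    agent.learn(0, 1, 1.0, 1)
print(agent.act(0))
```

The paper's follow-up modifications (e.g., interpreting feedback as guidance) would change how `human_reward` is consumed inside `learn`, rather than the Q-update itself.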
Asymmetric Interpretations of Positive and Negative Human Feedback for a Social Learning Agent
The ability for people to interact with robots and teach them new skills will be crucial to the successful application of robots in everyday human environments. In order to design agents that learn efficiently and effectively from their instruction, it is important to understand how people who are not experts in Machine Learning or robotics will try to teach social robots. In prior work we have shown that human trainers use positive and negative feedback differentially when interacting with a Reinforcement Learning agent. In this paper we present experiments and implementations on two platforms, a robotic and a computer game platform, that explore the multiple communicative intents of positive and negative feedback from a human partner, in particular that negative feedback is both about the past and about intentions for future action.