Guidelines for Designing Social Robots as Second Language Tutors
In recent years, it has been suggested that social robots have potential as tutors and educators for both children and adults. While robots have been shown to be effective in teaching knowledge- and skill-based topics, we wish to explore how social robots can be used to tutor a second language to young children. As language learning relies on situated, grounded and social learning, in which interaction and repeated practice are central, social robots hold promise as educational tools for supporting second language learning. This paper surveys the developmental psychology of second language learning and suggests an agenda to study how core concepts of second language learning can be taught by a social robot. It suggests guidelines for designing robot tutors based on observations of second language learning in human–human scenarios, various technical aspects, and early studies regarding the effectiveness of social robots as second language tutors.
Differences in the gesture kinematics of blind, blindfolded, and sighted speakers
The role of gestures in cognition extends beyond communication, as people gesture not only when they speak but also when they think. This also holds for individuals who are blind from birth. However, studies have shown that blind speakers produce fewer spontaneous gestures than sighted speakers when describing events. The present study aims to go beyond quantitative measures and gain insight into gesture kinematics. We compared the duration, size, and speed of path gestures (showing the trajectory of a movement) used by 20 blind, 21 blindfolded, and 21 sighted Turkish speakers when describing spatial events. Blind speakers took more time to produce larger gestures than sighted speakers, but the speed of gestures did not differ. The gestures of blindfolded speakers did not differ from those of blind and sighted speakers in any of the measures. These findings suggest that a lifetime of blindness influences the kinematics of gesture production beyond a temporary lack of vision.
Lack of visual experience affects multimodal language production
Human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparing blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime of being blind from the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion, using more landmarks and path verbs. Gestures followed speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in multimodal language, and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for claims that language processes are deeply rooted in our sensory experiences.
Lack of visual experience influences silent gesture productions for concepts across semantic categories
The extent to which experience influences conceptual representations is a matter of ongoing debate. This pre-registered study tested whether lack of visual experience affects how concepts are mapped onto gestures. Theories claim that gestures arise from sensorimotor simulations, reflecting gesturers' experience with objects. This raises the question of whether sensory experience influences gesture forms for concepts. Thirty congenitally blind and 30 sighted Turkish speakers produced silent gestures for individual concepts from three semantic categories that rely on motor or visual experience to different extents. Blind gesturers were less likely than sighted gesturers to produce a gesture for visual concepts, but this was not the case for motor concepts. Their gestures were also quantitatively different from sighted people's gestures, relying less on strategies that depict visual features, such as drawing. Thus, visual experience plays a key role in how concepts are depicted in gestures, in line with embodied theories of gesture and conceptual representation.
Second Language Tutoring Using Social Robots: A Large-Scale Study
We present a large-scale study of a series of seven lessons designed to help young children learn English vocabulary as a foreign language using a social robot. The experiment was designed to investigate (1) the effectiveness of a social robot teaching children new words over the course of multiple interactions (supported by a tablet), (2) the added benefit of a robot's iconic gestures on word learning and retention, and (3) the effect of learning from a robot tutor accompanied by a tablet versus learning from a tablet application alone. For reasons of transparency, the research questions, hypotheses, and methods were preregistered. With a sample size of 194 children, our study was statistically well powered. Our findings demonstrate that children are able to acquire and retain English vocabulary words taught by a robot tutor to a similar extent as when they are taught by a tablet application. In addition, we found no beneficial effect of a robot's iconic gestures on learning gains.