157 research outputs found

    Assessing knowledge conveyed in gesture: Do teachers have the upper hand?

    Children's gestures can reveal important information about their problem-solving strategies. This study investigated whether the information children express only in gesture is accessible to adults not trained in gesture coding. Twenty teachers and 20 undergraduates viewed videotaped vignettes of 12 children explaining their solutions to equations. Six children expressed the same strategy in speech and gesture, and 6 expressed different strategies. After each vignette, adults described the child's reasoning. For children who expressed different strategies in speech and gesture, both teachers and undergraduates frequently described strategies that children had not expressed in speech. These additional strategies could often be traced to the children's gestures. Sensitivity to gesture was comparable for teachers and undergraduates. Thus, even without training, adults glean information not only from children's words but also from their hands.

    Gesture analysis for physics education researchers

    Systematic observations of student gestures can not only fill in gaps in students' verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to physics education researchers and illustrates gesture analysis for the purpose of better understanding student thinking about physics.

    Spatial Encoding Strategy Theory: The Relationship between Spatial Skill and STEM Achievement

    Learners’ spatial skill is a reliable and significant predictor of achievement in STEM education, including computing education. Spatial skill is also malleable, meaning it can be improved through training. Most cognitive skill training improves performance on only a narrow set of similar tasks, but researchers have found ample evidence that spatial training can broadly improve STEM achievement. We do not yet know the cognitive mechanisms that make spatial skill training broadly transferable when other cognitive training is not, but understanding these mechanisms is important for developing training and instruction that consistently benefits learners, especially those starting with low spatial skill. This paper proposes the spatial encoding strategy (SpES) theory to explain the cognitive mechanisms connecting spatial skill and STEM achievement. To motivate SpES theory, the paper reviews research from STEM education, the learning sciences, and psychology. SpES theory provides compelling post hoc explanations for the findings from this literature and aligns with neuroscience models of the functions of brain structures. The paper concludes with a plan for testing the theory’s validity and using it to inform future research and instruction. The paper focuses on implications for computing education, but the transferability of spatial skill to STEM performance makes the proposed theory relevant to many education communities.

    Designing 'Embodied' Science Learning Experiences for Young Children

    Research in embodied cognition emphasises the importance of meaningful ‘bodily’ experience, or congruent action, in learning and development. This highlights the need for evidence-based design guidelines for sensorimotor interactions that meaningfully exploit action-based experiences, which are instrumental in shaping the way we conceptualise the world. These sensorimotor experiences are particularly important for young children because they provide an embodied toolkit of resources (independent of language skills or subject-specific vocabulary) that children can draw upon to support science ‘think’ and ‘talk’, using their own bodies to develop and express ideas through gestures grounded in sensorimotor representations from action experiences. Taking an iterative design-based research (DBR) approach, this paper reports the design, development, and deployment of a programme of outdoor activities for children aged 4–6 years that drew on embodied cognition theory to foster meaningful action in relation to ideas of air resistance. This research is relevant to researchers, practitioners, and designers. It contributes to learning experience design by making explicit the process of applying key components of embodied cognition theory to the design of science learning activities for the early years, and by showing how this can effectively inform digital design.

    Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications

    Lücking A, Bergmann K, Hahn F, Kopp S, Rieser H. Data-based analysis of speech and gesture: the Bielefeld Speech and Gesture Alignment corpus (SaGA) and its applications. Journal on Multimodal User Interfaces. 2013;7(1-2):5-18.
    Communicating face-to-face, interlocutors frequently produce multimodal meaning packages consisting of speech and accompanying gestures. We discuss a systematically annotated speech and gesture corpus consisting of 25 route-and-landmark-description dialogues, the Bielefeld Speech and Gesture Alignment corpus (SaGA), collected in experimental face-to-face settings. We first describe the primary and secondary data of the corpus and its reliability assessment. We then discuss some of the projects carried out using SaGA, demonstrating the wide range of its usability: on the empirical side, there is work on gesture typology, individual and contextual parameters influencing gesture production, and gestures’ functions for dialogue structure. Speech-gesture interfaces have been established by extending unification-based grammars. In addition, the development of a computational model of speech-gesture alignment and its implementation constitute a research line we focus on.

    Tense and aspect in word problems about motion: diagram, gesture, and the felt experience of time

    Word problems about motion contain various conjugated verb forms. As students and teachers grapple with such word problems, they jointly operationalize diagrams, gestures, and language. Drawing on findings from a 3-year research project examining the social semiotics of classroom interaction, we show how teachers and students use gesture and diagram to make sense of complex verb forms in such word problems. We focus on the grammatical category of “aspect” and how it broadens the concept of verb tense. Aspect conveys the duration, completion, or frequency of an event. The aspect of a verb defines its temporal flow (or lack thereof) and the location of a vantage point for making sense of this durational process.

    Telerobotic Pointing Gestures Shape Human Spatial Cognition

    This paper explored whether human beings can understand gestures produced by telepresence robots and, if so, whether they can derive the meaning conveyed in telerobotic gestures when processing spatial information. We conducted two experiments over Skype. Participants were presented with a robotic interface that had arms, which were teleoperated by an experimenter. The robot could point to virtual locations that represented certain entities. In Experiment 1, the experimenter described the spatial locations of fictitious objects sequentially in two conditions: a speech-only condition (SO, in which verbal descriptions clearly indicated the spatial layout) and a speech-and-gesture condition (SR, in which verbal descriptions were ambiguous but accompanied by robotic pointing gestures). Participants were then asked to recall the objects' spatial locations. We found that the number of spatial locations recalled in the SR condition was on par with that in the SO condition, suggesting that telerobotic pointing gestures compensated for ambiguous speech during the processing of spatial information. In Experiment 2, the experimenter described the spatial locations non-sequentially in the SR and SO conditions. Surprisingly, the number of spatial locations recalled in the SR condition was even higher than in the SO condition, suggesting that telerobotic pointing gestures were more powerful than speech in conveying spatial information when information was presented in an unpredictable order. The findings provide evidence that human beings are able to comprehend telerobotic gestures and, importantly, to integrate these gestures with co-occurring speech. This work promotes engaging remote collaboration among humans through a robot intermediary.

    Pointing to visible and invisible targets

    We investigated how the visibility of targets influenced the type of point used to provide directions. In Study 1, we asked 605 passersby in three localities for directions to well-known local landmarks. When the landmark was in plain view behind the requester, most respondents pointed with their index fingers, and few respondents pointed more than once. In contrast, when the landmark was not in view, respondents pointed initially with their index fingers but often elaborated with a whole-hand point. In Study 2, we covertly filmed the responses from 157 passersby we approached for directions, capturing both verbal and gestural responses. As in Study 1, few respondents produced more than one gesture when the target was in plain view, and initial points were most likely to be index-finger points. Thus, in a Western geographical context in which pointing with the index finger is the dominant form of pointing, a slight change in circumstances elicited a preference for pointing with the whole hand when it was the second or third manual gesture in a sequence.

    Increased pain intensity is associated with greater verbal communication difficulty and increased production of speech and co-speech gestures

    Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms, that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high-intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain.
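    The paired comparisons reported above (e.g., t(25) = 3.57 for word counts) are paired-samples t-tests; a minimal sketch in Python follows to show how such a comparison is computed. The data below are hypothetical placeholders invented for illustration, not the study's data, and the variable names are assumptions rather than the authors' analysis code.

        import numpy as np
        from scipy import stats

        # Hypothetical placeholder data: word counts per participant (N = 26) in the
        # low- and high-intensity pain conditions. Invented for illustration only.
        rng = np.random.default_rng(0)
        words_low = rng.poisson(lam=30, size=26)
        words_high = words_low + rng.poisson(lam=5, size=26)

        # Paired-samples t-test: one observation per participant per condition,
        # giving N - 1 = 25 degrees of freedom, matching the reported t(25).
        t_stat, p_value = stats.ttest_rel(words_high, words_low)
        print(f"t(25) = {t_stat:.2f}, p = {p_value:.3f}")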