A psychometric measure of working memory capacity for configured body movement.
Working memory (WM) models have traditionally assumed at least two domain-specific storage systems for verbal and visuo-spatial information. We review data that suggest the existence of an additional slave system devoted to the temporary storage of body movements, and present a novel instrument for its assessment: the movement span task. The movement span task assesses individuals' ability to remember and reproduce meaningless configurations of the body. During the encoding phase of a trial, participants watch short videos of meaningless movements presented in sets varying in size from one to five items. Immediately after encoding, they are prompted to reenact as many items as possible. The movement span task was administered to 90 participants along with standard tests of verbal WM, visuo-spatial WM, and a gesture classification test in which participants judged whether a speaker's gestures were congruent or incongruent with his accompanying speech. Performance on the gesture classification task was not related to standard measures of verbal or visuo-spatial working memory capacity, but was predicted by scores on the movement span task. Results suggest the movement span task can serve as an assessment of individual differences in WM capacity for body-centric information.
From Gesture to Sign Language: Conventionalization of Classifier Constructions by Adult Hearing Learners of British Sign Language
There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1–3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages.
Individual differences in working memory and semantic fluency predict younger and older adults' multimodal recipient design in an interactive spatial task
Aging appears to impair the ability to adapt speech and gestures based on knowledge shared with an addressee (common-ground-based recipient design) in narrative settings. Here, we test whether this extends to spatial settings and is modulated by cognitive abilities. Younger and older adults gave instructions on how to assemble 3D models from building blocks on six consecutive trials. We induced mutually shared knowledge by either showing speaker and addressee the model beforehand, or not. Additionally, shared knowledge accumulated across the trials. Younger and, crucially, also older adults provided recipient-designed utterances, indicated by a significant reduction in the number of words and of gestures when common ground was present. Additionally, we observed a reduction in semantic content and a shift in the cross-modal distribution of information across trials. Rather than age, individual differences in verbal and visual working memory and semantic fluency predicted the extent of addressee-based adaptations. Thus, in spatial tasks, individual cognitive abilities modulate the interactive language use of both younger and older adults.
Getting the point: tracing worked examples enhances learning
Embodied cognition perspectives suggest that pointing and tracing with the index finger may support learning, with basic laboratory research indicating such gestures have considerable effects on information processing in working memory. The present thesis examined whether tracing worked examples could enhance learning through decreased intrinsic cognitive load. In Experiment 1, 56 Year 6 students (mean age = 11.20, SD = .44) were presented with either tracing or no-tracing instructions on parallel lines relationships. The tracing group solved more acquisition-phase practice questions and made fewer test-phase errors, but otherwise test results were limited by ceiling effects. In Experiment 2, 42 Year 5 students (mean age = 10.50, SD = .51) were recruited to better align the materials with students' knowledge levels. The tracing group outperformed the non-tracing group at the test and reported lower levels of test difficulty, interpreted as lower levels of intrinsic cognitive load. Experiment 3 recruited 52 Year 6 and Year 7 students (mean age = 12.04, SD = .59), who were presented with materials on angle relationships of a triangle; the tracing effect was replicated on test scores and errors, but not test difficulty. Experiment 4 used the parallel lines materials to test hypothesized gradients across experimental conditions with 72 Year 5 students (mean age = 9.94, SD = .33), predicting that the tracing-on-the-paper group would outperform the tracing-above-the-paper group, who in turn would outperform the non-tracing group. The hypothesized gradient was established across practice questions correctly answered, practice question errors, test questions correctly answered, test question time to solution, and test difficulty self-reports. The results establish that incorporating haptic input into worked-example-based instruction design enhances the worked example effect, and that tracing worked examples is a natural, simple, yet effective way to enhance novices' mathematics learning.
Toward a more embedded/extended perspective on the cognitive function of gestures
Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask: how can gestures support the internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures lack the explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is, gestures provide the cognitive system with a stable external physical and visual presence that can provide means to think with. We show that there is considerable overlap between the way the human cognitive system has been found to use its environment and how gestures are used during cognitive processes. Lastly, we provide several suggestions for how to investigate the embedded/extended perspective on the cognitive function of gestures.