Hand gesture reflects visual and motor features from multiple memory systems
Speakers' gestures provide a visual-motor representation from memory of what is being communicated, yet the cognitive and neural contributions to gesture form remain unknown. To examine this, we investigated how prior experience was reflected in gesture in three groups: healthy adults, hippocampal-amnesic patients with declarative memory impairment, and brain-damaged comparison participants. Participants completed a computerized Tower of Hanoi (TOH) task with differing visual/motor experience (visual curved disk trajectory with button-pressing; no visual disk trajectory with curved mouse movements). After a 30-min delay, when amnesic patients did not explicitly remember completing the TOH, participants explained how to do the TOH, and we analyzed the form of the gestures they produced. Comparison participants and amnesic patients gestured in systematically different ways based on their prior visual and motor experiences. Thus, gesture reflects visual and motor features from representations in multiple memory systems.
Examining the role of the motor system in the beneficial effect of speaker's gestures during encoding and retrieval
Co-speech hand gesture facilitates learning and memory, yet little is known about the underlying mechanisms. Ianì and Bucciarelli (2017) investigated this: participants watched videos of a person producing sentences with or without concurrent hand gestures. In one experiment, participants' hands were occupied with an unrelated motor task while watching. Gesture enhanced memory for the sentences except when the hands were engaged in the motor task, indicating motor system involvement when gesture enhances memory. We investigated when and how the motor system is engaged in service of memory. We replicated the above design and cued listeners at retrieval with either the same or a different manipulation than they experienced at encoding (gesture/motor task). We predict that participants in the same motor task condition at encoding and retrieval will show better recall performance than those in mismatched conditions, suggesting that re-engaging or simulating previous motor experiences is critical in the relationship between gesture and memory.
Evidence of Audience Design in Amnesia: Adaptation in Gesture but Not Speech
Speakers design communication for their audience, providing more information in both speech and gesture when their listener is naïve to the topic. We test whether the hippocampal declarative memory system contributes to multimodal audience design. The hippocampus, while traditionally linked to episodic and relational memory, has also been linked to the ability to imagine the mental states of others and to use language flexibly. We examined the speech and gesture use of four patients with hippocampal amnesia when describing how to complete everyday tasks (e.g., how to tie a shoe) to an imagined child listener and an adult listener. Although patients with amnesia did not increase their total number of words and instructional steps for the child listener, they did produce representational gestures at significantly higher rates for the imagined child than for the adult listener. They also gestured at frequencies similar to neurotypical peers, suggesting that hand gesture can be a meaningful communicative resource even in the case of severe declarative memory impairment. We discuss the contributions of multiple memory systems to multimodal audience design and the potential of gesture to act as a window into the social cognitive processes of individuals with neurologic disorders.
Taxonomy Builder: A Data-driven and User-centric Tool for Streamlining Taxonomy Construction
An existing domain taxonomy for normalizing content is often assumed when discussing approaches to information extraction, yet often in real-world scenarios there is none. When one does exist, as the information needs shift, it must be continually extended. This is a slow and tedious task, and one that does not scale well. Here we propose an interactive tool that allows a taxonomy to be built or extended rapidly and with a human in the loop to control precision. We apply insights from text summarization and information extraction to reduce the search space dramatically, then leverage modern pretrained language models to perform contextualized clustering of the remaining concepts to yield candidate nodes for the user to review. We show this allows a user to consider as many as 200 taxonomy concept candidates an hour to quickly build or extend a taxonomy to better fit information needs. © 2022 Association for Computational Linguistics.
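The clustering step described above can be sketched minimally: vectorize the extracted concept strings, group them, and present each group as a candidate taxonomy node for the user to review. This is an illustrative sketch, not the tool's implementation; the paper uses contextualized embeddings from a pretrained language model, while here TF-IDF character n-grams stand in so the example remains self-contained. All function names and example terms are hypothetical.

```python
# Hedged sketch: cluster extracted concept strings into candidate
# taxonomy nodes. TF-IDF char n-grams substitute for the pretrained
# language-model embeddings used in the actual tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering


def cluster_candidates(terms, n_clusters=3):
    """Group concept strings into candidate nodes for human review."""
    # Character n-grams capture surface similarity between short terms.
    vecs = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(terms)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(vecs.toarray())
    clusters = {}
    for term, label in zip(terms, labels):
        clusters.setdefault(label, []).append(term)
    return list(clusters.values())


# Example extracted concepts (hypothetical) from some document collection.
terms = ["crop yield", "grain yield", "soil moisture",
         "soil acidity", "market price", "commodity price"]
for group in cluster_candidates(terms):
    print(group)
```

In the interactive setting the human would accept, merge, or discard each printed group, which is what keeps precision under the user's control.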