10 research outputs found

    How What We See and What We Know Influence Iconic Gesture Production

    In face-to-face communication, speakers typically integrate information acquired through different sources, including what they see and what they know, into their communicative messages. In this study, we asked how these different input sources influence the frequency and type of iconic gestures produced by speakers during a communication task, under two degrees of task complexity. Specifically, we investigated whether speakers gestured differently when they had to describe an object presented to them as an image or as a written word (input modality) and, additionally, when they were allowed to explicitly name the object or not (task complexity). Our results show that speakers produced more gestures when they attended to a picture. Further, speakers more often gesturally depicted shape information when attending to an image, and they demonstrated the function of an object more often when attending to a word. However, when we increased the complexity of the task by forbidding speakers to name the target objects, these patterns disappeared, suggesting that speakers may have strategically adapted their use of iconic strategies to better meet the task’s goals. Our study also revealed (independent) effects of object manipulability on the type of gestures produced by speakers and, in general, it highlighted a predominance of molding and handling gestures. These gestures may reflect stronger motoric and haptic simulations, lending support to activation-based gesture production accounts.

    Simulated Manual Interaction as the Conceptual Base for Reference and Predication: A Cognitive Grammar Analysis of the Integration Between Handling Gestures and Speech

    Prior research on representational hand gestures has shown that an object’s affordances influence both the likelihood that it will be indexed in a representational gesture and the form of the gesture used to refer to it. Objects which afford being held are associated with higher gesture rates than objects which do not afford being held. Further research has shown that the ways humans prototypically interact with an object also influence the reference technique used to refer to that object through a hand gesture. An object that people interact with manually will tend to be indexed through a gesture imitating the action associated with interacting with the object (called an acting gesture), while an object that people do not normally interact with manually will tend to be indexed through gestures depicting its shape (called molding and drawing gestures). Results from studies of neuroimaging and gesture production suggest that these differences in representation technique are the result of the simulated action of interacting with the referent of the gesture. This aligns with Cognitive Grammar’s claim that an utterance’s profile is construed in relation to its conceptual base. Using data from narrations of the Pear Film, this study proposes a subtype of acting gesture, here termed the handling gesture, and analyzes its various grammatical functions. It posits that the handling gesture is used to profile the various elements within a manual interaction event, which include an object that affords manual interaction, an agent, and the action the agent performs on the object. By applying theory from Cognitive Grammar and conceptual integration to an analysis of the handling gesture, this paper argues that handling gestures are used to construe physical objects as participants of manual interaction events and to establish an utterance’s schematic structure, which is elaborated by the speech.

    Con la voz y las manos: gestos icónicos en interpretación simultánea [With Voice and Hands: Iconic Gestures in Simultaneous Interpreting]

    From an embodied perspective on cognition, representational gestures have been described as spontaneous creations that emerge from the production of mental images during meaning-construction processes. The aim of this study is to explore the role that gestures of this kind play in the meaning-construction processes of simultaneous interpreters. To this end, we examined the relationships between the iconic gestures produced spontaneously by four interpreters in the booth and the mental images they recalled having formed during interpreting. The results offer converging evidence of a link between the gestures analysed and the mental images described by the participants, and they allow us to formulate some hypotheses about the origin and functions of the iconic gestures produced by interpreters during simultaneous interpretation.

    Co-speech gestures are a window into the effects of Parkinson’s disease on action representations

    Parkinson’s disease impairs motor function and cognition, which together affect language and communication. Co-speech gestures are a form of language-related action that provides imagistic depictions of the speech content they accompany. Gestures rely on visual and motor imagery, but it is unknown whether gesture representations require the involvement of intact neural sensory and motor systems. We tested this with a fine-grained analysis of co-speech action gestures in Parkinson’s disease. Thirty-seven people with Parkinson’s disease and 33 controls described two scenes featuring actions which varied in their inherent degree of bodily motion. In addition to the perspective of action gestures (gestural viewpoint, first- vs. third-person perspective), we analysed how Parkinson’s patients represent manner (how something/someone moves) and path information (where something/someone moves to) in gesture, depending on the degree of bodily motion involved in the action depicted. We replicated an earlier finding that people with Parkinson’s disease are less likely to gesture about actions from a first-person perspective, preferring instead to depict actions gesturally from a third-person perspective, and show that this effect is modulated by the degree of bodily motion in the actions being depicted. When describing high-motion actions, the Parkinson’s group were specifically impaired in depicting manner information in gesture, and their use of third-person path-only gestures was significantly increased. Gestures about low-motion actions were relatively spared. These results inform our understanding of the neural and cognitive basis of gesture production by providing neuropsychological evidence that action gesture production relies on intact motor network function.

    Production and comprehension of audience design behaviours in co-speech gesture

    Speakers can use gesture to depict information during conversation (Kendon, 2004). The current thesis investigates how speakers adjust their gestures to communicate more effectively to an addressee, and the mechanisms behind such audience design behaviours. Chapter 1 introduces the topics of gestures and audience design, and outlines the structure of the thesis. Chapter 2 explores the definition and classification of gestures, and provides a review of the literature on gesture production, gesture comprehension, and audience design. Chapter 3 investigates the mechanisms responsible for producing audience design behaviours, and the competing factors affecting gesture production. The findings suggest that speakers use cue-based heuristics to design communicative behaviours, and that speakers value gesture more for communication when describing spatial stimuli than abstract stimuli. Chapter 4 further investigates the mechanisms responsible for producing audience design behaviours and the factors affecting gesture production. The findings suggest that speakers can both respond to cues from the addressee using heuristics and take the perspective of the addressee. Furthermore, we found no evidence to suggest that the effect of visibility was due to the confounding of visibility and addressee responsiveness. Chapter 5 investigates how foregrounding gestures can help them convey information to the addressee. The findings do not provide unequivocal evidence that foregrounding benefits the addressee’s comprehension; however, trends in the data suggest that making gestures visually prominent, or referring to them in speech, may help gestures convey information to the addressee. Chapter 6 discusses and interprets the findings from the previous chapters: the mechanisms responsible for audience design behaviours, the factors that affect gesture production, and the effect of gestural audience design behaviours on addressee comprehension. It situates these findings within the current literature and proposes further research.