
    How What We See and What We Know Influence Iconic Gesture Production

    In face-to-face communication, speakers typically integrate information acquired through different sources, including what they see and what they know, into their communicative messages. In this study, we asked how these different input sources influence the frequency and type of iconic gestures produced by speakers during a communication task, under two degrees of task complexity. Specifically, we investigated whether speakers gestured differently when they had to describe an object presented to them as an image or as a written word (input modality) and, additionally, when they were or were not allowed to explicitly name the object (task complexity). Our results show that speakers produced more gestures when they attended to a picture. Further, speakers more often gesturally depicted shape information when they attended to an image, and they demonstrated the function of an object more often when they attended to a word. However, when we increased the complexity of the task by forbidding speakers to name the target objects, these patterns disappeared, suggesting that speakers may have strategically adapted their use of iconic strategies to better meet the task’s goals. Our study also revealed (independent) effects of object manipulability on the type of gestures produced by speakers and, in general, it highlighted a predominance of molding and handling gestures. These gestures may reflect stronger motoric and haptic simulations, lending support to activation-based gesture production accounts.

    Understanding facial communication. An approach to micro expression detection, complying with Paul Ekman's proposed guidelines

    [Study about nonverbal communication] Masson Carro, I. (2010). Understanding facial communication. An approach to micro expression detection, complying with Paul Ekman's proposed guidelines. Universitat Politècnica de València. http://hdl.handle.net/10251/149110

    Imposing Cognitive Constraints on Reference Production: The Interplay Between Speech and Gesture During Grounding

    No full text
    Past research has sought to elucidate how speakers and addressees establish common ground in conversation, yet few studies have focused on how visual cues such as co-speech gestures contribute to this process. Likewise, the effect of cognitive constraints on multimodal grounding remains to be established. This study addresses the relationship between the verbal and gestural modalities during grounding in referential communication. We report data from a collaborative task where repeated references were elicited, and a time constraint was imposed to increase cognitive load. Our results reveal no differential effects of repetition or cognitive load on the semantic-based gesture rate, suggesting that representational gestures and speech are closely coordinated during grounding. However, gestures and speech differed in their execution, especially under time pressure. We argue that speech and gesture are two complementary streams that might be planned in conjunction but that unfold independently in later stages of language production, with speakers emphasizing the form of their gestures, but not of their words, to better meet the goals of the collaborative task. A collaborative referencing task is used to examine communication through both verbal and visual modalities in order to understand the cognitive constraints on multimodal grounding in conversation. Analyses of co-speech gestures show that while gesture and speech might be planned in conjunction, they unfold independently in language production.

    Can you handle this? The impact of object affordances on how co-speech gestures are produced

    Hand gestures are tightly coupled with speech and with action. Hence, recent accounts have emphasised the idea that simulations of spatio-motoric imagery underlie the production of co-speech gestures. In this study, we suggest that action simulations directly influence the iconic strategies used by speakers to translate aspects of their mental representations into gesture. Using a classic referential paradigm, we investigate how speakers respond gesturally to the affordances of objects, by comparing the effects on gesture production of describing objects that afford action performance (such as tools) and objects that do not. Our results suggest that affordances play a key role in determining the number of representational (but not non-representational) gestures produced by speakers, and the techniques chosen to depict such objects. To our knowledge, this is the first study to systematically show a connection between object characteristics and representation techniques in spontaneous gesture production during the depiction of static referents.

    Coming of age in gesture: A comparative study of gesturing and pantomiming in older children and adults

    Research on the co-development of gestures and speech mainly focuses on children in the early phases of language acquisition. This study investigates how children in later development use gestures to communicate, and whether the strategies they use are similar to adults'. Using a referential paradigm, we compared pantomimes and gestures produced by children (M = 9) and adults, and found that both groups used gestures similarly when pantomiming, but differently when gesturing spontaneously (in terms of the frequency of gesturing and the representation techniques chosen to depict the objects). This suggests that older children have the necessary tools for full gestural expressivity, but that when speech is available they rely less on gestures than adults do, indicating that the two streams are not yet fully integrated.

    What triggers a gesture? Exploring affordance compatibility effects in representational gesture production

    What are the mechanisms responsible for spontaneous cospeech gesture production? Driven by the close connection between cospeech gestures and object-related actions, recent research suggests that cospeech gestures originate in perceptual and motoric simulations that occur while speakers process information for speaking (Hostetter & Alibali, 2008). Here, we test this claim by highlighting object affordances during a communication task, inspired by the classic stimulus-response compatibility paradigm by Tucker and Ellis (1998). We compared cospeech gestures in situations where target objects were oriented toward the speakers' dominant hand (grasping potential enhanced) with situations where they were oriented toward the nondominant hand. Before the main experiment, we conducted a replication attempt of Tucker and Ellis' (1998: Experiment 1) to (re)establish the effect of stimulus compatibility, using contemporary items. Contrary to expectations, we could not replicate the original findings. Furthermore, consistent with our replication results, the gesture data showed that enhancing grasping potential did not increase the number of cospeech gestures produced. Vertical orientation nevertheless did, with upright objects eliciting more cospeech gestures than inverted ones, which does suggest a relation between affordance and gesture production. Our results challenge the automaticity of affordance effects, both in a classic stimulus-response compatibility experiment and in a more interactive dialogue setting, and suggest that previous findings on cospeech gestures emerge from thinking and communicating about action-evoking content rather than from the affordance-compatibility of the presented objects.

    What Triggers a Gesture

    A replication and extension of Tucker and Ellis' (1998) study on the action compatibility effect.

    CASA MILA (Cultural and Social Aspects of Multimodal Interaction on Language Acquisition) data

    Annotated and analysed data from observations carried out within the CASA MILA project. The observations concerned natural multimodal interactions of infants (aged 13-18 months) from three different cultural communities (in the Netherlands, rural Mozambique and urban Mozambique). The analysis focused on verbal and non-verbal communication in these three cultural learning environments.