    How What We See and What We Know Influence Iconic Gesture Production

    In face-to-face communication, speakers typically integrate information acquired through different sources, including what they see and what they know, into their communicative messages. In this study, we asked how these different input sources influence the frequency and type of iconic gestures produced by speakers during a communication task, under two degrees of task complexity. Specifically, we investigated whether speakers gestured differently when they had to describe an object presented to them as an image or as a written word (input modality) and, additionally, when they were allowed to explicitly name the object or not (task complexity). Our results show that speakers produced more gestures when they attended to a picture. Further, speakers more often gesturally depicted shape information when they attended to an image, and they demonstrated the function of an object more often when they attended to a word. However, when we increased the complexity of the task by forbidding speakers to name the target objects, these patterns disappeared, suggesting that speakers may have strategically adapted their use of iconic strategies to better meet the task’s goals. Our study also revealed (independent) effects of object manipulability on the type of gestures produced by speakers and, in general, it highlighted a predominance of molding and handling gestures. These gestures may reflect stronger motoric and haptic simulations, lending support to activation-based gesture production accounts.

    Factors causing overspecification in definite descriptions

    Speakers often overspecify their target descriptions and include more information than necessary for unique identification of the target referent. In the current paper, we study the production of definite target descriptions, and explore several factors that might influence the amount of information that is included in these descriptions. First, we present the results of a large-scale experiment investigating referential overspecification as a function of the properties of a target referent and the communicative setting. The results show that speakers (both in written and oral conditions) tend to provide more information when a target is plural rather than singular, and in domains where the speaker has more referential possibilities to describe the target. However, written and spoken referring expressions do not differ in terms of semantic redundancy. We conclude our paper by discussing the implications of our empirical findings for pragmatic theory and for language production models.

    Children’s nonverbal displays of winning and losing: Effects of social and cultural contexts on smiles

    We examined the effects of social and cultural contexts on smiles displayed by children during gameplay. Eight-year-old Dutch and Chinese children either played a game alone or teamed up to play in pairs. Activation and intensity of facial muscles corresponding to Action Unit (AU) 6 and AU 12 were coded according to the Facial Action Coding System. Co-occurrence of activation of AU 6 and AU 12, suggesting the presence of a Duchenne smile, was more frequent among children who teamed up than among children who played alone. Analyses of the intensity of smiles revealed an interaction between social and cultural contexts. Whereas smiles, both Duchenne and non-Duchenne, displayed by Chinese children who teamed up were more intense than those displayed by Chinese children who played alone, the effect of sociality on smile intensity was not observed for Dutch children. These findings suggest that the production of smiles by children in a competitive context is susceptible to both social and cultural factors.
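
    The co-occurrence rule described above is straightforward to operationalize. The sketch below is a hypothetical illustration, not the authors' coding pipeline; the frame format and the 1-5 intensity scale (standing in for FACS intensity grades A-E) are assumptions.

        # A coded frame is a dict mapping Action Unit numbers to intensity
        # (0 = not active; 1-5 standing in for FACS intensity grades A-E).

        def is_duchenne(frame):
            # Duchenne smile: AU 6 (cheek raiser) and AU 12 (lip corner
            # puller) are both active in the same frame.
            return frame.get(6, 0) > 0 and frame.get(12, 0) > 0

        frames = [{6: 3, 12: 4}, {12: 2}, {6: 1, 12: 1}]
        print(sum(is_duchenne(f) for f in frames))  # -> 2 Duchenne smiles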

    Attribute preference and priming in reference production: experimental evidence and computational modeling

    Referring expressions (such as the red chair facing right) often show evidence of preferences (Pechmann, 1989; Belke & Meyer, 2002), with some attributes (e.g. colour) being more frequent and more often included when they are not required, leading to overspecified references. This observation underlies many computational models of Referring Expression Generation, especially those influenced by Dale & Reiter’s (1995) Incremental Algorithm. However, more recent work has shown that in interactive settings, priming can alter preferences. This paper provides further experimental evidence for these phenomena, and proposes a new computational model that incorporates both attribute preferences and priming effects. We show that the model provides an excellent match to human experimental data.
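
    The Incremental Algorithm referenced above is simple enough to sketch. What follows is a minimal Python sketch under stated assumptions: the attribute names, the preference order, and the primed-attribute promotion rule are illustrative stand-ins, not the model proposed in the paper.

        # Minimal sketch of Dale & Reiter's (1995) Incremental Algorithm.
        # Entities are dicts mapping attribute names to values; the preference
        # order below is an assumed domain ordering.
        PREFERENCE_ORDER = ["colour", "orientation", "size"]

        def incremental_algorithm(target, distractors,
                                  preferences=PREFERENCE_ORDER, primed=()):
            # Hypothetical priming effect: recently used attributes are
            # promoted to the front of the preference order.
            order = ([a for a in primed if a in preferences]
                     + [a for a in preferences if a not in primed])
            description = {}
            remaining = list(distractors)
            for attr in order:
                value = target.get(attr)
                if value is None:
                    continue
                # Include the attribute only if it rules out at least one
                # remaining distractor.
                if any(d.get(attr) != value for d in remaining):
                    description[attr] = value
                    remaining = [d for d in remaining if d.get(attr) == value]
                if not remaining:
                    break
            return description

        # Example in a "the red chair facing right" style domain.
        target = {"colour": "red", "orientation": "right", "size": "large"}
        distractors = [
            {"colour": "blue", "orientation": "right", "size": "large"},
            {"colour": "red", "orientation": "left", "size": "large"},
        ]
        print(incremental_algorithm(target, distractors))
        # -> {'colour': 'red', 'orientation': 'right'}

    Because attributes, once included, are never retracted, the algorithm can produce overspecified descriptions whenever a preferred attribute is chosen before a more discriminating one.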

    A new computational model of alignment and overspecification in reference

    Models of reference production are influenced by findings that in visual domains, speakers tend to select attributes of a target referent based on their degree of salience or preference. Preferred attributes are often selected when they have no discriminatory value, leading to overspecification.

    Need I say more? On factors causing referential overspecification

    We present the results of an elicitation experiment conducted to investigate which factors cause speakers to overspecify their referential expressions, where we hypothesized properties of the target and properties of the communicative setting to play a role. The results of this experiment show that speakers tend to provide more information when referring to a target in a more complex domain and when referring to plural targets. Moreover, written and spoken referring expressions do not differ in terms of redundancy, but do differ in terms of the number of words that they contain: speakers need more words to convey the same information than people who type their expressions do.

    Rhythm in vocal emotional expressions: the normalized pairwise variability index differentiates emotions across languages

    The voice is an important channel for emotional expression. Emotions are often characterized by differences in pitch, loudness, the duration of segments, and spectral characteristics (Scherer, 2003). The rhythmic aspect of emotional speech has been largely neglected; most studies limit themselves to segment duration and speech rate. Since languages are often characterized by their rhythmic class (as either “stress timed” or “syllable timed”), we wanted to know whether rhythmic structure plays a role in vocal emotional expressions. To characterize the rhythmic structure, we used the normalized pairwise variability index (nPVI), which characterizes the rhythm of a language in a more continuous way (Grabe & Low, 2002). Quinto, Thompson, and Keating (2013) found that the nPVI differentiated emotional expressions from non-emotional ones. However, their study was limited to English (a stress-timed language) and their nonsensical carrier sentence contained real words, possibly influencing the role of speech rhythm. This contribution investigates whether the nPVI can be used to characterize the possible rhythmic differences between emotions in a stress-timed and a syllable-timed language (Dutch and Korean). We do so by using an existing corpus (Goudbeek & Broersma, 2010) of eight posed emotional expressions (balanced for valence and arousal) by speakers of Dutch and Korean. The findings show, as expected, that Dutch and Korean differ in their nPVI, but, importantly, that the different emotions in the corpus also differ in their nPVI. Further analysis shows that emotional valence is an important contributor to these differences. Finally, the effects are different for Dutch and Korean, indicating the importance of studying different languages when investigating vocal emotional expression.
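
    For reference, the nPVI of Grabe & Low (2002) has a compact closed form; a minimal Python sketch follows, with made-up interval durations purely for illustration.

        # Normalized pairwise variability index (nPVI), after Grabe & Low (2002):
        # nPVI = 100 * mean over successive interval pairs of
        #        |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)

        def npvi(durations):
            pairs = zip(durations, durations[1:])
            terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
            return 100 * sum(terms) / len(terms)

        # Hypothetical vocalic interval durations in milliseconds: higher nPVI
        # means more long-short alternation (stress-timed-like rhythm), lower
        # nPVI means more even timing (syllable-timed-like rhythm).
        print(npvi([120, 80, 150, 90, 110]))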

    Saliency effects in distributional learning

    Acquiring the sounds of a language involves learning to recognize distributional patterns present in the input. We show that among adult learners, this distributional learning of auditory categories (which are conceived of here as probability density functions in a multidimensional space) is constrained by the salience of the dimensions that form the axes of this perceptual space. Only with a particular ratio of variation in the perceptual dimensions was category learning driven by the distributional properties of the input.
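
    As a generic illustration of this idea (not the study's training paradigm), categories conceived of as probability density functions can be recovered from unlabeled input with a Gaussian mixture; the dimensions, scales, and variance ratio below are assumptions.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Bimodal input along dimension 1, overlapping along dimension 2; the
        # ratio of variation across the two dimensions is the kind of salience
        # manipulation the abstract points to.
        cat_a = rng.normal([200, 0.5], [15, 0.2], size=(200, 2))
        cat_b = rng.normal([300, 0.6], [15, 0.2], size=(200, 2))
        X = np.vstack([cat_a, cat_b])

        # Distributional-learning stand-in: fit two density functions to the
        # unlabeled stimuli and read off the recovered category centers.
        gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
        print(gmm.means_)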