
    Sensory modality of input influences encoding of motion events in speech but not co-speech gestures

    Visual and auditory channels have different affordances and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions, compared to speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures.

    The Effect of Coordinated Movement on Infant Vocalizations


    The Time Is at Hand: The Development of Spatial Representations of Time in Children's Speech and Gesture

    Children achieve increasingly complex language milestones initially in gesture before they do so in speech. In this study, we ask whether gesture continues to be part of the language-learning process as children develop more abstract language skills, namely metaphors. More specifically, we focus on spatial metaphors for time and ask whether developmental changes in children’s production of such metaphors in speech also become evident in gesture, and what cognitive and linguistic factors contribute to these changes. To answer these questions, we analyzed the speech and gestures produced by three groups of children (ages 3-4, 5-6, and 7-8), all learning English as a first language, as they talked about past and future events, along with adult native speakers of English. We asked how early we see change in the orientation (sagittal vs. lateral), directionality (left-to-right, right-to-left, backward, or forward), and congruency with speech (lateral gestures with Time-RP language and sagittal gestures with Ego-RP language) of children’s gestures about time, and how comprehension of metaphors for time and literacy level influence these changes. We found developmental changes in all three respects. In orientation, children increased their use of lateral gestures with age, and this increase was influenced by their literacy level. Directionality also changed with age: children who understood metaphors for time were more likely to produce sagittal gestures that placed the past behind and the future ahead, while children with higher levels of literacy were more likely to use lateral gestures that placed the past to the left and the future to the right. Finally, the congruency of children’s gestures with their speech changed: the older children were more likely to pair lateral gestures with Time-RP language than with Ego-RP language.

    Multimodal-first or pantomime-first?

    A persistent controversy in language evolution research has been whether language emerged in the gestural-visual or in the vocal-auditory modality. A “dialectic” solution to this age-old debate has now been gaining ground: language was fully multimodal from the start, and remains so to this day. In this paper, we show this solution to be too simplistic and outline a more specific theoretical proposal, which we designate as pantomime-first. To decide between the multimodal-first and pantomime-first alternatives, we review several lines of interdisciplinary evidence and complement them with a cognitive-semiotic experiment. In the study, the participants saw – and then matched to hand-drawn images – recordings of short transitive events enacted by 4 actors in two conditions: visual (only body movement) and multimodal (body movement accompanied by nonlinguistic vocalization). Significantly, the matching accuracy was greater in the visual than the multimodal condition, though a follow-up experiment revealed that the emotional profiles of the events enacted in the multimodal condition could be reliably detected from the sound alone. We see these results as supporting the proposed pantomime-first scenario.

    Information packaging in speech shapes information packaging in gesture: the role of speech planning units in the coordination of speech-gesture production

    Linguistic encoding influences the gestural depiction of manner and path in motion events. Gestures depict manner and path differently across languages, either conflating or separating them, depending on whether manner and path are linguistically encoded within one clause (e.g., “rolling down”) or multiple clauses (e.g., “descends as it rolls”), respectively. However, it is unclear whether such gestural differences are driven by how speech packages information into planning units or by the way information is lexicalised (as verb plus particle or as two verbs). In two experiments, we manipulated the linguistic encoding of motion events in either one or two planning units while lexicalisation patterns were kept constant (i.e., verb plus particle). We found that separating manner (verb) and path (particle) into different planning units also increased gestural separation of manner and path. Thus, lexicalisation patterns do not drive the gestural depiction of motion events. Rather, gestures are shaped online by how speakers package information into planning units in speech production.

    French Face-to-Face Interaction: Repetition as a Multimodal Resource

    In this chapter, after presenting the corpus as well as some of the annotations developed in the OTIM project, we focus on the specific phenomenon of repetition. After briefly discussing this notion, we show that different degrees of convergence can be achieved by speakers depending on the multimodal complexity of the repetition and on the timing between the repeated element and the model. Although we focus more specifically on the gestural level, we present a multimodal analysis of gestural repetitions in which we encountered several issues linked to multimodal annotations of any type. This gives an overview of crucial issues in cross-level linguistic annotation, such as the definition of a phenomenon including formal and/or functional categorization.

    Cues to lying may be deceptive: Speaker and listener behaviour in an interactive game of deception

    Are the cues that speakers produce when lying the same cues that listeners attend to when attempting to detect deceit? We used a two-person interactive game to explore the production and perception of speech and nonverbal cues to lying. In each game turn, participants viewed pairs of images, with the location of some treasure indicated to the speaker but not to the listener. The speaker described the location of the treasure, with the objective of misleading the listener about its true location; the listener attempted to locate the treasure, based on their judgement of the speaker’s veracity. In line with previous comprehension research, listeners’ responses suggest that they attend primarily to behaviours associated with increased mental difficulty, perhaps because, under a cognitive hypothesis, lying is thought to cause an increased cognitive load. Moreover, a mouse-tracking analysis suggests that these judgements are made quickly, while the speakers’ utterances are still unfolding. However, there is a surprising mismatch between listeners and speakers: when producing false statements, speakers are less likely to produce the cues that listeners associate with lying. This production pattern is in keeping with an attempted-control hypothesis, whereby liars may take into account listeners’ expectations and correspondingly manipulate their behaviour to avoid detection.

    Systematic mappings between semantic categories and types of iconic representations in the manual modality: A normed database of silent gesture

    An unprecedented number of empirical studies have shown that iconic gestures - those that mimic the sensorimotor attributes of a referent - contribute significantly to language acquisition, perception, and processing. However, there has been a lack of normed studies describing generalizable principles in gesture production and in comprehension of the mappings of different types of iconic strategies (i.e., modes of representation; Müller, 2013). In Study 1 we elicited silent gestures in order to explore the implementation of different types of iconic representation (i.e., acting, representing, drawing, and personification) to express concepts across five semantic domains. In Study 2 we investigated the degree of meaning transparency (i.e., iconicity ratings) of the gestures elicited in Study 1. We found systematicity in the gestural forms of 109 concepts across all participants, with different types of iconicity aligning with specific semantic domains: Acting was favored for actions and manipulable objects, drawing for nonmanipulable objects, and personification for animate entities. Interpretation of gesture-meaning transparency was modulated by the interaction between mode of representation and semantic domain, with some couplings being more transparent than others: Acting yielded higher ratings for actions, representing for object-related concepts, personification for animate entities, and drawing for nonmanipulable entities. This study provides mapping principles that may extend to all forms of manual communication (gesture and sign). The database includes a list of the most systematic silent gestures in the group of participants, a notation of the form of each gesture based on four features (hand configuration, orientation, placement, and movement), each gesture’s mode of representation, iconicity ratings, and professionally filmed videos that can be used for experimental and clinical endeavors.
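    The database entry just described has a clear record structure: a concept, its semantic domain, four form features, a mode of representation, an iconicity rating, and a video. As a purely illustrative sketch, here is how one such entry might be modeled; every field name and example value below is an assumption drawn from the abstract's description, not the authors' actual schema.

```python
from dataclasses import dataclass

# Hypothetical record layout for one entry in a normed silent-gesture
# database; field names are assumptions based on the abstract, not the
# published schema.
@dataclass
class GestureEntry:
    concept: str                 # one of the 109 systematic concepts
    semantic_domain: str         # e.g. "action", "manipulable object"
    hand_configuration: str      # form feature 1
    orientation: str             # form feature 2
    placement: str               # form feature 3
    movement: str                # form feature 4
    mode_of_representation: str  # "acting", "representing", "drawing", or "personification"
    iconicity_rating: float      # mean meaning-transparency rating from Study 2
    video_url: str               # link to the filmed clip

# Invented example values, for illustration only.
entry = GestureEntry(
    concept="to hammer",
    semantic_domain="action",
    hand_configuration="closed fist",
    orientation="palm toward midline",
    placement="neutral space",
    movement="repeated downward strokes",
    mode_of_representation="acting",
    iconicity_rating=6.2,
    video_url="https://example.org/gestures/to-hammer.mp4",
)
```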

    Teacher gesture in a post-secondary English as a second language classroom: A sociocultural approach

    Vygotsky (1978) uses the example of gesture in a child, stating that finger pointing represents an interpersonal relationship, and only after this cultural form is internalized can an intrapersonal relationship develop. Language learning must be viewed in the context of social interaction, and the gesture of others, specifically of language instructors toward their students, is a form of social interaction worthy of attention. Newman and Holzman (1993) discuss the idea of performance as a mode of semiotic mediation related to meaning making. Daniels, Cole, and Wertsch (2007) also discuss the concept of performance, stating that gestures are tools which assist performance. Wells (1999) adds performance to Vygotsky’s modes of semiotic mediation when discussing learning and teaching within the zone of proximal development (ZPD), considering these modes sources of assistance to learners in the ZPD. This study examined the discourse and corresponding gestures used in the classroom by one female instructor and her students in a university ESL pronunciation course. Specifically, the observations are of the teacher in interaction with students concerning the subject matter. The instructor and students were video recorded for the first five weeks of an eight-week course, which met twice per week for one hour. The findings are discussed in relation to the instructor’s embodied practices. The data revealed that the instructor gestured and mimetically illustrated in order to concretize the language. In addition, her performance included nearly constant instantiations of language in terms of gesture. The gestures observed are organized into the linguistic categories of grammar, pronunciation, and lexis; gestures related to classroom management are also described. This organization reinforces the notion that the instructor was trying to concretize and codify the language. Gestures in this study are considered in relation to pedagogy; therefore, not only the gesture types but also their functions are discussed.
