
    Considering the nature of multimodal language from a crosslinguistic perspective

    Language in its primary face-to-face context is multimodal (e.g., Holler and Levinson, 2019; Perniss, 2018). Thus, understanding how expressions in the vocal and visual modalities together contribute to our notions of language structure, use, processing, and transmission (i.e., acquisition, evolution, emergence) in different languages and cultures should be a fundamental goal of the language sciences. This requires a new framework of language that brings together how the arbitrary and the non-arbitrary, motivated semiotic resources of language relate to each other. The current commentary evaluates such a proposal by Murgiano et al. (2021) from a crosslinguistic perspective, taking variation as well as systematicity in multimodal utterances into account.

    Learning to use demonstratives in conversation: What do language-specific strategies in Turkish reveal?

    Pragmatic development requires the ability to use linguistic forms, along with non-verbal cues, to focus an interlocutor's attention on a referent during conversation. We investigate the development of this ability by examining how the use of demonstratives is learned in Turkish, where a three-way demonstrative system (bu, şu, o) obligatorily encodes both distance contrasts (i.e., proximal and distal) and the absence or presence of the addressee's visual attention on the referent. A comparison of demonstrative use by Turkish children (6 four- and 6 six-year-olds) and 6 adults during conversation shows that adultlike use of the attention-directing demonstrative, şu, is not mastered even at the age of six, while the distance contrasts are learned earlier. This language-specific development reveals that designing referential forms in consideration of the recipient's attentional status during conversation is a pragmatic feat that takes more than six years to develop.

    Cross-modal investigation of event component omissions in language development: A comparison of signing and speaking children

    Language development research suggests a universal tendency for children to be under-informative in narrating motion events by omitting components such as Path, Manner, or Ground. However, this assumption has not been tested for children acquiring a sign language. Due to the affordances of the visual-spatial modality of sign languages for iconic expression, signing children might omit event components less frequently than speaking children. Here we analysed motion event descriptions elicited from deaf children (4–10 years) acquiring Turkish Sign Language (TİD) and their Turkish-speaking peers. While children omitted all types of event components more often than adults, signing children and adults encoded more Path and Manner in TİD than their peers in Turkish. These results provide further evidence for a universal tendency for children to omit event components, as well as for a modality bias for sign languages to encode both Manner and Path more frequently than spoken languages.

    Ostensive signals: markers of communicative relevance of gesture during demonstration to adults and children

    Speakers adapt their speech and gestures in various ways for their audience. We investigated whether they also use ostensive signals (eye gaze, ostensive speech (e.g., like this, this), or a combination of both) in relation to their gestures when talking to different addressees, i.e., to another adult or a child, in a multimodal demonstration task. While adults used more eye gaze towards their gestures with other adults than with children, they were more likely to use combined ostensive signals for children than for adults. Thus, speakers mark the communicative relevance of their gestures with different types of ostensive signals, taking the type of addressee into account.

    Simultaneity as an emergent property of efficient communication in language: A comparison of silent gesture and sign language

    Sign languages use multiple articulators and iconicity in the visual modality, which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality-specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different from its use within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality-specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality-specific adaptive capabilities.

    Demonstratives in context: Comparative handicrafts

    Demonstratives (e.g., words such as this and that in English) pivot on relationships between the item being talked about and features of the speech-act situation (e.g., where the speaker and addressee are standing or looking). However, they are only rarely investigated multimodally, in natural language contexts. This task is designed to build a video corpus of cross-linguistically comparable discourse data for the study of “deixis in action”, while simultaneously supporting the investigation of joint attention as a factor in speakers' selection of demonstratives. In the task, two or more speakers are asked to discuss and evaluate a group of similar items (e.g., examples of local handicrafts, tools, produce) that are placed within a relatively defined space (e.g., on a table). The task can additionally provide material for the comparison of pointing gesture practices.

    Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements

    Speakers’ visual attention to events is guided by linguistic conceptualization of information in spoken language production, and in language-specific ways. Does the production of language-specific co-speech gestures further guide speakers’ visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language in which speakers’ speech and gesture show language specificity: path of motion is mostly expressed within the main verb, accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task than in the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. The results strongly suggest that speakers’ visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production, and suggests that the links between the eye and the mouth may be extended to the eye and the hand.

    Degree of language experience modulates visual attention to visible speech and iconic gestures during clear and degraded speech comprehension

    Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement (i.e., a relative reduction in reaction-time cost) was largest when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners did. However, only native, but not non-native, listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gestures more because it might be more challenging for them to resolve the degraded auditory cues and couple those cues to the phonological information conveyed by visible speech. This diminished phonological knowledge might hinder the use of the semantic information conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.

    Individual differences in working memory and semantic fluency predict younger and older adults' multimodal recipient design in an interactive spatial task

    Aging appears to impair the ability to adapt speech and gestures based on knowledge shared with an addressee (common-ground-based recipient design) in narrative settings. Here, we test whether this extends to spatial settings and is modulated by cognitive abilities. Younger and older adults gave instructions on how to assemble 3D models from building blocks on six consecutive trials. We induced mutually shared knowledge by either showing speaker and addressee the model beforehand, or not. Additionally, shared knowledge accumulated across the trials. Younger, and crucially also older, adults provided recipient-designed utterances, indicated by a significant reduction in the number of words and of gestures when common ground was present. Additionally, we observed a reduction in semantic content and a shift in the cross-modal distribution of information across trials. Rather than age, individual differences in verbal and visual working memory and semantic fluency predicted the extent of addressee-based adaptations. Thus, in spatial tasks, individual cognitive abilities modulate the interactive language use of both younger and older adults.