    Gesture and Speech in Interaction - 4th edition (GESPIN 4)

    The fourth edition of Gesture and Speech in Interaction (GESPIN) was held in Nantes, France. With more than 40 papers, these proceedings show just what a flourishing field of enquiry gesture studies continues to be. The keynote speeches of the conference addressed three different aspects of multimodal interaction: gesture and grammar, gesture acquisition, and gesture and social interaction. In a talk entitled Qualities of event construal in speech and gesture: Aspect and tense, Alan Cienki presented an ongoing research project on narratives in French, German and Russian, a project that focuses especially on the verbal and gestural expression of grammatical tense and aspect in narratives in the three languages. Jean-Marc Colletta's talk, entitled Gesture and Language Development: towards a unified theoretical framework, described the joint acquisition and development of speech and early conventional and representational gestures. In Grammar, deixis, and multimodality between code-manifestation and code-integration, or why Kendon's Continuum should be transformed into a gestural circle, Ellen Fricke proposed a revisited grammar of noun phrases that integrates gestures as part of the semiotic and typological codes of individual languages. From a pragmatic and cognitive perspective, Judith Holler explored the use of gaze and hand gestures as means of organizing turns at talk as well as establishing common ground in a presentation entitled On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction.

    Among the talks and posters presented at the conference, the vast majority of topics related, quite naturally, to gesture and speech in interaction, understood both in terms of the mapping of units in different semiotic modes and of the use of gesture and speech in social interaction. Several presentations explored the effects of impairments (such as diseases or the natural ageing process) on gesture and speech. The communicative relevance of gesture and speech and audience design in natural interactions, as well as in more controlled settings like television debates and reports, was another topic addressed during the conference. Some participants also presented research on first and second language learning, while others discussed the relationship between gesture and intonation. While most participants presented research on gesture and speech from an observer's perspective, be it in semiotics or pragmatics, some nevertheless focused on another important aspect: the cognitive processes involved in language production and perception. Last but not least, participants also presented talks and posters on the computational analysis of gestures, whether involving external devices (e.g. mocap, Kinect) or the use of specially designed computer software for the post-processing of gestural data. Importantly, new links were made between semiotics and mocap data.

    Perceptual lateralisation of audio-visual stimuli.

    SIGLE record. Available from the British Library Document Supply Centre (BLDSC), DSC:DXN014766, United Kingdom.

    The Effect of Increased Immersion and its Relationship with Presence on Episodic Memory Performance within Virtual Reality Environments

    Although virtual reality (VR) represents a promising tool for psychological research, much remains unknown about how different properties of VR environments may affect episodic memory performance. Two closely related characteristics of VR are immersion (i.e., the objective degree to which VR naturalistically portrays a facsimile of an analogous real-world environment) and presence (i.e., the subjective sense of being “mentally transported” to the virtual world). While some research has demonstrated a benefit of increased immersion on VR-based learning, it is uncertain how broadly and consistently this benefit might extend to individual components of immersion in isolation. Additionally, it is unclear whether immersion itself is what influences memory performance in these instances, or if presence is a mediating factor which explains the relationship between immersion and memory. To address these issues, the current study assessed how presence and memory were affected by three individual manipulations of immersion: field of view, unimodal (visual only) vs. bimodal (visual and auditory) environments, and the realism of lighting effects (e.g., the occurrence or absence of shadows). Results varied between different manipulations of immersion, suggesting that outcomes of one property of immersion may not be representative of immersion in general. However, no evidence for a mediating effect of presence emerged in any of the individual experiments, nor in a combined inter-experimental analysis. This outcome demonstrates a degree of independence between immersion and presence with regard to their influence on episodic memory performance, and further implies that presence may not be a critical factor in memory in VR.
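    The mediation question above (is presence the mechanism through which immersion affects memory?) is typically tested by estimating the indirect effect, i.e. the product of the immersion-to-presence and presence-to-memory paths, with a bootstrap confidence interval. The sketch below is a minimal illustration of that logic under assumed, hypothetical variable names (immersion, presence, memory) and simulated data; it is not the analysis pipeline used in the thesis.

    # Illustrative bootstrap test of an indirect (mediation) effect.
    # Hypothetical variables: immersion (X), presence (M), memory (Y).
    # Not the thesis's actual analysis; the data below are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    immersion = rng.integers(0, 2, n).astype(float)   # e.g. narrow vs. wide field of view
    presence = 0.4 * immersion + rng.normal(size=n)   # candidate mediator
    memory = 0.3 * presence + 0.1 * immersion + rng.normal(size=n)

    def ols(y, *predictors):
        """Least-squares coefficients of y on an intercept plus the predictors."""
        X = np.column_stack([np.ones(len(y)), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    def indirect_effect(x, m, y):
        """a * b: the x -> m path times the m -> y path (controlling for x)."""
        a = ols(m, x)[1]
        b = ols(y, x, m)[2]
        return a * b

    # Percentile bootstrap confidence interval for the indirect effect.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(immersion[idx], presence[idx], memory[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect = {indirect_effect(immersion, presence, memory):.3f}, "
          f"95% CI [{lo:.3f}, {hi:.3f}]")

    A bootstrap interval that excludes zero would count as evidence for mediation; the abstract above reports the opposite pattern, i.e. no reliable indirect effect of immersion through presence.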

    Gesture-speech coupling in L2 lexical stress production: A pre-registration of a speech acoustic and gesture kinematic study

    The prosody of a second language (L2) is notoriously difficult to acquire. It requires the mastery of a range of nested multimodal systems, including articulatory but also gestural signals, as hand gestures are produced in close synchrony with spoken prosody. It remains unclear how easily the articulatory and gestural systems acquire new prosodic patterns in the L2 and how the two systems interact, especially when L1 patterns interfere. This interdisciplinary pre-registered study investigates how Dutch learners of Spanish produce multimodal lexical stress in Spanish-Dutch cognates (e.g., Spanish profeSOR vs. Dutch proFESsor). Acoustic analyses assess whether gesturing helps L2 speakers to place stress on the correct syllable, and whether gesturing boosts the acoustic correlates of stress through biomechanical coupling. Moreover, motion-tracking and time-series analyses test whether gesture-prosody synchrony is enhanced for stress-matching vs. stress-mismatching cognate pairs, perhaps revealing that gestural timing is biased in the L1 (or L2) direction (e.g., Spanish profeSOR with the gesture biased towards the Dutch stressed syllable -fes). Thus, we will uncover how speakers deal with manual, articulatory, and cognitive constraints that need to be brought into harmony for efficient speech production, bearing implications for theories of gesture-speech interaction and multimodal L2 acquisition.
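    One common way to quantify the kind of gesture-prosody synchrony described above is to cross-correlate a gesture kinematic signal (e.g. wrist speed from motion tracking) with an acoustic time series (e.g. the amplitude envelope) and read off the lag of peak correlation. The sketch below illustrates that generic idea only; the function, signal names and parameters are assumptions, not the study's pre-registered pipeline.

    # Illustrative gesture-speech synchrony measure: lag of peak correlation
    # between a kinematic signal and an acoustic envelope. Not the study's
    # pre-registered analysis; inputs and parameters are assumptions.
    import numpy as np

    def synchrony_lag(gesture_speed, envelope, fs, max_lag_s=0.5):
        """Return (lag in seconds, peak correlation) within +/- max_lag_s.

        Positive lag means the gesture signal leads the speech signal.
        """
        g = (gesture_speed - gesture_speed.mean()) / gesture_speed.std()
        e = (envelope - envelope.mean()) / envelope.std()
        n = min(len(g), len(e))
        g, e = g[:n], e[:n]

        max_lag = int(max_lag_s * fs)
        lags = np.arange(-max_lag, max_lag + 1)
        corrs = []
        for lag in lags:
            if lag >= 0:                      # gesture leads speech by `lag` samples
                a, b = g[:n - lag], e[lag:]
            else:                             # speech leads gesture
                a, b = g[-lag:], e[:n + lag]
            corrs.append(np.corrcoef(a, b)[0, 1])
        best = int(np.argmax(corrs))
        return lags[best] / fs, corrs[best]

    # Toy example: a gesture peak ~100 ms before the speech peak, sampled at 100 Hz.
    fs = 100.0
    t = np.arange(0, 2, 1 / fs)
    envelope = np.exp(-((t - 1.0) ** 2) / 0.01)
    gesture_speed = np.exp(-((t - 0.9) ** 2) / 0.01)
    print(synchrony_lag(gesture_speed, envelope, fs))   # lag of about +0.10 s

    Comparing such lags between stress-matching and stress-mismatching cognates would then show whether gestural timing drifts towards the L1 (or L2) stressed syllable, as hypothesized in the abstract.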

    Multi-Level Audio-Visual Interactions in Speech and Language Perception

    That we perceive our environment as a unified scene rather than individual streams of auditory, visual, and other sensory information has recently provided motivation to move past the long-held tradition of studying these systems separately. Although they are each unique in their transduction organs, neural pathways, and cortical primary areas, the senses are ultimately merged in a meaningful way which allows us to navigate the multisensory world. Investigating how the senses are merged has become an increasingly wide field of research in recent decades, with the introduction and increased availability of neuroimaging techniques. Areas of study range from multisensory object perception to cross-modal attention, multisensory interactions, and integration. This thesis focuses on audio-visual speech perception, with a special focus on the facilitatory effects of visual information on auditory processing. When visual information is concordant with auditory information, it provides an advantage that is measurable in behavioral response times and evoked auditory fields (Chapter 3) and in increased entrainment to multisensory periodic stimuli reflected by steady-state responses (Chapter 4). When the audio-visual information is incongruent, the two signals can often, but not always, combine to form a third percept that is not physically present (known as the McGurk effect). This effect is investigated (Chapter 5) using real-word stimuli. McGurk percepts were not robustly elicited for a majority of stimulus types, but patterns of responses suggest that the physical and lexical properties of the auditory and visual stimuli may affect the likelihood of obtaining the illusion. Together, these experiments add to the growing body of knowledge suggesting that audio-visual interactions occur at multiple stages of processing.
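    Steady-state entrainment of the kind mentioned above (Chapter 4) is commonly quantified as spectral amplitude at the stimulation frequency relative to neighbouring frequency bins of an averaged MEG/EEG trace. The following sketch shows that generic computation on a simulated trace; the signal, sampling rate and 40 Hz stimulation frequency are assumptions for illustration, not values from the thesis.

    # Illustrative steady-state response (SSR) measure: amplitude at the
    # stimulation frequency relative to neighbouring bins. Simulated data;
    # not the thesis's recording parameters or analysis pipeline.
    import numpy as np

    def ssr_snr(trace, fs, stim_freq, n_neighbours=10):
        """Amplitude at stim_freq divided by the mean amplitude of nearby bins."""
        spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
        target = int(np.argmin(np.abs(freqs - stim_freq)))
        neighbours = np.r_[target - n_neighbours:target - 1,
                           target + 2:target + n_neighbours + 1]
        return spectrum[target] / spectrum[neighbours].mean()

    # Toy averaged response: a 40 Hz oscillation buried in noise, 2 s at 1 kHz.
    fs, stim_freq = 1000.0, 40.0
    t = np.arange(0, 2, 1 / fs)
    rng = np.random.default_rng(1)
    trace = 0.5 * np.sin(2 * np.pi * stim_freq * t) + rng.normal(size=t.size)
    print(f"SNR at {stim_freq:.0f} Hz: {ssr_snr(trace, fs, stim_freq):.1f}")

    A larger value for bimodal than for unimodal stimulation would be one way of expressing the increased entrainment the abstract reports.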