
    Spectators’ aesthetic experiences of sound and movement in dance performance

    In this paper we present a study of spectators’ aesthetic experiences of sound and movement in live dance performance. A multidisciplinary team comprising a choreographer, neuroscientists and qualitative researchers investigated the effects of different sound scores on dance spectators. What would be the impact of auditory stimulation on kinesthetic experience and/or aesthetic appreciation of the dance? What would be the effect of removing music altogether, so that spectators watched dance while hearing only the performers’ breathing and footfalls? We investigated audience experience through qualitative research, using post-performance focus groups, while a separately conducted functional brain imaging (fMRI) study measured the synchrony in brain activity across spectators as they watched dance either with music or with the performers’ breathing only. When audiences watched dance accompanied by music, the fMRI data revealed evidence of greater intersubject synchronisation in a brain region consistent with complex auditory processing. The audience research found that some spectators derived pleasure from finding convergences between two complex stimuli (dance and music). The removal of music and the resulting audibility of the performers’ breathing had a significant impact on spectators’ aesthetic experience. For this condition, the fMRI analysis showed increased synchronisation among observers, suggesting a greater influence of the body when interpreting the dance stimuli. The audience research found evidence of similarly corporeally focused experience. The paper discusses possible connections between the findings of our different approaches, and considers the implications of this study for interdisciplinary research collaborations between the arts and sciences.
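    As background to the synchrony analysis described above: intersubject correlation (ISC) is commonly computed as the mean pairwise correlation of spectators’ time courses from a given brain region. The abstract does not describe the study’s actual pipeline, so the array shapes, region choice and numbers below are invented for illustration; this is a minimal sketch of the general technique, not the authors’ code.

```python
import numpy as np

def intersubject_correlation(timeseries):
    """Mean pairwise Pearson correlation of one region's time course
    across subjects.

    timeseries: array of shape (n_subjects, n_timepoints).
    """
    n_subjects, n_timepoints = timeseries.shape
    # Z-score each subject's time course.
    z = (timeseries - timeseries.mean(axis=1, keepdims=True)) \
        / timeseries.std(axis=1, keepdims=True)
    # Subject-by-subject correlation matrix.
    corr = (z @ z.T) / n_timepoints
    # Average over all distinct subject pairs (upper triangle).
    return corr[np.triu_indices(n_subjects, k=1)].mean()

# Illustrative example: 12 spectators, 200 fMRI volumes from one region.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)                       # stimulus-driven signal
data = shared + 0.5 * rng.standard_normal((12, 200))    # plus subject noise
print(intersubject_correlation(data))                   # high ISC, roughly 0.8
```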

    Gesture and Speech in Interaction - 4th edition (GESPIN 4)

    The fourth edition of Gesture and Speech in Interaction (GESPIN) was held in Nantes, France. With more than 40 papers, these proceedings show just what a flourishing field of enquiry gesture studies continues to be. The keynote speeches of the conference addressed three different aspects of multimodal interaction: gesture and grammar, gesture acquisition, and gesture and social interaction. In a talk entitled Qualities of event construal in speech and gesture: Aspect and tense, Alan Cienki presented an ongoing research project on narratives in French, German and Russian, a project that focuses especially on the verbal and gestural expression of grammatical tense and aspect in narratives in the three languages. Jean-Marc Colletta's talk, entitled Gesture and Language Development: towards a unified theoretical framework, described the joint acquisition and development of speech and early conventional and representational gestures. In Grammar, deixis, and multimodality between code-manifestation and code-integration, or why Kendon's Continuum should be transformed into a gestural circle, Ellen Fricke proposed a revisited grammar of noun phrases that integrates gestures as part of the semiotic and typological codes of individual languages. From a pragmatic and cognitive perspective, Judith Holler explored the use of gaze and hand gestures as means of organizing turns at talk as well as establishing common ground in a presentation entitled On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction. Among the talks and posters presented at the conference, the vast majority of topics related, quite naturally, to gesture and speech in interaction, understood both in terms of the mapping of units in different semiotic modes and of the use of gesture and speech in social interaction. Several presentations explored the effects of impairments (such as diseases or the natural ageing process) on gesture and speech. The communicative relevance of gesture and speech and audience design in natural interactions, as well as in more controlled settings like television debates and reports, was another topic addressed during the conference. Some participants also presented research on first and second language learning, while others discussed the relationship between gesture and intonation. While most participants presented research on gesture and speech from an observer's perspective, be it in semiotics or pragmatics, some nevertheless focused on another important aspect: the cognitive processes involved in language production and perception. Last but not least, participants also presented talks and posters on the computational analysis of gestures, whether involving external devices (e.g. mocap, Kinect) or concerning the use of specially designed computer software for the post-processing of gestural data. Importantly, new links were made between semiotics and mocap data.

    Evaluating the impact of variation in automatically generated embodied object descriptions

    The primary task for any system that aims to automatically generate human-readable output is choice: the input to the system is usually well-specified, but there can be a wide range of options for creating a presentation based on that input. When designing such a system, an important decision is to select which aspects of the output are hard-wired and which allow for dynamic variation. Supporting dynamic choice requires additional representation and processing effort in the system, so it is important to ensure that incorporating variation has a positive effect on the generated output. In this thesis, we concentrate on two types of output generated by a multimodal dialogue system: linguistic descriptions of objects drawn from a database, and conversational facial displays of an embodied talking head. In a series of experiments, we add different types of variation to one of these types of output. The impact of each implementation is then assessed through a user evaluation in which human judges compare outputs generated by the basic version of the system to those generated by the modified version; in some cases, we also use automated metrics to compare the versions of the generated output. This series of implementations and evaluations allows us to address three related issues. First, we explore the circumstances under which users perceive and appreciate variation in generated output. Second, we compare two methods of including variation into the output of a corpus-based generation system. Third, we compare human judgements of output quality to the predictions of a range of automated metrics. The results of the thesis are as follows. The judges generally preferred output that incorporated variation, except for a small number of cases where other aspects of the output obscured it or the variation was not marked. In general, the output of systems that chose the majority option was judged worse than that of systems that chose from a wider range of outputs. However, the results for non-verbal displays were mixed: users mildly preferred agent outputs where the facial displays were generated using stochastic techniques to those where a simple rule was used, but the stochastic facial displays decreased users’ ability to identify contextual tailoring in speech while the rule-based displays did not. Finally, automated metrics based on simple corpus similarity favour generation strategies that do not diverge far from the average corpus examples, which are exactly the strategies that human judges tend to dislike. Automated metrics that measure other properties of the generated output correspond more closely to users’ preferences.
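    The abstract does not name the automated metrics used, but its final finding can be illustrated with a standard n-gram overlap metric such as BLEU (here via NLTK). The object descriptions below are invented for illustration; the point is only that a surface corpus-similarity score rewards output close to the corpus "average", which is exactly the majority-option strategy human judges disliked.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical corpus of human-written object descriptions (tokenised).
corpus = [
    "this is a red sofa in a modern style".split(),
    "this is a blue sofa in a classic style".split(),
    "this is a red armchair in a modern style".split(),
]

# Two generated candidates: one close to the corpus majority pattern,
# one that varies its phrasing.
majority = "this is a red sofa in a modern style".split()
varied = "here you can see a modern red sofa".split()

smooth = SmoothingFunction().method1
for name, hyp in [("majority", majority), ("varied", varied)]:
    score = sentence_bleu(corpus, hyp, smoothing_function=smooth)
    print(name, round(score, 3))
# The majority-style output scores higher against the corpus even when
# human judges would prefer the varied one: the mismatch the thesis reports.
```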

    Max-Planck-Institute for Psycholinguistics: Annual Report 2003


    Advice on the use of gestures in presentation skills manuals: alignment between theory-research and instruction

    There appears to be a weak alignment between manuals on using hand gestures in oral presentations, theoretical sources on gesture production, and empirical studies on dimensions of gesture processing and use. Much of the advice in presentation skills manuals centres on prohibitions regarding undesirable postures and gestures. Furthermore, these sources tend to focus on the intentions, feelings and mental states of the speakers as well as the psychological effect of gestures on the audience. Theoretical sources, on the other hand, typically emphasise the relationship between speech and gestures, and the mental processing of the latter, especially representational gestures. Quasi-experimental empirical research studies, in turn, favour the description and analysis of iconic and metaphorical gestures, often with specific reference to gesturing in the retelling of cartoon narratives. The purpose of this article is to identify the main areas of misalignment between practical, theoretical and empirical sources, and to provide pointers on how the advice literature could align guidelines on gesture use with theory and research. First, I provide an overview of pertinent gesture theories, followed by a discussion of partially canonised typologies that describe gestures in relation to semiotic gesture types, handedness (left, right or both hands), salient hand shapes and palm orientation, movement, and position in gesture space. Subsequently, I share the results of a qualitative analysis of the advice on gesture use in 17 manuals on presentation skills. I then report on an analysis of the co-speech gestures in a corpus of 17 video-recorded audio-visual presentations by students of Theology. The article concludes by proposing an outline for advice on gestures that is based on a considered integration of traditional advice in guide books and websites, theory, and empirical research.
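    To make the coding dimensions listed above concrete, here is a hypothetical sketch of a record type for one coded gesture. The field names and category labels are illustrative assumptions, not the article's actual coding scheme; published typologies (e.g. Kendon, McNeill) differ in their exact categories.

```python
from dataclasses import dataclass
from enum import Enum

class Handedness(Enum):
    LEFT = "left"
    RIGHT = "right"
    BOTH = "both"

@dataclass
class GestureAnnotation:
    """One coded co-speech gesture, along the dimensions named above."""
    semiotic_type: str      # e.g. "iconic", "metaphoric", "deictic", "beat"
    handedness: Handedness
    hand_shape: str         # e.g. "open palm", "fist", "index extended"
    palm_orientation: str   # e.g. "up", "down", "toward speaker"
    movement: str           # e.g. "arc", "straight", "circular"
    gesture_space: str      # e.g. "centre-centre", "upper right periphery"

# One invented example record.
example = GestureAnnotation(
    semiotic_type="metaphoric",
    handedness=Handedness.BOTH,
    hand_shape="open palm",
    palm_orientation="up",
    movement="arc",
    gesture_space="centre-centre",
)
print(example)
```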

    Multi-modal response generation.

    Wong Ka Ho. Thesis (M.Phil.), Chinese University of Hong Kong, 2006; submitted October 2005. Includes bibliographical references (leaves 163-170). Abstracts in English and Chinese. Contents:
    - Chapter 1, Introduction: Multi-modal and Multi-media; Overview; Thesis Goal; Thesis Outline
    - Chapter 2, Background: Multi-modal Fission; Multi-modal Data Collection (Collection Time; Annotation and Tools; Knowledge of Multi-modal Use); Text-to-audiovisual Speech System (Different Approaches to Generate a Talking Head; Sub-tasks in Animating a Talking Head); Modality Selection (Rules-based, Plan-based, Feature-based and Corpus-based Approaches); Summary
    - Chapter 3, Information Domain: Multi-media Information; Task Goals, Dialog Acts, Concepts and Information Type; User's Task and Scenario; Chapter Summary
    - Chapter 4, Multi-modal Response Data Collection: Data Collection Setup (Multi-modal Input Setup; Multi-modal Output Setup); Procedure (Precaution; Recording; Data Size and Type); Annotation (Extensible Multi-Modal Markup Language; Mobile, Multi-biometric and Multi-modal Annotation); Problems in the Wizard-of-Oz Setup (Lack of Knowledge; Time Deficiency; Information Availability; Operation Delay; Lack of Modalities); Data Optimization (Precaution; Procedures; Data Size in Expert Design Responses); Analysis and Discussion (Multi-modal Usage; Modality Combination; Deictic Terms; Task Goal and Dialog Acts; Information Type); Chapter Summary
    - Chapter 5, Text-to-Audiovisual Speech System: Phonemes and Visemes; Three-dimensional Facial Animation (3D Face Model; The Blending Process for Animation; Connectivity between Visemes); User Perception Experiments; Applications and Extension (Multilingual Extension and Potential Applications); Talking Head in Multi-modal Dialogue System (Prosody; Body Gesture); Chapter Summary
    - Chapter 6, Modality Selection and Implementation: Multi-modal Response Examples (Single Concept-value; Two Concept-values with Different Information Types; Multiple Concept-values with the Same Information Type); Heuristic Rules for Modality Selection (General Principles; Heuristic Rules; Temporal Coordination for Synchronization; Physical Layout; Deictic Term; Example); Spoken Content Generation; Chapter Summary
    - Chapter 7, Conclusions and Future Work: Summary; Contributions; Future Work
    - Appendices: XML Schema for M3 Markup Language; M3ML Examples; Domain-Specific Task Goals in the Hong Kong Tourism Domain; Dialog Acts for User Request in the Hong Kong Tourism Domain; Dialog Acts for System Response in the Hong Kong Tourism Domain; Information Type and Concepts; Concepts
    - Bibliography
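    Chapter 6 of the outline above covers heuristic rules for modality selection. As a hypothetical sketch of that general technique only (the thesis's actual rules, concept names and modality inventory are not reproduced in this record), rules of this kind map an information type and the amount of content to a set of output modalities in a tourism-style domain:

```python
def select_modalities(information_type, n_concept_values):
    """Choose output modalities for one system response (illustrative rules)."""
    modalities = ["speech"]  # assume the talking head always speaks
    if information_type == "location":
        modalities.append("map_display")    # show locations rather than only telling
    elif information_type == "image":
        modalities.append("photo_display")
    if n_concept_values > 3:
        # Long enumerations are easier to scan on screen than to listen to.
        modalities.append("text_list")
    return modalities

print(select_modalities("location", 1))       # ['speech', 'map_display']
print(select_modalities("opening_hours", 5))  # ['speech', 'text_list']
```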

    Gestures and Lexical Access Problems in German as Second Language

    Master's in Applied Linguistics and Language Acquisition in Multilingual Contexts, Department of English and German Philology, Universitat de Barcelona, 2015; supervisor: Marta Fernandez-Villanueva. Gestures receive growing attention in the field of Second Language Acquisition, but there is still a scarcity of research that looks at them as part of multimodal communication using an interactional approach. The present study aims to explore the interplay between gestures and lexical access problems in oral production in German as a second language. It looks at the principal gesture functions in communication (referential, discursive, interactional, autostimulative) and adapts the NEUROGES typology of Lausberg and Sloetjes (2009), which distinguishes image-depicting, conventional, emotional, pointing, emphatic and autostimulative gestures. The purpose of the study is to see what kinds of gestures occur with lexical access problems in German SL oral communication, and whether any gestural types depend on L2 proficiency and fluency. To answer these research questions, the speech of 6 Spanish/Catalan (L1) students of German (L2) was analyzed. The participants varied in their proficiency (intermediate, upper-intermediate and advanced levels) and fluency. The data were taken from the VARCOM corpus of the University of Barcelona: videotaped dialogues between the students of German and German native speakers, who participated in the communication and were instructed to prompt information from their interviewees during an argumentation task. During the analysis, the cases of lexical access problems in speech, the lexical items involved (abstract or concrete) and the accompanying hand gestures were identified and coded in the ELAN annotation tool. The study reflects on tendencies in gestural trajectory and dynamics, the start time of gestures relative to the target word (before, together with, or after it), and the principal communicative functions of gestures at moments of word searches in speech.
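    The onset-timing analysis mentioned above (whether a gesture starts before, together with, or after the target word) can be sketched as a simple classification over time-aligned annotations, of the kind exported from ELAN. The tolerance window and the millisecond values below are invented assumptions, not the study's parameters:

```python
def onset_relation(gesture_start_ms, word_start_ms, tolerance_ms=120):
    """Classify when a gesture starts relative to the searched-for word."""
    delta = gesture_start_ms - word_start_ms
    if delta < -tolerance_ms:
        return "before"
    if delta > tolerance_ms:
        return "after"
    return "together"

# Three invented word-search episodes: (gesture onset, word onset) in ms.
episodes = [(14820, 15200), (31050, 30990), (47300, 46100)]
for gesture_start, word_start in episodes:
    print(onset_relation(gesture_start, word_start))
# Prints: before, together, after
```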

    Speakers adapt gestures to addressees' knowledge: Implications for models of co-speech gesture.

    Are gesturing and speaking shaped by similar communicative constraints? In an experiment, we teased apart communicative from cognitive constraints upon multiple dimensions of speech-accompanying gestures in spontaneous dialogue. Typically, speakers attenuate old, repeated or predictable information but not new information. Our study distinguished what was new or old for speakers from what was new or old for (and shared with) addressees. In 20 groups of 3 naive participants, speakers retold the same Road Runner cartoon story twice to one addressee and once to another. We compared the distribution of gesture types, and the gestures’ size and iconic precision, across retellings. Speakers gestured less frequently in stories retold to Old Addressees than in those told to New Addressees. Moreover, the gestures they produced in stories retold to Old Addressees were smaller and less precise than those produced for New Addressees, although gestures attenuated over successive retellings as well. Consistent with our previous findings about speaking, gesturing is guided by both speaker-based (cognitive) and addressee-based (communicative) constraints that affect both planning and motoric execution. We discuss the implications for models of co-speech gesture production.
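    A minimal sketch of the kind of frequency comparison implied above, with invented counts (the study's own measures, covering gesture types, size and iconic precision, were richer than this): gesture rate is typically normalised per 100 words so retellings of different lengths can be compared.

```python
def gesture_rate(n_gestures, n_words):
    """Gestures per 100 words, to normalise across retelling lengths."""
    return 100.0 * n_gestures / n_words

# Invented counts for one speaker's retellings in each condition.
retellings = {
    "new_addressee": {"gestures": 96, "words": 800},
    "old_addressee": {"gestures": 61, "words": 790},
}

for condition, counts in retellings.items():
    rate = gesture_rate(counts["gestures"], counts["words"])
    print(f"{condition}: {rate:.1f} gestures per 100 words")
# Attenuation shows up as a lower rate for Old Addressees.
```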