Multimodal Grammar Implementation
This paper reports on an implementation of a multimodal grammar of speech and co-speech gesture within the LKB/PET grammar engineering environment. The implementation extends the English Resource Grammar (ERG; Flickinger 2000) with HPSG types and rules that capture the form of the linguistic signal, the form of the gestural signal, and their relative timing, in order to constrain the meaning of the multimodal action. The grammar yields a single parse tree that integrates the spoken and gestural modalities, thereby drawing on standard semantic composition techniques to derive the multimodal meaning representation. Using the current machinery, the main challenge for the grammar engineer is the nonlinear input: the modalities can overlap temporally. We capture this by assigning identical token edges to speech and gesture. Further, the semantic contribution of gestures is encoded by lexical rules that transform a speech phrase into a multimodal entity with conjoined spoken and gestural semantics.
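As an informal illustration only (not the actual LKB/TDL implementation; all names below are hypothetical), the following Python sketch shows the two ideas of the abstract: temporally overlapping speech and gesture tokens are mapped onto identical chart-edge spans, and a lexical-rule-like operation conjoins the spoken and gestural semantics of a phrase.

# Minimal sketch, assuming hypothetical Token/edge structures rather than
# the paper's HPSG types: overlapping speech and gesture tokens share a
# chart edge so a standard parser can consume the non-linear input.
from dataclasses import dataclass

@dataclass
class Token:
    form: str          # word or gesture label
    modality: str      # "speech" or "gesture"
    start: float       # onset in seconds
    end: float         # offset in seconds

def overlaps(a: Token, b: Token) -> bool:
    """True if the two tokens overlap temporally."""
    return a.start < b.end and b.start < a.end

def align_to_edges(speech: list[Token], gesture: list[Token]):
    """Give temporally overlapping speech and gesture tokens identical
    (start, end) chart-edge indices, mirroring the strategy of identical
    speech and gesture token edges described in the abstract."""
    edges = []
    for i, w in enumerate(speech):
        cospeech = [g for g in gesture if overlaps(w, g)]
        # the speech token and any overlapping gesture share edge span (i, i+1)
        edges.append(((i, i + 1), w, cospeech))
    return edges

def conjoin_semantics(word_sem: dict, gesture_sem: dict) -> dict:
    """Toy analogue of a lexical rule turning a speech phrase into a
    multimodal entity whose semantics conjoins both signals."""
    return {"AND": [word_sem, gesture_sem]}

# Example: "this" spoken while a pointing gesture is in progress
speech = [Token("take", "speech", 0.0, 0.3), Token("this", "speech", 0.3, 0.6)]
gesture = [Token("deictic:point", "gesture", 0.25, 0.7)]
for span, w, cospeech in align_to_edges(speech, gesture):
    if cospeech:
        print(span, conjoin_semantics({"pred": w.form},
                                      {"pred": cospeech[0].form}))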
What can co-speech gestures in aphasia tell us about the relationship between language and gesture?: A single case study of a participant with Conduction Aphasia
Cross-linguistic evidence suggests that language typology influences how people gesture when using 'manner-of-motion' verbs (Kita 2000; Kita & Özyürek 2003) and that this is due to 'online' lexical and syntactic choices made at the time of speaking (Kita, Özyürek, Allen, Brown, Furman & Ishizuka, 2007). This paper attempts to relate these findings to the co-speech iconic gesture used by an English speaker with conduction aphasia (LT) and five controls describing a Sylvester and Tweety cartoon. LT produced co-speech gesture which showed distinct patterns, which we relate to different aspects of her language impairment and the lexical and syntactic choices she made during her narrative.
"Show me, how does it look now": Remote Help-giving in Collaborative Design
This paper examines the role of visual information in a remote help-giving situation involving the collaborative physical task of designing a prototype remote control. We analyze a set of video recordings captured within an experimental setting. Our analysis shows that by using gestures and relevant artefacts, and by projecting activities onto the camera, participants were able to discuss several design-related issues. The results indicate that with a limited camera view (mainly faces and shoulders), participants' conversations were centered on the physical prototype that they were designing. The socially organized use of our experimental setting provides some key implications for designing future remote collaborative systems.
Gesture production and comprehension in children with specific language impairment
Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed on a par with their peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary, and this group also showed stronger associations between gesture and language than TD children. When comprehension breaks down for children with SLI, gesture may be relied on over speech, whilst TD children show a preference for spoken cues. The findings suggest that for children with SLI, gesture scaffolds remain more closely tied to language development than for TD peers, who have outgrown their earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom-based gesture support for clinical groups.
The role of beat gesture and pitch accent in semantic processing : An ERP study
Peer reviewed. Publisher PDF.
How Do Gestures Influence Thinking and Speaking? The Gesture-for-Conceptualization Hypothesis.
Peer reviewed. Postprint.
Detecting Emotional Involvement in Professional News Reporters: An Analysis of Speech and Gestures
This study aims to investigate the extent to which reporters' voice and body behaviour may betray different degrees of emotional involvement when reporting on emergency situations. The hypothesis is that emotional involvement is associated with an increase in body movements and in pitch and intensity variation. The object of investigation is a corpus of 21 ten-second videos of Italian news reports on flooding taken from Italian nation-wide TV channels. The gestures and body movements of the reporters were first inspected visually. Then, measures of the reporters' pitch and intensity variation were calculated and related to the reporters' gestures. The effects of the variability in the reporters' voice and gestures were tested with an evaluation test. The results show that the reporters vary greatly in the extent to which they move their hands and body in their reports. Two gestures seem to characterise reporters' communication of emergencies: beats and deictics. The reporters' use of gestures partially parallels their variations in pitch and intensity. The evaluation study shows that increased gesturing is associated with greater emotional involvement and less professionalism. The data were used to create an ontology of gestures for the communication of emergencies.
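The abstract does not name the tools used to measure pitch and intensity variation; as a hedged illustration only, the Python sketch below assumes the Praat-based parselmouth library and a hypothetical clip report_clip.wav to show how such per-clip variation measures could be derived.

# Illustrative sketch under stated assumptions (parselmouth, a hypothetical
# audio file): quantify pitch and intensity variation for one news clip.
import numpy as np
import parselmouth

def pitch_intensity_variation(path: str) -> dict:
    snd = parselmouth.Sound(path)

    # Pitch track in Hz; unvoiced frames are returned as 0 and dropped
    pitch = snd.to_pitch()
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]

    # Intensity track in dB
    intensity = snd.to_intensity()
    db = intensity.values.flatten()

    return {
        "f0_mean_hz": float(np.mean(f0)),
        "f0_sd_hz": float(np.std(f0)),          # pitch variation
        "intensity_sd_db": float(np.std(db)),   # intensity variation
    }

print(pitch_intensity_variation("report_clip.wav"))  # hypothetical clip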
Beat that Word: How Listeners Integrate Beat Gesture and Focus in Multimodal Speech Discourse
Peer reviewed. Publisher PDF.
ANGELICA: choice of output modality in an embodied agent
The ANGELICA project addresses the problem of modality choice in information presentation by embodied, humanlike agents. The output modalities available to such agents include both language and various nonverbal signals such as pointing and gesturing. For each piece of information to be presented by the agent, it must be decided whether it should be expressed using language, a nonverbal signal, or both. In the ANGELICA project, a model of the different factors influencing this choice will be developed and integrated in a natural language generation system. The application domain is the presentation of route descriptions by an embodied agent in a 3D environment. Evaluation and testing form an integral part of the project. In particular, we will investigate the effect of different modality choices on the effectiveness and naturalness of the generated presentations and on the user's perception of the agent's personality.
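The abstract only announces that a model of the factors behind modality choice will be developed; as a purely hypothetical sketch of what such a decision could look like, the Python snippet below scores a few invented factors (spatial content, referent visibility, listener load) and returns language, a nonverbal signal, or both.

# Hypothetical sketch: the factors, thresholds, and names below are invented
# for illustration and are not the ANGELICA model.
from enum import Enum

class Modality(Enum):
    LANGUAGE = "language"
    GESTURE = "gesture"
    BOTH = "both"

def choose_modality(is_spatial: bool, referent_visible: bool,
                    listener_load: float) -> Modality:
    """Pick an output modality for one piece of route information.

    is_spatial        -- does the content describe a location or direction?
    referent_visible  -- is the referent visible in the 3D scene?
    listener_load     -- estimated cognitive load of the user, 0..1
    """
    if is_spatial and referent_visible:
        # Present redundantly when pointing is possible and the user is loaded
        return Modality.BOTH if listener_load > 0.5 else Modality.GESTURE
    return Modality.LANGUAGE

# e.g. "turn left at the fountain" with the fountain in view
print(choose_modality(is_spatial=True, referent_visible=True, listener_load=0.7))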
- …