
    Annotation and Classification of French Feedback Communicative Functions

    Feedback utterances are among the most frequent in dialogue. Feedback is also a crucial aspect of all linguistic theories that take social interaction involving language into account. However, determining communicative functions is a notoriously difficult task for both human interpreters and systems: it involves an interpretative process that integrates various sources of information. Existing work on communicative function classification comes either from dialogue act tagging, where it is generally coarse-grained with respect to feedback phenomena, or from token-based approaches that do not address the variety of forms that feedback utterances can take. This paper introduces an annotation framework, the dataset and the related annotation campaign (involving 7 raters annotating nearly 6,000 utterances). We evaluate it not merely in terms of inter-rater agreement but also in terms of the usability of the resulting reference dataset, both from a linguistic research perspective and from a more applicative viewpoint.
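
    The evaluation criterion mentioned above, inter-rater agreement, is typically reported with a chance-corrected coefficient. The sketch below is purely illustrative and not drawn from the paper: the label set, the toy ratings, and the choice of averaged pairwise Cohen's kappa (rather than, say, Fleiss' kappa or Krippendorff's alpha, which are also common for multiple raters) are all assumptions.

        # Illustrative sketch of inter-rater agreement; labels and data are
        # invented, not from the paper's annotation campaign.
        from itertools import combinations
        from sklearn.metrics import cohen_kappa_score

        ratings = {  # one communicative-function label per utterance per rater
            "rater1": ["ack", "agree", "ack", "other"],
            "rater2": ["ack", "agree", "eval", "other"],
            "rater3": ["ack", "repeat", "ack", "other"],
        }

        # Average Cohen's kappa over all rater pairs as a simple
        # chance-corrected summary of agreement.
        pairs = list(combinations(ratings, 2))
        kappas = [cohen_kappa_score(ratings[a], ratings[b]) for a, b in pairs]
        print(f"mean pairwise kappa: {sum(kappas) / len(kappas):.3f}")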

    An Open-Domain Dialog Act Taxonomy

    This document defines the taxonomy of dialog acts necessary to encode domain-independent dialog moves in the context of a task-oriented, open-domain dialog. The taxonomy is formulated to satisfy two complementary requirements: on the one hand, domain independence, i.e. the power to cover the full range of possible interactions in any type of conversation (particularly conversation oriented to the performance of tasks); on the other hand, the ability to instantiate a concrete set of tasks as defined by a specific knowledge base (such as an ontology of domain concepts and actions) and within a particular language. For the modeling of dialog acts, inspiration is taken from several well-known dialog annotation schemes, such as DAMSL (Core & Allen, 1997), TRAINS (Traum, 1996) and VERBMOBIL (Alexandersson et al., 1997).
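
    To make the two requirements concrete, here is a minimal sketch of how such a taxonomy could be encoded; the act labels and the domain-binding field are illustrative assumptions in the spirit of DAMSL-style schemes, not the inventory actually defined in the document.

        # Hypothetical encoding of a dialog act taxonomy: a domain-independent
        # core act plus an optional binding to a domain knowledge base.
        from dataclasses import dataclass, field
        from enum import Enum

        class CoreAct(Enum):
            INFORM = "inform"
            REQUEST = "request"
            CONFIRM = "confirm"
            ACKNOWLEDGE = "acknowledge"

        @dataclass
        class DialogAct:
            act: CoreAct  # domain-independent move
            # Instantiation against a domain knowledge base, e.g. an ontology
            # concept or action (field name is an assumption for illustration).
            domain_binding: dict = field(default_factory=dict)

        move = DialogAct(CoreAct.REQUEST, {"action": "book_flight", "slot": "date"})
        print(move)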

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, attend to its interaction partner while it is speaking, and modify its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that have been released for public access.
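
    As a rough illustration of one of those aspects, scheduling and interrupting behavior on the fly, the following sketch uses Python's asyncio; it is an assumption for illustration only, not the project's actual architecture or deliverables.

        # Toy interruptible-behavior loop: a speaking behavior runs until a
        # perceived listener response cancels it, after which the agent reacts.
        import asyncio

        async def speak(utterance: str):
            try:
                for word in utterance.split():
                    print("speaking:", word)
                    await asyncio.sleep(0.2)  # stand-in for playback time
            except asyncio.CancelledError:
                print("interrupted mid-utterance, re-planning")
                raise

        async def main():
            task = asyncio.create_task(speak("let me explain the plan in detail"))
            await asyncio.sleep(0.5)  # a listener response is perceived here
            task.cancel()             # interrupt the ongoing behavior
            try:
                await task
            except asyncio.CancelledError:
                pass
            await speak("you have a question?")

        asyncio.run(main())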

    A semiotic perspective on webconferencing-supported language teaching.

    In webconferencing-supported teaching, the webcam mediates and organizes the pedagogical interaction. Previous research has provided a mixed picture of the use of the webcam: while it is seen as a useful medium that contributes to personalizing the interlocutors’ relationship, helps regulate interaction, and facilitates learner comprehension and involvement, the limited access to visual cues it provides is sometimes felt to be useless or even disruptive. This study examines the meaning-making potential of the webcam in pedagogical interactions from a semiotic perspective by exploring how trainee teachers use the affordances of the webcam to produce non-verbal cues that may be useful for mutual comprehension. The research context is a telecollaborative project in which trainee teachers of French as a foreign language met for online sessions in French with undergraduate Business students at an Irish university. Using multimodal transcriptions of the interaction data from these sessions, screenshot data, and students’ post-course interviews, it was found, firstly, that while a head-and-shoulders framing was favoured by the trainee teachers, none of the three framing types identified appears to be optimal for desktop videoconferencing. Secondly, a substantial proportion of the gestures performed by the trainee teachers were not visible to the students. Thirdly, when trainee teachers were able to coordinate the audio and kinesic modalities, communicative gestures that were framed, and held long enough to be perceived by the learners, were more likely to be valuable for mutual comprehension. The study highlights the need for trainee teachers to develop critical semiotic awareness so as to gain a better perception of the image they project of themselves, actualise the potential of the webcam, and give more prominence to their online teacher presence.

    Interactions between text chat and audio modalities for L2 communication and feedback in the synthetic world Second Life.

    This paper reports on a study of the interactions between the text chat and audio modalities in L2 interaction in a synthetic (virtual) world, and examines whether the text chat modality was used for corrective feedback and, if so, what form that feedback took. This is examined within the context of a hybrid Content and Language Integrated Learning design workshop. The course involved 17 students of architecture whose L2 was either French or English, and the synthetic world environment Second Life was employed for its distance language sessions. Using multimodal transcriptions of the interaction data from these sessions, it was found that text chat was employed for content-based interaction concerning the task as well as for feedback on non-target-like errors in the audio modality. Feedback predominantly concerned lexical errors and was offered in the form of recasts. The multimodality of the environment did not appear to cognitively overload students, who frequently responded in the audio modality to corrective feedback offered in the text chat. The study highlights the need to train language tutors who wish to exploit synthetic worlds to use the text chat in parallel with the audio modality to support learners' verbal production, in terms of both participation and proficiency.

    Gesture and Speech in Interaction - 4th edition (GESPIN 4)

    The fourth edition of Gesture and Speech in Interaction (GESPIN) was held in Nantes, France. With more than 40 papers, these proceedings show just what a flourishing field of enquiry gesture studies continues to be. The keynote speeches of the conference addressed three different aspects of multimodal interaction: gesture and grammar, gesture acquisition, and gesture and social interaction. In a talk entitled Qualities of event construal in speech and gesture: Aspect and tense, Alan Cienki presented an ongoing research project on narratives in French, German and Russian, a project that focuses especially on the verbal and gestural expression of grammatical tense and aspect in narratives in the three languages. Jean-Marc Colletta's talk, entitled Gesture and Language Development: towards a unified theoretical framework, described the joint acquisition and development of speech and early conventional and representational gestures. In Grammar, deixis, and multimodality between code-manifestation and code-integration, or why Kendon's Continuum should be transformed into a gestural circle, Ellen Fricke proposed a revisited grammar of noun phrases that integrates gestures as part of the semiotic and typological codes of individual languages. From a pragmatic and cognitive perspective, Judith Holler explored the use of gaze and hand gestures as means of organizing turns at talk as well as establishing common ground, in a presentation entitled On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction. Among the talks and posters presented at the conference, the vast majority of topics related, quite naturally, to gesture and speech in interaction, understood both in terms of the mapping of units in different semiotic modes and of the use of gesture and speech in social interaction. Several presentations explored the effects of impairments (such as diseases or the natural ageing process) on gesture and speech. The communicative relevance of gesture and speech and audience design in natural interactions, as well as in more controlled settings like television debates and reports, was another topic addressed during the conference. Some participants also presented research on first and second language learning, while others discussed the relationship between gesture and intonation. While most participants presented research on gesture and speech from an observer's perspective, be it in semiotics or pragmatics, some nevertheless focused on another important aspect: the cognitive processes involved in language production and perception. Last but not least, participants also presented talks and posters on the computational analysis of gestures, whether involving external devices (e.g. mocap, Kinect) or concerning the use of specially designed computer software for the post-treatment of gestural data. Importantly, new links were made between semiotics and mocap data.