
    Multimodal system for public speaking with real time feedback: a positive computing perspective

    A multimodal system for public speaking with real-time feedback has been developed using the Microsoft Kinect. The system was designed within the paradigm of positive computing, which focuses on designing for user wellbeing. The system detects body pose, facial expressions and voice, and displays visual feedback to users on their speaking performance in real time. Users can view statistics on their utilisation of speaking modalities. The system also has a mentor avatar which appears alongside the user avatar to facilitate user training, and an autocue mode that allows a user to practice with set text from a chosen speech.
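    The abstract describes the architecture only at a high level. As a rough illustration, the sketch below shows one way such a real-time feedback loop with per-modality statistics could be structured; the SpeakerFrame fields, thresholds and hint texts are hypothetical assumptions, and the Kinect capture is stubbed with random values rather than the paper's actual pipeline.

```python
# Minimal sketch of a real-time multimodal feedback loop (assumptions noted above).
from dataclasses import dataclass
import random
import time

@dataclass
class SpeakerFrame:
    # One multimodal observation per capture tick (hypothetical fields).
    pose_openness: float   # 0 = closed posture, 1 = open posture
    smile: float           # 0..1 facial-expression score
    voice_energy: float    # 0..1 normalised loudness

def read_frame() -> SpeakerFrame:
    """Stand-in for the Kinect capture; returns random values for the demo."""
    return SpeakerFrame(random.random(), random.random(), random.random())

def feedback(frame: SpeakerFrame) -> list:
    """Map the current frame to on-screen hints, one per under-used modality."""
    hints = []
    if frame.pose_openness < 0.4:
        hints.append("Open up your posture")
    if frame.voice_energy < 0.3:
        hints.append("Speak up")
    return hints

if __name__ == "__main__":
    frames = quiet = 0
    for _ in range(5):              # a few iterations in place of a live loop
        frame = read_frame()
        frames += 1
        quiet += frame.voice_energy < 0.3
        print(feedback(frame) or ["Looking good"])
        time.sleep(0.1)
    print(f"Quiet in {quiet}/{frames} frames")   # post-session statistics view
```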

    Practising public speaking: user responses to using a mirror versus a multimodal positive computing system

    A multimodal Positive Computing system with real-time feedback for public speaking has been developed. The system uses the Microsoft Kinect to detect voice, body pose, facial expressions and gestures. It is a real-time system that gives users feedback on their performance while they are rehearsing a speech. In this study, we compared this system with a traditional method for practising speaking, namely using a mirror. Ten participants practised a speech for sixty seconds using the system and using the mirror. They completed surveys on their experience after each practice session, and data about their performance were recorded while they were speaking. Participants found the system less stressful to use than the mirror, and reported that they were more motivated to use the system in future. We also found that the system made speakers more aware of their body pose, gaze direction and voice.

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking – and modify its communicative behavior on-the-fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that have been released for public access.
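    To make the idea of "scheduling and interrupting multimodal behavior" concrete, here is a toy priority-based scheduler in which an urgent event (e.g. a listener response) pre-empts an interruptible running behaviour and resumes it afterwards. The behaviour names, priority rule and one-step execution model are illustrative assumptions, not the project's actual model.

```python
# Toy pre-emptive behaviour scheduler (assumptions noted above).
import heapq
from dataclasses import dataclass, field
from typing import Optional

@dataclass(order=True)
class Behaviour:
    priority: int                                   # lower number = more urgent
    name: str = field(compare=False)
    interruptible: bool = field(compare=False, default=True)

class Scheduler:
    def __init__(self) -> None:
        self.queue: list = []
        self.current: Optional[Behaviour] = None

    def submit(self, b: Behaviour) -> None:
        heapq.heappush(self.queue, b)
        # Pre-empt the running behaviour if the new one is more urgent.
        if (self.current and self.current.interruptible
                and b.priority < self.current.priority):
            print(f"interrupting {self.current.name} for {b.name}")
            heapq.heappush(self.queue, self.current)   # resume it later
            self.current = None

    def step(self) -> None:
        """Start the next behaviour if idle, otherwise finish the running one."""
        if self.current is None:
            if self.queue:
                self.current = heapq.heappop(self.queue)
                print(f"starting {self.current.name}")
        else:
            print(f"finished {self.current.name}")
            self.current = None

sched = Scheduler()
sched.submit(Behaviour(5, "continue utterance"))
sched.step()                                              # speech begins
sched.submit(Behaviour(1, "react to listener response"))  # pre-empts the speech
sched.step()                                              # reaction starts
sched.step()                                              # reaction finishes
sched.step()                                              # interrupted utterance resumes
```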

    Using gaming paratexts in the literacy classroom

    This paper illustrates how digital game paratexts may effectively be used in the high school English classroom to meet a variety of traditional and multimodal literacy outcomes. Paratexts are texts that refer to digital gaming and game cultures; using them in the classroom enables practitioners to focus on and valorise the considerable literacies and skills that young people develop and deploy in their engagement with digital gaming and game cultures. The effectiveness of valorising paratexts in this manner is demonstrated through two examples of assessment work by students in classes where teachers had designed curriculum and assessment activities using paratexts.

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented-reality meeting support and virtual-reality generation of meetings, in real time or off-line. The research reported here forms part of the European 5th and 6th framework programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that is able to collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools that allow real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow those not able to be physically present during a meeting to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.
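    As a rough illustration of the kind of semantic meeting representation the abstract alludes to, the sketch below annotates a meeting timeline with typed activity episodes and supports simple browsing and summarization over it. The event types, fields and example data are illustrative assumptions, not the M4/AMI representations.

```python
# Toy semantic meeting timeline with browsing and summarization (assumed schema).
from dataclasses import dataclass

@dataclass
class MeetingEvent:
    start: float            # seconds from the start of the meeting
    end: float
    kind: str               # e.g. "presentation", "discussion", "voting"
    participants: list

timeline = [
    MeetingEvent(0, 300, "presentation", ["alice"]),
    MeetingEvent(300, 900, "discussion", ["alice", "bob", "carol"]),
    MeetingEvent(900, 960, "voting", ["alice", "bob", "carol"]),
]

def browse(events, kind):
    """Retrieval over the annotated timeline, e.g. all voting episodes."""
    return [e for e in events if e.kind == kind]

def summarise(events):
    """Per-activity time totals, the kind of statistic a meeting browser shows."""
    totals = {}
    for e in events:
        totals[e.kind] = totals.get(e.kind, 0) + (e.end - e.start)
    return totals

print(browse(timeline, "voting"))
print(summarise(timeline))   # {'presentation': 300, 'discussion': 600, 'voting': 60}
```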

    Presentation Trainer, your Public Speaking Multimodal Coach

    A paper describing an experiment on the Presentation Trainer. The Presentation Trainer is a multimodal tool designed to support the practice of public speaking skills by giving the user real-time feedback about different aspects of her nonverbal communication. It tracks the user’s voice and body to interpret her current performance. Based on this performance, the Presentation Trainer selects the type of intervention that will be presented as feedback to the user. This feedback mechanism has been designed taking into consideration the results from previous studies, which show how difficult it is for learners to perceive and correctly interpret real-time feedback while practicing their speeches. In this paper we present the user experience evaluation of participants who used the Presentation Trainer to practice for an elevator pitch, showing that the feedback provided by the Presentation Trainer has a significant influence on learning. The underlying research project is partly funded by the METALOGUE project. METALOGUE is a Seventh Framework Programme collaborative project funded by the European Commission, grant agreement number: 611073 (http://www.metalogue.eu).
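    The abstract's key design point is showing the learner one intervention at a time rather than a wall of feedback. The sketch below illustrates one way such rule-based selection could work, surfacing only the most severe violation per tick; the metric names, thresholds and messages are hypothetical, not the Presentation Trainer's actual rules.

```python
# Toy single-cue intervention selection (assumed rules, thresholds, messages).
from typing import Optional

# Ordered rules: (metric, threshold, intervention shown when the metric is low).
RULES = [
    ("voice_volume",  0.3, "Speak louder"),
    ("posture_score", 0.4, "Reset your posture"),
    ("gesture_rate",  0.2, "Use your hands"),
]

def select_intervention(metrics: dict) -> Optional[str]:
    """Return at most one intervention so the learner is not overloaded."""
    shortfalls = [(threshold - metrics[name], message)
                  for name, threshold, message in RULES
                  if metrics[name] < threshold]
    if not shortfalls:
        return None
    return max(shortfalls)[1]      # the most severe violation wins

print(select_intervention(
    {"voice_volume": 0.1, "posture_score": 0.35, "gesture_rate": 0.5}))
# -> Speak louder (largest shortfall, even though posture is also low)
```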

    Teaching learners to communicate effectively in the L2: Integrating body language in the students’ syllabus

    In communication a great deal of meaning is exchanged through body language, including gaze, posture, hand gestures and body movements. Body language is largely culture-specific, and rests, for its comprehension, on people’s sharing socio-cultural and linguistic norms. In cross-cultural communication, L2 speakers’ use of body language may convey meaning that is not understood or misinterpreted by the interlocutors, affecting the pragmatics of communication. In spite of its importance for cross-cultural communication, body language is neglected in ESL/EFL teaching. This paper argues that the study of body language should be integrated in the syllabus of ESL/EFL teaching and learning. This is done by: 1) reviewing literature showing the tight connection between language, speech and gestures and the problems that might arise in cross-cultural communication when speakers use and interpret body language according to different conventions; 2) reporting the data from two pilot studies showing that L2 learners transfer L1 gestures to the L2 and that these are not understood by native L2 speakers; 3) reporting an experience teaching body language in an ESL/EFL classroom. The paper suggests that in multicultural ESL/EFL classes teaching body language should be aimed primarily at raising the students’ awareness of the differences existing across cultures.