    Enhancing Multimodal Interaction and Communicative Competence through Task-Based Language Teaching (TBLT) in Synchronous Computer-Mediated Communication (SCMC)

    The number of publications on live online teaching and distance learning has increased significantly over the past two years, since the outbreak and worldwide spread of the COVID-19 pandemic, but more research is needed on effective methodologies and their impact on the learning process. This research aimed to analyze student interaction and multimodal communication through Task-Based Language Teaching (TBLT) in a Synchronous Computer-Mediated Communication (SCMC) environment. For this purpose, 90 teacher candidates enrolled in the subject Applied Linguistics at a university were randomly assigned to different teams to collaboratively create digital infographics based on different language teaching methods. Then, all the teams explained their projects online, and their classmates completed two multimedia activities based on each method. Finally, the participants discussed the self-perceived benefits (relevance, enjoyment, interest) and limitations (connectivity, distraction) of SCMC in language learning. Quantitative and qualitative data were gathered through pre- and post-tests, class observation and online discussion. The statistical data and research findings revealed a positive attitude towards the integration of TBLT in an SCMC environment and a high level of satisfaction with multimodal communication (written, verbal, visual) and student interaction. However, the language teacher candidates complained about the low quality of the digital materials, the use of technology merely for substitution, and the lack of peer-to-peer interaction in their live online classes during the pandemic.

    Digital video revisited: Storytelling, conferencing, remixing


    Speech-based recognition of self-reported and observed emotion in a dimensional space

    The differences between self-reported and observed emotion have only marginally been investigated in the context of speech-based automatic emotion recognition. We address this issue by comparing self-reported emotion ratings to observed emotion ratings and examining how differences between these two types of ratings affect the development and performance of automatic emotion recognizers trained on them. A dimensional approach to emotion modeling is adopted: the ratings are based on continuous arousal and valence scales. We describe the TNO-Gaming Corpus, which contains spontaneous vocal and facial expressions elicited via a multiplayer videogame and includes emotion annotations obtained via self-report and via observation by outside observers. Comparisons show that there are discrepancies between self-reported and observed emotion ratings, which are also reflected in the performance of the resulting emotion recognizers. Using Support Vector Regression in combination with acoustic and textual features, recognizers of arousal and valence are developed that can predict points in a two-dimensional arousal-valence space. The results show that self-reported emotion is much harder to recognize than observed emotion, and that averaging ratings from multiple observers improves performance.
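The regression setup the abstract describes can be illustrated with a minimal sketch: one Support Vector Regression model per emotion dimension, trained on ratings averaged over several observers. All feature values and ratings below are synthetic stand-ins; the actual TNO-Gaming Corpus features and annotations are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins for acoustic features of 200 utterances
# (e.g. pitch and energy statistics); 8 features per utterance.
X = rng.normal(size=(200, 8))

# Three hypothetical observers rate arousal on a continuous scale.
# Averaging their ratings reduces annotator noise, which the paper
# reports improves recognizer performance.
true_arousal = X @ rng.normal(size=8)
observer_ratings = np.stack(
    [true_arousal + rng.normal(scale=0.5, size=200) for _ in range(3)]
)
avg_arousal = observer_ratings.mean(axis=0)

# One SVR per dimension: arousal here; valence would be trained the
# same way on valence ratings, giving a point in the 2-D space.
arousal_model = SVR(kernel="rbf").fit(X[:150], avg_arousal[:150])
predictions = arousal_model.predict(X[150:])  # one value per held-out utterance
```

A valence model trained in parallel would complete the arousal-valence pair for each utterance.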

    Atelier: assistive technologies for learning, integration and rehabilitation

    A special needs individual is a broad term used to describe a person with a behavioural or emotional disorder, a physical disability or a learning disability. Many individuals with special needs are limited in verbal communication, or in many cases non-verbal, making communication and learning a challenging task. Additionally, new forms of communication based on technology aren't designed for them, making them increasingly isolated in social and educational terms. Fortunately, however, new forms of interaction do exist, and they enable these particular users to access knowledge and interact with others, an undertaking otherwise impossible. In this project the technology used will not be an end in itself but a way to “drop” the mouse/keyboard paradigm, making use of affordable devices available on the market that could be adopted by people with special needs who are unable to use the traditional forms of interaction, thus assisting them in their education, integration and rehabilitation activities.

    FILTWAM - A Framework for Online Game-based Communication Skills Training - Using Webcams and Microphones for Enhancing Learner Support

    Bahreini, K., Nadolski, R., Qi, W., & Westera, W. (2012). FILTWAM - A Framework for Online Game-based Communication Skills Training - Using Webcams and Microphones for Enhancing Learner Support. In P. Felicia (Ed.), The 6th European Conference on Games Based Learning - ECGBL 2012 (pp. 39-48). Cork, Ireland: University College Cork and the Waterford Institute of Technology.
    This paper provides an overarching framework, embracing conceptual and technical frameworks, for improving the online communication skills of lifelong learners. This overarching framework is called FILTWAM (Framework for Improving Learning Through Webcams And Microphones). We propose a novel web-based communication training approach that incorporates relevant and timely feedback based upon learners' facial expressions and verbalizations. These data are collected through webcam images and microphone audio, which can continuously and unobtrusively monitor learners' emotional behaviour and interpret it into emotional states. The feedback generated from the webcam data is expected to enhance learners' awareness of their own behaviour and to improve the alignment between their expressed and intended behaviour. Our approach emphasizes communication behaviour rather than communication content, as people mostly have problems not with the "what" but with the "how" of expressing their message. For our design of online game-based communication skills training, we use insights from face-to-face training, game-based learning, lifelong learning, and affective computing. These areas constitute starting points for advancing the not yet well-established area of using emotional states for improved learning; our framework and research are situated within this latter area. A self-contained game-based training enhances flexibility and scalability, in contrast with face-to-face training. Furthermore, game-based training better serves the interests of lifelong learners, who prefer to study at their own pace, place and time. In the future we may integrate the generated feedback with EMERGO, a game-based toolkit for the delivery of multimedia cases. Finally, we report on a small-scale proof-of-concept study that both exemplifies the practical application of our framework and provides first evaluation results. This study will guide further development of software and training materials and inform future research. Moreover, it will validate the use of webcam data for a real-time and adequate interpretation of facial expressions into emotional states (such as sadness, anger, disgust, fear, happiness, and surprise). For this purpose, participants' behaviour is also recorded on video so that the recordings can be replayed, rated, annotated and evaluated by expert observers and contrasted with participants' own opinions.
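The interpretation step the abstract describes, turning per-frame facial-expression scores into one of the six basic emotional states, can be sketched as follows. The probabilities below are invented for illustration; a real FILTWAM pipeline would obtain them from a facial expression classifier fed by the webcam, which is not shown here.

```python
# Six basic emotional states named in the abstract.
BASIC_EMOTIONS = ["sadness", "anger", "disgust", "fear", "happiness", "surprise"]

def interpret_frame(probabilities):
    """Map one probability per basic emotion to a single emotional state.

    Picks the state with the highest classifier confidence (argmax).
    """
    if len(probabilities) != len(BASIC_EMOTIONS):
        raise ValueError("expected one probability per basic emotion")
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return BASIC_EMOTIONS[best]

# Example frame: a hypothetical classifier is most confident about happiness.
state = interpret_frame([0.05, 0.05, 0.02, 0.03, 0.80, 0.05])
print(state)  # happiness
```

Feedback to the learner would then be generated from the stream of such per-frame states rather than from any single frame.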