54,244 research outputs found

    Capturing the sociomateriality of digital literacy events

    This paper discusses a method of collecting and analysing multimodal data during classroom-based digital literacy research. Drawing on reflections from two studies, the authors discuss theoretical and methodological implications encountered in the collection, transcription and presentation of such data. Following an ethnomethodological framework that co-develops theory and methodology, the studies capture digital literacy activities as real-time screen recordings, with embedded video recordings of participants’ movements and vocalisations around the tasks as they write. The result is a multimodal rendition of digital literacy events on- and off-screen, allowing linguistic and multimodal transcriptions to capture the complexity of the data in a format amenable to analysis. Acquiring such data allowed for the development of detailed analyses of digital literacy events in the classroom, including interaction that would otherwise have escaped standard ethnography and video analysis, through sensibilities that approach social and material items without a priori hierarchies. This leads us to a ‘performative’ notion of digital literacies and an analytic methodology that is useful for researchers paying greater attention to the sociomaterial assemblages in which digital literacy events unfold.
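
    A concrete way to read the kind of data described above: on-screen events (keystrokes, edits, clicks from the screen recording) and off-screen events (talk, gesture from the embedded video) each carry timestamps, and a multimodal transcript interleaves them chronologically. The sketch below only illustrates that alignment step, not the authors' transcription tooling; all events in it are hypothetical.

```python
# Minimal sketch (not the authors' tooling): merging timestamped records from
# an on-screen stream (keystrokes, clicks) and an off-screen stream (talk,
# gesture) into one chronologically ordered multimodal transcript.
# All example events are hypothetical.
from heapq import merge

screen_events = [   # (seconds, modality, description)
    (12.4, "screen", "deletes sentence in shared doc"),
    (15.1, "screen", "opens image search tab"),
]
offscreen_events = [
    (12.9, "talk",    "A: 'wait, put that back'"),
    (14.7, "gesture", "B points at A's monitor"),
]

# merge() assumes each stream is already sorted by timestamp
for t, modality, description in merge(screen_events, offscreen_events):
    print(f"{t:6.1f}s  [{modality:^7}]  {description}")
```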

    Challenges in Transcribing Multimodal Data: A Case Study

    Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS, etc.), has become normalized practice in personal and professional lives, educational initiatives, particularly language teaching and learning, are following suit. For researchers interested in exploring learner interactions in complex technology-supported learning environments, new challenges inevitably emerge. This article looks at the challenges of transcribing and representing multimodal data (visual, oral, and textual) when engaging in computer-assisted language learning research. When transcribing and representing such data, the choices made depend very much on the specific research questions addressed; hence, in this paper we explore these challenges through discussion of a specific case study in which the researchers sought to explore the emergence of identity through interaction in an online, multimodal situated space. Given the limited amount of literature addressing the transcription of online multimodal communication, this article is a timely contribution for researchers interested in exploring interaction in CMC language and intercultural learning environments. (Helm, Francesca; Dooly, Melinda)

    First impressions: A survey on vision-based apparent personality trait analysis

    Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. In the past few years it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues of information for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, aspects of subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push research in the field, are reviewed.
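
    As a rough illustration of the generic task the survey covers (not any specific method it reviews), apparent personality inference from visual data is commonly framed as regressing observer-attributed trait scores from an image. The sketch below assumes a cropped face image, a standard image backbone, and five Big Five scores in [0, 1]; all of those choices are assumptions made for illustration.

```python
# Minimal sketch of the generic setup the survey covers: regressing five
# apparent (observer-attributed) Big Five trait scores from a face image.
# Not any specific surveyed method; the backbone choice and the 5-dimensional
# output in [0, 1] are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

class ApparentTraitRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any face/image backbone would do
        backbone.fc = nn.Identity()                # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(512, 5), nn.Sigmoid())  # O, C, E, A, N

    def forward(self, x):                          # x: (B, 3, 224, 224) face crops
        return self.head(self.backbone(x))

model = ApparentTraitRegressor()
scores = model(torch.randn(1, 3, 224, 224))        # five apparent trait scores per image
print(scores.shape)                                # torch.Size([1, 5])
```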

    A single case study of a family-centred intervention with a young girl with cerebral palsy who is a multimodal communicator

    Background - This paper describes the impact of a family-centred intervention that used video to enhance communication in a young girl with cerebral palsy. This single case study describes how the video-based intervention worked in the context of multimodal communication, which included use of a high-tech augmentative and alternative communication (AAC) device. The paper includes the family's perspective on the video intervention and its impact on their family. Methods - This single case study was based on the premise that the video interaction guidance intervention would increase attentiveness between participants during communication. It tests the hypothesis that eye gaze is a fundamental prerequisite for all of the child's communicative initiatives, regardless of modality. Multimodality is defined as the range of communicative behaviours used by the child, coded as AAC communication, vocalizations (intelligible and unintelligible), sign communication, nodding and pointing. Change was analysed over time with multiple testing both pre- and post-intervention. Data were analysed in INTERACT, software for analysing behaviourally observed data. Behaviours were analysed for frequency and duration, contingency and co-occurrence. Results - Results indicated increased duration of the mother's and the girl's eye gaze, increased frequency and duration of the girl's AAC communication, and significant change in the frequency [χ2 (5, n = 1) = 13.25, P < 0.05] and duration [χ2 (5, n = 1) = 12.57, P < 0.05] of the girl's multimodal communicative behaviours. Contingency and co-occurrence analysis indicated that the mother's eye gaze followed by AAC communication was the most prominent change between the pre- and post-intervention assessments. Conclusions - There was a trend towards increased eye gaze in both mother and girl, and increased AAC communication in the girl, following the video intervention. The family's perspective concurs with these results.
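
    To make the reported frequency analysis concrete, the sketch below shows one way coded behaviour counts from pre- and post-intervention sessions could be compared with a chi-square test. It is not the authors' INTERACT workflow, and every count in it is hypothetical.

```python
# Minimal sketch (not the authors' INTERACT workflow): comparing the frequency
# of coded multimodal behaviours before and after an intervention with a
# chi-square test. All counts below are hypothetical.
from scipy.stats import chi2_contingency

behaviours  = ["AAC", "vocalization", "sign", "nodding", "pointing", "eye_gaze"]
pre_counts  = [12, 30, 5, 8, 10, 40]   # hypothetical frequencies before the intervention
post_counts = [25, 28, 6, 9, 14, 62]   # hypothetical frequencies after the intervention

# 2 x k contingency table: rows = assessment phase, columns = behaviour codes
chi2, p, dof, expected = chi2_contingency([pre_counts, post_counts])
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```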

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented reality meeting support and virtual reality generation of meetings, in real time or off-line. The research reported here forms part of the European 5th and 6th framework programme projects multi-modal meeting manager (M4) and augmented multi-party interaction (AMI). Both projects aim at building a smart meeting environment that is able to collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools that allow real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow those unable to be physically present to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.
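
    As a toy illustration of the kind of semantic meeting representation the projects aim at (not the actual M4/AMI representation), a meeting can be stored as a timeline of labelled activity segments that a browsing or retrieval tool can query; the segment times and labels below are made up.

```python
# Illustrative sketch only: a toy representation of a meeting as a timeline of
# labelled activity segments, the kind of semantic annotation that lets a
# browser jump to, say, every voting segment. Segment times, labels and
# speakers are hypothetical, not M4/AMI data.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float      # seconds from meeting start
    end: float
    activity: str     # e.g. "presentation", "discussion", "voting"
    speakers: list

meeting = [
    Segment(0.0,   480.0, "presentation", ["A"]),
    Segment(480.0, 900.0, "discussion",   ["A", "B", "C"]),
    Segment(900.0, 960.0, "voting",       ["A", "B", "C", "D"]),
]

# Retrieval query: find every segment where a given activity occurred
voting = [s for s in meeting if s.activity == "voting"]
print(voting)
```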

    SALSA: A Novel Dataset for Multimodal Group Behavior Analysis

    Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group (mining social networks) and individual (recognizing native behavioral and personality traits) levels. However, analyzing social scenes involving FCGs is also highly challenging due to the difficulty of extracting behavioral cues such as target locations, speaking activity and head/body pose under crowdedness and extreme occlusions. To this end, we propose SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and make two main contributions to research on automated social interaction analysis: (1) SALSA records social interactions among 18 participants in a natural, indoor environment for over 60 minutes, under poster presentation and cocktail party contexts presenting difficulties in the form of low-resolution images, lighting variations, numerous occlusions, reverberations and interfering sound sources; (2) to alleviate these problems, we facilitate multimodal analysis by recording the social interplay using four static surveillance cameras and sociometric badges worn by each participant, comprising microphone, accelerometer, Bluetooth and infrared sensors. In addition to raw data, we also provide annotations of individuals' personality as well as their position, head and body orientation, and F-formation information over the entire event duration. Through extensive experiments with state-of-the-art approaches, we show (a) the limitations of current methods and (b) how the recorded multiple cues synergetically aid automatic analysis of social interactions. SALSA is available at http://tev.fbk.eu/salsa.
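
    As a minimal illustration of the kind of analysis the position and orientation annotations support (not the dataset's actual file format or the authors' F-formation method), the sketch below flags a face-to-face pair from two targets' positions and body orientations; the thresholds and coordinates are assumptions.

```python
# Illustrative sketch only: SALSA's annotation format and the authors'
# F-formation method are not reproduced here. This shows one naive way to flag
# a face-to-face pair from position (metres) and body orientation (radians),
# the kind of cues the dataset annotates. Thresholds are assumed values.
import math

def facing_pair(p1, theta1, p2, theta2,
                max_dist=1.5, max_angle=math.radians(45)):
    """Return True if two targets stand close together and roughly face each other."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) > max_dist:
        return False
    # Angle each person would need to turn to look straight at the other
    to_2 = math.atan2(dy, dx)
    to_1 = math.atan2(-dy, -dx)
    off1 = abs((to_2 - theta1 + math.pi) % (2 * math.pi) - math.pi)
    off2 = abs((to_1 - theta2 + math.pi) % (2 * math.pi) - math.pi)
    return off1 < max_angle and off2 < max_angle

# Hypothetical example: two people about 1 m apart, oriented toward each other
print(facing_pair((0.0, 0.0), 0.0, (1.0, 0.0), math.pi))  # True
```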