
    Reflecting on the Experience of the Self in the Smart City

    There is little doubt that the internet has changed the world. Now comes the next revolution: our things connecting with each other. Within a few years, it is predicted that 100 billion devices and gadgets will be communicating with one another. This evolution of the IoT has largely been driven by developments in technology: computing power and connectivity are becoming smaller, cheaper and more energy efficient, making it possible to connect and augment an increasing range of objects. However, the interactions between people in the landscape, computational sensors, and the systems that collect, analyze, make sense of, and present data back to us are highly complex. To reflect on and understand what it means to be human in a world shared with technologies that reduce individuals to data points, we investigate dance, technology and lived experience through embodied relational biofeedback and materials of human and non-human kinds. Our approach is to develop performance, combined with the design of technology, using self-reflexive and hermeneutic methodologies to collapse anthropocentric ways of seeing the dancing body into ways of feeling while embodying biosensor technologies.

    Tracing physical movement during practice-based learning through Multimodal Learning Analytics

    In this paper, we pose the question: can the tracking and analysis of the physical movements of students and teachers within a Practice-Based Learning (PBL) environment reveal information about the learning process that is relevant and informative to Learning Analytics (LA) implementations? Using the example of trials conducted in the design of an LA system, we aim to show how the analysis of physical movement at a macro level can help to enrich our understanding of what is happening in the classroom. The results suggest that Multimodal Learning Analytics (MMLA) could be used to generate valuable information about the human factors of the collaborative learning process, and we propose how this information could assist in the provision of relevant support for small-group work. More research is needed to confirm the initial findings with larger sample sizes and to refine the data capture and analysis methodology to allow automation.

    Modelling collaborative problem-solving competence with transparent learning analytics: is video data enough?

    In this study, we describe the results of our research to model collaborative problem-solving (CPS) competence based on analytics generated from video data. We collected ~500 minutes of video data from 15 groups of 3 students working to solve design problems collaboratively. Initially, with the help of OpenPose, we automatically generated frequency metrics, such as the number of faces in the screen, and distance metrics, such as the distance between bodies. Based on these metrics, we built decision trees to predict students' listening, watching, making, and speaking behaviours, as well as the students' CPS competence. Our results provide useful decision rules mined from analytics of video data, which can be used to inform teacher dashboards. Although the accuracy and recall values of the models built are inferior to previous machine learning work that utilizes multimodal data, the transparent nature of the decision trees provides opportunities for explainable analytics for teachers and learners. This can lead to greater agency for teachers and learners, and therefore to easier adoption. We conclude the paper with a discussion of the value and limitations of our approach.
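    The pipeline described above — pose-derived metrics feeding a shallow, interpretable decision tree — can be sketched as follows. This is an illustrative example only: the feature names, toy data, and labels are hypothetical stand-ins, not the study's actual metrics or dataset.

    ```python
    # Hypothetical sketch of the metrics-to-decision-tree step. Feature names
    # (faces_on_screen, mean_body_distance, hand_movement_freq) and labels are
    # invented placeholders for the kind of metrics OpenPose output can yield.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)

    # Toy per-time-window features standing in for pose-derived metrics.
    X = rng.random((120, 3))
    # Toy behaviour labels: 0 = watching, 1 = making, 2 = speaking.
    y = rng.integers(0, 3, size=120)

    # A shallow tree keeps the mined decision rules readable, which is what
    # makes them suitable for a teacher dashboard.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    rules = export_text(tree, feature_names=["faces_on_screen",
                                             "mean_body_distance",
                                             "hand_movement_freq"])
    print(rules)  # human-readable if/else rules
    ```

    Capping the depth trades some accuracy for transparency, mirroring the paper's argument that explainable rules can matter more than raw predictive performance.
    
    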

    Estimation of Success in Collaborative Learning Based on Multimodal Learning Analytics Features

    Multimodal learning analytics provides researchers with new tools and techniques to capture different types of data from complex learning activities in dynamic learning environments. This paper investigates high-fidelity, synchronised multimodal recordings of small groups of learners interacting, drawn from diverse sensors that include computer vision, user-generated content, and data from the learning objects (such as physical computing components or laboratory equipment). We processed and extracted different aspects of the students' interactions to answer the following question: which features of student group work are good predictors of team success in open-ended tasks with physical computing? The answer to this question provides ways to automatically identify the students' performance during the learning activities.
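    One common way to ask "which features predict team success?" is to rank feature importances from a fitted model. The sketch below does this with a random forest over synthetic data; the feature names are hypothetical examples of multimodal features, not those used in the paper.

    ```python
    # Illustrative sketch (not the authors' pipeline): ranking candidate
    # multimodal features by how well they predict a binary success label.
    # Feature names and data are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    feature_names = ["speech_overlap", "gaze_on_task",
                     "component_handovers", "edit_frequency"]
    X = rng.random((80, 4))
    # In this toy data only "gaze_on_task" actually drives success.
    y = (X[:, 1] > 0.5).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
    ranking = sorted(zip(feature_names, model.feature_importances_),
                     key=lambda p: -p[1])
    print(ranking[0][0])  # the top-ranked predictor
    ```

    In a real study the ranking would of course be computed on held-out data and validated against chance, but the mechanics are the same.
    
    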

    Current and future multimodal learning analytics data challenges

    Multimodal Learning Analytics (MMLA) captures, integrates and analyzes learning traces from different sources in order to obtain a more holistic understanding of the learning process, wherever it happens. MMLA leverages the increasingly widespread availability of diverse sensors, high-frequency data collection technologies and sophisticated machine learning and artificial intelligence techniques. The aim of this workshop is twofold: first, to expose participants to, and develop, different multimodal datasets that reflect how MMLA can bring new insights and opportunities to investigate complex learning processes and environments; second, to collaboratively identify a set of grand challenges for further MMLA research, built upon the foundations of previous workshops on the topic.

    CROSSMMLA Futures: Collecting and analysing multimodal data across the physical and the virtual

    Workshop proposal for CrossMMLA focused on collecting and analysing multimodal data across the physical and the virtual. Under the current global pandemic, crossing physical and virtual spaces presents a substantial challenge for MMLA, which has traditionally focused on collaborative learning in physical spaces. The workshop proposes an asynchronous format that includes pre-recorded video demonstrations and position papers for discussion, followed by a half-day virtual meeting at LAK'2021.

    Physical computing with plug-and-play toolkits: Key recommendations for collaborative learning implementations

    Physical computing toolkits have long been used in educational contexts to learn about computational concepts by engaging in the making of interactive projects. This paper presents a comprehensive toolkit that can help educators teach programming with an emphasis on collaboration, and provides suggestions for its effective pedagogical implementation. The toolkit comprises the Talkoo kit with physical computing plug-and-play modules and a visual programming environment. The key suggestions are inspired by the results of the evaluation studies, which show that children (aged 14–18, in a sample group of 34 students) are well motivated when working with the toolkit but lack confidence in the kit's support for collaborative learning. If the intention is to move beyond tools and code in computing education to community and context, thereby encouraging computational participation, collaboration should be considered a key aspect of physical computing activities. Our approach expands the field of programming with physical computing for teenage children, with a focus on empowering teachers and students not only with a kit but also with its appropriate classroom implementation for collaborative learning.

    Preface: CrossMMLA in practice: Collecting, annotating and analyzing multimodal data across spaces

    Learning is a complex process that is associated with many aspects of interaction and cognition (e.g., hard mental operations, cognitive friction, etc.) and that can take place across diverse contexts (online, classrooms, labs, maker spaces, etc.). The complexity of this process and its environments means that it is unlikely that any single data modality can paint a complete picture of the learning experience; multiple data streams from different sources and times are needed to complement each other. The need to understand and improve learning that occurs in increasingly open, distributed, subject-specific and ubiquitous scenarios requires the development of multimodal and multi-system learning analytics. Following the tradition of the CrossMMLA workshop series, the proposed workshop aims to serve as a place to learn about the latest advances in the design, implementation and adoption of systems that take into account the different modalities of human learning and the diverse settings in which it takes place. Beyond the necessary interchange of ideas, it is also the objective of this workshop to foster critical discussion, debate and co-development of ideas for advancing the state of the art in CrossMMLA.

    2nd CrossMMLA: Multimodal learning analytics across physical and digital spaces

    © 2018 CEUR-WS. All Rights Reserved. Students’ learning is ubiquitous: it happens wherever the learner is, rather than being constrained to a specific physical or digital learning space (e.g. the classroom or the institutional LMS, respectively). A critical question is: how can learning analytics be integrated and coordinated to provide continued support to learning across physical and digital spaces? CrossMMLA is the successor to the Learning Analytics Across Spaces (CrossLAK) and MultiModal Learning Analytics (MMLA) workshop series, which were merged in 2017 after successful cross-pollination between the two communities. Although the CrossLAK and MMLA perspectives follow different philosophical and practical approaches, they share a common aim: deploying learning analytics innovations that can be used across diverse authentic learning environments while learners engage in various modalities of interaction and behaviour.

    Quantifying Collaboration Quality in Face-to-Face Classroom Settings Using MMLA

    The estimation of collaboration quality using manual observation and coding is a tedious and difficult task. Researchers have proposed automating this process by estimating collaboration into a few categories (e.g., high vs. low collaboration). However, such categorical estimation lacks depth and actionability, which can be critical for practitioners. We present a case study that evaluates the feasibility of quantifying collaboration quality and its multiple sub-dimensions (e.g., collaboration flow) in an authentic classroom setting. We collected multimodal data (audio and logs) from two groups collaborating face-to-face on a collaborative writing task. The paper describes our exploration of different machine learning models and compares their performance with that of human coders in the task of estimating collaboration quality along a continuum. Our results show that it is feasible to quantitatively estimate collaboration quality and its sub-dimensions, even from simple features of audio and log data, using machine learning. These findings open possibilities for in-depth automated quantification of collaboration quality, and for the use of more advanced features and algorithms to bring performance closer to that of human coders. Funding: European Union via the European Regional Development Fund, in the context of CEITER and Next-Lab (Horizon 2020 Research and Innovation Programme, grant agreements no. 669074 and 731685); Junta de Castilla y León (Project VA257P18); Ministerio de Ciencia, Innovación y Universidades (Project TIN2017-85179-C3-2-R).
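    Estimating quality "along a continuum" rather than in a few categories amounts to treating the task as regression instead of classification. The sketch below illustrates that framing with a ridge regressor over synthetic data; the feature names (speaking-time balance, turn-taking rate, document edits) are hypothetical examples of simple audio and log features, not the study's actual feature set.

    ```python
    # Hedged sketch: continuous estimation of collaboration quality from
    # simple audio/log features via regression. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(2)
    # Toy per-session-window features: speaking-time balance, turn-taking
    # rate, and number of collaborative-document edits (all normalised).
    X = rng.random((60, 3))
    # Toy ground-truth quality: a weighted mix of the features plus noise,
    # standing in for human coders' continuous ratings.
    true_quality = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.02, 60)

    model = Ridge(alpha=1.0).fit(X, true_quality)
    pred = model.predict(X)
    print(round(mean_absolute_error(true_quality, pred), 3))
    ```

    A continuous score like this can then be broken down per sub-dimension (e.g., a separate regressor for collaboration flow), which is what gives the approach its actionability over coarse high/low labels.
    
    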