
    Modelling collaborative problem-solving competence with transparent learning analytics: is video data enough?

    In this study, we describe the results of our research to model collaborative problem-solving (CPS) competence based on analytics generated from video data. We collected ~500 minutes of video data from 15 groups of 3 students working to solve design problems collaboratively. Initially, with the help of OpenPose, we automatically generated frequency metrics, such as the number of faces detected on screen, and distance metrics, such as the distance between bodies. Based on these metrics, we built decision trees to predict students' listening, watching, making, and speaking behaviours, as well as their CPS competence. Our results provide useful decision rules mined from video analytics that can be used to inform teacher dashboards. Although the accuracy and recall of the models built are inferior to previous machine learning work that utilizes multimodal data, the transparent nature of the decision trees provides opportunities for explainable analytics for teachers and learners. This can give teachers and learners more agency and therefore ease adoption. We conclude the paper with a discussion of the value and limitations of our approach.
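    As a rough illustration of the pipeline this abstract describes, the sketch below (not the authors' code) derives one frequency metric and one distance metric from per-frame keypoints, such as those a pose estimator like OpenPose produces, and fits a small decision tree with scikit-learn. The metric names, toy data, and behaviour labels are illustrative assumptions.

```python
# Minimal sketch, assuming keypoints have already been extracted per frame.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def frame_metrics(keypoints, face_detected):
    """keypoints: (n_people, 2) array of body-centre (x, y) positions for one frame.
    face_detected: (n_people,) boolean array, True if a face is visible on screen."""
    n = len(keypoints)
    faces_on_screen = int(face_detected.sum())  # frequency metric
    if n < 2:
        mean_body_distance = 0.0
    else:
        # mean pairwise distance between detected bodies (distance metric)
        diffs = keypoints[:, None, :] - keypoints[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        mean_body_distance = dists[np.triu_indices(n, k=1)].mean()
    return [faces_on_screen, mean_body_distance]

# X: aggregated metrics per video segment; y: human-coded behaviour labels.
# Both are assumed to be available from annotation; values here are made up.
X = np.array([[3, 120.0], [2, 310.5], [3, 95.2], [1, 400.0]])
y = np.array(["making", "watching", "speaking", "listening"])

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The extracted rules are the kind of transparent output a teacher dashboard could show.
print(export_text(tree, feature_names=["faces_on_screen", "mean_body_distance"]))
```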

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine the possible areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Several installations and development artefacts were completed to demonstrate and evaluate the described concepts. [...] To summarize, the resulting work involves not only artistic creativity, but also solving technological hurdles in motion tracking, pattern recognition, force feedback control, etc., combining them with the available documentary footage on film, video, or images, and text via a variety of devices [....] and programming and installing all the needed interfaces so that it all works in real time. Thus, the contribution to knowledge advancement lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, the fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research work connects seemingly disjoint fields of research, such as computer graphics, documentary film, interactive media, and theatre performance. Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms

    Proceedings, MSVSCC 2013

    Proceedings of the 7th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 11, 2013 at VMASC in Suffolk, Virginia

    Learning Sciences Beyond Cognition: Exploring Student Interactions in Collaborative Problem Solving

    Composed of insightful essays from top figures in their respective fields, the book also shows how a thorough understanding of this critical discipline all but ensures better decision making when it comes to education

    Individuality and the collective in AI agents: Explorations of shared consciousness and digital homunculi in the metaverse for cultural heritage

    The confluence of extended reality (XR) technologies, including augmented and virtual reality, with large language models (LLMs) marks a significant advancement in the field of digital humanities, opening uncharted avenues for the representation of cultural heritage within the burgeoning metaverse. This paper examines the potentialities and intricacies of such a convergence, focusing particularly on the creation of digital homunculi or changelings. These virtual beings, remarkable for their sentience and individuality, are also part of a collective consciousness, a notion explored through a thematic comparison in science fiction with the Borg and the Changelings in the Star Trek universe. Such a comparison offers a metaphorical framework for discussing complex phenomena such as shared consciousness and individuality, illuminating their bearing on perceptions of self and awareness. Further, the paper considers the ethical implications of these concepts, including the potential loss of individuality and the challenges inherent to accurate representation of historical figures and cultures. The latter necessitates collaboration with cultural experts, underscoring the intersectionality of technological innovation and cultural sensitivity. Ultimately, this chapter contributes to a deeper understanding of the technical aspects of integrating large language models with immersive technologies and situates these developments within a nuanced cultural and ethical discourse. By offering a comprehensive overview and proposing clear recommendations, the paper lays the groundwork for future research and development in the application of these technologies within the unique context of cultural heritage representation in the metaverse.

    Augmented Reality to Facilitate a Conceptual Understanding of Statics in Vocational Education

    At the core of this dissertation's contribution is an augmented reality (AR) environment, StaticAR, that supports the process of learning the fundamentals of statics in vocational classrooms, particularly carpentry ones. Vocational apprentices are expected to develop an intuition of these topics rather than a formal comprehension. We explored the potential of AR technology for this pedagogical challenge. Furthermore, we investigated the role of physical objects in mixed-reality systems when they are implemented as tangible user interfaces (TUIs) or when they serve as a background for the augmentation in handheld AR. This thesis includes four studies. In the first study, we used eye-tracking methods to look for evidence of the benefits associated with TUIs in a learning context. We designed a 3D modelling task and compared users' performance when they completed it using a TUI or a GUI. The gaze measures that we analysed further confirmed the positive impact that TUIs can have on the learners' experience and reinforced the empirical basis for their adoption in learning applications. The second study evaluated whether physical interaction with models of carpentry structures could lead to a better understanding of statics principles. Apprentices engaged in a learning activity in which they could manipulate physical models that were mechanically augmented, allowing them to explore how structures react to external loads. The analysis of the apprentices' performance and their gaze behaviour highlighted the absence of clear advantages in exploring statics through manipulation. This study also showed that manipulation might prevent students from noticing aspects relevant to solving statics problems. From the second study we obtained guidelines for the design of StaticAR, which implements the magic-lens metaphor: a tablet augments a small-scale structure with information about its structural behaviour. The structure is only a background for the augmentation and its manipulation does not trigger any function, so in the third study we asked to what extent it was important to have it. We rephrased this question as whether users would look directly at the structure instead of seeing it only through the tablet. Our findings suggested that a shift of attention from the screen to the physical object (a structure in our case) might occur in order to sustain users' spatial orientation when they change positions. In addition, the properties of the gaze shift (e.g. duration) could depend on the features of the task (e.g. difficulty) and of the setup (e.g. stability of the augmentation). The focus of our last study was the digital representation of the forces that act in a loaded structure. In the second study we had observed that physical manipulation failed to help apprentices understand the way the forces interact with each other. To overcome this issue, our solution was to combine an intuitive representation (springs) with a slightly more formal one (arrows), showing both the nature of the forces and the interaction between them. In this study apprentices used the two representations to collaboratively solve statics problems. Even though the apprentices had difficulties in interpreting the two representations, there were cases in which they gained a correct intuition of statics principles from them. In this thesis, besides describing the designed system and the studies, implications for future directions are discussed.
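    For a flavour of the statics content such an augmentation might overlay, the sketch below (purely illustrative, not StaticAR's implementation) computes the support reactions of a simply supported beam under a single point load; these are the kinds of forces that spring and arrow representations could depict at the supports.

```python
# Illustrative sketch only, assuming a simply supported beam with one point load.
def support_reactions(span, load, load_pos):
    """span: beam length (m); load: downward point load (N);
    load_pos: distance of the load from the left support (m).
    Returns (R_left, R_right), the upward reaction forces in newtons."""
    if not 0 <= load_pos <= span:
        raise ValueError("load must act between the supports")
    # Moment equilibrium about the left support, then vertical force equilibrium.
    r_right = load * load_pos / span
    r_left = load - r_right
    return r_left, r_right

# Example: 1 kN placed 1.5 m along a 2 m model beam.
print(support_reactions(span=2.0, load=1000.0, load_pos=1.5))  # (250.0, 750.0)
```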
