    Exploring the Effectiveness of AI Algorithms in Predicting and Enhancing Student Engagement in an E-Learning

    The shift from traditional to digital learning platforms has highlighted the need for more personalized and engaging student experiences. In response, researchers are investigating the ability of AI algorithms to predict and improve student engagement in e-learning. Machine Learning (ML) methods such as Decision Trees, Support Vector Machines, and Deep Learning models can predict student engagement from variables such as interaction patterns, learning behavior, and academic performance. These AI algorithms have identified at-risk students, enabling early interventions and personalized learning. By providing adaptive content, personalized feedback, and immersive learning environments, some AI methods have increased student engagement. Despite these advances, data privacy, unstructured data, and the need for transparent and interpretable models remain challenges. The review concludes that AI has great potential to improve e-learning outcomes, but these challenges must be addressed for ethical and effective application. Future research should develop more robust and interpretable AI models and multidimensional engagement metrics, and conduct more comprehensive studies on AI's ethical implications in education.
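
    The abstract names methods but no implementation, so the following is a minimal, hypothetical sketch of the kind of engagement prediction it describes: a scikit-learn decision tree trained on made-up features (logins, time on platform, forum posts, quiz scores). The feature set, labeling rule, and data are all assumptions for illustration, not taken from the review.

```python
# Hypothetical sketch: predicting student engagement from e-learning
# features with a decision tree (scikit-learn). Data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Assumed features per student: logins/week, minutes on platform,
# forum posts, average quiz score. Labels: 1 = engaged, 0 = at risk.
n = 500
X = np.column_stack([
    rng.poisson(4, n),          # logins per week
    rng.normal(120, 40, n),     # minutes on platform per week
    rng.poisson(2, n),          # forum posts per week
    rng.uniform(0, 100, n),     # average quiz score
])
# Toy labeling rule, only to make the example runnable end to end.
y = ((X[:, 0] > 3) & (X[:, 3] > 50)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X_train, y_train)

# Students predicted as label 0 would be flagged for early intervention.
print(classification_report(y_test, clf.predict(X_test)))
```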

    Modeling crowd work in open task systems

    This thesis aims to harness modern machine learning techniques to understand how and why people interact in large, open, collaborative online platforms: task systems. The participants who interact with task systems have a diverse set of goals and reasons for contributing, and the data logged from their participation is often observational. These two factors present many challenges for researchers who wish to understand the motivations for continued contributions to projects such as Wikipedia and Stack Overflow. Existing approaches to scientific investigation in such domains often take a “one-size-fits-all” approach, where aggregated trends are studied and conclusions are drawn from overview statistics. In contrast, I motivate a three-stage framework for scientific enquiry into the behaviour of participants in task systems. First, I propose a modelling step where assumptions and hypotheses from the behavioural sciences are encoded directly into a model’s structure. I show that it is important to allow for multiple competing hypotheses in one model: because participants’ goals and motivations are diverse, a range of hypotheses is needed to account for the different interaction patterns present in the data. Second, I design deep generative models that harness both the power of deep learning and the structured inference of variational methods to infer parameters that fit the structured models from the first step. Such methods allow us to perform maximum likelihood estimation of parameter values while harnessing amortised learning across a dataset. The inference schemes proposed here allow for posterior assignment of interaction data to specific hypotheses, giving insight into the validity of a hypothesis. They also naturally allow for inference over both categorical and continuous latent variables in one model, an aspect that is crucial in modelling data where competing hypotheses describing the users’ interaction are present. Finally, in working to understand how and why people interact in such online settings, we are required to understand the model parameters associated with the various aspects of their interaction. In many cases, these parameters are given specific meaning by construction of the model; however, I argue that it is still important to evaluate the interpretability of such models, and I therefore investigate several tests for performing such an evaluation. My contributions additionally entail designing bespoke models that describe people’s interactions in complex online domains. I present examples from real-world domains where the data consist of people’s actual interactions with the system.
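
    To make the modelling idea concrete: the thesis builds deep generative models with amortised variational inference, but the core notion of a categorical latent variable selecting among competing behavioural hypotheses, with posterior assignment of each interaction to a hypothesis, can be sketched with a plain Gaussian mixture. Everything below (the features, the two toy hypotheses, the data) is a hypothetical stand-in, not the thesis's actual model.

```python
# Sketch of the core idea: a categorical latent variable selects among
# competing behavioural "hypotheses", and inference assigns each observed
# interaction a posterior over those hypotheses. A plain Gaussian mixture
# stands in here for the thesis's deep, amortised generative models.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical per-user features: contributions per session, mean gap
# between sessions (days). Toy hypotheses: habitual vs. bursty users.
habitual = rng.normal([10.0, 1.0], [2.0, 0.3], size=(200, 2))
bursty = rng.normal([30.0, 14.0], [8.0, 4.0], size=(200, 2))
X = np.vstack([habitual, bursty])

model = GaussianMixture(n_components=2, random_state=1).fit(X)

# Posterior assignment of each interaction record to a hypothesis,
# analogous to asking which behavioural account best explains a user.
posteriors = model.predict_proba(X)
print(posteriors[:3])
```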

    Primary interoceptive cortex activity during simulated experiences of the body

    Studies of the classic exteroceptive sensory systems (e.g., vision, touch) consistently demonstrate that vividly imagining a sensory experience of the world – simulating it – is associated with increased activity in the corresponding primary sensory cortex. We hypothesized, analogously, that simulating internal bodily sensations would be associated with increased neural activity in primary interoceptive cortex. An immersive, language-based mental imagery paradigm was used to test this hypothesis (e.g., imagine your heart pounding during a roller coaster ride, your face drenched in sweat during a workout). During two neuroimaging experiments, participants listened to vividly described situations and imagined “being there” in each scenario. In Study 1, we observed significantly heightened activity in primary interoceptive cortex (dorsal posterior insula) during imagined experiences involving vivid internal sensations. This effect was specific to interoceptive simulation: it was not observed during a separate affect focus condition in Study 1, nor during an independent Study 2 that did not involve detailed simulation of internal sensations (instead involving simulation of other sensory experiences). These findings underscore the large-scale predictive architecture of the brain and reveal that words can be powerful drivers of bodily experiences.

    Using Virtual Reality and Remotely Sensed Data to Explore Object Identity and Embodiment in a Virtual Mayan City

    3D visualization, LiDAR (Light Detection and Ranging), and 3D modeling are not new concepts in archaeology; however, when combined they represent a growing body of research that seeks to understand both how these tools can help us study the people of the past, and the past itself. Recently, archaeologists have been creating large amounts of 3D digital assets because of new and more advanced technologies. Along with these digital assets has come a myriad of single-object viewers, both web and desktop based. These platforms focus specifically on visualizing individual objects (i.e., artifacts or buildings). In contrast, 3D GIS and Virtual Reality (VR) software employ recreated landscapes with multiple 3D objects rather than single 3D models. The MayaCityBuilder Project (http://mayacitybuilder.org) employs Geographic Information Systems (GIS) and LiDAR data to simulate the ancient Maya city of Copan in a virtual space for immersive exploration. Using this environment as a virtual lattice, we embed object data into the simulated space of Copan, which users can explore using a virtual reality headset. I propose that such an environment allows us to explore the concept of object identity, wherein the “objects” in the environment (i.e., 3D models of both remotely sensed extant objects and reconstructed buildings) are immersively evaluated by users, who can better perceive the relationships between themselves and the “objects” with which they interact, yielding insights that can push archaeological inquiry in new directions. Further, such an approach opens the door to 3D data reuse by providing a platform that serves as a unique database structure holding intuitive and perceptual data. To test these ideas, I embed multiple kinds of 3D models into the Copan VR platform and use the relationships between the environment and the objects to explain object identity. Advisor: Heather Richards-Rissetto

    A Qualification of 3D Geovisualisation

    Educational practices and strategies with immersive learning environments: mapping of reviews for using the metaverse

    The educational metaverse promises to fulfill the ambitions of immersive learning, leveraging technology-based presence alongside narrative and/or challenge-based deep mental absorption. Most reviews of immersive learning research have been outcomes-focused; few have considered the educational practices and strategies. These are necessary to provide theoretical and pedagogical frameworks that situate outcomes within a context where technology works in concert with educational approaches. We sought a broader perspective on the practices and strategies used in immersive learning environments and conducted a mapping survey of reviews, identifying 47 studies. Thematic analysis of the extracted accounts of educational practices and strategies yielded 45 strategies and 21 practices, visualized as a network clustered by conceptual proximity. The resulting clusters, “Active context”, “Collaboration”, “Engagement and Scaffolding”, “Presence”, and “Real and virtual multimedia learning”, expose the richness of practices and strategies within the field. The visualization maps the field, supports decision-making when combining practices and strategies for using the metaverse in education, and highlights which practices and strategies are supported by the literature, as well as the presence and absence of diversity within clusters.
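
    As a rough illustration of the kind of analysis described, here is a sketch of clustering practices and strategies by conceptual proximity, modeled as a co-occurrence network with community detection (networkx). The node names, edges, and weights below are invented, and the mapping study's actual clustering procedure may differ.

```python
# Illustrative sketch: cluster practices/strategies by conceptual
# proximity via a weighted co-occurrence graph and community detection.
# All data below is made up for the example.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical edges: (strategy A, strategy B, times reported together).
cooccurrence = [
    ("scaffolding", "feedback", 5),
    ("scaffolding", "role-play", 2),
    ("role-play", "collaboration", 4),
    ("collaboration", "peer review", 3),
    ("presence", "embodiment", 4),
    ("presence", "feedback", 1),
]

G = nx.Graph()
G.add_weighted_edges_from(cooccurrence)

# Communities play the role of the clusters reported in the mapping study.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"cluster {i}: {sorted(community)}")
```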

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the respective internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot's perspective, in turn, requires an awareness of human "intent", so a clustering framework built around a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality, and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
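
    For readers unfamiliar with shared control, the following sketches a standard linear arbitration baseline, in which the executed command blends the user's input with the robot's assistive command according to the robot's confidence in its inferred intent. This is a common baseline scheme, not necessarily the exact policy used in the thesis; the function, values, and confidence model are illustrative.

```python
# Minimal sketch of a standard shared-control blend for a wheelchair-like
# platform: the executed command linearly arbitrates between the user's
# joystick command and the robot's assistive command, weighted by the
# robot's confidence in its inferred intent. Illustrative only.
import numpy as np

def blend(u_human: np.ndarray, u_robot: np.ndarray, confidence: float) -> np.ndarray:
    """Arbitrate (linear velocity, angular velocity) commands.

    confidence in [0, 1]: how sure the robot is about the user's goal.
    Low confidence defers to the human; high confidence adds assistance.
    """
    alpha = np.clip(confidence, 0.0, 1.0)
    return (1.0 - alpha) * u_human + alpha * u_robot

# Example: the user steers slightly left; the robot, fairly confident the
# goal is the doorway ahead, suggests going straight.
u_human = np.array([0.5, 0.2])   # (m/s, rad/s)
u_robot = np.array([0.5, 0.0])
print(blend(u_human, u_robot, confidence=0.7))  # -> [0.5, 0.06]
```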