    Leveling the Playing Field: Supporting Neurodiversity via Virtual Realities

    Neurodiversity is a term that encapsulates the diverse expression of human neurology. By thinking in broad terms about neurological development, we can focus on delivering a diverse set of design features to meet the needs of the human condition. In this work, we move toward developing virtual environments that support variations in sensory processing. If we understand that people have differences in sensory perception that result in their own unique sensory traits, many of which are clustered by diagnostic labels such as Autism Spectrum Disorder (ASD), Sensory Processing Disorder, Attention-Deficit/Hyperactivity Disorder, Rett syndrome, dyslexia, and so on, then we can leverage that knowledge to create new input modalities for accessible and assistive technologies. In an effort to translate differences in sensory perception into new variations of input modalities, we focus this work on ASD. ASD has been characterized by a complex sensory signature that can impact social, cognitive, and communication skills. By providing assistance for these diverse sensory perceptual abilities, we create an opportunity to improve the interactions people have with technology and the world. In this paper, we describe, through a variety of examples, ways to address sensory differences to support neurologically diverse individuals by leveraging advances in virtual reality.

    3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities

    Moving from a set of independent virtual worlds to an integrated network of 3D virtual worlds, or Metaverse, rests on progress in four areas: immersive realism, ubiquity of access and identity, interoperability, and scalability. For each area, the current status and the developments needed to achieve a functional Metaverse are described. Factors that support the formation of a viable Metaverse, such as institutional and popular interest and ongoing improvements in hardware performance, and factors that constrain the achievement of this goal, including limits in computational methods and unrealized collaboration among virtual world stakeholders and developers, are also considered.

    Symmetric and asymmetric action integration during cooperative object manipulation in virtual environments

    Cooperation between multiple users in a virtual environment (VE) can take place at one of three levels: users can perceive each other (Level 1), individually change the scene (Level 2), or simultaneously act on and manipulate the same object (Level 3). Despite representing the highest level of cooperation, multi-user object manipulation has rarely been studied. This paper describes a behavioral experiment in which the piano movers' problem (maneuvering a large object through a restricted space) was used to investigate object manipulation by pairs of participants in a VE. Participants' interactions with the object were integrated either symmetrically or asymmetrically: the former only allowed the common component of participants' actions to take place, while the latter used the mean. Symmetric action integration was superior for sections of the task where both participants had to perform similar actions, but if participants had to move in different ways (e.g., one maneuvering through a narrow opening while the other traveled down a wide corridor) then asymmetric integration was superior. With both forms of integration, the extent to which participants coordinated their actions was poor, and this led to a substantial cooperation overhead (the reduction in performance caused by having to cooperate with another person).
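    The two integration schemes described in this abstract can be sketched in a few lines. This is a hypothetical component-wise reading, not the paper's actual formulation: `a` and `b` stand for the two participants' action vectors (e.g., desired object displacement per axis), symmetric integration passes through only the component on which both inputs agree, and asymmetric integration averages the two inputs.

```python
def symmetric_integration(a, b):
    """Pass through only the 'common component' of the two action vectors:
    where both users push the same way along an axis, apply the smaller
    magnitude; where their inputs conflict, the object does not move."""
    out = []
    for x, y in zip(a, b):
        if x * y > 0:  # same sign, both non-zero: common component exists
            out.append(min(abs(x), abs(y)) * (1.0 if x > 0 else -1.0))
        else:  # conflicting (or absent) input on this axis: no motion
            out.append(0.0)
    return out


def asymmetric_integration(a, b):
    """Apply the mean of the two actions, so either user can move the
    object alone, at half effect."""
    return [(x + y) / 2.0 for x, y in zip(a, b)]
```

    Under this sketch, symmetric integration rewards tight coordination (conflicting inputs stall the object), while asymmetric integration lets participants act in different ways at once, matching the abstract's finding that each scheme wins in different task sections.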

    Material Sight: A Sensorium for Fundamental Physics

    Often our attempts to connect to the spatial and temporal scales of fundamental physics - from the subatomic to the multiverse - provoke a form of perceptual vertigo, especially for non-scientists. When we approach ideas of paralysing abstraction through the perceptual range of our sensing bodies, a ‘phenomenological dissonance’ can be said to be invoked, between material presence and radical remoteness. This relational dynamic, between materiality and remoteness, formed the conceptual springboard for 'Material Sight' (2016-2018), a research project based at three world-leading facilities for fundamental physics that brought to fruition a body of photographic objects, film works, and an immersive soundscape re-presenting the spaces of fundamental physics as sites of material encounter. The research was premised on a paradoxical desire to create a sensorium for fundamental physics, asking whether photography, film, and sound can embody the spaces of experimental science and present them back to scientists and non-scientists alike, not as illustrations of the technical sublime but as sites of phenomenological encounter. This article plots the key conceptual coordinates of 'Material Sight' and looks at how the project’s methodological design – essentially the production of knowledge through the 'act of looking' – emphatically resisted the gravitational pull of art to be instrumentalised as an illustrative device within scientific contexts.

    Closing the chasm between virtual and physical delivery for innovative learning spaces using learning analytics

    Purpose – One of the misconceptions about teaching and learning in practical-based programmes, such as engineering, sciences, architecture, design, and the arts, is that they must be delivered via a face-to-face physical modality. This paper refutes this claim by providing case studies of best practices in delivering such courses and their hands-on skillsets using completely online virtual delivery that utilises different formats of 2D and 3D media and tools, supported by evidence of efficiency using learning analytics. Design/methodology/approach – The case studies were designed using pedagogical principles of constructivism and deep learning, conducted within a mixture of 2D and 3D virtual learning environments with flexible interface and tool capabilities. State-of-the-art coding and scripting techniques were also used to automate different student tasks and increase engagement. Regression and descriptive analysis methods were used for the learning analytics. Findings – The learning analytics of all case studies demonstrated the capability to achieve course/project learning outcomes, with high engagement from students amongst peers and with tutors. Furthermore, the diverse virtual learning tools used allowed students to display creativity and innovation as efficiently as in physical learning. Originality/value – The synthesis of media and tools within this study displays innovation and originality in combining different technology techniques to achieve an effectual learning experience that would usually necessitate face-to-face, hands-on physical contact to perform practical tasks and receive feedback on them. Furthermore, this paper provides suggestions for future research using more advanced technologies.

    Toward hyper-realistic and interactive social VR experiences in live TV scenarios

    © 2022 IEEE. Social Virtual Reality (VR) allows multiple distributed users to get together in shared virtual environments to socially interact and/or collaborate. This article explores the applicability and potential of Social VR in the broadcast sector, focusing on a live TV show use case. For such a purpose, a novel and lightweight Social VR platform is introduced. The platform provides three key features compared to state-of-the-art solutions. First, it allows real-time integration of remote users in shared virtual environments, using realistic volumetric representations and affordable capturing systems, thus not relying on synthetic avatars. Second, it supports a seamless and rich integration of heterogeneous media formats, including 3D scenarios, dynamic volumetric representations of users, and (live/stored) stereoscopic 2D and 180°/360° videos. Third, it enables low-latency interaction between the volumetric users and a video-based presenter (Chroma keying), and dynamic control of the media playout to adapt to the session’s evolution. The production process of an immersive TV show used to evaluate the experience is also described. On the one hand, the results from objective tests show the satisfactory performance of the platform. On the other hand, the promising results from user tests support the potential impact of the presented platform, opening up new opportunities in the broadcast sector, among others. This work has been partially funded by the European Union’s Horizon 2020 programme, under agreement nº 762111 (VRTogether project), and partially by ACCIÓ, under agreement COMRDI18-1-0008 (ViVIM project). Work by Mario Montagud has been additionally funded by the Spanish Ministry of Science, Innovation and Universities with a Juan de la Cierva – Incorporación grant (reference IJCI-2017-34611).

    Investigating Real-time Touchless Hand Interaction and Machine Learning Agents in Immersive Learning Environments

    The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. eXtended Reality (XR), with its potential to bridge virtual and real environments, creates new possibilities to develop more engaging and productive learning experiences. Evidence is emerging that this sophisticated technology offers new ways to improve the learning process for better student interaction and engagement. Recently, immersive technology has garnered much attention as an interactive technology that facilitates direct interaction with virtual objects in the real world. Furthermore, these virtual objects can be surrogates for real-world teaching resources, allowing for virtual labs. Thus, XR could enable learning experiences that would not be possible in impoverished educational systems worldwide. Interestingly, concepts such as virtual hand interaction and techniques such as machine learning are still not widely investigated in immersive learning. Hand interaction technologies in virtual environments can support the kinesthetic learning pedagogical approach, and the need for their touchless nature has increased exceptionally in the post-COVID world. By implementing and evaluating real-time hand interaction technology for kinesthetic learning and machine learning agents for self-guided learning, this research addresses these underutilized technologies to demonstrate the efficiency of immersive learning. This thesis explores different hand-tracking APIs and devices to integrate real-time hand interaction techniques. These hand interaction techniques and integrated machine learning agents using reinforcement learning are evaluated with different display devices to test compatibility. The proposed approach aims to provide self-guided, more productive, and interactive learning experiences. Further, this research investigates ethics, privacy, and security issues in XR and covers the future of immersive learning in the Metaverse.

    Literacy for digital futures : Mind, body, text

    The unprecedented rate of global, technological, and societal change calls for a radical, new understanding of literacy. This book offers a nuanced framework for making sense of literacy by addressing knowledge as contextualised, embodied, multimodal, and digitally mediated. In today’s world of technological breakthroughs, social shifts, and rapid changes to the educational landscape, literacy can no longer be understood through established curriculum and static text structures. To prepare teachers, scholars, and researchers for the digital future, the book is organised around three themes – Mind and Materiality; Body and Senses; and Texts and Digital Semiotics – to shape readers’ understanding of literacy. Opening up new interdisciplinary themes, Mills, Unsworth, and Scholes confront emerging issues for next-generation digital literacy practices. The volume helps new and established researchers rethink dynamic changes in the materiality of texts and their implications for the mind and body, and features recommendations for educational and professional practice.