178,653 research outputs found

    Object Based Augmented Reality Case Study- Literature Survey on Application based approach towards Augmented Reality

    Get PDF
    This paper is about Augmented Reality (AR) using object-based visualization and implementation on smartphone devices. Augmented Reality (AR) employs computer vision, image processing and computer graphics techniques to merge digital content into the real world. It enables real-time interaction between the user, real objects and virtual objects. AR can, for example, be used to embed 2D graphics into a video in such a way that the virtual elements appear to be part of the real environment. In this work, we design AR-based software that solves the problem of easy access to documents at a check post. One of the challenges of AR is aligning virtual data with the environment. A marker-based approach solves this problem using visual markers, e.g. 2D barcodes, detectable with computer vision methods.
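The marker-based alignment step described above can be illustrated with a minimal sketch: once a square marker's corners are detected in the image, a transform from marker space to image space lets virtual content be anchored to the marker. The corner data and the translation-plus-scale model below are illustrative assumptions; a real AR pipeline would estimate a full homography or camera pose, e.g. with OpenCV's ArUco detector.

```python
def marker_to_image_transform(marker_corners, image_corners):
    """Estimate a translation + uniform scale mapping marker space to
    image space from two corresponding corner lists (simplified model;
    a real system would fit a full homography)."""
    (mx0, my0), (mx1, my1) = marker_corners[0], marker_corners[1]
    (ix0, iy0), (ix1, iy1) = image_corners[0], image_corners[1]
    # Scale is the ratio of one detected marker edge to its known length.
    marker_edge = ((mx1 - mx0) ** 2 + (my1 - my0) ** 2) ** 0.5
    image_edge = ((ix1 - ix0) ** 2 + (iy1 - iy0) ** 2) ** 0.5
    scale = image_edge / marker_edge
    # Translation aligns the first corner pair after scaling.
    tx, ty = ix0 - scale * mx0, iy0 - scale * my0
    return scale, tx, ty

def project(point, transform):
    """Place a virtual point (given in marker coordinates) into the image."""
    scale, tx, ty = transform
    x, y = point
    return (scale * x + tx, scale * y + ty)

# Hypothetical detection: a unit-square marker seen 40 px wide at (100, 50).
t = marker_to_image_transform([(0, 0), (1, 0)], [(100, 50), (140, 50)])
print(project((0.5, 0.5), t))  # virtual anchor at the marker centre: (120.0, 70.0)
```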

    A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays

    Get PDF
    We identify usability challenges facing consumers adopting Virtual Reality (VR) head-mounted displays (HMDs) in a survey of 108 VR HMD users. Users reported significant issues in interacting with, and being aware of, their real-world context when using an HMD. Building upon existing work on blending real and virtual environments, we performed three design studies to address these usability concerns. In a typing study, we show that augmenting VR with a view of reality significantly corrected the performance impairment of typing in VR. We then investigated how much reality should be incorporated, and when, so as to preserve users' sense of presence in VR. For interaction with objects and peripherals, we found that selectively presenting reality as users engaged with it was optimal in terms of performance and users' sense of presence. Finally, we investigated how this selective, engagement-dependent approach could be applied in social environments, to support the user's awareness of the proximity and presence of others.

    Natural Physical Interaction Between Real and Virtual Objects in Augmented Reality Systems

    Get PDF
    In this paper, we present a method for implementing natural, real-object-like physical interaction between real-world objects and augmented virtual objects in Augmented Reality (AR) systems. First, we implemented physical interaction between virtual objects and the surrounding real-world environment, which most AR content lacks, by reconstructing the detailed geometry of real-world scenes. Second, we simulated collision response between pairs of colliding real and virtual objects using the corresponding premeasured coefficient of restitution (COR), accounting for differences in COR between collision pairs. In addition, occlusion and shadowing between real and virtual objects were also implemented so that the other interactions did not look unnatural. User evaluation results show that our method reproduced interaction between real and virtual objects that test subjects felt was natural, even for virtual objects representing real objects whose COR values vary widely across collision pairs.
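The pair-specific restitution idea above can be sketched with the standard 1-D collision-response formula; the masses, velocities and the COR value of 0.6 below are illustrative assumptions, not values from the paper.

```python
def collision_response(v1, v2, m1, m2, cor):
    """1-D collision response between two bodies using a pair-specific
    coefficient of restitution (COR): 1.0 is perfectly elastic,
    0.0 perfectly inelastic."""
    # Conservation of momentum combined with the restitution relation
    # v2' - v1' = -cor * (v2 - v1) yields the post-collision velocities.
    v1p = (m1 * v1 + m2 * v2 + m2 * cor * (v2 - v1)) / (m1 + m2)
    v2p = (m1 * v1 + m2 * v2 + m1 * cor * (v1 - v2)) / (m1 + m2)
    return v1p, v2p

# Hypothetical pair: a 1 kg virtual ball at 3 m/s hits a 10 kg proxy for
# a real object at rest, with a premeasured COR of 0.6 for this pair.
print(collision_response(3.0, 0.0, 1.0, 10.0, 0.6))
```

Storing one COR per (real object, virtual object) pair, as the paper describes, then amounts to looking up `cor` for the colliding pair before calling the solver.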

    Deep Learning Development Environment in Virtual Reality

    Full text link
    Virtual reality (VR) offers immersive visualization and intuitive interaction. We leverage VR to enable any biomedical professional to deploy a deep learning (DL) model for image classification. While DL models can be powerful tools for data analysis, they are also challenging to understand and develop. To make deep learning more accessible and intuitive, we have built a virtual reality-based DL development environment. Within our environment, the user can move tangible objects to construct a neural network using only their hands. Our software automatically translates these configurations into a trainable model and then reports its resulting accuracy on a test dataset in real time. Furthermore, we have enriched the virtual objects with visualizations of the model's components such that users can achieve insight about the DL models that they are developing. With this approach, we bridge the gap between professionals in different fields of expertise while offering a novel perspective for model analysis and data interaction. We further suggest that techniques of development and visualization in deep learning can benefit from integrating virtual reality.
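The translation step described above, from placed tangible objects to a trainable model, can be sketched as ordering the objects by their spatial position and emitting a layer specification. The object representation and field names below are hypothetical; the paper does not describe its internal data format.

```python
def objects_to_model(objects):
    """Translate VR objects placed by the user into an ordered layer
    specification (hypothetical format: dicts with kind, units, and a
    vertical position y)."""
    # The user's bottom-to-top stacking order defines the network order.
    ordered = sorted(objects, key=lambda o: o["y"])
    return [(o["kind"], o["units"]) for o in ordered]

# Hypothetical scene: three objects stacked by the user's hands.
placed = [
    {"kind": "dense", "units": 10, "y": 2.0},   # output layer, on top
    {"kind": "input", "units": 784, "y": 0.0},  # input layer, at bottom
    {"kind": "dense", "units": 128, "y": 1.0},  # hidden layer
]
print(objects_to_model(placed))
# [('input', 784), ('dense', 128), ('dense', 10)]
```

A real implementation would hand this specification to a DL framework to build and train the model.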

    Investigating Real-time Touchless Hand Interaction and Machine Learning Agents in Immersive Learning Environments

    Get PDF
    The recent surge in the adoption of new technologies and innovations in connectivity, interaction technology, and artificial realities can fundamentally change the digital world. eXtended Reality (XR), with its potential to bridge the virtual and real environments, creates new possibilities to develop more engaging and productive learning experiences. Evidence is emerging that this sophisticated technology offers new ways to improve the learning process for better student interaction and engagement. Recently, immersive technology has garnered much attention as an interactive technology that facilitates direct interaction with virtual objects in the real world. Furthermore, these virtual objects can be surrogates for real-world teaching resources, allowing for virtual labs. Thus, XR could enable learning experiences that would not be possible in impoverished educational systems worldwide. Interestingly, concepts such as virtual hand interaction and techniques such as machine learning are still not widely investigated in immersive learning. Hand interaction technologies in virtual environments can support the kinesthetic learning pedagogical approach, and the need for their touchless interaction has increased exceptionally in the post-COVID world. By implementing and evaluating real-time hand interaction technology for kinesthetic learning and machine learning agents for self-guided learning, this research has addressed these underutilized technologies to demonstrate the efficiency of immersive learning. This thesis has explored different hand-tracking APIs and devices to integrate real-time hand interaction techniques. These hand interaction techniques, and integrated machine learning agents using reinforcement learning, are evaluated with different display devices to test compatibility. The proposed approach aims to provide self-guided, more productive, and interactive learning experiences. Further, this research has investigated ethics, privacy, and security issues in XR and covered the future of immersive learning in the Metaverse.
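A touchless hand-interaction technique of the kind explored above can be sketched as a gesture classifier over tracked fingertip positions. The pinch gesture, the joint data, and the 3 cm threshold below are illustrative assumptions; real hand-tracking APIs report full per-joint poses.

```python
def is_pinching(thumb_tip, index_tip, threshold=0.03):
    """Detect a touchless pinch gesture from two tracked fingertip
    positions in metres, as reported by a hand-tracking API.
    The 3 cm threshold is an illustrative assumption."""
    dx, dy, dz = (a - b for a, b in zip(thumb_tip, index_tip))
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    return distance < threshold

# Fingertips 1 cm apart: pinch detected; 10 cm apart: no pinch.
print(is_pinching((0.10, 0.20, 0.30), (0.11, 0.20, 0.30)))  # True
print(is_pinching((0.10, 0.20, 0.30), (0.20, 0.20, 0.30)))  # False
```

In a learning application, such a predicate would gate grab-and-place interactions with virtual lab objects each tracking frame.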

    Remixing real and imaginary in art education with fully immersive virtual reality

    Get PDF
    This article explores digital material/ism by examining student teachers' experiences, processes and products with fully immersive virtual reality (VR) as part of visual art education. The students created and painted a virtual world, given the name Gretan puutarha ('Greta's Garden'), using the Google application Tilt Brush. They also applied photogrammetry techniques to scan 3D objects from the real world in order to create 3D models for their VR world. Additionally, they imported 2D photographs and drawings, along with applied animated effects, to construct their VR world digitally, thereby remixing elements from real life and fantasy. The students were asked open-ended questions to find out how they created art virtually, and the results were analysed using Burdea's VR concepts of immersion, interaction and imagination. Digital material was created intersubjectively and intermedially while it was also remixed with the real and imaginary. Various webs of meanings were created, both intertextual and rhizomatic in nature. Peer reviewed.

    An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures

    Get PDF
    This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and multimodal fusion strategies, which are based on the combination of time-based and domain semantics. Then, we present the results from a user study comparing multimodal input with gesture input alone. The results show that a combination of speech and paddle gestures improves the efficiency of user interaction. Finally, we describe some design recommendations for developing other multimodal AR interfaces.
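The time-based part of the fusion strategy above can be sketched as pairing each speech command with the gesture whose timestamp is closest within a short window. The event format, the window length, and the example data are illustrative assumptions; the paper's actual fusion also uses domain semantics.

```python
FUSION_WINDOW = 0.8  # seconds; illustrative value, not from the paper

def fuse(speech_events, gesture_events, window=FUSION_WINDOW):
    """Pair each (timestamp, command) speech event with the
    nearest-in-time (timestamp, target) gesture event inside the window."""
    fused = []
    for s_time, command in speech_events:
        candidates = [(abs(g_time - s_time), target)
                      for g_time, target in gesture_events
                      if abs(g_time - s_time) <= window]
        if candidates:
            # Nearest gesture in time wins.
            fused.append((command, min(candidates)[1]))
    return fused

speech = [(1.0, "move here")]
gestures = [(0.2, "table"), (1.3, "shelf")]  # (timestamp, pointed target)
print(fuse(speech, gestures))  # [('move here', 'shelf')]
```

A semantic layer on top of this would additionally check that the spoken command and the pointed-at object are compatible before committing the action.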

    Automatic generation of consistent shadows for Augmented Reality

    Get PDF
    Sponsor: CHCCS, the Canadian Human-Computer Communications Society. In the context of mixed reality, it is difficult to simulate shadow interaction between real and virtual objects when only an approximate geometry of the real scene and the light source is known. In this paper, we present a real-time rendering solution to simulate colour-consistent virtual shadows in a real scene. The rendering consists of a three-step mechanism: shadow detection, shadow protection and shadow generation. In the shadow detection step, the shadows cast by real objects are automatically identified using texture information and an initial estimate of the shadow region. In the next step, a protection mask is created to prevent further rendering in those shadow regions. Finally, the virtual shadows are generated using shadow volumes and a pre-defined scaling factor that adapts the intensity of the virtual shadows to the real shadows. The procedure detects and generates shadows in real time, consistent with those already present in the scene, and offers an automatic, real-time solution for common illumination suitable for augmented reality.
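The intensity-matching step above can be sketched as deriving a darkening factor from luminance samples inside and outside a detected real shadow, then applying that factor to pixels covered by a virtual shadow volume. The sample values and the simple luminance-ratio model below are illustrative assumptions.

```python
def shadow_scaling_factor(shadowed_lum, unshadowed_lum):
    """Estimate how much a real shadow darkens the surface, from average
    luminances sampled inside and outside a detected real shadow region,
    so virtual shadows can be rendered with a consistent intensity."""
    return shadowed_lum / unshadowed_lum

def apply_virtual_shadow(surface_lum, factor):
    """Darken a surface pixel covered by a virtual shadow volume."""
    return surface_lum * factor

# Hypothetical samples: a real shadow keeps 40% of surface luminance,
# so virtual shadows are darkened by the same factor.
factor = shadow_scaling_factor(60.0, 150.0)
print(apply_virtual_shadow(200.0, factor))  # 80.0
```

The protection mask from the paper's second step would simply exclude real-shadow pixels from this darkening, so detected shadows are never darkened twice.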