
    Virtual Reality and Spatial Augmented Reality for Social Inclusion: The “Includiamoci” Project

    Extended Reality (XR) technology is an innovative tool for addressing present-day challenges, as it allows experimentation with new solutions for content creation and for how that content is used by different types of users. The ability to modulate the experience according to the target audience's needs and the project's objectives makes XR well suited to creating new accessibility solutions. The "Includiamoci" project was carried out with the aim of creating workshops on social inclusion that combine art and technology. Specifically, the experimentation involved ten people between the ages of 28 and 50, with cognitive disabilities, who participated in Extended Reality workshops and Art Therapy workshops. These activities produced two outputs: a virtual museum populated with the participants' works, and a digital set design for a theatrical performance. The effectiveness of the project was evaluated through two tests, one on user experience (UX) and one on degree of well-being. In conclusion, the project demonstrated that the adopted solutions were appropriate to its objectives, increasing our knowledge of UX for a target audience with specific user needs and of the use of XR in the context of social inclusion.

    A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom

    Purpose: Benefits of minimally invasive neurosurgery mandate the development of ergonomic paradigms for neuronavigation. Augmented Reality (AR) systems can overcome the shortcomings of commercial neuronavigators. The aim of this work is to apply a novel AR system, based on a head-mounted stereoscopic video see-through display, as an aid in complex neurological lesion targeting. Effectiveness was investigated on a newly designed patient-specific head mannequin featuring an anatomically realistic brain phantom with embedded synthetically created tumors and eloquent areas. Materials and methods: A two-phase evaluation process was adopted in a simulated small tumor resection adjacent to Broca's area. Phase I involved nine subjects without neurosurgical training in performing spatial judgment tasks. In Phase II, three surgeons were involved in assessing the effectiveness of the AR-neuronavigator in performing brain tumor targeting on a patient-specific head phantom. Results: Phase I revealed the ability of the AR scene to evoke depth perception under different visualization modalities. Phase II confirmed the potentialities of the AR-neuronavigator in aiding the determination of the optimal surgical access to the surgical target. Conclusions: The AR-neuronavigator is intuitive, easy to use, and provides three-dimensional augmented information in a perceptually correct way. The system proved to be effective in guiding skin incision, craniotomy, and lesion targeting. The preliminary results encourage a structured study to prove clinical effectiveness. Moreover, our testing platform might be used to facilitate training in brain tumor resection procedures.

    The role of camera convergence in stereoscopic video see-through augmented reality displays

    In the realm of wearable augmented reality (AR) systems, stereoscopic video see-through displays raise issues related to the user's perception of three-dimensional space. This paper puts forward a few considerations regarding the perceptual artefacts common to standard stereoscopic video see-through displays with fixed camera convergence. Among these artefacts, the most significant is diplopia arising from reduced stereo overlap and excessively large screen disparities. Two state-of-the-art solutions are reviewed. The first suggests dynamically changing the virtual camera convergence in software, whereas the second suggests a matched hardware/software solution based on a series of predefined focus/vergence configurations. The potential and limits of both solutions are outlined so as to provide the AR community with a yardstick for developing new stereoscopic video see-through systems suitable for different working distances.
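    To make the diplopia problem concrete, the sketch below estimates the on-screen disparity of a point as a function of its depth for a stereo pair converged (via image shift) at a fixed distance. All numbers here (baseline, focal length, fusion threshold) are illustrative assumptions, not values from the paper.

```python
def screen_disparity_px(z_m, baseline_m=0.065, focal_px=1000.0, convergence_m=1.0):
    """Horizontal screen disparity (pixels) of a point at depth z_m metres
    for a parallel stereo pair converged at convergence_m by shifting the
    images. Positive = crossed disparity (point nearer than convergence)."""
    return focal_px * baseline_m * (1.0 / z_m - 1.0 / convergence_m)

# With convergence fixed at 1 m, nearby points quickly exceed a plausible
# fusion limit (threshold chosen purely for illustration):
FUSION_LIMIT_PX = 40.0
for z in (0.4, 0.7, 1.0, 3.0):
    d = screen_disparity_px(z)
    print(f"depth {z:.1f} m -> disparity {d:+.1f} px, fusible: {abs(d) <= FUSION_LIMIT_PX}")
```

    Dynamically re-converging the virtual cameras (the first reviewed solution) amounts to moving `convergence_m` toward the current working distance, which drives the disparity of the fixated region back toward zero.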

    Augmented reality applications in the automotive industry

    This research paper examines the implications and transformative capabilities of Augmented Reality (AR) within the automotive landscape. We examine how AR catalyses radical changes across automotive functions, from design and manufacturing to customer engagement and vehicle operation. While AR is poised to significantly elevate operational efficiencies, it also presents challenges, such as the complexity of AR content authoring and the hardware constraints that restrict mass consumer adoption. This paper surveys current state-of-the-art AR technology, systems, and pivotal automotive applications. The primary objective is to offer a thorough comprehension of the underlying concepts, methodologies, and applications that underpin the integration of AR within the automotive industry. It concludes by discussing potential barriers to AR implementation and outlines future avenues for research, including software scalability and the integration of cloud and edge computing to alleviate device limitations.

    Perspective Preserving Solution for Quasi-Orthoscopic Video See-Through HMDs

    In non-orthoscopic video see-through (VST) head-mounted displays (HMDs), depth perception through stereopsis is adversely affected by several sources of spatial perception error. Parallax-free and orthoscopic VST HMDs have been proposed to ensure proper space perception, but at the expense of increased bulk and weight. In this work, we present a hybrid video-optical see-through HMD whose geometry explicitly violates the rigorous conditions of orthostereoscopy. To properly recover natural stereo fusion of the scene within the personal space around a predefined distance from the observer, we partially resolve the eye-camera parallax by warping the camera images through a perspective-preserving homography that accounts for the geometry of the VST HMD and refers to that distance. To validate our solution, we conducted objective and subjective tests assessing its efficacy in recovering natural depth perception in the space around the reference distance. The results showed that the quasi-orthoscopic setting of the HMD, together with the perspective-preserving image warping, allows a correct perception of relative depths to be recovered. The perceived distortion of space around the reference plane proved to be less severe than predicted by the mathematical models.
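    A warp of this kind can be sketched with the standard plane-induced homography. The intrinsics and the camera-to-eye pose below are hypothetical placeholders, not the paper's calibration, and the formula is the textbook construction rather than the authors' exact derivation.

```python
import numpy as np

def plane_induced_homography(K_cam, K_eye, R, t, d, n=np.array([0.0, 0.0, 1.0])):
    """Homography mapping camera-image points to eye-viewpoint pixels for
    scene points on the plane at distance d (metres) with unit normal n,
    via the standard formula H = K_eye (R + t n^T / d) K_cam^{-1}."""
    return K_eye @ (R + np.outer(t, n) / d) @ np.linalg.inv(K_cam)

# Hypothetical calibration: identical intrinsics, cameras offset 2 cm
# forward of the eyes (the eye-camera parallax), reference plane at 1.5 m.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
H = plane_induced_homography(K, K, np.eye(3), np.array([0.0, 0.0, 0.02]), d=1.5)
# cv2.warpPerspective(frame, H, (640, 480)) would then apply the warp.
```

    The homography is exact only for points lying on the reference plane; the residual distortion for points off that plane is precisely what the paper's perceptual tests quantify.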

    Building trust in autonomous vehicles: Role of virtual reality driving simulators in HMI design

    Investigating the factors that lead humans to trust Autonomous Vehicles (AVs) will play a fundamental role in the adoption of this technology. The user's ability to form a mental model of the AV, which is crucial for establishing trust, depends on effective user-vehicle communication; the importance of Human-Machine Interaction (HMI) is thus poised to increase. In this work, we propose a methodology for validating the user experience in AVs based on continuous, objective information gathered from physiological signals while the user is immersed in a Virtual Reality-based driving simulation. We applied this methodology to the design of a head-up display interface delivering visual cues about the vehicle's sensory and planning systems. Through this approach, we obtained qualitative and quantitative evidence that a complete picture of the vehicle's surroundings, despite the higher cognitive load, is conducive to a less stressful experience. Moreover, after having been exposed to the more informative interface, users involved in the study were also more willing to test a real AV. The proposed methodology could be extended by adjusting the simulation environment, the HMI, and/or the vehicle's Artificial Intelligence modules to investigate other aspects of the user experience.

    Large-scale environment mapping and immersive human-robot interaction for agricultural mobile robot teleoperation

    Remote operation is a crucial solution to problems encountered in agricultural machinery operations. However, traditional video-streaming control methods fall short in overcoming the challenges of single-perspective views and the inability to obtain 3D information. In light of these issues, our research proposes a large-scale digital map reconstruction and immersive human-machine remote control framework for agricultural scenarios. In our methodology, a DJI unmanned aerial vehicle (UAV) was used for data collection, and a novel video segmentation approach based on feature points was introduced. To tackle variability in texture richness, an enhanced Structure from Motion (SfM) pipeline using superpixel segmentation was implemented. This method integrates the open Multiple View Geometry (openMVG) framework with Local Features from Transformers (LoFTR). The enhanced SfM yields a point cloud map, which is further processed through Multi-View Stereo (MVS) to generate a complete map model. For control, a closed-loop system using TCP for VR control and positioning of agricultural machinery was introduced. Our system offers a fully visual-based immersive control method: once connected to the local area network, operators can use VR for immersive remote control. The proposed method enhances both the robustness and the convenience of the reconstruction process, significantly helping operators acquire more comprehensive on-site information and engage in immersive remote control operations. The code is available at: https://github.com/LiuTao1126/Enhance-SF
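    As a rough illustration of a feature-point-based video segmentation step, the sketch below splits a frame sequence wherever the number of inter-frame feature matches drops below a threshold; the function, the threshold, and the interface are hypothetical simplifications for exposition, not the authors' implementation.

```python
def segment_video(match_counts, min_matches=100):
    """Split frames 0..len(match_counts) into contiguous segments.
    match_counts[i] is the number of feature matches between frame i
    and frame i+1; a low count signals a scene/texture break, so the
    segment is closed at frame i and a new one starts at frame i+1."""
    segments, start = [], 0
    for i, m in enumerate(match_counts):
        if m < min_matches:
            segments.append((start, i))
            start = i + 1
    segments.append((start, len(match_counts)))
    return segments

# Example: a match-count dip between frames 2 and 3 splits the sequence.
print(segment_video([500, 400, 50, 300]))
```

    Each resulting segment could then be reconstructed independently (e.g. by an openMVG-based SfM pass) before merging, which is one plausible reading of why segmentation helps robustness on footage with variable texture richness.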

    The "Mastroianni Experience"

    Digital tools for artworks can offer new opportunities and enable new approaches to conservation and public outreach. This paper presents the experience of the School of Conservation and Restoration of the University of Urbino in preserving a public collection of wooden artistic models assembled by Umberto Mastroianni and used as preparatory studies for original bronze and steel sculptures. The digital documentation supported the maintenance protocol and helped design the restoration work. Moreover, thanks to cutting-edge technologies, a virtual exhibition was created, testing an AR application that allows the public to better appreciate the results of the interventions. Digital storytelling about the artistic meaning connected the models to the original sculptures placed in various Italian landscapes, offering the public an artistic journey into Urbino's architecture.