Exploring Engineering Applications of Visual Analytics in Virtual Reality
Recent advancements and technological breakthroughs in the development of so-called immersive interfaces, such as augmented (AR), mixed (MR), and virtual reality (VR), coupled with the growing mass-market adoption of such devices, have started to attract attention from academia and industry alike. Of these technologies, VR is the most mature in terms of both hardware and software, and offers the widest range of off-the-shelf devices. VR is a term used interchangeably to denote both head-mounted displays (HMDs) and the fully immersive, bespoke 3D environments to which these devices transport their users. With modern devices, developers can leverage a range of interaction modalities, including visual, audio, and even haptic feedback, in the creation of these virtual worlds. With such a rich interaction space, it is natural to think of VR as a well-suited environment for interactive visualisation and analytical reasoning over complex multidimensional data.
Research in visual analytics (VA) combines these two themes. Spanning the last decade and a half, it has produced a number of findings, including a range of new, advanced, and effective visualisation and analysis tools for ever more complex, noisier, and larger datasets. Furthermore, extending this research with immersive interfaces has spun off a new field of research: immersive analytics (IA), which leverages the potential of immersive interfaces to aid the user in swift and effective data analysis.
Some of the most promising industrial application domains for such immersive interfaces are various branches of engineering, including aerospace design and civil engineering. The range of potential applications is vast and growing as new stakeholders adopt these immersive tools. However, the use of these technologies brings its own challenges. One such difficulty is the design of appropriate interaction techniques. There is no single optimal choice; rather, the choice depends on the available hardware, the user's prior experience, the task at hand, and the nature of the dataset.
To this end, my PhD work has focused on designing and analysing various interactive, VR-based immersive systems for engineering visual analytics. One of the key elements of such an immersive system is the selection of an adequate interaction method. In a series of both qualitative and quantitative studies, I have explored the potential of various interaction techniques that can be used to support the user in swift and effective data analysis.
Here, I have investigated the feasibility of using techniques such as hand-held controllers, gaze-tracking, and hand-tracking input methods, used alone or in combination, in various challenging use cases and scenarios. For instance, I developed and verified the usability and effectiveness of the AeroVR system for aerospace design in VR. This research allowed me to trim the very large and thus far insufficiently explored design space of such systems. Moreover, building on this work, I designed, developed, and tested a system for digital twin assessment in aerospace that coupled gaze-tracking with hand-tracking, achieved via an additional sensor attached to the front of the VR headset, with no need for the user to hold a controller. The analysis of the results of a qualitative study with domain experts allowed me to distill and propose design implications for similar systems. Furthermore, I worked towards designing an effective VR-based visualisation of complex, multidimensional abstract datasets. Here, I developed and evaluated an immersive version of the well-known parallel coordinates plot (IPCP) visualisation technique. A series of qualitative user studies yielded a list of design suggestions for IPCP, as well as tentative evidence that IPCP can be an effective tool for multidimensional data analysis. Lastly, I worked on the design, development, and verification of a system allowing its users to capture information while conducting engineering surveys in VR.
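The abstract above does not spell out how gaze and hand input can be fused at the code level. As a purely illustrative sketch (all function names and thresholds below are hypothetical, not taken from the systems described), one common pattern is to use the gaze ray to point at a data item and a pinch gesture to confirm the selection:

```python
import numpy as np

def point_ray_distance(points, origin, direction):
    """Perpendicular distance of each point from a gaze ray."""
    d = direction / np.linalg.norm(direction)
    rel = points - origin                      # vectors from eye to points
    t = np.clip(rel @ d, 0.0, None)            # projection onto the ray; ignore points behind
    closest = origin + np.outer(t, d)          # nearest point on the ray
    return np.linalg.norm(points - closest, axis=1)

def gaze_pinch_select(points, gaze_origin, gaze_dir, pinch_down, max_angle_deg=3.0):
    """Return the index of the gazed-at point, but only on a pinch event."""
    if not pinch_down:
        return None
    dist = point_ray_distance(points, gaze_origin, gaze_dir)
    depth = np.linalg.norm(points - gaze_origin, axis=1)
    # Angular threshold, so selection tolerance scales with distance.
    within_cone = dist / np.maximum(depth, 1e-6) < np.tan(np.radians(max_angle_deg))
    if not within_cone.any():
        return None
    candidates = np.where(within_cone)[0]
    return candidates[np.argmin(dist[candidates])]

# Example frame: three scatter-plot points; the user pinches while gazing straight ahead.
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [0.0, 1.0, 4.0]])
print(gaze_pinch_select(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]), pinch_down=True))  # -> 0
```

An angular rather than metric threshold is one reasonable design choice here, since it keeps the selection tolerance roughly constant in the user's field of view regardless of how far away a data point sits.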
Furthermore, conducting a meaningful evaluation of immersive analytics interfaces remains an open problem. It is difficult, and often not feasible, to use traditional A/B comparisons in controlled experiments, as the aim of immersive analytics is to provide its users with new insights into their data rather than to optimise more readily quantifiable factors. To this end, I developed a generative process for synthesising clustered datasets for VR analytics experiments that can be used in interface evaluation. I further validated this approach by designing and carrying out two user studies. The statistical analysis of the gathered data revealed that this generative process did indeed result in datasets that can be used in experiments without the datasets themselves being the dominant contributor to the variability between conditions.
Engineering and Physical Sciences Research Council (EPSRC-1788814); Trinity Hall and Cambridge Commonwealth, European & International Trust; Cambridge Philosophical Society.
Research data supporting "Supporting Iterative Virtual Reality Analytics Design and Evaluation by Systematic Generation of Surrogate Clustered Datasets"
Virtual Reality (VR) is a promising technology platform for immersive visual analytics. However, the design space of VR analytics interfaces is vast and difficult to explore using traditional A/B comparisons in controlled experiments. One factor that complicates such comparisons is the dataset. Exposing participants to the same dataset in all conditions introduces a learning effect. On the other hand, using different datasets across experimental conditions introduces the dataset as an uncontrolled variable. In this paper we propose to rectify this by introducing a generative process for synthesizing clustered datasets for VR analytics experiments. This process generates datasets that are distinct while simultaneously allowing systematic comparisons in experiments. In addition, these datasets can also be used in an iterative design process. In a two-part experiment, we demonstrate the validity of the process and show how new insights in VR-based visual analytics can be gained using synthetic datasets.
Here, we provide the Python scripts used to generate the datasets used in the above study, as well as the six datasets (A, B, C, D, E, F) themselves.
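The released scripts are not reproduced here, but the general idea admits a compact sketch. The following is a minimal, assumption-laden illustration (all function names, parameters, and values are ours, not the authors'): datasets generated from different seeds are distinct point sets, yet share cluster count, size, and compactness, so they remain comparable across conditions:

```python
import numpy as np

def make_clustered_dataset(seed, n_clusters=6, points_per_cluster=120,
                           dims=3, spread=0.05, min_separation=0.35):
    """Generate one surrogate clustered dataset.

    Different seeds give distinct point sets, but the shared generating
    parameters keep cluster count, size, and compactness comparable.
    """
    rng = np.random.default_rng(seed)
    centres = []
    while len(centres) < n_clusters:
        c = rng.uniform(0.0, 1.0, size=dims)
        # Reject centres that would visually merge with an existing cluster.
        if all(np.linalg.norm(c - p) >= min_separation for p in centres):
            centres.append(c)
    data, labels = [], []
    for k, c in enumerate(centres):
        data.append(rng.normal(loc=c, scale=spread, size=(points_per_cluster, dims)))
        labels.append(np.full(points_per_cluster, k))
    return np.vstack(data), np.concatenate(labels)

# Six structurally comparable but distinct datasets, one per condition.
datasets = {name: make_clustered_dataset(seed) for seed, name in enumerate("ABCDEF")}
print(datasets["A"][0].shape)  # -> (720, 3)
```

In the actual study, the generation parameters would be tuned and validated, as described above, so that the dataset itself is not the dominant contributor to variability between conditions.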
Towards More Effective VR-Based Presentations of Real-World Assets: Showcasing Mobile MRI to Medical Practitioners and Technicians
The main aim of this research is to examine the benefits of using VR to present mobile MRI. Even though such devices were specifically designed for mobility, it is still often infeasible to transport them across long distances. Hence, new methods are required to help familiarize potential patients and medical personnel with the advantages and shortcomings of mobile MRI. To this end, we developed a system that transports the user to a VR environment populated with a real-scale MRI model.
This research was supported by the project entitled "The product expansion of the `Mobile MRI Clinic` in the Middle East as a pillar of business development for Eurodiagnostic sp. z o.o." under measure 1.2 Internationalisation of SMEs of the Operational Programme Eastern Poland 2014-2020, co-financed by the European Regional Development Fund.
Towards Augmented Reality Guiding Systems: An Engineering Design of an Immersive System for Complex 3D Printing Repair Process
Over the past decade, additive manufacturing (AM) has become widely adopted for both prototyping and, more recently, end-use products. In particular, fused deposition modeling (FDM) is the most widespread form of additive manufacturing due to its low cost, ease of use, and versatility. While additive processes are relatively automated, many steps in their operation and repair require trained human operators. Finding such operators can be difficult, as highlighted during the recent COVID-19 pandemic. Augmented reality (AR) systems could significantly help address this challenge by automating the training of 3D printer operators. Given the multidimensional design choices involved, however, a research gap exists in the system requirements for such immersive guidance. To address this need, we explore the applicability of AR to guide users through a repair process. In that context, we report on the system design as well as the results of an assessment of the AR system in a qualitative study with experts.
EPSRC grant no. EP/V062123/1, Made Smarter Innovation - Research Centre for Connected Factories.
Real-Time Onboard Object Detection for Augmented Reality: Enhancing Head-Mounted Display with YOLOv8
This paper introduces a software architecture for real-time object detection using machine learning (ML) in an augmented reality (AR) environment. Our approach uses the recent state-of-the-art YOLOv8 network running onboard the Microsoft HoloLens 2 head-mounted display (HMD). The primary motivation behind this research is to enable the application of advanced ML models for enhanced perception and situational awareness on a wearable, hands-free AR platform. We describe the image processing pipeline for the YOLOv8 model and the techniques used to make it real-time on the resource-limited edge computing platform of the headset. The experimental results demonstrate that our solution achieves real-time processing without needing to offload tasks to the cloud or any other external servers, while retaining satisfactory accuracy in terms of the standard mAP metric and measured qualitative performance.
The study has been supported by funding provided through an unrestricted gift by Meta.
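The HoloLens 2 integration itself is not described in the abstract. As a hedged, desktop-side sketch of the model-preparation step such a pipeline typically starts from (the input size and "frame.jpg" are placeholders, and this is not the paper's exact pipeline), the standard ultralytics API can export a small YOLOv8 variant to ONNX for an on-device runtime:

```python
from ultralytics import YOLO

# Start from the smallest pretrained variant: onboard compute is the bottleneck.
model = YOLO("yolov8n.pt")

# Export to ONNX so the model can run under an on-device inference runtime;
# a reduced input resolution trades some mAP for frame rate.
model.export(format="onnx", imgsz=320, simplify=True)

# Desktop-side sanity check of the model before deployment.
results = model.predict("frame.jpg", imgsz=320, conf=0.4)
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())
```

Reducing the input resolution is one of the usual levers for trading a little accuracy for frame rate on an edge device, which matches the trade-off the abstract describes.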
Measurement and Inspection of Photo-Realistic 3-D VR Models
Recent advancements in virtual reality (VR) may help unlock the full potential offered by 3-D photorealistic models generated using state-of-the-art photogrammetric methods. Using VR to carry out analyses on photogrammetric models has the potential to assist the user in performing basic offline engineering inspection of digital twins: digitized representations of real-world objects and structures. However, for such benefits to materialize, it is necessary to create suitable interactive systems for working with photogrammetric models in VR. To this end, this article presents PhotoTwinVR, an immersive gesture-controlled system for manipulation and inspection of 3-D photogrammetric models of physical objects in VR. An observational study with three domain experts validates the feasibility of the system design for practical use cases involving offline inspections of pipelines and other 3-D structures.
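The abstract does not detail the inspection operations, but measurement on a photogrammetric model reduces to simple geometry once points have been picked. As a hypothetical sketch (the function and its scale parameter are our illustration, not part of PhotoTwinVR), a polyline measurement tool could look like this:

```python
import numpy as np

def measure(points_world, scale_to_metres=1.0):
    """Polyline length through user-picked points on a photogrammetric model."""
    p = np.asarray(points_world, dtype=float)
    segments = np.linalg.norm(np.diff(p, axis=0), axis=1)  # per-segment lengths
    return segments.sum() * scale_to_metres

# Example: three points pinched along a pipeline section of the model.
picked = [[0.0, 1.2, 3.0], [0.4, 1.2, 3.0], [0.4, 1.5, 3.0]]
print(f"{measure(picked):.2f} m")  # -> 0.70 m
```

The scale factor matters because photogrammetric reconstructions are frequently expressed in arbitrary units and must be calibrated against a known reference length before measurements are meaningful.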
HyperPocket: Generative Point Cloud Completion
Scanning real-life scenes with modern registration devices typically gives incomplete point cloud representations, mostly due to the limitations of the scanning process and 3D occlusions. Therefore, completing such partial representations remains a fundamental challenge for many computer vision applications. Most existing approaches aim to solve this problem by learning to reconstruct individual 3D objects in a synthetic setup of an uncluttered environment, which is far from a real-life scenario. In this work, we reformulate the problem of point cloud completion as an object hallucination task. Thus, we introduce a novel autoencoder-based architecture called HyperPocket that disentangles latent representations and, as a result, enables the generation of multiple variants of the completed 3D point clouds. Furthermore, we split point cloud processing into two disjoint data streams and leverage a hypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the missing object parts. As a result, the generated point clouds are smooth, plausible, and geometrically consistent with the scene. Moreover, our method offers performance competitive with other state-of-the-art models, enabling a plethora of novel applications.
This research was funded by the Priority Research Area Digiworld under the program Excellence Initiative – Research University at the Jagiellonian University in Kraków.
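The published architecture is considerably more elaborate than the abstract can convey; the following is only a toy PyTorch sketch of the hypernetwork idea it names (every class name and dimension is our assumption, not HyperPocket itself): encode the partial cloud, then let a hypernetwork emit the weights of a small target network that maps prior samples into the missing region:

```python
import math
import torch
import torch.nn as nn

class HyperCompleterSketch(nn.Module):
    """Toy illustration of hypernetwork-based completion (not HyperPocket)."""

    def __init__(self, latent=128, hidden=64):
        super().__init__()
        # Per-point encoder with max-pooling: a minimal order-invariant encoder.
        self.encoder = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent))
        # Weight shapes of the generated target network: 3 -> hidden -> 3.
        self.shapes = [(hidden, 3), (hidden,), (3, hidden), (3,)]
        self.hyper = nn.Linear(latent, sum(math.prod(s) for s in self.shapes))

    def forward(self, partial, n_out=256):
        z = self.encoder(partial).max(dim=0).values    # latent code of the scan
        flat, params, i = self.hyper(z), [], 0
        for s in self.shapes:                          # unpack generated weights
            n = math.prod(s)
            params.append(flat[i:i + n].reshape(s))
            i += n
        w1, b1, w2, b2 = params
        prior = torch.rand(n_out, 3)                   # samples from a simple prior
        h = torch.relu(prior @ w1.T + b1)              # run the target network
        return h @ w2.T + b2                           # points filling the "pocket"

completed = HyperCompleterSketch()(torch.rand(500, 3))
print(completed.shape)  # -> torch.Size([256, 3])
```

Training, the reconstruction losses, the two disjoint data streams, and the latent disentanglement described in the abstract are all omitted here.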
Towards Multimodal VR Trainer of Voice Emission and Public Speaking: Work-in-Progress
GlossoVR is a virtual reality (VR) application that combines training in public speaking in front of a virtual audience with voice-emission training through relaxation exercises. It is accompanied by digital signal processing (DSP) and artificial intelligence (AI) modules that provide automatic feedback on the vocal performance as well as the behavior and psychophysiology of the user. In particular, we address parameters of speech emotion, prosody, and timbre, as well as the user's hand gestures and eye movements. The prototype is in the proof-of-concept phase, and we are developing it in accordance with the user-centered design paradigm. This article reports work in progress, focusing on the approaches, datasets, and algorithms applied in the current state of the GlossoVR project.
This work was supported by the Lider program, grant no. 0230/L-11/2019, of the National Center for Research and Development, Poland.
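The abstract names prosody as one of the analysed parameter groups. As a rough sketch of the kind of features such a DSP module might compute (the function name and the specific descriptors are ours, not the project's actual pipeline), standard librosa calls can extract pitch and loudness statistics:

```python
import librosa
import numpy as np

def prosody_summary(wav_path):
    """Crude prosody descriptors of the kind automatic feedback could use:
    pitch (F0) statistics and loudness (RMS energy) variability."""
    y, sr = librosa.load(wav_path, sr=None, mono=True)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[voiced]                          # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]        # frame-wise loudness
    return {
        "f0_median_hz": float(np.nanmedian(f0)),
        "f0_range_hz": float(np.nanpercentile(f0, 95) - np.nanpercentile(f0, 5)),
        "loudness_var": float(np.std(rms)),
    }

# e.g. prosody_summary("rehearsal_take_01.wav")
```

Descriptors such as median pitch, pitch range, and loudness variability are common starting points for automatic feedback on monotone delivery.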