
    VeLight: A 3D virtual reality tool for CT-based anatomy teaching and training

    Abstract: For doctors and other medical professionals, the human body is the focus of daily practice. A solid understanding of how it is built up, that is, the anatomy of the human body, is essential for safe medical practice. Current anatomy education relies either on textbooks or on dissecting human cadavers, with textbooks being the most traditional way to learn anatomy because of the cost of the alternatives. However, printed media offer only a 2D view of a part of the human body. Although dissection of human cadavers allows more direct observation of and interaction with human bodies, it is extremely costly because of the need to preserve the bodies and maintain dissection rooms. To address this, we developed VeLight, a system with which students can learn anatomy from CT datasets using a 3D virtual reality display (zSpace). VeLight offers simple and intuitive interactions and allows teachers to design their own courses using their own material. The system offers an interactive, depth-perceptive learning experience and improves the learning process. We conducted an informal user study to validate the effectiveness of VeLight. The results show that participants were able to learn and remember how to work with VeLight very quickly. All participants reported enthusiasm for the potential of VeLight in medical education.
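
    As a rough illustration of the kind of preprocessing such a CT-based viewer needs (not a description of VeLight's actual pipeline), the sketch below extracts an iso-surface mesh from a CT volume with marching cubes. The volume array, voxel spacing, and the Hounsfield threshold for bone are assumptions made only for illustration.

```python
# A minimal sketch (not the VeLight implementation) of turning a CT dataset
# into 3D geometry that a stereoscopic anatomy viewer could display.
import numpy as np
from skimage import measure

def ct_to_mesh(volume: np.ndarray, spacing=(1.0, 1.0, 1.0), hu_threshold=300.0):
    """Extract an iso-surface (e.g. bone) from a CT volume given in Hounsfield units."""
    verts, faces, normals, _ = measure.marching_cubes(
        volume, level=hu_threshold, spacing=spacing
    )
    return verts, faces, normals

if __name__ == "__main__":
    # Synthetic stand-in for a CT scan: a bright sphere inside a dark volume.
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    fake_ct = np.where(x**2 + y**2 + z**2 < 20**2, 400.0, -1000.0)
    verts, faces, _ = ct_to_mesh(fake_ct)
    print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```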

    From Industry to Practice: Can Users Tackle Domain Tasks with Augmented Reality?

    Augmented Reality (AR) is a cutting-edge interactive technology. While Virtual Reality (VR) is based on completely virtual and immersive environments, AR superimposes virtual objects onto the real world. The value of AR has been demonstrated and applied in numerous industrial application areas due to its capability of providing interactive interfaces to visualized digital content. AR can provide functional tools that support users in undertaking domain-related tasks, in particular facilitating data visualization and interaction by jointly augmenting physical space and user perception. Making effective use of the advantages of AR, especially its ability to augment human vision to help users perform different domain-related tasks, is the central part of my PhD research.

    Industrial process tomography (IPT), a non-intrusive and commonly used imaging technique, has been effectively harnessed in many manufacturing components for inspection, monitoring, product quality control, and safety. IPT underpins and facilitates the extraction of qualitative and quantitative data about the related industrial processes, which is usually visualized in various ways so that users can understand the nature of the process, measure critical process characteristics, and implement process control in a complete feedback network. The adoption of AR to benefit IPT and its related fields is still scarce, leaving a gap between AR techniques and industrial applications. This thesis establishes a bridge between AR practitioners and IPT users in four stages. The first is a need-finding study of how IPT users can harness AR techniques. The second proposes a conceptualized AR framework, together with the implemented mobile AR application for an optical see-through (OST) head-mounted display (HMD). The third investigates a complete approach for IPT users to interact with tomographic visualizations, together with a corresponding user study.

    In the fourth stage, building on the technologies shared from industry, we propose and examine an AR approach for visual search tasks that provides visual hints, audio hints, and gaze-assisted instant post-task feedback. The target case was a book-searching task, in which we aimed to explore the effect of the hints and the feedback with two hypotheses: that both visual and audio hints positively affect AR search tasks, with their combination outperforming either alone; and that instant post-task feedback positively affects AR search tasks. The proof of concept was demonstrated by an AR app on an HMD with a two-stage user evaluation. The first stage was a pilot study (n=8), which identified the impact of the visual hint on search-task performance. The second was a comprehensive user study (n=96) consisting of two sub-studies, Study I (n=48) and Study II (n=48). Following quantitative and qualitative analysis, our results partially verified the first hypothesis and fully verified the second, allowing us to conclude that the combination of visual and audio hints conditionally improves AR search-task efficiency when coupled with task feedback.
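
    A hedged sketch of the kind of quantitative comparison such a study involves: search-task completion times compared across hint conditions with a one-way ANOVA and a pairwise follow-up test. The condition names and the synthetic data below are placeholders, not the thesis's results.

```python
# Illustrative analysis only: synthetic completion times per hint condition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
conditions = {
    "no_hint":      rng.normal(60, 10, 24),   # seconds to find the target book
    "visual_hint":  rng.normal(45, 10, 24),
    "audio_hint":   rng.normal(50, 10, 24),
    "visual_audio": rng.normal(40, 10, 24),
}

f_stat, p_value = stats.f_oneway(*conditions.values())
print(f"one-way ANOVA across hint conditions: F={f_stat:.2f}, p={p_value:.4f}")

# Pairwise follow-up (combined hints vs. visual only), Welch's t-test:
t, p = stats.ttest_ind(conditions["visual_audio"], conditions["visual_hint"],
                       equal_var=False)
print(f"visual+audio vs. visual only: t={t:.2f}, p={p:.4f}")
```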

    RealitySketch: Embedding Responsive Graphics and Visualizations in AR through Dynamic Sketching

    We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, an increasing number of AR sketching tools have enabled users to draw and embed sketches in the real world. However, with current tools, sketched content is inherently static, floating in mid-air without responding to the real world. This paper introduces a new way to embed dynamic and responsive graphics in the real world. In RealitySketch, the user draws graphical elements on a mobile AR screen and binds them to physical objects in real-time, improvisational ways, so that the sketched elements move with the corresponding physical motion. The user can also quickly visualize and analyze real-world phenomena through responsive graph plots or interactive visualizations. This paper contributes a set of interaction techniques that enable capturing, parameterizing, and visualizing real-world motion without pre-defined programs and configurations. Finally, we demonstrate our tool with several application scenarios, including physics education, sports training, and in-situ tangible interfaces. Comment: UIST 202
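
    A minimal sketch, under assumed data structures, of the core binding idea described in the abstract: a sketched element bound to a tracked physical object so that it follows the object's motion while a bound parameter is sampled for a responsive plot. TrackedObject, BoundSketchLine, and their fields are hypothetical, not RealitySketch's API.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    name: str
    position: tuple  # (x, y) from the AR tracker, in screen/world coordinates

@dataclass
class BoundSketchLine:
    anchor: TrackedObject          # physical referent the sketch is bound to
    offset: tuple = (0.0, 0.0)     # where the line was drawn relative to the object
    samples: list = field(default_factory=list)  # parameter history for plotting

    def update(self, t: float):
        # The sketched line follows the object's current tracked position...
        x, y = self.anchor.position
        start = (x + self.offset[0], y + self.offset[1])
        # ...and the bound parameter (here, vertical position) is sampled for the graph.
        self.samples.append((t, y))
        return start

pendulum = TrackedObject("pendulum_bob", position=(0.0, 1.0))
line = BoundSketchLine(anchor=pendulum, offset=(0.1, 0.0))
for t, y in enumerate([1.0, 0.8, 0.5, 0.8, 1.0]):   # fake tracked motion
    pendulum.position = (0.0, y)
    line.update(float(t))
print(line.samples)  # time series a responsive graph plot could visualize
```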

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.
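
    One of the spatial device interactions mentioned above can be illustrated with a small, hedged sketch: deriving a cutting plane for a volume visualization from the pose of a handheld device. The pose values are assumed inputs; tracking and rendering are out of scope, and only the plane math is shown.

```python
import numpy as np

def cutting_plane_from_pose(device_position, device_rotation):
    """device_rotation: 3x3 matrix; the device's local +Z axis defines the plane normal."""
    normal = device_rotation @ np.array([0.0, 0.0, 1.0])
    normal /= np.linalg.norm(normal)
    d = -float(normal @ device_position)      # plane equation: n·x + d = 0
    return normal, d

def is_clipped(voxel_center, normal, d):
    """Voxels on the positive side of the plane are hidden from the rendering."""
    return float(normal @ voxel_center) + d > 0.0

# Example: device held at (0.2, 1.1, 0.4), tilted 30 degrees around X.
theta = np.radians(30)
rot_x = np.array([[1, 0, 0],
                  [0, np.cos(theta), -np.sin(theta)],
                  [0, np.sin(theta),  np.cos(theta)]])
n, d = cutting_plane_from_pose(np.array([0.2, 1.1, 0.4]), rot_x)
print(n, d, is_clipped(np.array([0.0, 1.5, 0.0]), n, d))
```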

    Design Patterns for Situated Visualization in Augmented Reality

    Situated visualization has become an increasingly popular research area in the visualization community, fueled by advances in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns that summarize common approaches to visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines that explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows. Comment: To appear in IEEE VIS 202

    Exploration of smart infrastructure for drivers of autonomous vehicles

    The connection between vehicles and infrastructure is an integral part of providing autonomous vehicles with information about their environment. Autonomous vehicles need to be safe, and users need to trust their driving decisions. When smart infrastructure information is integrated into the vehicle, the driver needs to be informed in an understandable manner about what the smart infrastructure detected. Nevertheless, interactions that benefit from smart infrastructure have not been the focus of research, leaving knowledge gaps in the integration of smart infrastructure information into the vehicle. For example, it is unclear how information from two complex systems can be presented and, if decisions are made, how these can be explained. Enriching vehicle data with information from the infrastructure opens unexplored opportunities. Smart infrastructure provides vehicles with information to predict traffic flow and traffic events. Additionally, it has information about traffic events several kilometers away and thus enables a look ahead at traffic situations that are not in the driver's immediate view. We argue that this smart infrastructure information can be used to enhance the driving experience. To achieve this, we explore designing novel interactions, providing warnings and visualizations about information that is out of the driver's view, and offering explanations for the cause of changed driving behavior of the vehicle. This thesis explores the possibilities of smart infrastructure information, with a focus on the highway. The first part establishes a design space for 3D in-car augmented reality applications that profit from smart infrastructure information. Based on the input of two focus groups and a literature review, use cases are identified that can be introduced into the vehicle's interaction interface and that, among others, rely on environment information. From these, a design space that can be used to design novel in-car applications is derived. The second part explores out-of-view visualizations before and during take-over requests to increase situation awareness. In three studies, different visualizations for out-of-view information are implemented in 2D, stereoscopic 3D, and augmented reality. Our results show that these visualizations improve situation awareness of critical events at larger distances during take-over request situations. In the third part, explanations are designed for situations in which the vehicle drives unexpectedly for reasons unknown to the driver. Since smart infrastructure can provide connected vehicles with out-of-view or cloud information, the vehicle's driving maneuver might remain unclear to the driver. Therefore, we explore the needs of drivers in those situations and derive design recommendations for an interface that displays the cause of the unexpected driving behavior. This thesis answers questions about the integration of environment information in vehicles. Three important aspects are explored, which are essential to consider when implementing use cases with smart infrastructure in mind. It enables the design of novel interactions, provides insights into how out-of-view visualizations can improve drivers' situation awareness, and explores unexpected driving situations and the design of explanations for them.
Overall, we have shown how infrastructure and connected-vehicle information can be introduced into the vehicle's user interface and how new technology such as augmented reality glasses can be used to improve the driver's perception of the environment.
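
    A minimal sketch, with assumed coordinates and an assumed field-of-view value, of one building block behind such out-of-view visualizations: deciding whether a traffic event lies outside the driver's view and, if so, on which side an AR edge indicator should appear.

```python
import math

def out_of_view_indicator(vehicle_xy, vehicle_heading_deg, event_xy, fov_deg=120.0):
    dx = event_xy[0] - vehicle_xy[0]
    dy = event_xy[1] - vehicle_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))                      # world-frame bearing to event
    relative = (bearing - vehicle_heading_deg + 180) % 360 - 180    # wrap to [-180, 180)
    distance_m = math.hypot(dx, dy)
    if abs(relative) <= fov_deg / 2:
        return {"in_view": True, "distance_m": distance_m}
    side = "left" if relative > 0 else "right"
    return {"in_view": False, "side": side, "relative_deg": relative,
            "distance_m": distance_m}

# Accident roughly 2 km ahead and slightly left of a vehicle heading east (0 degrees).
print(out_of_view_indicator((0, 0), 0.0, (1950, 420)))
print(out_of_view_indicator((0, 0), 0.0, (-300, 800)))   # behind the driver: needs an indicator
```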

    Augmented Reality Visualization for Image-Guided Surgery: A Validation Study Using a Three-Dimensional Printed Phantom

    Background: Oral and maxillofacial surgery currently relies on virtual surgery planning based on image data (CT, MRI). Three-dimensional (3D) visualizations are typically used to plan and predict the outcome of complex surgical procedures. To translate the virtual surgical plan to the operating room, it is either converted into physical 3D-printed guides or directly translated using real-time navigation systems. Purpose: This study aims to improve the translation of the virtual surgery plan to a surgical procedure, such as oncologic or trauma surgery, in terms of accuracy and speed. We report an augmented reality visualization technique for image-guided surgery and describe how surgeons can visualize and interact with the virtual surgery plan and navigation data while in the operating room. User friendliness and usability were assessed in a formal user study that compared our augmented reality assisted technique to the gold-standard setup of a perioperative navigation system (Brainlab). Moreover, the accuracy of typical navigation tasks, such as reaching landmarks and following trajectories, was compared. Results: Overall completion time of navigation tasks was 1.71 times faster using augmented reality (P = .034). Accuracy improved significantly using augmented reality (P < .001); for reaching physical landmarks, a less strong correlation was found (P = .087). Although the participants were relatively unfamiliar with VR/AR (rated 2.25/5) and gesture-based interaction (rated 2/5), they reported that navigation tasks became easier to perform using augmented reality (difficulty rated 3.25/5 for Brainlab vs. 2.4/5 for the HoloLens). Conclusion: The proposed workflow can be used in a wide range of image-guided surgery procedures as an addition to existing verified image guidance systems. The results of this user study imply that our technique enables typical navigation tasks to be performed faster and more accurately than the current gold standard. In addition, qualitative feedback on our augmented reality assisted technique was more positive than for the standard setup.
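
    A hedged illustration (not necessarily the authors' implementation) of a standard step in translating a virtual surgical plan to the operating room: rigidly registering planned landmark positions to landmarks measured on the physical phantom, using the SVD-based Kabsch method.

```python
import numpy as np

def rigid_register(plan_pts: np.ndarray, measured_pts: np.ndarray):
    """Return rotation R and translation t that map plan_pts onto measured_pts."""
    p_mean, m_mean = plan_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (plan_pts - p_mean).T @ (measured_pts - m_mean)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = m_mean - R @ p_mean
    return R, t

# Planned landmark positions (mm) and their simulated measurements on the phantom.
plan = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
angle = np.radians(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
measured = plan @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_register(plan, measured)
fre = np.linalg.norm(plan @ R.T + t - measured, axis=1).mean()
print(f"mean fiducial registration error: {fre:.6f} mm")
```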

    RL-LABEL: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios

    Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them occlusion-free while keeping visual linkings legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective at managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is because they focus on generating layouts that are optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-LABEL, a deep reinforcement learning-based method for managing the placement of AR labels in scenarios involving moving objects. RL-LABEL considers the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. Our experiments on two real-world datasets show that RL-LABEL effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that RL-LABEL excels over the baselines in aiding users to identify, compare, and summarize data on AR labels within dynamic scenes.
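
    A hedged sketch of how the penalties mentioned above might be combined into a per-step reward for such an agent, covering label occlusions and label movement distance (leader-line intersections omitted for brevity). The weights, state fields, and helper functions are illustrative assumptions, not the paper's exact formulation.

```python
from dataclasses import dataclass
from typing import List, Tuple

Rect = Tuple[float, float, float, float]  # x, y, width, height in screen space

@dataclass
class LabelState:
    rect: Rect
    prev_rect: Rect
    anchor: Tuple[float, float]   # screen position of the labeled object

def overlap(a: Rect, b: Rect) -> bool:
    return not (a[0] + a[2] <= b[0] or b[0] + b[2] <= a[0] or
                a[1] + a[3] <= b[1] or b[1] + b[3] <= a[1])

def step_reward(labels: List[LabelState], w_occ=1.0, w_move=0.1) -> float:
    # Penalize pairwise label-label occlusions...
    occlusions = sum(overlap(a.rect, b.rect)
                     for i, a in enumerate(labels) for b in labels[i + 1:])
    # ...and how far labels moved since the previous frame (layout stability).
    movement = sum(abs(l.rect[0] - l.prev_rect[0]) + abs(l.rect[1] - l.prev_rect[1])
                   for l in labels)
    return -(w_occ * occlusions + w_move * movement)

labels = [
    LabelState(rect=(0, 0, 10, 4), prev_rect=(0, 0, 10, 4), anchor=(5, -2)),
    LabelState(rect=(8, 2, 10, 4), prev_rect=(20, 2, 10, 4), anchor=(12, 8)),
]
print(step_reward(labels))   # negative: one overlap plus movement cost
```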