22 research outputs found

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for both input and output devices is market-ready, only a few solutions for everyday VR - online shopping, games, or movies - exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model, based on VR-specific external factors and evaluation metrics such as task performance and user preference. Building on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. Along with this, we summarize our findings in design spaces and guidelines for choosing optimal interfaces and controls in VR.
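    The abstract names task performance as an evaluation metric for VR text entry without specifying how it is computed. As an illustrative sketch only - these are standard conventions from the text-entry literature, not metric definitions taken from the thesis - words per minute and a minimum-string-distance (MSD) error rate can be computed as follows:

    ```python
    # Illustrative only: conventional text-entry evaluation metrics (WPM and
    # MSD error rate), as commonly used when comparing text-entry techniques.
    # These are standard conventions, not code from the thesis itself.

    def words_per_minute(transcribed: str, seconds: float) -> float:
        """WPM, using the convention that one 'word' equals 5 characters."""
        return (len(transcribed) / 5.0) / (seconds / 60.0)

    def msd(a: str, b: str) -> int:
        """Minimum string distance (Levenshtein) between presented and transcribed text."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,               # deletion
                               cur[j - 1] + 1,            # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def error_rate(presented: str, transcribed: str) -> float:
        """MSD error rate, normalised by the longer of the two strings."""
        return msd(presented, transcribed) / max(len(presented), len(transcribed))
    ```

    For example, transcribing a 19-character phrase in 20 seconds yields 11.4 WPM under the 5-characters-per-word convention.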

    Augmented reality at the workplace : a context-aware assistive system using in-situ projection

    Augmented Reality has been used to provide assistance during manual assembly tasks for more than 20 years. Recent improvements in sensor technology make it possible to create context-aware Augmented Reality systems that can detect interaction accurately. Additionally, the increasing number of variants of assembled products, and the ability to manufacture ordered products on demand, lead to increasing complexity at industrial assembly workplaces. The resulting need for cognitive support at workplaces and the availability of robust technology enable us to address real problems by using context-aware Augmented Reality to support workers during assembly tasks. In this thesis, we explore how assistive technology can be used to cognitively support workers in manufacturing scenarios. Following a user-centered design process, we identify key requirements for assistive systems that both continuously support workers and teach assembly steps to workers. In doing so, we analyzed three different user groups: inexperienced workers, experienced workers, and workers with cognitive impairments. Based on the identified requirements, we design a general concept for providing cognitive assistance at workplaces that can be applied to multiple scenarios. To apply the proposed concept, we present four prototypes that use a combination of in-situ projection and cameras to provide feedback to workers and to sense the workers' interaction with the workplace. Two of the prototypes address a manual assembly scenario and two address an order picking scenario. For the manual assembly scenario, we apply the concept to a single workplace and to an assembly cell, which connects three single assembly workplaces to each other. For the order picking scenario, we present a cart-mounted prototype that uses in-situ projection to display picking information directly in the warehouse.
Further, we present a user-mounted prototype, exploring the design dimension of equipping the worker with technology rather than equipping the environment. Beyond the system contribution of this thesis, we explore the benefits of the created prototypes through studies with inexperienced workers, experienced workers, and cognitively impaired workers. We show that a contour visualization of in-situ feedback is the most suitable for cognitively impaired workers. Further, these contour instructions enable cognitively impaired workers to perform assembly tasks with a complexity of up to 96 work steps. For inexperienced workers, we show that a combination of haptic and visual error feedback is appropriate for communicating errors made during assembly tasks. For creating interactive instructions, we introduce and evaluate a Programming by Demonstration approach. Investigating the long-term use of in-situ instructions at manual assembly workplaces, we show that instructions that adapt to the workers' cognitive needs are beneficial, as continuously presenting instructions has a negative impact on the performance of both experienced and inexperienced workers. In the order picking scenario, we show that the cart-mounted in-situ instructions have great potential, as they outperform the paper baseline. Finally, the user-mounted prototype results in a lower perceived cognitive load. Over the course of the studies, we recognized the need for a standardized way of evaluating Augmented Reality instructions. To address this issue, we propose the General Assembly Task Model, which provides two standardized baseline tasks and a noise-free way of evaluating Augmented Reality instructions for assembly tasks. Further, based on the experience we gained from applying our assistive system in real-world assembly scenarios, we identify eight guidelines for designing assistive systems for the workplace.
In conclusion, this thesis provides a basis for understanding how in-situ projection can be used for providing cognitive support at workplaces. It identifies the strengths and weaknesses of in-situ projection for cognitive assistance regarding different user groups. Therefore, the findings of this thesis contribute to the field of using Augmented Reality at the workplace. Overall, this thesis shows that using Augmented Reality for cognitively supporting workers during manual assembly tasks and order picking tasks creates a benefit for the workers when working on cognitively demanding tasks.

    Through the Wardrobe: Exploring the potential of headset augmented reality to provide a Thirdspace immersive media experience

    This research investigates the potential in employing headset augmented reality (AR) for interactive documentary when the contributors are collaboratively involved in the production process. This research has grown out of the intersection of interactive documentary (incorporating methods from documentary production more broadly), immersive media studies, gender studies, and social and visual anthropology. The research explores how headset AR invites a complex interaction amongst the immersant, the physical objects of a place, the physical affordances of the device, and the virtual content that is activated. Headset AR affords a porous ‘diegetic bubble’ that integrates multisensory stimuli with physical and virtual elements in a storyworld. Presenting marginalised voices in a headset AR documentary can facilitate a Thirdspace, a hybrid space where physical materiality and virtual media come together, simultaneously offering potentially radical and transformative ways of understanding and experiencing the world. To investigate the use of AR headsets for interactive documentary, I conducted research through dialogically engaging with both practice and theory. The research has been practice-based through the process of developing, iterating, and exhibiting a headset AR documentary installation, Through the Wardrobe. The production process involved the collaboration of four nonbinary/genderqueer contributors. In addition to contributing their stories, they participated in the processes of interaction design, installation, and exhibition of the work. Feedback from immersants also dynamically shaped the iterative process of exhibiting the installation. Both this written thesis and the resulting practice output, the headset AR installation Through the Wardrobe, demonstrate the rigour of my practice-based research.

    Augmented Reality Assistance for Surgical Interventions using Optical See-Through Head-Mounted Displays

    Augmented Reality (AR) offers an interactive user experience by enhancing the real-world environment with computer-generated visual cues and other perceptual information. It has been applied to different domains, e.g. manufacturing, entertainment, and healthcare, through different AR media. An Optical See-Through Head-Mounted Display (OST-HMD) is specialized hardware for AR, in which computer-generated graphics can be overlaid directly onto the user's normal vision via optical combiners. Using an OST-HMD for surgical intervention has many potential perceptual advantages. As a novel concept, many technical and clinical challenges must be overcome for OST-HMD-based AR to be clinically useful, which motivates the work presented in this thesis. On the technical side, we first investigate the display calibration of the OST-HMD, an indispensable procedure for creating an accurate AR overlay. We propose various methods to reduce user-related error, improve the robustness of the calibration, and remodel the calibration as a 3D-3D registration problem. Secondly, we devise methods and develop a hardware prototype to increase the user's visual acuity for both real and virtual content through the OST-HMD, to aid users in tasks that require high visual acuity, e.g. dental procedures. Thirdly, we investigate the occlusion caused by the OST-HMD hardware, which limits the user's peripheral vision. We propose using alternative indicators to remind the user of unattended environment motion. From the clinical perspective, we identified many clinical use cases where OST-HMD-based AR is potentially helpful, developed applications integrated with current clinical systems, and conducted proof-of-concept evaluations. We first present a "virtual monitor" for image-guided surgery. It can replace real radiology monitors in the operating room, with easier user control and more flexibility in positioning. We evaluated the "virtual monitor" for simulated percutaneous spine procedures.
Secondly, we developed ARssist, an application for the bedside assistant in robotic surgery. With ARssist, the assistant can see the robotic instruments and the endoscope inside the patient's body. We evaluated the efficiency, safety, and ergonomics of the assistant during two typical tasks: instrument insertion and manipulation. Performance for inexperienced users improved significantly with ARssist, and for experienced users the system significantly enhanced their confidence level. Lastly, we developed ARAMIS, which utilizes real-time 3D reconstruction and visualization to aid the laparoscopic surgeon, demonstrating the concept of "X-ray see-through" surgery. Our preliminary evaluation validated the application via a peg transfer task and also showed significant improvement in hand-eye coordination. Overall, we have demonstrated that OST-HMD-based AR applications provide ergonomic improvements, e.g. in hand-eye coordination. In challenging situations or for novice users, these ergonomic improvements lead to improved task performance. With continued community effort, optical see-through augmented reality technology will be a useful interventional aid in the near future.
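    Remodeling display calibration as a 3D-3D registration problem, as mentioned in the abstract above, is the kind of problem that is commonly solved in closed form with the Kabsch method (SVD of the cross-covariance of two corresponding point sets). The sketch below is illustrative only - it shows generic rigid 3D-3D registration under that standard formulation, not the thesis's actual calibration pipeline:

    ```python
    import numpy as np

    def rigid_register(P: np.ndarray, Q: np.ndarray):
        """Find rotation R and translation t minimising ||R @ p_i + t - q_i||
        over corresponding 3D point sets P, Q of shape (N, 3) (Kabsch method)."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids
        H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cQ - R @ cP
        return R, t

    # Usage: recover a known rigid transform from noiseless correspondences.
    rng = np.random.default_rng(0)
    P = rng.normal(size=(10, 3))
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.5, -1.0, 2.0])
    Q = P @ R_true.T + t_true
    R, t = rigid_register(P, Q)
    ```

    With noisy measurements (as in a real calibration), the same closed form yields the least-squares optimal rigid transform over the given correspondences.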