8,776 research outputs found

    Remote and Deviceless Manipulation of Virtual Objects in Mixed Reality

    Deviceless manipulation of virtual objects in mixed reality (MR) environments is technically achievable with the current generation of Head-Mounted Displays (HMDs), as they track finger movements and allow users to control transformations through gestures. However, when objects are manipulated at a distance, and when the transform includes scaling, it is not obvious how to remap the hand motions onto the degrees of freedom of the object. Different solutions have been implemented in software toolkits, but usability issues remain and clear guidelines for interaction design are lacking. We present a user study evaluating three solutions for the remote translation, rotation, and scaling of virtual objects in the real environment without handheld devices. We analyze their usability on the practical task of docking virtual cubes on a tangible shelf from varying distances. The outcomes of our study show that the usability of the methods is strongly affected by the use of separate or integrated control of the degrees of freedom, by the use of the hands in a symmetric or specialized way, by the visual feedback, and by the users' previous experience.
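One common way to approach the remapping problem described in this abstract (a generic sketch, not necessarily one of the three techniques the study evaluates) is to apply a distance-dependent gain to hand translations and to map the change in two-hand pinch distance to uniform scaling. All function names and constants below are hypothetical:

```python
import numpy as np

def remap_translation(hand_delta, object_distance, k=0.5):
    """Scale a hand movement by a gain that grows with the object's
    distance, so remote objects stay reachable (Go-Go-style gain)."""
    gain = 1.0 + k * object_distance  # linear gain; k is a tuning constant
    return np.asarray(hand_delta, dtype=float) * gain

def remap_scale(pinch_dist_now, pinch_dist_start):
    """Map the ratio of current to initial two-hand pinch distance
    to a uniform scale factor, clamped to a sane range."""
    ratio = pinch_dist_now / pinch_dist_start
    return float(np.clip(ratio, 0.1, 10.0))
```

With `k = 0.5`, a 1 cm hand movement moves an object 4 m away by 3 cm; tuning `k` trades precision against reach.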

    Understanding 3D mid-air hand gestures with interactive surfaces and displays: a systematic literature review

    3D gesture-based systems are becoming ubiquitous, and many mid-air hand gestures exist for interacting with digital surfaces and displays. There is no well-defined gesture set for 3D mid-air hand gestures, which makes it difficult to develop applications with consistent gestures. To understand what gestures exist, we conducted the first comprehensive systematic literature review on mid-air hand gestures, following established research methods. The review identified 65 papers in which mid-air hand gestures supported tasks for selection, navigation, and manipulation. We also classified the gestures according to a gesture classification scheme and identified how they have been empirically evaluated. The results provide a richer understanding of which mid-air hand gestures have been designed, implemented, and evaluated in the literature, which can help developers design better user experiences for digital interactive surfaces and displays.

    Gestural interaction in Virtual Environments: user studies and applications

    With the currently available technology, there has been an increased interest in the development of virtual reality (VR) applications, some of which have already become commercial products. This has also raised many issues related to their usability. One of the main challenges is the design of interfaces for interacting with these immersive virtual environments (IVEs), in particular for setups relying on hand tracking as a means of input, and in general for devices that stray from standard choices such as keyboards and mice. Finding appropriate ways to deal with these usability issues is key to the success of VR applications, not only for entertainment but also for professional applications in the medical and engineering fields. Research on this topic has been quite active in recent years, and several problems have been identified as relevant to achieving satisfying usability. This research project tackles some of those critical aspects of interaction in IVEs. It focuses first on object manipulation: an initial evaluation allowed us to highlight the nature of some critical issues, from which we derived design guidelines for manipulation techniques. We proposed a number of solutions and performed user tests to validate them. Second, we verified the feasibility of gestural interfaces by developing and testing a gesture recognition algorithm based on 3D hand trajectories. This work showed promising results in terms of accuracy, hinting at the possibility of a reliable gestural interface for VR applications.
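The trajectory-based recognizer is not specified in detail in this abstract; a minimal template-matching baseline in the same spirit (resample each 3D trajectory, normalize it, and pick the nearest stored template, roughly a 3D analogue of the $1 recognizer) could look like the sketch below. All names are hypothetical:

```python
import numpy as np

def resample(points, n=32):
    """Resample a 3D trajectory to n equidistant points along its arc length."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n)
    return np.stack([np.interp(targets, cum, pts[:, d]) for d in range(3)], axis=1)

def normalize(traj):
    """Translate to the centroid and scale to unit size for position
    and size invariance (rotation invariance is omitted here)."""
    t = traj - traj.mean(axis=0)
    size = np.abs(t).max()
    return t / size if size > 0 else t

def classify(candidate, templates):
    """Return the label of the template with the smallest mean
    point-to-point distance to the candidate trajectory."""
    c = normalize(resample(candidate))
    best, best_d = None, float("inf")
    for label, tpl in templates.items():
        t = normalize(resample(tpl))
        d = np.linalg.norm(c - t, axis=1).mean()
        if d < best_d:
            best, best_d = label, d
    return best
```

A real recognizer would store several templates per gesture class and add rotation alignment; this sketch only illustrates the matching step.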

    Dynamic Composite Data Physicalization Using Wheeled Micro-Robots

    This paper introduces dynamic composite physicalizations, a new class of physical visualizations that use collections of self-propelled objects to represent data. Dynamic composite physicalizations can be used both to give physical form to well-known interactive visualization techniques and to explore new visualizations and interaction paradigms. We first propose a design space characterizing composite physicalizations, based on previous work in the fields of Information Visualization and Human-Computer Interaction. We illustrate dynamic composite physicalizations in two scenarios demonstrating potential benefits for collaboration and decision making, as well as new opportunities for physical interaction. We then describe our implementation using wheeled micro-robots capable of locating themselves and sensing user input, before discussing limitations and opportunities for future work.
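As an illustration of the composite idea (each robot is one unit of a larger glyph), the mapping from data to robot goal positions can be as simple as assigning each robot a slot in a physical bar chart. This is a hypothetical sketch, not the paper's actual control code:

```python
def barchart_targets(values, spacing=0.1, unit=0.05):
    """Map each data value to a column of robot target positions,
    so a swarm of micro-robots physically forms a bar chart.
    Each robot represents `unit` of value (a composite glyph)."""
    targets = []
    for i, v in enumerate(values):
        n_robots = round(v / unit)   # robots needed for this bar
        x = i * spacing              # one column per data value
        for j in range(n_robots):
            targets.append((x, j * spacing))
    return targets
```

A path planner (not shown) would then drive each robot to its assigned slot while avoiding collisions, and re-run the mapping whenever the data changes.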

    Spherical tangible user interfaces in mixed reality

    The popularity of virtual reality (VR) and augmented reality (AR) has grown rapidly in recent years, both in academia and in commercial applications. This is rooted in technological advances and affordable head-mounted displays (HMDs). Whether in games or professional applications, HMDs allow for immersive audio-visual experiences that transport users to compelling digital worlds or convincingly augment the real world. However, as true to life as these experiences have become in a visual and auditory sense, the question remains how we can model interaction with these virtual environments in an equally natural way. Solutions providing intuitive tangible interaction would have the potential to make the mixed reality (MR) spectrum fundamentally more accessible, especially for novice users. Research on tangible user interfaces (TUIs) has pursued this goal by coupling virtual objects to real-world objects, and tangible interaction has been shown to provide significant advantages for numerous use cases. Spherical tangible user interfaces (STUIs) present a special case of these devices, mainly due to their ability to fully embody any spherical virtual content. In general, spherical devices are increasingly transitioning from mere technology demonstrators to usable multi-modal interfaces. In this dissertation, we explore the application of STUIs in MR environments, primarily by comparing them to state-of-the-art input techniques in four different contexts, thereby investigating questions of embodiment, overall user performance, and the ability of STUIs relying on their shape alone to support complex interaction techniques. First, we examine how spherical devices can embody immersive visualizations. In an initial study, we test the practicality of a tracked sphere embodying three kinds of visualizations. We examine simulated multi-touch interaction on a spherical surface and compare two different sphere sizes to VR controllers.
Results confirmed our prototype's viability and indicate improved pattern recognition and advantages for the smaller sphere. Second, to further substantiate VR as a prototyping technology, we demonstrate how a large tangible spherical display can be simulated in VR. We show how VR can fundamentally extend the capabilities of real spherical displays by adding physical rotation to a simulated multi-touch surface. After a first study evaluating the general viability of simulating such a display in VR, our second study revealed the superiority of a rotating spherical display. Third, we present a concept for a spherical input device for tangible AR (TAR). We show how such a device can provide basic object manipulation capabilities utilizing two different modes and compare it to controller techniques with increasing hardware complexity. Our results show that our button-less sphere-based technique is only outperformed by a mode-less controller variant that uses physical buttons and a touchpad. Fourth, to study the intrinsic problem of VR locomotion, we explore two opposing approaches: a continuous and a discrete technique. For the first, we demonstrate a spherical locomotion device supporting two different locomotion paradigms that propel a user's first-person avatar accordingly. We found that a position control paradigm applied to a sphere performed mostly superior in comparison to button-supported controller interaction. For discrete locomotion, we evaluate the concept of a spherical world in miniature (SWIM) used for avatar teleportation in a large virtual environment. Results showed that users subjectively preferred the sphere-based technique over regular controllers and on average, achieved lower task times and higher accuracy. 
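The spherical world-in-miniature (SWIM) teleport described above can be thought of as a change of scale and origin: assuming the virtual environment is projected onto the miniature sphere's surface, a point the user picks on the handheld sphere maps to the corresponding full-scale location. A hypothetical sketch, not the thesis' actual implementation:

```python
import numpy as np

def swim_teleport(point_on_sphere, sphere_center, sphere_radius,
                  world_center, world_radius):
    """Map a point picked on a handheld miniature sphere to the
    corresponding position in the full-scale environment, assuming
    the world is projected onto the sphere's surface."""
    direction = (np.asarray(point_on_sphere, dtype=float)
                 - np.asarray(sphere_center, dtype=float)) / sphere_radius
    return np.asarray(world_center, dtype=float) + direction * world_radius
```

The avatar is then teleported to the returned position; the same scale factor could animate a zooming transition instead of an instant jump.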
    To conclude the thesis, we discuss our findings, insights, and their contributions to our central research questions, deriving recommendations for designing techniques based on spherical input devices and an outlook on the future development of spherical devices in the mixed reality spectrum.

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for both input and output devices is market-ready, only a few solutions for everyday VR (online shopping, games, or movies) exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model, based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. We summarize our findings in design spaces and guidelines for choosing optimal interfaces and controls in VR.

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or a mid-air interface, affects it directly. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation characterizing the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility in designing user interfaces more generally.
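Raskin's definition can be made concrete in a few lines: the same drag input is dispatched to a different action depending on the active mode, and a mode switch (e.g. a barehand gesture) merely rebinds that dispatch. All names below are hypothetical:

```python
# The same input ("drag") produces different results depending on
# the active mode -- Raskin's definition in miniature.
ACTIONS = {
    "draw":   lambda dx, dy: f"line by ({dx}, {dy})",
    "pan":    lambda dx, dy: f"canvas moved by ({dx}, {dy})",
    "select": lambda dx, dy: f"rubber-band to ({dx}, {dy})",
}

class Canvas:
    def __init__(self):
        self.mode = "draw"           # current mode

    def switch_mode(self, mode):     # the mode switch itself, e.g. a gesture
        if mode not in ACTIONS:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def drag(self, dx, dy):          # identical input, mode-dependent result
        return ACTIONS[self.mode](dx, dy)
```

The time cost the thesis measures is precisely the `switch_mode` step: how long the user's switching action (a touch posture, a mid-air gesture) takes before productive input resumes.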