17 research outputs found

    Designing Disambiguation Techniques for Pointing in the Physical World

    Several ways of selecting physical objects exist, including touching them and pointing at them. Allowing the user to interact at a distance by pointing at physical objects is challenging when the environment contains a large number of interactive objects, possibly occluded by other everyday items. Previous pointing techniques have highlighted the need for disambiguation. Addressing this challenge, this paper contributes a design space that organizes, along groups and axes, a set of options that designers can use to (1) describe, (2) classify, and (3) design disambiguation techniques. First, we have not yet found a technique in the literature that our design space cannot describe. Second, every technique traces a different path along the axes of our design space. Third, the design space allows several new, as-yet unexplored paths/solutions to be defined. We illustrate this generative power with the example of one such technique, Physical Pointing Roll (P2Roll).
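
    To make the disambiguation problem concrete, here is a minimal Python sketch (all names hypothetical) of the first stage such techniques share: collecting every object near the pointing ray as a candidate list, which a disambiguation step, such as P2Roll's roll gesture, then narrows to a single target. Geometry is simplified to object centers; this illustrates the general idea, not the paper's implementation.

    import math

    def candidate_objects(origin, direction, objects, cone_deg=5.0):
        """Collect all objects within a selection cone around a pointing ray.

        A disambiguation technique (e.g. cycling through candidates with a
        wrist roll) can then refine this list to a single target. `objects`
        is a list of (name, center) tuples; centers stand in for geometry.
        """
        dx, dy, dz = direction
        norm = math.sqrt(dx*dx + dy*dy + dz*dz)
        dx, dy, dz = dx/norm, dy/norm, dz/norm
        candidates = []
        for name, (cx, cy, cz) in objects:
            vx, vy, vz = cx-origin[0], cy-origin[1], cz-origin[2]
            dist = math.sqrt(vx*vx + vy*vy + vz*vz)
            if dist == 0:
                continue
            # Angle between the pointing ray and the direction to the object.
            cos_a = (vx*dx + vy*dy + vz*dz) / dist
            angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
            if angle <= cone_deg:
                candidates.append((angle, dist, name))
        # Nearest-to-ray first; a disambiguation step resolves the rest.
        return [name for angle, dist, name in sorted(candidates)]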

    Dense and Dynamic 3D Selection for Game-Based Virtual Environments


    One view is not enough: review of and encouragement for multiple and alternative representations in 3D and immersive visualisation

    The opportunities for 3D visualisations are huge. People can be immersed inside their data, interface with it in natural ways, and see it in ways that are not possible on a traditional desktop screen. Indeed, 3D visualisations, especially those viewed inside head-mounted displays, are becoming popular. Much of this growth is driven by the availability, popularity and falling cost of head-mounted displays and other immersive technologies. However, there are also challenges. For example, data visualisation objects can be obscured, important facets missed (perhaps behind the viewer), and the interfaces may be unfamiliar. Some of these challenges are not unique to 3D immersive technologies. Indeed, developers of traditional 2D exploratory visualisation tools have long used alternative views in multiple coordinated view (MCV) systems. Coordinated view interfaces help users explore the richness of the data. For instance, an alphabetical list of people in one view shows everyone in the database, while a map view depicts where they live; each view serves a different task or purpose. While it is possible to translate some desktop interface techniques into the 3D immersive world, it is not always clear what the equivalences would be. In this paper, using several case studies, we discuss the challenges and opportunities of using multiple views in immersive visualisation. Our aim is to provide a set of concepts that will enable developers to think critically and creatively, and to push the boundaries of what is possible with 3D and immersive visualisation. In summary, developers should consider how to integrate many views, techniques and presentation styles: one view is not enough when using 3D and immersive visualisations.
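
    As a concrete illustration of the coordination idea behind MCV systems, the following Python sketch (all names hypothetical) shows the usual observer-style wiring: a shared selection is broadcast to every registered view, so picking a person in the list view also highlights them in the map view. It is a toy model, not any particular system's API.

    class View:
        """Minimal coordinated view: reacts when the shared selection changes."""
        def __init__(self, name, coordinator):
            self.name = name
            coordinator.register(self)
        def on_selection(self, items):
            print(f"{self.name}: highlighting {sorted(items)}")

    class Coordinator:
        """Broadcasts selection changes to every registered view."""
        def __init__(self):
            self.views = []
            self.selection = set()
        def register(self, view):
            self.views.append(view)
        def select(self, items):
            self.selection = set(items)
            for view in self.views:
                view.on_selection(self.selection)

    coordinator = Coordinator()
    View("alphabetical list", coordinator)
    View("map", coordinator)
    coordinator.select({"Ada Lovelace"})  # both views update together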

    Spherical tangible user interfaces in mixed reality

    The popularity of virtual reality (VR) and augmented reality (AR) has grown rapidly in recent years, both in academia and in commercial applications. This growth is rooted in technological advances and affordable head-mounted displays (HMDs). Whether in games or professional applications, HMDs allow for immersive audio-visual experiences that transport users to compelling digital worlds or convincingly augment the real world. However, as true to life as these experiences have become visually and auditorily, the question remains how we can model interaction with these virtual environments in an equally natural way. Solutions providing intuitive tangible interaction would have the potential to make the mixed reality (MR) spectrum fundamentally more accessible, especially for novice users. Research on tangible user interfaces (TUIs) has pursued this goal by coupling virtual to real-world objects, and tangible interaction has been shown to provide significant advantages for numerous use cases. Spherical tangible user interfaces (STUIs) present a special case of these devices, mainly due to their ability to fully embody any spherical virtual content. In general, spherical devices are increasingly transitioning from mere technology demonstrators to usable multi-modal interfaces. In this dissertation, we explore the application of STUIs in MR environments, primarily by comparing them to state-of-the-art input techniques in four different contexts, thereby investigating questions of embodiment, overall user performance, and the ability of STUIs, relying on their shape alone, to support complex interaction techniques. First, we examine how spherical devices can embody immersive visualizations. In an initial study, we test the practicality of a tracked sphere embodying three kinds of visualizations. We examine simulated multi-touch interaction on a spherical surface and compare two different sphere sizes to VR controllers. Results confirmed our prototype's viability and indicated improved pattern recognition as well as advantages for the smaller sphere. Second, to further substantiate VR as a prototyping technology, we demonstrate how a large tangible spherical display can be simulated in VR, and we show how VR can fundamentally extend the capabilities of real spherical displays by adding physical rotation to a simulated multi-touch surface. After a first study evaluating the general viability of simulating such a display in VR, our second study revealed the superiority of a rotating spherical display. Third, we present a concept for a spherical input device for tangible AR (TAR). We show how such a device can provide basic object manipulation capabilities using two different modes, and we compare it to controller techniques of increasing hardware complexity. Our results show that our button-less, sphere-based technique is outperformed only by a mode-less controller variant that uses physical buttons and a touchpad. Fourth, to study the intrinsic problem of VR locomotion, we explore two opposing approaches: a continuous and a discrete technique. For the first, we demonstrate a spherical locomotion device supporting two locomotion paradigms that propel a user's first-person avatar accordingly. We found that a position-control paradigm applied to a sphere mostly performed better than button-supported controller interaction. For discrete locomotion, we evaluate the concept of a spherical world in miniature (SWIM) used for avatar teleportation in a large virtual environment.
Results showed that users subjectively preferred the sphere-based technique over regular controllers and, on average, achieved lower task times and higher accuracy. To conclude the thesis, we discuss our findings, insights, and the resulting contributions to our central research questions, deriving recommendations for designing techniques based on spherical input devices and an outlook on the future development of spherical devices in the mixed reality spectrum.
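
    As an illustration of the position-control locomotion paradigm described above, the following Python sketch (assumed radius and sign conventions, hypothetical names) maps a tracked sphere's roll, given as an axis-angle rotation, to a ground-plane avatar displacement via the rolling relation distance = radius × angle. It sketches the general idea, not the dissertation's implementation.

    import math

    SPHERE_RADIUS = 0.15  # meters; assumed size of the handheld sphere

    def roll_to_translation(axis, angle_rad, radius=SPHERE_RADIUS):
        """Map a sphere rotation (axis-angle) to ground-plane avatar motion.

        Rolling a sphere of radius r by angle theta moves its contact point
        by r * theta; the motion direction is perpendicular to the horizontal
        component of the rotation axis (right-hand rule, y is up).
        """
        ax, ay, az = axis
        horiz = math.hypot(ax, az)      # ignore yaw spin about the y axis
        if horiz == 0.0:
            return (0.0, 0.0)
        distance = radius * angle_rad * horiz
        # Ground-plane (x, z) displacement for the avatar.
        return (-az / horiz * distance, ax / horiz * distance)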

    A Voice and Pointing Gesture Interaction System for Supporting Human Spontaneous Decisions in Autonomous Cars

    Autonomous cars are expected to improve road safety, traffic and mobility. It is projected that fully autonomous vehicles will be on the market within the next 20-30 years. Advances in the research and development of this technology will allow humans to disengage from the driving task, which will become the responsibility of the vehicle intelligence. In this scenario, new vehicle interior designs are proposed, enabling more flexible human-vehicle interactions. In addition, as some important stakeholders propose, control elements such as the steering wheel and the accelerator and brake pedals may no longer be needed. However, this disengagement of user control is one of the main issues for user acceptance of the technology: users do not seem comfortable with the idea of giving all decision power to the vehicle. In addition, there can be location-awareness situations where the user makes a spontaneous decision and requires some type of vehicle control, such as stopping at a particular point of interest or taking a detour from the car's pre-calculated autonomous route. Vehicle manufacturers maintain the steering wheel as a control element, allowing the driver to take over the vehicle if needed or wanted, which constrains the previously mentioned interaction flexibility. Thus, there is an unsolved dilemma between giving users enough control over the autonomous vehicle and its route to make spontaneous decisions, and preserving interaction flexibility inside the car. This dissertation proposes a voice and pointing gesture human-vehicle interaction system to solve this dilemma. Voice and pointing gestures have been identified as natural interaction techniques for guiding and commanding mobile robots, potentially providing the needed user control over the car; moreover, they can be performed anywhere inside the vehicle, enabling interaction flexibility. The objective of this dissertation is to provide a strategy to support this system. To this end, a method based on pointing-ray intersections is developed to compute the point of interest (POI) that the user is pointing to. Simulation results show that this POI computation method outperforms the traditional ray-casting-based approach by 76.5% in cluttered environments and by 36.25% in combined cluttered and non-cluttered scenarios. The whole system is developed and demonstrated in a robotics simulator framework. The simulations show how voice and pointing commands performed by the user update the predefined autonomous path according to the recognized command semantics. In addition, a dialog feedback strategy is proposed to resolve conflicting situations such as ambiguity in POI identification. This additional step resolves the previously mentioned POI computation inaccuracies and allows the user to confirm, correct or reject commands in case the system misunderstands them.
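
    The dissertation's POI method is based on intersecting multiple pointing rays; a standard way to realize that idea is the least-squares point closest to all rays, sketched below in Python/NumPy (this exact formulation is an assumption for illustration, not necessarily the author's).

    import numpy as np

    def estimate_poi(origins, directions):
        """Least-squares point closest to a set of pointing rays.

        Each ray i is (o_i, d_i) with d_i a unit vector. The point p that
        minimizes the summed squared distance to all rays solves the 3x3
        linear system  sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i.
        With repeated pointing gestures at the same target, this can be more
        robust in clutter than intersecting a single cast ray with geometry.
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = np.asarray(d, float)
            d /= np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)  # projector onto plane normal to d
            A += P
            b += P @ np.asarray(o, float)
        return np.linalg.solve(A, b)

    # Two rays converging at (1, 1, 0):
    poi = estimate_poi([(0, 0, 0), (2, 0, 0)],
                       [(1, 1, 0), (-1, 1, 0)])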

    LenSelect: Object Selection in Virtual Environments by Dynamic Object Scaling

    We present a novel selection technique for VR called LenSelect. The main idea is to decrease the Index of Difficulty (ID) according to Fitts' Law by dynamically increasing the size of the potentially selectable objects. This facilitates the selection process, especially for small, distant or partly occluded objects, but also for moving targets. In order to evaluate our method, we have defined a set of test scenarios that covers a broad range of use cases, in contrast to the simpler scenes often used. Our test scenarios include practically relevant scenarios with realistic objects as well as synthetic scenes, all of which are available for download. We have evaluated our method in a user study and compared the results to two state-of-the-art selection techniques and the standard ray-based selection. Our results show that LenSelect performs similarly to the fastest method, ray-based selection, while significantly reducing the error rate by 44%.
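
    For reference, the Shannon formulation of Fitts' Index of Difficulty is ID = log2(D/W + 1), where D is the distance to the target and W its size; enlarging W, as LenSelect does dynamically, lowers the ID. A small Python illustration with made-up numbers:

    import math

    def fitts_id(distance, width):
        """Shannon form of Fitts' Index of Difficulty: ID = log2(D/W + 1)."""
        return math.log2(distance / width + 1)

    # Dynamically scaling a small, distant target (hypothetical numbers):
    before = fitts_id(distance=4.0, width=0.05)   # ~6.34 bits
    after  = fitts_id(distance=4.0, width=0.30)   # ~3.84 bits, easier to hit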

    Intelligent Selection Techniques For Virtual Environments

    Selection in 3D games and simulations is a well-studied problem. Many techniques have been created to address the typical scenarios a user may experience. For any single scenario with consistent conditions, there is likely a technique that is well suited; if there isn't, there is an opportunity to create one best suited to the expected conditions of that new scenario. It is critical that the user be given an appropriate technique to interact with their environment; without it, the entire experience risks becoming burdensome and unenjoyable. With all of the different possible scenarios, problems arise when two or more are part of the same program. If they are put close together, or even intertwined, the developer is often forced to pick a single technique that works passably in both but is likely optimal for at most one of them. In this case, the user is left to perform selections with a technique that is lacking in one way or another, which can increase errors and frustration. In our research, we outlined different selection scenarios, classified by their object density (number of objects in the scene) and object velocity. We then performed an initial study of how these factors impact the performance of various selection techniques, including a new selection technique we developed for this test, called Expand. Our results showed, among other things, that a standard raycast technique works well in slow-moving and sparse environments, while our new Expand technique works well in denser environments. With the results from our first study, we sought to bridge the performance gap between the tested selection techniques. Our idea was a framework that could harness several different selection techniques and determine which was optimal at any time: each selection technique reports how effective it is given the current scenario conditions, and the framework activates the most appropriate technique when the user makes a selection attempt. With this framework in hand, we performed two additional user studies to determine how effective it could be in actual use and to identify its strengths and weaknesses. Each study compared several selection techniques individually against the framework that utilized them collectively, picking the most suitable; the same scenarios from our first study were reused. From these studies we gained a deeper understanding of the many challenges associated with automatic selection technique determination. The results showed that transitioning between techniques is potentially viable but rife with design challenges that make its optimization difficult. To sidestep some of the issues surrounding switching between discrete techniques, we attacked the problem from the other direction, making a single technique act like two techniques by adjusting dynamically to conditions. A user study analyzing the performance of such a technique gave promising results: while the quantitative differences were small, user feedback indicated that users preferred this technique over the others, which were static in nature. Finally, we sought a deeper understanding of existing dynamic selection techniques, how they were designed, and how they could be improved. We scrutinized the attributes of each technique that were already adjusted dynamically, or that could be, and devised new ways in which each technique could be improved. In this analysis we also considered how each technique could best be integrated into the Auto-Select framework we proposed earlier. This overall analysis of the latest selection techniques left us with an array of new variants that warrant being built and tested against their existing versions. Our overall research goal was to analyze selection techniques that intelligently adapt to their environment. We believe we achieved this through several iterative development cycles, including user studies, ultimately leading to innovation in the field of selection. We conclude our research with more questions left to be answered, and we intend to pursue further research on some of them as time permits.
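
    A minimal Python sketch of that framework idea (scoring rules, weights and names are invented for illustration; they are not the dissertation's): each technique reports a suitability score for the current scene density and velocity, and the framework activates the top scorer.

    class Raycast:
        name = "raycast"
        def suitability(self, density, velocity):
            return 1.0 - 0.5 * density - 0.5 * velocity  # best when sparse/slow

    class Expand:
        name = "expand"
        def suitability(self, density, velocity):
            return 0.3 + 0.6 * density                   # best when dense

    class AutoSelect:
        """Activates whichever technique scores highest for the scene."""
        def __init__(self, techniques):
            self.techniques = techniques
        def pick(self, density, velocity):
            # density and velocity are assumed normalized to [0, 1].
            return max(self.techniques,
                       key=lambda t: t.suitability(density, velocity))

    framework = AutoSelect([Raycast(), Expand()])
    print(framework.pick(density=0.9, velocity=0.2).name)  # -> "expand"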

    Stereoscopic bimanual interaction for 3D visualization

    Virtual environments (VEs) have been widely used in various research fields for several decades, including 3D visualization, education, training and games. VEs have the potential to enhance visualization and to act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and monocular and binocular depth cues, for human depth perception of volumetric (non-polygonal) datasets. In addition, the lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation evaluates the effects of stereoscopic and head-coupled displays on depth judgment of volumetric datasets. It also evaluates a two-handed view manipulation technique that supports simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE), as well as techniques for auto-adjusting stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, this dissertation presents a bimanual, hybrid user interface that combines traditional tracking devices with computer-vision-based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, this dissertation provides guidelines for research design when evaluating UIs and interaction techniques.
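
    To make the 7-DOF navigation concrete, the following Python/NumPy sketch composes translation, yaw/pitch/roll and uniform scale into a single 4x4 view transform; the rotation order and axis conventions are assumptions for illustration, not the dissertation's specification.

    import numpy as np

    def view_matrix(tx, ty, tz, yaw, pitch, roll, scale):
        """Compose a 7-DOF transform (x, y, z + yaw, pitch, roll + scale)
        as one 4x4 matrix: T * Rz(roll) * Rx(pitch) * Ry(yaw) * S."""
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
        M = np.eye(4)
        M[:3, :3] = Rz @ Rx @ Ry * scale   # rotation with uniform scale
        M[:3, 3] = (tx, ty, tz)            # translation
        return M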

    The State of the Art of Spatial Interfaces for 3D Visualization

    We survey the state of the art of spatial interfaces for 3D visualization. Interaction techniques are crucial to data visualization processes, and the visualization research community has been calling for more research on interaction for years. Yet research papers focusing on interaction techniques, in particular for 3D visualization purposes, are not always published in visualization venues, which can make it challenging to synthesize the latest interaction and visualization results. We therefore introduce a taxonomy of interaction techniques for 3D visualization, organized along two axes: the primary source of input on the one hand, and the visualization task supported on the other. Surveying the state of the art allows us to highlight specific challenges and missed opportunities for research in 3D visualization. In particular, we call for additional research in: (1) controlling 3D visualization widgets to help scientists better understand their data; (2) 3D interaction techniques for dissemination, which are under-explored yet show great promise for helping museums and science centers in their mission to share recent knowledge; and (3) new measures that move beyond traditional time and error metrics for evaluating visualizations that include spatial interaction.
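
    As a toy illustration of that two-axis organization, techniques can be indexed by (primary input source, visualization task); the axis values and entries in this Python sketch are placeholders, not the survey's actual labels.

    # Techniques keyed by the two taxonomy axes: input source x task.
    taxonomy = {
        ("bare hand",  "selection"):  ["pinch-to-select"],
        ("controller", "navigation"): ["ray teleport"],
        ("tangible",   "filtering"):  ["slicing-plane prop"],
    }

    def techniques_for(input_source, task):
        """Look up the techniques recorded for one cell of the taxonomy."""
        return taxonomy.get((input_source, task), [])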