
    Exploring the Front Touch Interface for Virtual Reality Headsets

    In this paper, we propose a new interface for virtual reality headsets: a touchpad on the front of the headset. To demonstrate the feasibility of the front touch interface, we built a prototype device, explored the expanded VR UI design space, and performed several user studies. We started with preliminary tests to see how intuitively and accurately people can interact with the front touchpad. We then experimented with various user interfaces, such as binary selection, a typical menu layout, and a keyboard. Two-Finger and Drag-n-Tap were also explored to find an appropriate selection technique. As a low-cost, lightweight, and low-power technology, a touch sensor can make an ideal interface for mobile headsets. Moreover, the front touch area can be large enough to allow a wide range of interaction types, such as multi-finger interactions. With this novel front touch interface, we pave the way for new virtual reality interaction methods.

    Influence of real world and virtual reality on human mid-air pointing accuracy

    Mid-air pointing is a major gesture for humans to express a direction non-verbally. This work focuses on absolute pointing to reference an object or person that is in sight of the person performing the pointing gesture. In the future, we see mid-air pointing as one way to interact with objects and smart home environments. Mid-air pointing could also replace the controller for interacting with a virtual environment. Recent work has shown that humans are imprecise while pointing in mid-air, and previous work has shown a systematic offset in mid-air pointing. In this work, we reproduce these results and further reveal that the same effect is present in virtual environments. We also show that people point significantly differently in real and virtual environments. Therefore, to correct the systematic offset, we develop different models to determine the actual pointing direction. These models are based on a ground-truth study in which we recorded participants' body posture while pointing in mid-air. Finally, we validate the models in a second study with 16 new participants. Our results show that we can significantly reduce the offset, and that displaying a cursor indicating the pointing direction reduces the offset further. However, displaying a cursor increases pointing time compared to no cursor.
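The offset-correction idea described in this abstract can be illustrated with a minimal sketch. The constant-offset model below is purely an assumption for illustration; the paper's actual models are derived from recorded body posture, and all function names and data values here are hypothetical.

```python
# Minimal sketch of correcting a systematic mid-air pointing offset.
# A constant-offset model is assumed for illustration; the study's real
# models are built from recorded body posture, not a single scalar.

def fit_offset_model(indicated, actual):
    """Fit a constant angular offset (degrees) from training pairs."""
    errors = [a - i for i, a in zip(indicated, actual)]
    return sum(errors) / len(errors)

def correct(indicated_angle, offset):
    """Apply the fitted offset to estimate the intended pointing direction."""
    return indicated_angle + offset

# Toy training data: angles participants indicated vs. actual target angles,
# with a systematic rightward offset baked in.
train_indicated = [10.0, 20.0, 30.0, 40.0]
train_actual    = [12.0, 22.5, 31.5, 42.0]

offset = fit_offset_model(train_indicated, train_actual)
print(round(offset, 2))                  # mean systematic offset: 2.0 degrees
print(round(correct(25.0, offset), 2))   # corrected direction: 27.0
```

Validating such a model on held-out participants, as the paper does with 16 new participants, would then measure how much of the residual offset remains after correction.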

    How to Evaluate Object Selection and Manipulation in VR? Guidelines from 20 Years of Studies

    The VR community has introduced many object selection and manipulation techniques during the past two decades. Typically, they are empirically studied to establish their benefits over the state of the art. However, the literature contains few guidelines on how to conduct such studies; standards developed for evaluating 2D interaction often do not apply. This lack of guidelines makes it hard to compare techniques across studies, to report evaluations consistently, and therefore to accumulate or replicate findings. To build such guidelines, we review 20 years of studies on VR object selection and manipulation. Based on the review, we propose recommendations for designing studies and a checklist for reporting them. We also identify research directions for improving evaluation methods and offer ideas for how to make studies more ecologically valid and rigorous.


    From seen to unseen: Designing keyboard-less interfaces for text entry on the constrained screen real estate of Augmented Reality headsets

    Text input is a very challenging task given the constrained screen real estate of Augmented Reality headsets. Typical keyboards spread over multiple lines and occupy a significant portion of the screen. In this article, we explore the feasibility of single-line text entry systems for smartglasses. We first design FITE, a dynamic keyboard where characters are positioned depending on their probability given the current input. However, the dynamic layout leads to mediocre text input and low accuracy. We then introduce HIBEY, a fixed one-line solution that further decreases screen real-estate usage by hiding the layout. Despite its hidden layout, HIBEY surprisingly performs much better than FITE, achieving a mean text entry rate of 9.95 words per minute (WPM) with 96.06% accuracy, which is comparable to other state-of-the-art approaches. After 8 days, participants achieve an average of 13.19 WPM. In addition, HIBEY occupies only 13.14% of the screen real estate at the edge region, which is 62.80% smaller than the default keyboard layout on Microsoft HoloLens.
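The core mechanism behind a probability-driven dynamic layout such as FITE can be sketched as follows: rank characters by their probability given the current input and place the most likely ones first on the single line. The bigram table, function names, and ordering policy below are illustrative assumptions, not the paper's actual language model.

```python
# Sketch of a probability-ordered one-line keyboard layout.
# The bigram probabilities are a toy stand-in for a real language model.

BIGRAM = {
    "t": {"h": 0.50, "e": 0.20, "o": 0.15, "a": 0.15},
    "h": {"e": 0.60, "a": 0.25, "i": 0.15},
}
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def one_line_layout(current_input):
    """Return characters ordered most-probable-first for a single-line layout."""
    last = current_input[-1] if current_input else ""
    probs = BIGRAM.get(last, {})
    # Likely continuations first, then the rest of the alphabet in order.
    ranked = sorted(probs, key=probs.get, reverse=True)
    return ranked + [c for c in ALPHABET if c not in ranked]

# After typing "t", the most probable continuations lead the line:
print("".join(one_line_layout("t"))[:6])
```

The abstract's finding is that this constant relayout hurts accuracy: users cannot build muscle memory, which is why the fixed (if hidden) HIBEY layout performs better.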

    Shall I describe it or shall I move closer? Verbal references and locomotion in VR collaborative search tasks

    Research on pointing-based communication within immersive collaborative virtual environments (ICVE) remains a compelling area of study. Previous studies explored techniques to improve accuracy and reduce errors when hand-pointing from a distance. In this study, we explore how users adapt their behaviour to cope with a lack of pointing accuracy. In an ICVE where users can move (i.e., locomotion), the pointing inaccuracy caused by the absence of laser pointers can be avoided by getting closer to the object of interest. Alternatively, collaborators can enrich their utterances with details to compensate for the lack of pointing precision. Inspired by previous CSCW remote desktop collaboration work, we measure visual coordination, the implicitness of deictic utterances, and the amount of locomotion. We design an experiment that compares the effects of the presence/absence of laser pointers across hard/easy-to-describe referents. Results show that when users face pointing inaccuracy, they prefer to move closer to the referent rather than enrich the verbal reference.

    Analyzing Barehand Input Mappings for Video Timeline Control and Object Pointing on Smart TVs

    Smart TVs have become popular in recent years. Given the emerging feature of distant bare-hand control, one challenge is how to perform common tasks with this new input modality. This thesis discusses two tasks: video timeline control and object pointing. For the video timeline control task, we explore CD gain functions to support seeking and scrubbing. We demonstrate that a linear CD gain function performs better than either a constant function or a generalised logistic function (GLF). In particular, Linear gain is faster than a GLF and has a lower error rate than Constant gain. Furthermore, when targeting a one-second interval on a two-hour timeline (+/- 5 s), the average temporal error of the Linear and GLF gains is less than one third that of a Constant gain. For the object pointing task, we design five selection strategies and compare their performance: one positional mapping, one rate-based mapping, one positional + rate-based mapping, and two traditional TV-remote-style mappings. We picked the first three techniques for our user study. Through a series of experiments, we demonstrate that positional mapping is faster than the other mappings when the target is visible, but requires many clutches in large targeting spaces. Rate-based mapping is, in contrast, preferred by participants due to its perceived lower effort, despite being slightly harder to learn initially. Trade-offs in the design of target selection on smart TV displays are discussed.
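The three CD (control-display) gain families compared in this abstract can be sketched as simple functions of input speed. The parameter values below are illustrative assumptions, not the tuned values from the thesis.

```python
import math

# Sketch of three CD gain shapes for timeline scrubbing: constant, linear,
# and generalised logistic (GLF). All parameters are illustrative only.

def constant_gain(speed, k=2.0):
    """Same gain regardless of hand speed."""
    return k

def linear_gain(speed, a=0.5, b=0.2):
    """Gain grows linearly with hand speed."""
    return a * speed + b

def glf_gain(speed, low=0.2, high=4.0, growth=1.5, midpoint=3.0):
    """Generalised logistic: smoothly interpolates between low and high gain."""
    return low + (high - low) / (1.0 + math.exp(-growth * (speed - midpoint)))

# Gain applied to a hand movement at speed 5.0 (arbitrary units):
for gain in (constant_gain, linear_gain, glf_gain):
    print(gain.__name__, round(gain(5.0), 3))
```

The thesis's finding maps onto these shapes: the linear function scales displacement with speed (fast scrubbing, precise seeking) without the constant function's fixed error floor or the GLF's slower saturating response.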

    Study of the interaction with a virtual 3D environment displayed on a smartphone

    3D virtual environments (3D VEs) are increasingly used in applications such as CAD, games, and teleoperation. Improvements in smartphone hardware have brought 3D applications to mobile devices as well. Moreover, smartphones offer computing capabilities far beyond traditional voice communication, enabled by a wide variety of built-in sensors and by internet connectivity. Consequently, interesting 3D applications can be designed that exploit these device capabilities to interact with a 3D VE. Because smartphones have small, flat screens while a 3D VE is wide and dense, containing many targets of various sizes, mobile devices face several constraints when interacting with a 3D VE: environment density, target depth, and occlusion. The selection task must cope with these three problems to select a target. Selection can, in addition, be decomposed into three subtasks: navigation, pointing, and validation. Researchers in 3D virtual environments have therefore developed new techniques and metaphors for 3D interaction to improve the usability of 3D applications on mobile devices, to support the selection task, and to address the factors affecting selection performance.
In light of these considerations, this thesis presents a state of the art of existing selection techniques in 3D VEs and on smartphones. It describes selection techniques in 3D VEs structured around the three selection subtasks: navigation, pointing, and validation. It also describes disambiguation techniques for selecting a target from a set of pre-selected objects. It then presents interaction techniques from the literature designed for implementation on smartphones, divided into two groups: techniques performing two-dimensional selection tasks and techniques performing three-dimensional selection tasks. Finally, we cover techniques that use the smartphone as an input device. We then discuss the problem of selection in a 3D VE displayed on a smartphone, covering the three identified selection problems: environment density, target depth, and occlusion. The thesis establishes the improvement offered by each existing technique in solving these selection problems, analyzing the assets proposed by the different techniques, the way they eliminate the problems, and their advantages and drawbacks. Furthermore, it classifies selection techniques for 3D VEs according to the three discussed problems (density, depth, and occlusion) affecting selection performance in a dense 3D VE. Except for video games, the use of 3D virtual environments on smartphones has not yet become widespread. This is due to the lack of interaction techniques for interacting with a dense 3D VE composed of many objects close to one another and displayed on a small, flat screen, and to the selection problems that arise when displaying a 3D VE on a small rather than a large screen. Accordingly, this thesis focuses on proposing and describing the fruit of this study: the DichotoZoom interaction technique.
It compares and evaluates the proposed technique against the Circulation technique suggested by the literature. The comparative analysis shows the effectiveness of the DichotoZoom technique over its counterpart. DichotoZoom was then evaluated with the different interaction modalities available on smartphones. We report the performance of the proposed selection technique under four interaction modalities: physical buttons, graphical buttons, gestural interaction via the touchscreen, and movement of the device itself. Finally, this thesis summarizes our contributions to the field of 3D interaction techniques for dense 3D virtual environments displayed on small screens and proposes future work.
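One plausible reading of the DichotoZoom name is a dichotomous (binary-subdivision) selection loop, sketched generically below. This is only the generic idea, not the thesis's actual technique, which couples subdivision with zooming and the four smartphone input modalities listed above; all names here are hypothetical.

```python
# Generic dichotomous selection: repeatedly split the candidate set in half
# and keep the half the user indicates, until one target remains.
# Each step could be driven by a button press or a touch gesture.

def dichotomous_select(targets, choose_half):
    """choose_half(left, right) -> 'L' or 'R'; returns the final target."""
    candidates = list(targets)
    while len(candidates) > 1:
        mid = len(candidates) // 2
        left, right = candidates[:mid], candidates[mid:]
        candidates = left if choose_half(left, right) == "L" else right
    return candidates[0]

# Example: a user who always picks the half containing "cube7".
objects = [f"cube{i}" for i in range(8)]
picked = dichotomous_select(objects, lambda l, r: "L" if "cube7" in l else "R")
print(picked)  # cube7
```

A scheme of this shape reaches any of n candidates in about log2(n) steps, which is why it suits dense environments where direct pointing at small, occluded, or deep targets is unreliable.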