17 research outputs found

    Creating and manipulating 3D paths with mixed reality spatial interfaces

    Mixed reality offers unique opportunities to situate complex tasks within spatial environments. One such task is the creation and manipulation of intricate, three-dimensional paths, which remains a crucial challenge in many fields, including animation, architecture, and robotics. This paper presents an investigation into the possibilities of spatially situated path creation using new virtual and augmented reality technologies and examines how these technologies can be leveraged to afford more intuitive and natural path creation. We present a formative study (n = 20) evaluating an initial path planning interface situated in the context of augmented reality and human-robot interaction. Based on the findings of this study, we detail the development of two novel techniques for spatially situated path planning and manipulation that afford intuitive, expressive path creation at varying scales. We describe a comprehensive user study (n = 36) investigating the effectiveness, learnability, and efficiency of both techniques when paired with a range of canonical placement strategies. The results of this study confirm the usability of these interaction metaphors and provide further insight into how spatial interaction can be discreetly leveraged to enable interaction at scale. Overall, this work contributes to the development of 3DUIs that expand the possibilities for situating path-driven tasks in spatial environments.

    Viewport- and World-based Personal Device Point-Select Interactions in Augmented Reality

    Personal smart devices have demonstrated a variety of efficient techniques for pointing and selecting on physical displays. However, when migrating these input techniques to augmented reality, it is unclear both what the relative performance of different techniques will be, given the immersive nature of the environment, and how viewport-based versus world-based pointing methods will impact performance. To better understand the impact of device and viewing perspective on pointing in augmented reality, this thesis presents the results of two controlled experiments comparing pointing conditions that leverage various smartphone- and smartwatch-based external display pointing techniques and examining viewport-based versus world-based target acquisition paradigms. Our results demonstrate that viewport-based techniques offer faster selection and that both smartwatch- and smartphone-based pointing techniques represent high-performance options for distant target acquisition tasks in augmented reality.

    Cooperative object manipulation in collaborative virtual environments


    LenSelect: Object Selection in Virtual Environments by Dynamic Object Scaling

    We present a novel selection technique for VR called LenSelect. The main idea is to decrease the index of difficulty (ID) according to Fitts' Law by dynamically increasing the size of potentially selectable objects. This facilitates the selection process, especially for small, distant, or partly occluded objects, but also for moving targets. To evaluate our method, we defined a set of test scenarios that covers a broad range of use cases, in contrast to the simpler scenes often used. Our test scenarios include practically relevant scenarios with realistic objects as well as synthetic scenes, all of which are available for download. We evaluated our method in a user study and compared the results to two state-of-the-art selection techniques and standard ray-based selection. Our results show that LenSelect performs similarly to the fastest method, ray-based selection, while significantly reducing the error rate by 44%.
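The index of difficulty the abstract refers to can be illustrated with the standard Shannon formulation of Fitts' Law. The sketch below is not the authors' code; the distances and widths are arbitrary illustrative values, chosen only to show why enlarging a target lowers the ID:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' Law index of difficulty, in bits."""
    return math.log2(distance / width + 1)

# Example: a small target 2 m away with a 2 cm apparent width.
base_id = index_of_difficulty(distance=2.0, width=0.02)

# LenSelect-style idea (sketch): dynamically enlarging the selectable
# object, here by an arbitrary factor of 4, reduces the ID.
scaled_id = index_of_difficulty(distance=2.0, width=0.02 * 4)

assert scaled_id < base_id  # easier selection after scaling
```

Any scale factor greater than 1 shrinks the ID, which is the mechanism the technique exploits for small, distant, or occluded targets.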

    Multimodal interactions in virtual environments using eye tracking and gesture control.

    Multimodal interactions provide users with more natural ways to interact with virtual environments than traditional input methods. An emerging approach is gaze-modulated pointing, which lets users select and manipulate virtual content conveniently through a combination of gaze and other hand-control techniques or pointing devices (in this thesis, mid-air gestures). To establish a synergy between the two modalities and evaluate the affordance of this novel multimodal interaction technique, it is important to understand their behavioural patterns and relationship, as well as any possible perceptual conflicts and interactive ambiguities. More specifically, evidence shows that eye movements lead hand movements, but the question remains whether this leading relationship holds when interacting using a pointing device. Moreover, because gaze-modulated pointing uses different sensors to track and detect user behaviour, its performance relies on users' perception of the exact spatial mapping between the virtual space and the physical space. This raises an underexplored issue: whether gaze can introduce misalignment in the spatial mapping and lead to user misperception and interaction errors. Furthermore, the accuracy of eye tracking and mid-air gesture control is not yet comparable to that of traditional pointing techniques (e.g., the mouse). This may cause pointing ambiguity when fine-grained interactions are required, such as selecting in a dense virtual scene where proximity and occlusion are prone to occur. This thesis addresses these concerns through experimental studies and theoretical analysis involving paradigm design, the development of interactive prototypes, and user studies for the verification of assumptions, comparisons, and evaluations. Substantial data sets were obtained and analysed from each experiment.
The results conform to and extend previous empirical findings that gaze leads pointing-device movements in most cases, both spatially and temporally. The studies confirm that gaze does introduce spatial misperception; three methods (Scaling, Magnet and Dual-gaze) were proposed and shown to reduce the impact of this perceptual conflict, with Magnet and Dual-gaze delivering better performance than Scaling. In addition, a coarse-to-fine solution is proposed and evaluated to compensate for the degradation introduced by eye-tracking inaccuracy: a gaze cone detects ambiguity, followed by a gaze probe for decluttering. The results show that this solution can enhance interaction accuracy but requires a compromise on efficiency. These findings can inform a more robust multimodal interface design for interactions within virtual environments supported by both eye tracking and mid-air gesture control. This work also opens up a technical pathway for the design of future multimodal interaction techniques: start from naturally correlated behavioural patterns, then consider whether the design of the interaction technique maintains perceptual constancy and whether any ambiguity among the integrated modalities will be introduced.
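The coarse step of the coarse-to-fine solution can be pictured as a simple geometric test: count the targets falling inside an angular cone around the gaze ray. The sketch below is a hypothetical illustration, not the thesis implementation; the helper name, the 5-degree half-angle, and the target positions are all assumptions made for the example:

```python
import math

def targets_in_gaze_cone(gaze_origin, gaze_dir, targets, half_angle_deg=5.0):
    """Return the names of targets whose direction from the eye lies
    within the gaze cone. gaze_dir is assumed to be a unit vector."""
    hits = []
    cos_thresh = math.cos(math.radians(half_angle_deg))
    for name, pos in targets.items():
        v = [p - o for p, o in zip(pos, gaze_origin)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm == 0:
            continue  # target coincides with the eye; skip it
        cos_angle = sum(a * b for a, b in zip(v, gaze_dir)) / norm
        if cos_angle >= cos_thresh:
            hits.append(name)
    return hits

# Two nearby objects and one off-axis object (hypothetical scene).
targets = {"cube": (0.0, 0.0, 2.0), "sphere": (0.05, 0.0, 2.0), "cone": (1.0, 0.0, 2.0)}
hits = targets_in_gaze_cone((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), targets)

# More than one hit means the selection is ambiguous, which is when the
# finer 'gaze probe' decluttering step would be triggered.
ambiguous = len(hits) > 1
```

In this scene the cube and sphere both fall inside the cone, so the coarse test flags ambiguity and defers to the fine-grained stage.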

    3D Pointing with Everyday Devices: Speed, Occlusion, Fatigue

    In recent years, display technology has evolved to the point where displays can be both non-stereoscopic and stereoscopic, and 3D environments can be rendered realistically on many types of displays. From movie theatres and shopping malls to conference rooms and research labs, 3D information can be deployed seamlessly. Yet, while 3D environments are commonly displayed in desktop settings, there are virtually no examples of interactive 3D environments deployed within ubiquitous environments, with the exception of console gaming. At the same time, immersive 3D environments remain - in users' minds - associated with professional work settings and virtual reality laboratories. An excellent opportunity for 3D interactive engagements is being missed not because of economic factors, but due to the lack of interaction techniques that are easy to use in ubiquitous, everyday environments. In my dissertation, I address the lack of support for interaction with 3D environments in ubiquitous settings by designing, implementing, and evaluating 3D pointing techniques that leverage a smartphone or a smartwatch as an input device. I show that mobile and wearable devices may be especially beneficial as input devices for casual use scenarios, where specialized 3D interaction hardware may be impractical, too expensive or unavailable. Such scenarios include interactions with home theatres, intelligent homes, in workplaces and classrooms, with movie theatre screens, in shopping malls, at airports, during conference presentations and countless other places and situations. Another contribution of my research is to increase the potential of mobile and wearable devices for efficient interaction at a distance. I do so by showing that such interactions are feasible when realized with the support of a modern smartphone or smartwatch. I also show how multimodality, when realized with everyday devices, expands and supports 3D pointing. 
In particular, I show how multimodality helps to address the challenges of 3D interaction: performance issues related to the limitations of the human motor system, interaction with occluded objects and the related problem of depth perception on non-stereoscopic screens, and subjective user fatigue, measured with NASA TLX as perceived workload, that results from providing spatial input for a prolonged time. I deliver these contributions by designing three novel 3D pointing techniques that support casual, "walk-up-and-use" interaction at a distance and are fully realizable using off-the-shelf mobile and wearable devices available today. The contributions provide evidence that the democratization of 3D interaction can be realized by leveraging the pervasiveness of a device that users already carry with them: a smartphone or a smartwatch.

    Distant pointing in desktop collaborative virtual environments

    Deictic pointing—pointing at things during conversations—is natural and ubiquitous in human communication. Deictic pointing is important in the real world; it is also important in collaborative virtual environments (CVEs) because CVEs are 3D virtual environments that resemble the real world. CVEs connect people from different locations, allowing them to communicate and collaborate remotely. However, the interaction and communication capabilities of CVEs are not as good as those in the real world. In CVEs, people interact with each other using avatars (the visual representations of users). One problem with avatars is that they are not expressive enough when compared to what we can do in the real world. In particular, deictic pointing has many limitations and is not well supported. This dissertation focuses on improving the expressiveness of distant pointing—where referents are out of reach—in desktop CVEs. This is done by developing a framework that guides the design and development of pointing techniques; by identifying important aspects of distant pointing through observation of how people point at distant referents in the real world; by designing, implementing, and evaluating distant-pointing techniques; and by providing a set of guidelines for the design of distant pointing in desktop CVEs. The evaluations of distant-pointing techniques examine whether pointing without extra visual effects (natural pointing) has sufficient accuracy; whether people can control free arm movement (free pointing) along with other avatar actions; and whether free and natural pointing are useful and valuable in desktop CVEs. Overall, this research provides better support for deictic pointing in CVEs by improving the expressiveness of distant pointing. With better pointing support, gestural communication can be more effective and can ultimately enhance the primary function of CVEs—supporting distributed collaboration.

    Study of the interaction with a virtual 3D environment displayed on a smartphone

    3D virtual environments (3D VEs) are increasingly used in applications such as CAD, games, and teleoperation. With improvements in smartphone hardware performance, 3D applications have also been introduced on mobile devices. In addition, smartphones provide computing capabilities far beyond traditional voice communication, enabled by a wide variety of built-in sensors and by internet connectivity. Consequently, interesting 3D applications can be designed by using the device's capabilities to interact within a 3D VE. Because smartphones have small, flat screens while a 3D VE is wide and dense, with a large number of targets of various sizes, mobile devices face several constraints when interacting with a 3D VE: environment density, target depth, and occlusion. The selection task must cope with these three problems to select a target. Selection can further be decomposed into three subtasks: navigation, pointing, and validation.
Accordingly, researchers in 3D virtual environments have developed new techniques and metaphors for 3D interaction to improve the usability of 3D applications on mobile devices, to support the selection task, and to address the factors affecting selection performance. In light of these considerations, this thesis presents a state of the art of existing selection techniques in 3D VEs and of selection techniques on smartphones. It presents the selection techniques in 3D VEs structured around the three selection subtasks: navigation, pointing, and validation. Moreover, it describes disambiguation techniques that allow a target to be selected from a set of pre-selected objects. It then presents interaction techniques described in the literature and designed for implementation on smartphones. These techniques are divided into two groups: techniques performing two-dimensional selection tasks on smartphones, and techniques performing three-dimensional selection tasks on smartphones. Finally, we present techniques that use the smartphone as an input device. We then discuss the problem of selection in a 3D VE displayed on a smartphone, covering the three identified selection problems: environment density, target depth, and occlusion. The thesis then establishes the improvement each existing technique offers in solving these selection problems. It analyses the strengths of the different techniques, the way they address the problems, and their advantages and drawbacks. Furthermore, it classifies the selection techniques for 3D VEs according to the three discussed problems (density, depth, and occlusion) affecting selection performance in a dense 3D VE. Except for video games, the use of 3D virtual environments on smartphones has not yet become widespread.
This is due to the lack of interaction techniques for interacting with a dense 3D VE composed of many objects close to each other and displayed on a small, flat screen, and to the selection problems that arise when the 3D VE is displayed on a small screen rather than a large one. Accordingly, this thesis focuses on proposing and describing the outcome of this study: the DichotoZoom interaction technique. It compares and evaluates the proposed technique against the Circulation technique suggested in the literature. The comparative analysis shows the effectiveness of DichotoZoom compared to its counterpart. DichotoZoom was then evaluated across the different interaction modalities available on smartphones. This evaluation reports on the performance of the proposed selection technique with four interaction modalities: physical buttons, graphical buttons, gestural interactions via the touchscreen, and movement of the device itself. Finally, this thesis lists our contributions to the field of 3D interaction techniques for dense 3D virtual environments displayed on small screens and proposes future work.
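The abstract does not spell out DichotoZoom's algorithm, but its name suggests dichotomous refinement of the candidate set. As a generic illustration only (not the thesis's implementation), a halving loop shows why such a scheme isolates one target among n in roughly log2(n) steps, which is what makes it attractive in dense scenes on small screens:

```python
import math

def steps_to_isolate(n_targets: int) -> int:
    """Number of dichotomous refinement steps needed to narrow a set of
    n_targets down to a single target, when each step keeps one half
    (rounded up) of the remaining candidates."""
    steps = 0
    while n_targets > 1:
        n_targets = math.ceil(n_targets / 2)
        steps += 1
    return steps

# e.g. 100 closely packed objects can be narrowed to one in 7 halving steps,
# versus up to 99 rejections with purely sequential cycling.
```

This logarithmic step count is the usual argument for dichotomous selection over sequential techniques such as Circulation when the environment is dense.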