
    Study of the interaction with a virtual 3D environment displayed on a smartphone

    3D Virtual Environments (3D VE) are increasingly used in applications such as CAD, games, and teleoperation. Thanks to the improvement of smartphone hardware performance, 3D applications have also been introduced on mobile devices. In addition, smartphones provide computing capabilities far beyond traditional voice communication, enabled by a wide variety of built-in sensors and by internet connectivity. Consequently, interesting 3D applications can be designed by letting the device's capabilities drive interaction in a 3D VE. Because smartphones have small, flat screens while a 3D VE is wide, dense and contains a large number of targets of various sizes, mobile devices face several constraints when interacting with a 3D VE: environment density, target depth and occlusion. The selection task must cope with these three problems in order to select a target. The selection task can also be decomposed into three subtasks: navigation, pointing and validation. Researchers in 3D virtual environments have therefore developed new techniques and metaphors for 3D interaction to improve the usability of 3D applications on mobile devices, to support the selection task and to address the factors affecting selection performance. In light of these considerations, this thesis presents a state of the art of existing selection techniques in 3D VE and of selection techniques on smartphones. It presents the selection techniques in 3D VE structured around the three selection subtasks: navigation, pointing and validation. It also describes disambiguation techniques that allow a target to be selected from a set of pre-selected objects. It then presents interaction techniques described in the literature and designed to be implemented on a smartphone. These techniques are divided into two groups: techniques performing two-dimensional selection tasks on a smartphone, and techniques performing three-dimensional selection tasks on a smartphone. Finally, we present techniques that use the smartphone as an input device. We then discuss the problem of selection in a 3D VE displayed on a smartphone. The thesis presents the three identified selection problems: environment density, target depth and occlusion. It then establishes the improvement offered by each existing technique in solving these selection problems. It analyses the assets proposed by the different techniques, the way they eliminate the problems, and their advantages and drawbacks. Furthermore, it classifies the selection techniques for 3D VE according to the three discussed problems (density, depth and occlusion) affecting selection performance in a dense 3D VE. Except for video games, the use of 3D virtual environments on smartphones has not yet become widespread. This is due to the lack of interaction techniques for interacting with a dense 3D VE composed of many objects close to each other and displayed on a small, flat screen, and to the selection problems raised by displaying the 3D VE on a small screen rather than on a large one. Accordingly, this thesis focuses on defining and describing the outcome of this study: the DichotoZoom interaction technique. It compares and evaluates the proposed technique against the Circulation technique suggested in the literature. The comparative analysis shows the effectiveness of DichotoZoom compared to its counterpart. DichotoZoom was then evaluated with the different interaction modalities available on smartphones. This evaluation reports the performance of the proposed selection technique under the following four interaction modalities: physical buttons, graphical buttons, gestural interaction via the touchscreen, and movement of the device itself. Finally, this thesis lists our contributions to the field of 3D interaction techniques used in a dense 3D virtual environment displayed on small screens and proposes future work.
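    The abstract names DichotoZoom but does not spell out its mechanics here; since the name suggests a dichotomous subdivision of the candidate targets combined with zooming, the minimal Python sketch below illustrates that general idea only. The halving rule, the zoom step and every name are assumptions for illustration, not the thesis's actual algorithm.

```python
# Illustrative sketch only: a dichotomous refinement loop for picking one
# target among many on a small screen. The halving rule, the zoom step and
# every name below are assumptions, not taken from the thesis.

def dichotomous_select(targets, pick_half, zoom_in, validate):
    """Repeatedly split the visible candidates in two, keep the half the user
    indicates, and zoom on it until a single target remains."""
    candidates = list(targets)
    while len(candidates) > 1:
        mid = len(candidates) // 2
        halves = (candidates[:mid], candidates[mid:])
        kept = halves[pick_half(halves)]   # user gesture/button chooses a half
        zoom_in(kept)                      # viewport zooms to the kept subset
        candidates = kept
    return candidates[0] if validate(candidates[0]) else None
```

    Whatever the exact mechanics, a dichotomous strategy needs only about log2(n) refinement steps for n candidates, which is what makes this family of techniques attractive for dense scenes on small screens.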

    An Arm-Mounted Accelerometer and Gyro-Based 3D Control System

    This thesis examines the performance of a wearable accelerometer/gyroscope-based system for capturing arm motions in 3D. Two experiments conforming to ISO 9241-9 specifications for non-keyboard input devices were performed. The first, modeled after the Fitts' law paradigm described in ISO 9241-9, used the wearable system to control a telemanipulator, compared with joystick control and direct control with the user's arm. The throughputs were 5.54 bits/s, 0.74 bits/s, and 0.80 bits/s, respectively. The second experiment used the wearable system to control a cursor in a 3D fish-tank virtual reality setup. The participants performed a 3D Fitts' law task with three selection methods: button clicks, dwell, and a twist gesture. Error rates were 6.82%, 0.00%, and 3.59%, respectively. Throughput ranged from 0.8 to 1.0 bits/s. The thesis includes detailed analyses of lag and other issues that present user interface challenges for systems that employ human-mounted sensor inputs to control a telemanipulator apparatus.
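    The abstract reports ISO 9241-9 throughput in bits/s but does not restate the computation; the sketch below shows the conventional effective-width formulation commonly used with that standard. The function name and the sample numbers are illustrative, not data from the thesis.

```python
import math
from statistics import mean, stdev

def throughput(distances, movement_times, endpoint_errors):
    """ISO 9241-9 style throughput (bits/s) using the effective-width method.

    distances: nominal target distance D per trial
    movement_times: per-trial movement time in seconds
    endpoint_errors: deviation of each selection endpoint from the target
        centre along the task axis (same units as D)
    """
    we = 4.133 * stdev(endpoint_errors)   # effective width (96% hit rate)
    de = mean(distances)                  # effective distance (approximated)
    ide = math.log2(de / we + 1)          # effective index of difficulty, bits
    return ide / mean(movement_times)     # throughput in bits/s

# Hypothetical block of trials, purely for illustration
print(throughput(
    distances=[0.30] * 5,
    movement_times=[1.1, 1.3, 1.2, 1.0, 1.4],
    endpoint_errors=[0.004, -0.011, 0.006, -0.002, 0.009],
))
```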

    Improving command selection in smart environments by exploiting spatial constancy

    With a steadily increasing number of digital devices, our environments are becoming increasingly smart: we can now use our tablets to control our TV, access our recipe database while cooking, and remotely turn lights on and off. Currently, this Human-Environment Interaction (HEI) is limited to in-place interfaces, where people have to walk up to a mounted set of switches and buttons, and navigation-based interaction, where people have to navigate on-screen menus, for example on a smartphone, tablet, or TV screen. Unfortunately, there are numerous scenarios in which neither of these two interaction paradigms provides fast and convenient access to digital artifacts and system commands. People, for example, might not want to touch an interaction device because their hands are dirty from cooking: they want device-free interaction. Or people might not want to look at a screen because it would interrupt their current task: they want system-feedback-free interaction. Currently, there is no interaction paradigm for smart environments that allows for these kinds of interactions. In my dissertation, I introduce Room-based Interaction to solve this problem of HEI. With room-based interaction, people associate digital artifacts and system commands with real-world objects in the environment and point toward these real-world proxy objects to select the associated digital artifact. The design of room-based interaction is informed by a theoretical analysis of navigation- and pointing-based selection techniques, in which I investigated the cognitive systems involved in executing a selection. An evaluation of room-based interaction in three user studies and a comparison with existing HEI techniques revealed that room-based interaction overcomes many shortcomings of existing HEI techniques: the use of real-world proxy objects makes it easy for people to learn the interaction technique and to perform accurate pointing gestures, and it allows for system-feedback-free interaction; the use of the environment as a flat input space makes selections fast; the use of mid-air full-arm pointing gestures allows for device-free interaction and increases awareness of others' interactions with the environment. Overall, I present an alternative selection paradigm for smart environments that is superior to existing techniques in many common HEI scenarios. This new paradigm can make HEI more user-friendly, broaden the use cases of smart environments, and increase their acceptance for the average user.
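    The dissertation's room-based interaction is described only at a high level here; as one hedged way to picture the core selection step (choosing the command whose real-world proxy object lies closest to the pointing direction), consider the sketch below. The proxy positions, the angular threshold and all names are assumptions, not the dissertation's implementation.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def select_command(hand_pos, pointing_dir, proxies, max_angle_deg=15.0):
    """Return the command whose proxy object best matches the pointing ray.

    proxies: mapping of command name -> 3D position of its real-world proxy.
    Returns None if nothing lies within the angular threshold."""
    pointing_dir = _normalize(pointing_dir)
    best, best_angle = None, math.radians(max_angle_deg)
    for command, pos in proxies.items():
        to_proxy = _normalize(tuple(p - h for p, h in zip(pos, hand_pos)))
        cos = max(-1.0, min(1.0, sum(a * b for a, b in zip(pointing_dir, to_proxy))))
        angle = math.acos(cos)
        if angle < best_angle:
            best, best_angle = command, angle
    return best

# Pointing roughly toward the floor lamp selects the associated command
proxies = {"lights_on": (2.0, 1.5, 0.0), "tv_power": (-1.0, 1.0, 3.0)}
print(select_command((0.0, 1.4, 0.0), (2.0, 0.1, 0.0), proxies))
```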

    Anamorphosis reformed: from optical illusions to immersive perspectives

    We discuss a definition of conical anamorphosis that sets it at the foundation of both classical and curvilinear perspectives. In this view, anamorphosis is an equivalence relation between three-dimensional objects, which includes two-dimensional representatives, not necessarily flat. Vanishing points are defined in a canonical way that is maximally symmetric, with exactly two vanishing points for every line. The definition of the vanishing set works at the level of anamorphosis, before perspective is defined, with no need for a projection surface. Finally, perspective is defined as a flat representation of the visual data in the anamorphosis. This schema applies to both linear and curvilinear perspectives and is naturally adapted to immersive perspectives, such as the spherical perspectives. Mathematically, the view here presented is that the sphere and not the projective plane is the natural manifold of visual data up to anamorphic equivalence. We consider how this notion of anamorphosis may help to dispel some long-standing philosophical misconceptions regarding the nature of perspective.
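    As a hedged formalisation of the equivalence relation sketched in the abstract (the notation is mine, not necessarily the paper's): two points are anamorphically equivalent when they are seen along the same ray from the observation point O, every equivalence class maps to a single direction on the sphere, and a line acquires exactly two antipodal vanishing points.

```latex
% Illustrative notation, not necessarily the paper's own.
P \sim Q \iff Q = O + \lambda\,(P - O) \quad \text{for some } \lambda > 0
\qquad
\sigma(P) = \frac{P - O}{\lVert P - O \rVert} \in S^{2}
\qquad
\text{vanishing points of a line with direction } d:\quad \pm\,\frac{d}{\lVert d \rVert}
```

    Reading the visual data on the sphere rather than on the projective plane is exactly what gives each line its two vanishing points instead of one.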

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for input as well as output devices is market ready, only a few solutions for everyday VR - online shopping, games, or movies - exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. Along with this, we summarize our findings by design spaces and guidelines for choosing optimal interfaces and controls in VR.

    3D Pointing with Everyday Devices: Speed, Occlusion, Fatigue

    In recent years, display technology has evolved to the point where displays can be both non-stereoscopic and stereoscopic, and 3D environments can be rendered realistically on many types of displays. From movie theatres and shopping malls to conference rooms and research labs, 3D information can be deployed seamlessly. Yet, while 3D environments are commonly displayed in desktop settings, there are virtually no examples of interactive 3D environments deployed within ubiquitous environments, with the exception of console gaming. At the same time, immersive 3D environments remain - in users' minds - associated with professional work settings and virtual reality laboratories. An excellent opportunity for 3D interactive engagements is being missed not because of economic factors, but due to the lack of interaction techniques that are easy to use in ubiquitous, everyday environments. In my dissertation, I address the lack of support for interaction with 3D environments in ubiquitous settings by designing, implementing, and evaluating 3D pointing techniques that leverage a smartphone or a smartwatch as an input device. I show that mobile and wearable devices may be especially beneficial as input devices for casual use scenarios, where specialized 3D interaction hardware may be impractical, too expensive or unavailable. Such scenarios include interactions with home theatres, intelligent homes, in workplaces and classrooms, with movie theatre screens, in shopping malls, at airports, during conference presentations and countless other places and situations. Another contribution of my research is to increase the potential of mobile and wearable devices for efficient interaction at a distance. I do so by showing that such interactions are feasible when realized with the support of a modern smartphone or smartwatch. I also show how multimodality, when realized with everyday devices, expands and supports 3D pointing. In particular, I show how multimodality helps to address the challenges of 3D interaction: performance issues related to the limitations of the human motor system, interaction with occluded objects and the related problem of perception of depth on non-stereoscopic screens, and user subjective fatigue, measured with NASA TLX as perceived workload, that results from providing spatial input for a prolonged time. I deliver these contributions by designing three novel 3D pointing techniques that support casual, "walk-up-and-use" interaction at a distance and are fully realizable using off-the-shelf mobile and wearable devices available today. The contributions provide evidence that democratization of 3D interaction can be realized by leveraging the pervasiveness of a device that users already carry with them: a smartphone or a smartwatch.
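    The dissertation's three pointing techniques are not detailed in this abstract; as a hedged illustration of the basic building block such techniques typically rely on (turning a phone's or watch's orientation into a ray and picking the nearest intersected object), the sketch below may help. The forward-axis convention, the sphere-shaped targets and all names are assumptions, not the dissertation's implementation.

```python
import math

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    tx, ty, tz = 2 * (y * vz - z * vy), 2 * (z * vx - x * vz), 2 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

def pick(origin, device_quat, spheres):
    """Cast a ray along the device's assumed forward axis (-z rotated by its
    orientation) and return the name of the nearest intersected sphere.

    spheres: list of (name, centre, radius) tuples."""
    d = rotate(device_quat, (0.0, 0.0, -1.0))        # unit forward direction
    best, best_t = None, math.inf
    for name, centre, radius in spheres:
        oc = tuple(o - c for o, c in zip(origin, centre))
        b = sum(di * oi for di, oi in zip(d, oc))
        c = sum(oi * oi for oi in oc) - radius * radius
        disc = b * b - c
        if disc >= 0.0:
            t = -b - math.sqrt(disc)                 # nearest intersection
            if 0.0 < t < best_t:
                best, best_t = name, t
    return best
```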

    “Duration-object”: A materialist animation to perceive time in the form of folded extra dimensions

    Time flies out of reach of the human sensory organs, apparently because time never stops and no two instants coexist. But string theory says past, present and future might coexist in extra dimensions. If that is the case, the perception of time in hyperspaces can stand as a supplement to the known. If our perception of time is only a shadow cast onto our three-dimensional space, what will the hidden appearance of time be like? From a philosophical aspect, this dissertation follows Henri Bergson’s theory and reinvestigates his “duration” concept within the context of an extra dimension. In addition, I follow Gilles Deleuze’s attempt to represent duration: confronted with a technical limitation of cinema, he proposed the concept of the “movement-image” to complete duration in the rule of consciousness. Following that, I coin the term “duration-object” as a probe to represent duration in a physical form by collapsing time into the three dimensions. My duration-object concept aims at building a bodily connection between the hyper-dimensional manifestation of time and spectators. Case studies of related artworks position the duration-object as a materialist animation by identifying this mechanism in pre-cinema apparatus, in the spectatorship of video installation art, and in the opening up of kinetic art. In my work, sculpture-animation, a volumetric display of movement, is used to construct the “duration-object”. When the movement that inhabits my duration-object is unfolded by the moving light plane, a simultaneous multi-dimensional manifestation of time is provided as perception rests in the realm of the body. This thesis proposes a possible approach to enable people to seize time in the framework of hyperspaces with their sensory system.

    Planning dextrous robot hand grasps from range data, using preshapes and digit trajectories

    Dextrous robot hands have many degrees of freedom. This enables the manipulation of objects between the digits of the dextrous hand, but makes grasp planning substantially more complex than for parallel-jaw grippers. Much of the work that addresses grasp planning for dextrous hands concentrates on the selection of contact sites to optimise stability criteria and ignores the kinematics of the hand. In more complete systems, the paradigm of preshaping has emerged as dominant. However, the criteria for the formation and placement of the preshapes have not been adequately examined, and the usefulness of these systems is therefore limited to grasping simple objects for which preshapes can be formed using coarse heuristics. In this thesis a grasp metric based on stability and kinematic feasibility is introduced. The preshaping paradigm is extended to include consideration of the trajectories that the digits take during closure from preshape to final grasp. The resulting grasp family is dependent upon task requirements and is designed for a set of "ideal" object-hand configurations. The grasp family couples the degrees of freedom of the dextrous hand in an anthropomorphic manner; the resulting reduction in freedom makes grasp planning less complex. Grasp families are fitted to real objects by optimisation of the grasp metric; this corresponds to bringing the real object-hand configuration as close to the ideal as possible. First, the preshape aperture, which defines the positions of the fingertips in the preshape, is found by optimisation of an approximation to the grasp metric (which makes simplifying assumptions about the digit trajectories and hand kinematics). Second, the full preshape kinematics and digit closure trajectories are calculated to optimise the full grasp metric. Grasps are planned on object models built from laser striper range data from two viewpoints. A surface description of the object is used to prune the space of possible contact sites and to allow the accurate estimation of normals, which the grasp metric needs in order to estimate the amount of friction required. A voxel description, built by ray-casting, is used to check for collisions between the object and the robot hand using an approximation to the Euclidean distance transform. Results are shown in simulation for a 3-digit hand model, designed to be like a simplified human hand in terms of its size and functionality. There are clear extensions of the method to any dextrous hand with a single thumb opposing multiple fingers, and several different hand models that could be used are described. Grasps are planned on a wide variety of curved and polyhedral objects.
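    The thesis uses a voxel model and an approximation to the Euclidean distance transform for collision checking; the sketch below illustrates that general idea, with an exact distance transform from SciPy standing in for the thesis's approximation. The voxel size, clearance value and all names are illustrative assumptions, not the thesis's code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

VOXEL = 0.005  # assumed 5 mm voxel edge length

def free_space_distance(occupancy):
    """occupancy: boolean 3D array, True inside the object.
    Returns, per voxel, the distance in metres to the nearest occupied voxel."""
    return distance_transform_edt(~occupancy) * VOXEL

def hand_collides(hand_points, dist_field, grid_origin, clearance=0.004):
    """hand_points: (N, 3) sample points on the hand surface, in metres.
    Flags a collision if any sampled point lies closer to the object than
    the required clearance."""
    idx = np.floor((hand_points - grid_origin) / VOXEL).astype(int)
    inside = np.all((idx >= 0) & (idx < dist_field.shape), axis=1)
    # Points falling outside the grid are ignored here; the thesis presumably
    # handles workspace bounds explicitly.
    return bool(np.any(dist_field[tuple(idx[inside].T)] < clearance))
```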

    Characterizing the Effects of Local Latency on Aim Performance in First Person Shooters

    Real-time games such as first-person shooters (FPS) are sensitive to even small amounts of lag. The effects of network latency have been studied, but less is known about local latency -- that is, the lag caused by local sources such as input devices, displays, and the application. While local latency is important to gamers, we do not know how it affects aiming performance and whether we can reduce its negative effects. To explore these issues, we tested local latency in a variety of real-world gaming systems and carried out a controlled study focusing on targeting and tracking activities in an FPS game with varying degrees of local latency. In addition, we tested the ability of a lag compensation technique (based on aim assistance) to mitigate the negative effects. To motivate the need for these studies, we also examined how aim in FPS differs from pointing in standard 2D tasks, showing significant differences in performance metrics. Our studies found local latencies in real-world systems ranging from 23 to 243 ms, which cause significant and substantial degradation in performance (even at latencies as low as 41 ms). The studies also showed that our compensation technique worked well, reducing the problems caused by lag in the case of targeting, and removing the problem altogether in the case of tracking. Our work shows that local latency is a real and substantial problem -- but game developers can mitigate the problem with appropriate compensation methods.
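    The paper's lag compensation is described only as being based on aim assistance; the sketch below shows one common form such assistance can take (damping aim sensitivity near a target), purely as an illustration. The assist radius, damping factor and names are assumptions, not the paper's technique.

```python
import math

def assisted_aim_delta(raw_delta, reticle, target, assist_radius=40.0, damping=0.5):
    """Scale this frame's aim movement down while the reticle is near a target.

    raw_delta: (dx, dy) from the input device this frame, in pixels.
    reticle, target: (x, y) screen positions in pixels."""
    dist = math.hypot(target[0] - reticle[0], target[1] - reticle[1])
    if dist < assist_radius:
        # Lower sensitivity makes the small final corrections easier to land
        # despite the extra end-to-end lag.
        return (raw_delta[0] * damping, raw_delta[1] * damping)
    return raw_delta
```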
    • 
