35 research outputs found

    Tangible Viewports: Getting Out of Flatland in Desktop Environments

    Spatial augmented reality and tangible interaction enrich the standard computer I/O space. Systems based on such modalities offer new user experiences and open up interesting perspectives in various fields. On the other hand, such systems tend to live outside the standard desktop paradigm and, as a consequence, do not benefit from the richness and versatility of desktop environments. In this work, we propose to join physical visualization and tangible interaction within a standard desktop environment. We introduce the concept of the Tangible Viewport, an on-screen window that creates a dynamic link between augmented objects and computer screens, allowing a screen-based cursor to move onto the object in a seamless manner. We describe an implementation of this concept and explore the interaction space around it. A preliminary evaluation shows that the metaphor is transparent to users while providing the benefits of tangibility.
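
    The central mechanism is the seamless cursor transition from the screen onto the augmented object. The sketch below shows one plausible way such a handoff could work, assuming a calibrated window rectangle and a window-to-object mapping; the names (ViewportWindow, to_object_uv, route_cursor) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class ViewportWindow:
    """On-screen window bound to a tracked, projector-augmented object (hypothetical)."""
    x: int
    y: int
    width: int
    height: int

    def contains(self, sx: int, sy: int) -> bool:
        return self.x <= sx < self.x + self.width and self.y <= sy < self.y + self.height

    def to_object_uv(self, sx: int, sy: int) -> tuple[float, float]:
        # Normalise the cursor position inside the window; a real system would
        # further map this through the object's tracked pose and projector calibration.
        return (sx - self.x) / self.width, (sy - self.y) / self.height

def route_cursor(sx: int, sy: int, viewport: ViewportWindow):
    """Decide whether the cursor stays on the screen or 'moves onto' the object."""
    if viewport.contains(sx, sy):
        u, v = viewport.to_object_uv(sx, sy)
        return ("object", u, v)   # cursor rendered on the physical object's surface
    return ("screen", sx, sy)     # ordinary desktop cursor

# Usage: a cursor at (650, 400) falls inside a 400x300 viewport placed at (500, 300).
print(route_cursor(650, 400, ViewportWindow(500, 300, 400, 300)))
```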

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input produces results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects that productivity. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation that characterizes the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
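
    As a concrete illustration of Raskin's definition, the handler in this sketch interprets the same one-finger drag differently depending on the current mode. The code is invented for illustration and is not taken from the thesis; Canvas is a hypothetical stand-in for an application object.

```python
from enum import Enum, auto

class Mode(Enum):
    DRAW = auto()
    PAN = auto()
    SELECT = auto()

class Canvas:
    """Minimal stand-in that just records what each drag was interpreted as."""
    def __init__(self):
        self.log = []

    def add_stroke(self, dx, dy): self.log.append(("stroke", dx, dy))
    def scroll(self, dx, dy): self.log.append(("scroll", dx, dy))
    def rubber_band(self, dx, dy): self.log.append(("select", dx, dy))

def handle_drag(mode: Mode, dx: float, dy: float, canvas: Canvas) -> None:
    """The same drag gesture maps to a different action in each mode."""
    if mode is Mode.DRAW:
        canvas.add_stroke(dx, dy)        # ink a line
    elif mode is Mode.PAN:
        canvas.scroll(-dx, -dy)          # move the viewport
    elif mode is Mode.SELECT:
        canvas.rubber_band(dx, dy)       # grow a selection rectangle

# Usage: the identical input (a 10x5 drag) produces three different results.
canvas = Canvas()
for mode in Mode:
    handle_drag(mode, 10, 5, canvas)
print(canvas.log)   # [('stroke', 10, 5), ('scroll', -10, -5), ('select', 10, 5)]
```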

    A user interface for terrain modelling in virtual reality using a head mounted display

    The increased commercial availability of virtual reality (VR) devices has resulted in more content being created for virtual environments (VEs). This content creation has mainly taken place using traditional desktop systems, but certain applications are now integrating VR into the creation pipeline. We therefore look at the effectiveness of creating content, specifically designing terrains, for use in immersive environments using VR technology. To do this, we develop a VR interface for terrain creation based on an existing desktop application. The interface incorporates a head-mounted display and 6-degree-of-freedom controllers. This allows user controls to be mapped to more natural movements compared to the abstract controls of mouse-and-keyboard systems, and it means that users can view the terrain in full 3D thanks to the inherent stereoscopy of the VR display. The interface goes through three iterations of user-centred design and testing, with paper and low-fidelity prototypes being created before the final interface is developed. The performance of this final VR interface is then compared to the desktop interface on which it was based. We carry out user tests to assess the performance of each interface in terms of speed, accuracy, and usability. From our results we find that there is no significant difference between the interfaces in terms of accuracy, but the desktop interface is superior in terms of speed, while the VR interface was rated as having higher usability. Some of the possible reasons for these results are discussed, such as users preferring the natural interactions offered by the VR interface but not having sufficient training to take full advantage of them. Finally, we conclude that while neither interface was shown to be clearly superior, there is certainly room for further exploration of this research area. Recommendations for how to incorporate lessons learned during the creation of this dissertation into further research are also made.
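
    The dissertation is about the interface rather than the terrain algorithms, but as a rough sketch of the kind of editing operation such an interface would drive, the code below applies a Gaussian height brush to a heightmap; in a VR interface the brush centre would come from intersecting the controller's pointing ray with the terrain. This is an assumption for illustration, not the system described above.

```python
import numpy as np

def apply_height_brush(heightmap: np.ndarray, cx: int, cy: int,
                       radius: float, strength: float) -> None:
    """Raise (strength > 0) or lower (strength < 0) the terrain around (cx, cy).

    A Gaussian falloff keeps edits smooth and localized.
    """
    h, w = heightmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (xs - cx) ** 2 + (ys - cy) ** 2
    falloff = np.exp(-dist2 / (2.0 * radius ** 2))
    heightmap += strength * falloff

# Usage: raise a small hill in the middle of a flat 128x128 terrain.
terrain = np.zeros((128, 128), dtype=np.float32)
apply_height_brush(terrain, cx=64, cy=64, radius=10.0, strength=2.0)
print(terrain.max())   # ~2.0 at the brush centre
```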

    A Symmetric Interaction Model for Bimanual Input

    People use both hands together cooperatively in many everyday activities. The modern computer interface fails to take advantage of this basic human ability, with the exception of the keyboard. However, the keyboard is limited in that it does not afford continuous spatial input. The computer mouse is perfectly suited to the point-and-click tasks that are the major method of manipulation within graphical user interfaces, but standard computers have a single mouse, and a single mouse does not afford spatial coordination between the two hands within the graphical user interface. Although the advent of the Universal Serial Bus has made it easy to plug in many peripheral devices, including a second mouse, modern operating systems work on the assumption of a single spatial input stream. Thus, if a second mouse is plugged into a Macintosh, Windows, or UNIX computer, the two mice control the same cursor. Previous work on two-handed or bimanual interaction techniques has often followed the asymmetric interaction guidelines set out by Yves Guiard's Kinematic Chain Model; in asymmetric interaction, the hands are assigned different tasks based on hand dominance. I show that there is an interesting class of desktop user interface tasks that can be classified as symmetric: a symmetric task is one in which the two hands contribute equally to the completion of a unified task. I show that dual-mouse symmetric interaction techniques outperform traditional single-mouse techniques as well as dual-mouse asymmetric techniques for these symmetric tasks, and that users prefer the symmetric interaction techniques for these naturally symmetric tasks.
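
    For a flavour of what "contributing equally" means, here is a small sketch, invented for illustration rather than taken from the thesis, in which two cursors, one per mouse, each pin one corner of a rectangle, so moving either hand reshapes the figure in the same way.

```python
from dataclasses import dataclass

@dataclass
class Cursor:
    x: float
    y: float

def symmetric_rectangle(left: Cursor, right: Cursor) -> tuple[float, float, float, float]:
    """Both hands contribute equally: each cursor pins one corner of the rectangle.

    Returns (x, y, width, height). Neither cursor is dominant; swapping the hands
    yields the same result, which is what makes the task symmetric rather than
    asymmetric in Guiard's sense.
    """
    x0, x1 = sorted((left.x, right.x))
    y0, y1 = sorted((left.y, right.y))
    return x0, y0, x1 - x0, y1 - y0

# Usage: each mouse drives one cursor; the rectangle spans the two of them.
print(symmetric_rectangle(Cursor(10, 20), Cursor(110, 80)))   # (10, 20, 100, 60)
```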

    The tool space

    Visions of futuristic desktop computer work spaces have often incorporated large interactive surfaces that either integrate into or replace the prevailing desk setup of displays, keyboard, and mouse. Such visions often connote the distinct characteristics of direct touch interaction, e.g. by transforming the desktop into a large touch screen that allows interacting with content using one’s bare hands. However, the role of interactive surfaces for desktop computing need not be restricted to enabling direct interaction. Especially for prolonged interaction times, the separation of visual focus and manual input has proven to be ergonomic and is usually supported by vertical monitors and separate – hence indirect – input devices placed on the horizontal desktop. If we want to maintain this ergonomically matured style of computing while introducing interactive desktop displays, the following question arises: how can and should this novel input and output modality affect prevailing interaction techniques? While touch input devices such as trackpads and graphics tablets have been used for decades in desktop computing, the dynamic rendering of content and the increasing physical dimensions of novel interactive surfaces open up new design opportunities for direct, indirect, and hybrid touch input techniques. Informed design decisions require careful consideration of the relationship between input sensing, visual display, and applied interaction styles. Previous work in the context of desktop computing has focused on understanding the dual-surface setup as a holistic unit that supports direct touch input and allows the seamless transfer of objects across horizontal and vertical surfaces. In contrast, this thesis assumes separate spaces for input (horizontal input space) and output (vertical display space) and contributes to the understanding of how interactive surfaces can enrich indirect input for complex tasks, such as 3D modeling or audio editing. The contribution of this thesis is threefold. First, we present a set of case studies on user interface design for dual-surface computer workspaces. These case studies cover several application areas, such as gaming, music production and analysis, and collaborative visual layout, and comprise formative evaluations. On the one hand, these case studies highlight the conflict that arises when the direct touch interaction paradigm is applied to dual-surface workspaces; on the other hand, they indicate how the deliberate avoidance of established input devices (i.e. mouse and keyboard) leads to novel design ideas for indirect touch-based input. Second, we introduce our concept of the tool space as an interaction model for dual-surface workspaces, derived from a theoretical argument and the preceding case studies. The tool space dynamically renders task-specific input areas that enable spatial command activation and increase input bandwidth by leveraging multi-touch and two-handed input. We further present evaluations of two concept implementations, in the domains of 3D modeling and audio editing, which demonstrate the high degrees of control, precision, and sense of directness that can be achieved with our tools. Third, we present experimental results that inform the design of the tool space input areas.
In particular, we contribute a set of design recommendations regarding the understanding of two-handed indirect multi-touch input and the impact of input area form factors on spatial cognition and navigation performance.
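
    To make the tool space idea more tangible, here is a minimal sketch, with invented names (InputArea, route_touch) and behaviour rather than the thesis implementation, of routing touches on the horizontal surface into dynamically rendered, task-specific input areas instead of treating them as direct manipulation of on-screen content.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class InputArea:
    """A task-specific region rendered on the horizontal surface (hypothetical)."""
    name: str
    x: float
    y: float
    width: float
    height: float
    on_touch: Callable[[float, float], None]   # receives area-local coordinates

    def contains(self, tx: float, ty: float) -> bool:
        return self.x <= tx < self.x + self.width and self.y <= ty < self.y + self.height

def route_touch(areas: list[InputArea], tx: float, ty: float) -> None:
    """Dispatch a touch on the horizontal surface to whichever tool area it lands in."""
    for area in areas:
        if area.contains(tx, ty):
            area.on_touch(tx - area.x, ty - area.y)
            return

# Usage: one area orbits the 3D camera, another scrubs an audio timeline.
areas = [
    InputArea("orbit", 0, 0, 400, 300, lambda u, v: print(f"orbit camera {u:.0f},{v:.0f}")),
    InputArea("scrub", 0, 320, 400, 80, lambda u, v: print(f"scrub to {u:.0f}")),
]
route_touch(areas, 120, 350)   # falls in the 'scrub' area
```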

    Freeform 3D interactions in everyday environments

    Personal computing is continuously moving away from traditional input using mouse and keyboard as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world and focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which follows a similar goal of further reducing the gap between the real and the virtual, but predominantly focuses on output, by overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output and seamless 3D input, these have mostly been looked at separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while the input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restricting when interacting in the real world. NUI and AR can be seen as very complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low-latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and to meaningfully combine the positive qualities that are attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness, and mobility in various physical settings. A number of technical challenges arise when designing a mixed NUI/AR system, which I address in this work: What can we capture, and how? How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.

    Shaping 3-D Volumes in Immersive Virtual Environments

    Interactions gestuelles multi-point et géométrie déformable pour l’édition 3D sur écran tactile

    Despite advances in the capture of existing objects and in procedural generation, the creation of content for virtual worlds cannot be performed without human interaction. This thesis proposes to exploit new touch devices ("multi-touch" screens) to provide simple, intuitive 2D interaction for navigating a virtual environment and for manipulating, positioning, and deforming 3D objects. First, we study the possibilities and limitations of hand and finger gestures when interacting on a touch screen, in order to discover which gestures are best suited to editing 3D scenes and environments. In particular, we evaluate the effective number of degrees of freedom of the human hand when its movement is constrained to a planar surface. We also develop a new phase-based gesture analysis method that identifies key hand and finger motions in real time. These results, combined with several targeted user studies, lead to a gestural design pattern that handles not only navigation (camera positioning) but also object positioning, rotation, and global scaling. This pattern is then extended to complex deformations (such as adding and deleting material, or bending and twisting parts of objects, with local control). Using these results, we propose and evaluate a 3D world editing interface that supports natural touch interaction, in which mode selection (navigation, object positioning, or object deformation) and task selection are handled automatically by the system based on the gesture and the interaction context (without any menus or buttons). Finally, we extend this interface to more complex deformations by adapting garment transfer from one character to another, allowing the garment to be deformed interactively while the character wearing it is deformed.
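
    The claim that mode selection happens without any menus or buttons can be illustrated with a toy rule set. The rules and names below (EditMode, infer_mode) are invented for illustration; the thesis derives its actual mapping from gesture studies and a phase-based analysis of hand motion.

```python
from enum import Enum, auto

class EditMode(Enum):
    NAVIGATE = auto()   # camera positioning
    POSITION = auto()   # move/rotate/scale an object
    DEFORM = auto()     # local deformation of an object

def infer_mode(finger_count: int, touched_object: bool) -> EditMode:
    """Pick an editing mode from the gesture and its context, with no menus or buttons.

    The thresholds here are purely illustrative assumptions.
    """
    if not touched_object:
        return EditMode.NAVIGATE      # fingers on empty space move the camera
    if finger_count <= 2:
        return EditMode.POSITION      # one or two fingers on an object place it
    return EditMode.DEFORM            # more contacts switch to local deformation

# Usage: three fingers landing on an object would start a deformation.
print(infer_mode(finger_count=3, touched_object=True))   # EditMode.DEFORM
```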