
    Interactive Visualization Lenses: Natural Magic Lens Interaction for Graph Visualization

    Information visualization is an important research field concerned with making sense of, and inferring knowledge from, data collections. Graph visualizations are specific techniques for representing such data and are relevant in diverse application domains, among them biology, software engineering, and business finance. These visualizations benefit from the display space provided by novel interactive large display environments. However, these environments also pose new challenges, creating requirements for interaction beyond the desktop and a corresponding redesign of analysis tools. This thesis focuses on interactive magic lenses: specialized, locally applied tools that temporarily manipulate the visualization. These may include magnification of focus regions, but also more graph-specific functions such as pulling in neighboring nodes or locally reducing edge clutter. Up to now, these lenses have mostly been used as single-user, single-purpose tools operated by mouse and keyboard. This dissertation extends magic lenses both in function and in interaction for large vertical displays. In particular, it contributes several natural interaction designs with magic lenses for the exploration of graph data in node-link visualizations using diverse interaction modalities. This development incorporates flexible switching between lens functions, adjustment of individual lens properties and function parameters, and the combination of lenses. It proposes interaction techniques for fluent multi-touch manipulation of lenses, for controlling lenses with mobile devices in front of large displays, and a novel concept of body-controlled magic lenses. Functional extensions, in combination with these interaction techniques, turn the lenses into user-configurable, personal territories that support alternative interaction styles. To create the foundation for this extension, the dissertation contributes a comprehensive design space of magic lenses covering their functions, parameters, and interactions. Additionally, it discusses increased embodiment in tool and controller design, contributing insights into user position and movement in front of large vertical displays drawn from empirical investigations and evaluations.
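    The local, temporary nature of a magic lens can be illustrated with a short sketch: only nodes inside the lens region are displaced for rendering, the underlying graph data stays untouched, and the lens function is swappable. The following is a minimal illustration in Python using the classic Sarkar-Brown fisheye distortion as one lens function; the class and function names are assumptions made for this sketch, not the dissertation's implementation.

        import math
        from dataclasses import dataclass

        @dataclass
        class Node:
            x: float
            y: float

        def fisheye(node, cx, cy, radius, m=3.0):
            # Sarkar-Brown distortion g(t) = (m+1)t / (mt+1): positions near the
            # lens center spread apart (magnify), positions near the rim compress.
            dx, dy = node.x - cx, node.y - cy
            dist = math.hypot(dx, dy)
            if dist == 0 or dist >= radius:
                return node.x, node.y          # outside the lens: unaffected
            g = (m + 1) * (dist / radius) / (m * (dist / radius) + 1)
            return cx + dx / dist * g * radius, cy + dy / dist * g * radius

        class MagicLens:
            def __init__(self, cx, cy, radius, func=fisheye):
                self.cx, self.cy, self.radius = cx, cy, radius
                self.func = func               # swappable lens function

            def render_positions(self, nodes):
                # Temporary, local manipulation: displaced positions are computed
                # for drawing only; the graph itself is never modified.
                return [self.func(n, self.cx, self.cy, self.radius) for n in nodes]

    A graph-specific function such as pulling in neighboring nodes would follow the same pattern, replacing only func while the lens geometry and interaction handling stay the same.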

    Extending touch with eye gaze input

    Direct touch manipulation with displays has become one of the primary means by which people interact with computers. Exploring new interaction methods that work in unity with the standard direct manipulation paradigm will benefit the many users of such an input paradigm. In many instances of direct interaction, both the eyes and hands play an integral role in accomplishing the user's interaction goals. The eyes visually select objects, and the hands physically manipulate them. In principle this process includes a two-step selection of the same object: users first look at the target, and then move their hand to it for the actual selection. This thesis explores human-computer interactions where the principle of direct touch input is fundamentally changed through the use of eye-tracking technology. The change we investigate is a general reduction to a one-step selection process. The need to select using the hands can be eliminated by utilising eye-tracking to enable users to select an object of interest using their eyes only, by simply looking at it. Users then employ their hands for manipulation of the selected object; however, they can manipulate it from anywhere, as the selection is rendered independent of the hands. When a spatial offset exists between the hands and the object, the user's manual input is indirect. This allows users to manipulate any object they see from any manual input position. This fundamental change can have a substantial effect on the many human-computer interactions that involve user input through direct manipulation, such as temporary touchscreen interactions. However, it is unclear if, when, and how it can become beneficial to users of such an interaction method. To approach these questions, our research is guided by two propositions. The first proposition is that gaze input can transform a direct input modality such as touch into an indirect modality, and with it provide new and powerful interaction capabilities. We develop this proposition in the context of our investigation of integrated gaze interactions within direct manipulation user interfaces. We first regard eye gaze for generic multi-touch displays, introducing Gaze-Touch as a technique based on a division of labour: gaze selects and touch manipulates. We investigate this technique with a design space analysis, prototyping of application examples, and an informal user evaluation. The proposition is further developed by an exploration of hybrid eye and hand inputs: with a stylus, for precise and cursor-based indirect control; with bimanual input, to rapidly issue input from two hands to gaze-selected objects; with tablets, where Gaze-Touch enables one-handed interaction across the whole screen with the same hand that holds the device; and with free-hand gestures in virtual reality, to interact at a distance with any viewed object in the virtual scene. Overall, we demonstrate that using eye gaze to enable indirect input yields many interaction benefits, such as whole-screen reachability, occlusion-free manipulation, high-precision cursor input, and low physical effort. Integration of eye gaze with manual input raises new questions about how it can complement, instead of replace, the direct interactions users are familiar with. This is important to allow users the choice between direct and indirect inputs, as each affords distinct pros and cons for the usability of human-computer interfaces.
    These two input forms are normally considered separately from each other, but here we investigate interactions that combine them within the same interface. In this context, the second proposition is that gaze and touch input enable new and seamless ways of combining direct and indirect forms of interaction. We develop this proposition by regarding multiple interaction tasks that a user usually performs in a sequence, or simultaneously. First, we introduce a method that lets users switch between both input forms by implicitly exploiting visual attention during manual input. Direct input is active when users look at the point of manual input; otherwise, they indirectly manipulate the object they are looking at. A design application for typical drawing and vector-graphics tasks has been prototyped to illustrate and explore this principle. The application contributes many example use cases where direct drawing activities are complemented with indirect menu actions, precise cursor inputs, and seamless context switching at a glance. We further develop the proposition by investigating simultaneous direct and indirect input through bimanual interaction, where each input form is assigned to one hand. We present an empirical study with an in-depth analysis of using indirect navigation with one hand and direct pen drawing with the other. We extend this input constellation to tablet devices by designing compound techniques for use in a more naturalistic setting where one hand holds the device. The interactions show that many typical tablet scenarios, such as browsing, map navigation, homescreen selections, or image gallery viewing, can be enhanced by exploiting eye gaze.
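    The division of labour at the core of Gaze-Touch, gaze selects and touch manipulates, can be sketched as a simple event-routing rule: the touch-down position no longer determines the target, the current gaze point does. The sketch below is a hypothetical illustration; the tracker and scene interfaces are assumptions, not the thesis's actual API.

        class GazeTouchRouter:
            def __init__(self, scene, gaze_tracker):
                self.scene = scene        # assumed: scene.pick(x, y) -> object or None
                self.gaze = gaze_tracker  # assumed: gaze_tracker.current_point() -> (x, y)
                self.target = None

            def on_touch_down(self, x, y):
                # One-step selection: the target is whatever the eyes rest on,
                # regardless of where the finger lands on the display.
                gx, gy = self.gaze.current_point()
                self.target = self.scene.pick(gx, gy)

            def on_touch_move(self, dx, dy):
                # Manual input is applied relatively, so the hand can operate
                # from anywhere: whole-screen reach without occluding the target.
                if self.target is not None:
                    self.target.move_by(dx, dy)

            def on_touch_up(self):
                self.target = None

    Checking whether the gaze point coincides with the touch point itself would re-enable classic direct manipulation, which is essentially the implicit mode switch the drawing application described above exploits.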

    Kosketuskäyttöliittymän toteuttaminen olemassa olevaan ohjelmaan (Implementing a Touch User Interface in an Existing Application)

    The purpose of this work was to evaluate the steps required to migrate a windowed desktop application into touch-enabled software. The study was conducted on an existing building information modelling application, Tekla BIMsight. The task was to retain all the functionality already in the software while making it usable on touch-enabled devices, such as tablets or convertible laptops with a swivel display. The design and implementation of the system are documented as part of the thesis, along with the most problematic issues encountered. The effects of the implementation were validated and tested with real users, and the results of that study are documented. The usability study was conducted to obtain quantitative and qualitative usability metrics. The nature of the input mechanism, direct or indirect, affects the user experience greatly. The final system should be as responsive as possible to maintain a good level of perceived performance. Early prototyping and access to the target devices are critical to the success of a migration process. There are several common mistakes that should be avoided in the design and implementation phases; not all of the problems encountered were critical, but many were identified as cumbersome enough to hurt the user experience if left unaddressed. With each new user-interface context, such problems must be solved anew, and only experience with similar solutions eases this task. The implemented touch support meets the set requirements well: it allows the system to be used in touch-based input environments, and all the major user interface elements support this.
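    One recurring implementation concern in such a migration is routing mouse and touch through one input abstraction so existing features keep working, while compensating for the lower precision of fingers. A minimal, purely illustrative sketch of touch-aware hit-testing follows; the names and slop values are assumptions, not Tekla BIMsight's actual code.

        from dataclasses import dataclass
        from enum import Enum, auto

        class Source(Enum):
            MOUSE = auto()
            TOUCH = auto()

        @dataclass
        class Widget:
            x: float
            y: float
            width: float
            height: float

        # A fingertip is far less precise than a cursor, so touch input gets a
        # larger invisible margin around each target instead of every widget
        # being redesigned.
        HIT_SLOP = {Source.MOUSE: 2.0, Source.TOUCH: 12.0}

        def hit_test(widgets, px, py, source):
            slop = HIT_SLOP[source]
            for w in widgets:
                if (w.x - slop <= px <= w.x + w.width + slop and
                        w.y - slop <= py <= w.y + w.height + slop):
                    return w
            return None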

    Bringing the Physical to the Digital

    This dissertation describes an exploration of digital tabletop interaction styles, with the ultimate goal of informing the design of a new model for tabletop interaction. In the context of this thesis, the term digital tabletop refers to an emerging class of devices that afford many novel ways of interacting with the digital, allowing users to directly touch information presented on large, horizontal displays. As this is a relatively young field, many developments are in flux: hardware and software change at a fast pace, and many interesting alternative approaches are available at the same time. In our research we are especially interested in systems that are capable of sensing multiple contacts (e.g., fingers) and richer information such as the outline of whole hands or other physical objects. New sensor hardware enables new ways to interact with the digital. When the research for this thesis began, which interaction styles would be appropriate for this new class of devices was an open question with many equally promising answers. Many everyday activities rely on our hands' ability to skillfully control and manipulate physical objects. We seek to exploit this manual dexterity and provide users with richer interaction possibilities, whether through physical objects as input mediators or through virtual interfaces that behave in a more realistic fashion. To gain a better understanding of the underlying design space, we chose an approach organized into two phases. First, two prototypes, each representing a specific interaction style, namely gesture-based interaction and tangible interaction, were implemented. The flexibility of use afforded by the interface and the level of physicality afforded by the interface elements are introduced as criteria for evaluation. Each approach's suitability to support the highly dynamic and often unstructured interactions typical of digital tabletops is analyzed based on these criteria. In a second stage, the lessons from these initial explorations inform the design of a novel model for digital tabletop interaction. This model combines rich multi-touch sensing with a three-dimensional environment driven by a gaming physics simulation. The proposed approach enables users to interact with the virtual through richer quantities such as collision and friction, enabling a variety of fine-grained interactions using multiple fingers, whole hands, and physical objects. Our model makes digital tabletop interaction even more “natural”. However, because the interaction, that is, the sensed input and the displayed output, is still bound to the surface, there is a fundamental limitation in manipulating objects using the third dimension. To address this issue, we present a technique that allows users to, conceptually, pick objects off the surface and control their position in 3D. Our goal has been to define a technique that completes our model for on-surface interaction and allows for “as-direct-as-possible” interactions. We also present two hardware prototypes capable of sensing the users' interactions beyond the table's surface. Finally, we present visual feedback mechanisms that give users the sense that they are actually lifting the objects off the surface. This thesis contributes on various levels. We present several novel prototypes that we built and evaluated.
    We use these prototypes to systematically explore the design space of digital tabletop interaction. The flexibility of use afforded by the interaction style is introduced as a criterion alongside the physicality of the user interface elements. Each approach's suitability to support the highly dynamic and often unstructured interactions typical of digital tabletops is analyzed. We present a new model for tabletop interaction that increases the fidelity of interaction possible in such settings. Finally, we extend this model so as to enable as-direct-as-possible interactions with 3D data, interacting from above the table's surface.
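    The proposed model's key move, feeding sensed contacts into a physics simulation so that virtual objects respond through collision and friction rather than through abstract gesture commands, can be sketched in a few lines. The following toy 2D step function is an illustration of the idea only, with invented constants; it is not the dissertation's engine.

        from dataclasses import dataclass

        @dataclass
        class Disc:
            x: float
            y: float
            vx: float = 0.0
            vy: float = 0.0
            r: float = 20.0

        FRICTION = 0.98  # per-step damping, a crude stand-in for surface friction

        def step(discs, contacts, dt=1.0 / 60, contact_r=8.0, stiffness=40.0):
            # Each sensed contact (finger, hand-outline sample, object edge)
            # acts as a small kinematic body: where it overlaps a disc it applies
            # a penalty force along the contact normal, so dragging a finger
            # against an object's side nudges it instead of teleporting it.
            for d in discs:
                for cx, cy in contacts:
                    dx, dy = d.x - cx, d.y - cy
                    dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
                    overlap = d.r + contact_r - dist
                    if overlap > 0:
                        d.vx += stiffness * overlap * (dx / dist) * dt
                        d.vy += stiffness * overlap * (dy / dist) * dt
                d.vx *= FRICTION
                d.vy *= FRICTION
                d.x += d.vx * dt
                d.y += d.vy * dt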

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    As humans are exposed to rapidly evolving complex systems, there is a growing need for humans and systems to use multiple communication modalities, such as auditory, vocal (or speech), gesture, or visual channels; thus, it is important to evaluate multimodal human-machine interactions in multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experimental settings using human subjects, which are costly and time-consuming to conduct. To minimize the limitations of traditional usability tests, digital human models are often developed and used; they also help us better understand underlying human mental processes so as to effectively improve safety and avoid mental overload. In this dissertation research, I have combined computational cognitive modeling and experimental methods to study mental processes and identify differences in human performance and workload in various conditions. The computational cognitive models were implemented by extending the Queuing Network-Model Human Processor (QN-MHP) architecture, which enables simulation of human multi-task behaviors and multimodal interactions in human-machine systems. Three experiments were conducted to investigate human behaviors in multimodal and multitasking scenarios, addressing three specific research aims: to understand (1) how humans use their finger movements to input information on touchscreen devices (i.e., touchscreen gestures), (2) how humans use auditory/vocal signals to interact with machines (i.e., audio/speech interaction), and (3) how humans drive vehicles (i.e., driving controls). Future research applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation research contribute significantly to our understanding of the nature of touchscreen gestures, audio/speech interaction, and driving controls in human-machine systems, and of whether they benefit or jeopardize human performance and safety in multimodal and concurrent-task environments. Moreover, in contrast to previous models of multitasking scenarios that focus mainly on visual processes, this study develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From a practical perspective, the modeling work conducted in this research may help multimodal interface designers minimize the limitations of traditional usability tests and make quick design comparisons, less constrained by time-consuming factors such as developing prototypes and running human subjects. Furthermore, the research conducted in this dissertation may help identify which elements of multimodal and multitasking scenarios increase workload and completion time, which can be used to reduce the number of accidents and injuries caused by distraction.
    PhD, Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd
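    The queueing-network idea underlying QN-MHP can be conveyed with a toy tandem queue in which every task traverses perceptual, cognitive, and motor stages, and concurrent tasks contend for the same servers. The stage names and service times below are invented for illustration and are not QN-MHP's actual parameters.

        # Toy serial pipeline loosely inspired by perceptual -> cognitive -> motor
        # processing; service times are in seconds and purely illustrative.
        STAGES = [("perceptual", 0.10), ("cognitive", 0.07), ("motor", 0.03)]

        def simulate(arrival_times):
            # Tandem queue with one server per stage: a task waits whenever the
            # stage is still busy with an earlier task, which is how multitasking
            # interference shows up as longer completion times.
            free_at = {name: 0.0 for name, _ in STAGES}
            finish = {}
            for task_id, t in enumerate(arrival_times):
                for name, service in STAGES:
                    start = max(t, free_at[name])  # queueing delay, if any
                    t = start + service
                    free_at[name] = t
                finish[task_id] = t
            return finish

        # Two tasks arriving 50 ms apart: the second one's completion time
        # includes the delay from contending for the perceptual stage.
        print(simulate([0.00, 0.05]))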

    Designing to Support Workspace Awareness in Remote Collaboration using 2D Interactive Surfaces

    Increasing distributions of the global workforce are leading to collaborative work among remote coworkers. The emergence of such remote collaborations is essentially supported by technology advancements of screen-based devices ranging from tablets or laptops to large displays. However, these devices, especially personal and mobile computers, still suffer from certain limitations caused by their form factors that hinder supporting workspace awareness through non-verbal communication such as bodily gestures or gaze. This thesis thus aims to design novel interfaces and interaction techniques to improve remote coworkers' workspace awareness through such non-verbal cues using 2D interactive surfaces. The thesis starts off by exploring how visual cues support workspace awareness in facilitated brainstorming of hybrid teams of co-located and remote coworkers. Based on insights from this exploration, the thesis introduces three interfaces for mobile devices that help users maintain and convey their workspace awareness with their coworkers. The first interface is a virtual environment that allows a remote person to effectively maintain his/her awareness of his/her co-located collaborators' activities while interacting with the shared workspace. To help a person better express his/her hand gestures in remote collaboration using a mobile device, the second interface presents a lightweight add-on for capturing hand images on and above the device's screen and overlaying them on collaborators' devices to improve their workspace awareness. The third interface strategically leverages the entire screen space of a conventional laptop to better convey a remote person's gaze to his/her co-located collaborators. Building on top of these three interfaces, the thesis envisions an interface that supports a person using a mobile device to effectively collaborate with remote coworkers working with a large display. Together, these interfaces demonstrate the possibilities to innovate on commodity devices to offer richer non-verbal communication and better support workspace awareness in remote collaboration.
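    The second interface's core operation, capturing the hand above the screen and compositing it semi-transparently onto the collaborator's view, amounts to masked alpha blending. A minimal sketch with NumPy follows; the array layout and the availability of a hand segmentation mask are assumptions made for illustration.

        import numpy as np

        def overlay_hand(remote_view, hand_frame, hand_mask, alpha=0.5):
            # remote_view: HxWx3 uint8 frame of the shared workspace
            # hand_frame:  HxWx3 uint8 camera frame of the hand above the screen
            # hand_mask:   HxW bool array marking hand pixels (segmentation assumed)
            out = remote_view.astype(np.float32)
            hand = hand_frame.astype(np.float32)
            m = hand_mask[..., None]  # broadcast mask across the color channels
            # Semi-transparent blend: the hand conveys the gesture without
            # fully occluding the workspace content beneath it.
            out = np.where(m, (1 - alpha) * out + alpha * hand, out)
            return out.astype(np.uint8)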

    Freeform 3D interactions in everyday environments

    Personal computing is continuously moving away from traditional input using mouse and keyboard, as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world and focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which follows a similar goal to further reduce the gap between the real and the virtual, but predominantly focuses on output, by overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output and seamless 3D input, these have mostly been looked at separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while the input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restricting when interacting in the real world. NUI and AR can be seen as very complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low-latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and to meaningfully combine the positive qualities that are attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness, and mobility in various physical settings. There are a number of technical challenges that arise when designing a mixed NUI/AR system, which I address in this work: What can we capture, and how? How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.

    Gesture Interaction at a Distance

    The aim of this work is to explore, from a perspective of human behavior, which gestures are suited to control large display surfaces from a short distance away; why that is so; and, equally important, how such an interface can be made a reality. A well-known example of the type of interface that is the focus of this thesis is portrayed in the science fiction movie ‘Minority Report’. The lead character of this movie uses hand gestures such as pointing, picking up, and throwing away to interact with a wall-sized display in a believable way. Believable, because the gestures are familiar from everyday life and because the interface responds predictably. Although only fictional in this movie, such gesture-based interfaces can, when realized, be applied in any environment that is equipped with large display surfaces: for example, in a laboratory for analyzing and interpreting large data sets; in interactive shopping windows, to casually browse a product list; and in the operating room, to easily access a patient's MRI scans. The common denominator is that the user cannot or may not touch the display: the interaction occurs at arm's length and larger distances.
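    Distance pointing of the ‘Minority Report’ kind is commonly operationalized by casting a ray from the user's head (or shoulder) through the hand and intersecting it with the display plane. Below is a minimal sketch under the assumption that tracker coordinates are display-aligned with the screen at z = 0; all names are illustrative.

        def point_on_display(head, hand, display_z=0.0):
            # Extend the head -> hand ray until it crosses the display plane.
            # head, hand: (x, y, z) positions from a body tracker (assumed input).
            # Returns the (x, y) intersection, or None when pointing away.
            hx, hy, hz = head
            px, py, pz = hand
            dz = pz - hz
            if abs(dz) < 1e-9:
                return None  # ray parallel to the display plane
            t = (display_z - hz) / dz
            if t <= 0:
                return None  # pointing away from the display
            return (hx + t * (px - hx), hy + t * (py - hy))

        # Example: head 2 m from the wall, hand half a meter in front of it.
        print(point_on_display(head=(0.0, 1.7, 2.0), hand=(0.1, 1.5, 1.5)))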