
    Real-Time Markerless Tracking the Human Hands for 3D Interaction

    This thesis presents methods for enabling human-computer interaction using only movements of the bare hands in free space. This kind of interaction is natural and intuitive, particularly because actions familiar from everyday life can be carried over. Furthermore, the input is contact-free, which is a great advantage in, for example, medical applications, for reasons of hygiene. Translating hand movements into control signals requires an automatic method for tracking the pose and/or posture of the hand. In this context, the simultaneous recognition of both hands is desirable, as it allows for more natural input. The first contribution of this thesis is a novel video-based method for real-time detection of the positions and orientations of both bare hands in each of four predefined postures. Based on such a system, novel interaction interfaces can be developed. However, the design of such interfaces is a non-trivial task, and the development of novel interaction techniques is often necessary before efficient and easily operable interfaces can be designed at all. To this end, several novel interaction techniques are presented and investigated in this thesis; they solve existing problems and substantially improve the applicability of such a new device. These techniques are not restricted to this input instrument and can also be employed to improve the handling of other interaction devices. Finally, several new interaction interfaces are described and analyzed to demonstrate possible applications in specific interaction scenarios.
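
    As a rough illustration of how such a video-based detector can be structured, the following sketch uses OpenCV 4 with naive skin-color segmentation and a placeholder posture classifier. The thesis's actual algorithm is not given in the abstract, so the skin-color range, the thresholds, and the solidity-based classification are assumptions for illustration only.

        # Hypothetical sketch of a two-hand detector; not the thesis's method.
        import cv2

        POSTURES = ["open", "fist", "point", "pinch"]  # four predefined postures (names assumed)

        def detect_hands(frame_bgr):
            """Return (center, orientation, posture) for up to two hand candidates."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))  # crude skin range
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            hands = []
            for c in sorted(contours, key=cv2.contourArea, reverse=True)[:2]:
                if cv2.contourArea(c) < 2000:  # discard small skin-colored blobs
                    continue
                (cx, cy), _, angle = cv2.minAreaRect(c)  # position and orientation
                # Placeholder classifier: a real system would match the contour
                # against trained templates of the four postures; solidity alone
                # only separates a fist from an open hand.
                solidity = cv2.contourArea(c) / cv2.contourArea(cv2.convexHull(c))
                posture = "fist" if solidity > 0.9 else "open"
                hands.append(((cx, cy), angle, posture))
            return hands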

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, in both academic and industrial settings. The rising demand for digital support of human activities has motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and to define opportunities and challenges for novel touch- and gesture-based interactions between humans and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and by the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.
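
    The two framing axes the review names (hardware technology and interaction modality) can be captured in a small data structure; this is only an illustrative encoding, and any label beyond the three quoted modalities is an assumption.

        # Illustrative encoding of the review's input-space axes (Python 3.9+).
        from dataclasses import dataclass, field
        from enum import Enum

        class Modality(Enum):
            MULTITOUCH = "multitouch"
            TANGIBLE = "tangible"
            TOUCHLESS = "touchless"

        @dataclass
        class TabletopSystem:
            name: str
            sensing: str  # e.g. "FTIR", "capacitive" (assumed example labels)
            modalities: set[Modality] = field(default_factory=set)

        table = TabletopSystem("example rig", "diffuse illumination",
                               {Modality.MULTITOUCH, Modality.TANGIBLE})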

    Improving Multi-Touch Interactions Using Hands as Landmarks

    Efficient command selection is just as important for multi-touch devices as it is for traditional interfaces that follow the Windows-Icons-Menus-Pointers (WIMP) model, but rapid selection in touch interfaces can be difficult because these systems often lack the mechanisms that have been used for expert shortcuts in desktop systems (such as keyboard shortcuts). Although interaction techniques based on spatial memory can improve the situation by allowing fast revisitation from memory, the lack of landmarks often makes it hard to remember command locations in a large set. One potential landmark that could be used in touch interfaces, however, is people's hands and fingers: these provide an external reference frame that is well known and always present when interacting with a touch display. To explore the use of hands as landmarks for improving command selection, we designed hand-centric techniques called HandMark menus. We implemented HandMark menus for two platforms: one version that allows bimanual operation on digital tables, and another that uses single-handed serial operation on handheld tablets; in addition, we developed variants for both platforms that support different numbers of commands. We tested the new techniques against standard selection methods, including tabbed menus and popup toolbars. The results of the studies show that HandMark menus perform well (in several cases significantly faster than standard methods) and that they support the development of spatial memory. Overall, this thesis demonstrates that people's intimate knowledge of their hands can be the basis for fast interaction techniques that improve the performance and usability of multi-touch systems.
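
    The core mechanism (commands anchored to the pose of a detected hand, selected by touching a remembered spot) can be sketched in a few lines; the slot geometry, counts, and tolerance below are hypothetical, not the published HandMark layouts.

        # Hedged sketch of hand-anchored command slots.
        import math

        def command_slots(hand_center, hand_angle_deg, n_commands=8, radius=120.0):
            """Place command targets on an arc anchored to the hand pose (pixels)."""
            cx, cy = hand_center
            step = math.pi / n_commands
            base = math.radians(hand_angle_deg)
            return [(cx + radius * math.cos(base + i * step),
                     cy + radius * math.sin(base + i * step))
                    for i in range(n_commands)]

        def pick_command(touch, slots, tolerance=40.0):
            """Return the index of the slot nearest the touch, or None if too far."""
            dists = [math.dist(touch, s) for s in slots]
            best = min(range(len(slots)), key=dists.__getitem__)
            return best if dists[best] <= tolerance else None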

    Stereoscopic bimanual interaction for 3D visualization

    Virtual environments (VEs) have been widely used for several decades in research fields such as 3D visualization, education, training, and games. VEs have the potential to enhance visualization and to act as a general medium for human-computer interaction (HCI). However, little research has evaluated how virtual reality (VR) display technologies and monocular and binocular depth cues affect human depth perception of volumetric (non-polygonal) datasets. In addition, the lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation first evaluates the effects of stereoscopic and head-coupled displays on depth judgments of volumetric datasets. It then evaluates a two-handed view manipulation technique that supports simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE), as well as techniques for automatically adjusting stereo view parameters to mitigate stereoscopic fusion problems in an MSVE. Next, the dissertation presents a bimanual, hybrid user interface that combines traditional tracking devices with computer-vision-based "natural" 3D input for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, this dissertation provides guidelines for research design when evaluating UIs and interaction techniques.
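
    Two tracked hand positions directly yield six of the seven degrees of freedom (translation from the midpoint, uniform scale from the inter-hand distance, and two rotational components from the inter-hand vector); the seventh, roll about that vector, needs per-hand orientation data. The sketch below is a generic formulation of this idea under those assumptions, not the dissertation's exact technique.

        # Hedged sketch: per-frame 7-DOF update from two hand positions (numpy).
        import numpy as np

        def seven_dof_delta(l0, r0, l1, r1):
            """Hands at l0/r0 last frame and l1/r1 now -> translation, scale, rotation."""
            l0, r0, l1, r1 = map(np.asarray, (l0, r0, l1, r1))
            translation = (l1 + r1) / 2.0 - (l0 + r0) / 2.0  # x, y, z
            v0, v1 = r0 - l0, r1 - l1                        # inter-hand vectors
            scale = np.linalg.norm(v1) / np.linalg.norm(v0)  # distance ratio
            axis = np.cross(v0, v1)                          # rotation axis
            angle = np.arctan2(np.linalg.norm(axis), np.dot(v0, v1))  # radians
            return translation, scale, axis, angle           # roll omitted (needs hand orientation)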

    Interactions gestuelles multi-point et gĂ©omĂ©trie dĂ©formable pour l’édition 3D sur Ă©cran tactile

    Despite advances in the capture of existing objects and in procedural generation, content creation for virtual worlds cannot be performed without human interaction. This thesis proposes to exploit new touch devices ("multi-touch" screens) to provide simple, intuitive 2D interaction for navigating a virtual environment and for manipulating, positioning, and deforming 3D objects. First, we study the possibilities and limitations of hand and finger gestures on a touch screen in order to discover which gestures are best suited to editing 3D objects and environments. In particular, we evaluate the effective number of degrees of freedom of the human hand when its movement is constrained to a planar surface. We also develop a new phase-based gesture analysis method that identifies key motions of the hand and fingers in real time. These results, combined with several targeted user studies, lead to a gestural design pattern that handles not only navigation (camera positioning) but also object positioning, rotation, and global scaling. This pattern is then extended to complex deformations (such as adding and deleting material, or bending and twisting parts of objects, with control over locality). Building on these results, we propose and evaluate a 3D world-editing interface with natural touch interaction, in which mode selection (navigation, object positioning, or object deformation) and task selection are handled automatically by the system based on the gesture and the interaction context (without any menus or buttons). Finally, we extend this interface to more complex deformations by adapting garment transfer from one character to another, enabling interactive deformation of the garment as the wearing character is deformed.
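
    For the menu-free mode selection described above, the decisive point is that the mode follows from the touch context rather than from explicit UI state. A toy decision rule under assumed context flags (the thesis's actual pattern is phase-based and considerably richer) might look like this:

        # Hypothetical context-to-mode rule; not the thesis's exact pattern.
        def infer_mode(n_fingers, touching_object):
            """Map touch context to navigation / positioning / deformation."""
            if not touching_object:
                return "navigation"          # strokes on empty space move the camera
            if n_fingers <= 2:
                return "object_positioning"  # grab or pinch moves and scales
            return "object_deformation"      # more contacts sculpt the shape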

    Designing Hybrid Interactions through an Understanding of the Affordances of Physical and Digital Technologies

    Two recent technological advances have extended the diversity of domains and social contexts of human-computer interaction: the embedding of computing capabilities into physical hand-held objects, and the emergence of large interactive surfaces such as tabletops and wall boards. Both interactive surfaces and small computational devices usually allow for direct and space-multiplex input, i.e., for the spatial coincidence of physical action and digital output at multiple points simultaneously. Such a powerful combination opens novel opportunities for the design of what this work considers hybrid interactions. This thesis explores the affordances of physical interaction as resources for the interface design of such hybrid interactions. The hybrid systems elaborated in this work are envisioned to support specific social and physical contexts, such as collaborative cooking in a domestic kitchen or collaborative creativity in a design process. In particular, different aspects of physicality characteristic of those domains are explored, with the aim of promoting skill transfer across domains. First, different approaches to the design of space-multiplex, function-specific interfaces are considered and investigated. These design approaches build on related work on Graspable User Interfaces and extend the design space to direct-touch interfaces such as touch-sensitive surfaces in different sizes and orientations (i.e., tablets, interactive tabletops, and walls). The approaches are instantiated in the design of several experience prototypes, which are evaluated in different settings to assess the contextual implications of integrating aspects of physicality into interface design. Such implications are observed both at the pragmatic level of interaction (i.e., patterns of users' behavior on first contact with the interface) and in users' subjective responses. The results indicate that the context of interaction affects the perception of the affordances of the system, and that some qualities of physicality, such as the 3D space of manipulation and the related haptic feedback, can affect the feeling of engagement and control. Building on these findings, two controlled studies are conducted to observe more systematically the implications of integrating some of the qualities of physical interaction into the design of hybrid interactions. The results indicate that, although several aspects of physical interaction are mimicked in the interface, interaction with digital media is quite different and seems to reveal existing mental models and expectations resulting from previous experience with the WIMP paradigm on the desktop PC.

    Systematic literature review of hand gestures used in human computer interaction interfaces

    Gestures, widely accepted as humans' natural mode of interaction with their surroundings, have been considered for use in human-computer interfaces since the early 1980s. They have been explored and implemented, with a range of success and maturity levels, in a variety of fields, facilitated by a multitude of technologies. Underpinning gesture theory, however, focuses on gestures performed simultaneously with speech, and the majority of gesture-based interfaces are supported by other modes of interaction. This article reports the results of a systematic review undertaken to identify the characteristics of touchless/in-air hand gestures used in interaction interfaces. 148 articles reporting on gesture-based interaction interfaces were reviewed, identified by searching engineering and science databases (Engineering Village, ProQuest, Science Direct, Scopus, and Web of Science). The goal of the review was to map the field of gesture-based interfaces, investigate patterns in gesture use, and identify common combinations of gestures for different combinations of applications and technologies. The review suggests that the community is disparate, with little evidence of building upon prior work, and that no fundamental framework of gesture-based interaction is evident. However, the findings can help inform future developments and provide valuable information about the benefits and drawbacks of different approaches. It was further found that the nature and appropriateness of the gestures used was not a primary factor in gesture elicitation when designing gesture-based systems, and that ease of technology implementation often took precedence.

    User-based gesture vocabulary for form creation during a product design process

    There are inconsistencies between the nature of conceptual design and the functionality of the computational systems supporting it, which disrupt designers' process by focusing on technology rather than designers' needs. A need was identified for the elicitation of hand gestures appropriate to the requirements of conceptual design, rather than gestures chosen arbitrarily or for ease of implementation. The aim of this thesis is to identify natural and intuitive hand gestures for conceptual design, performed by designers (3rd- and 4th-year product design engineering students and recent graduates) working on their own, without instruction and without limitations imposed by the facilitating technology. This was done via a user-centred study with 44 participants, in which 1785 gestures were collected. Gestures were explored as the sole means for shape creation and manipulation in virtual 3D space. Gestures were identified, described in writing, sketched, coded according to the taxonomy used, and categorised by hand form and path travelled, and variants were identified. They were then statistically analysed to ascertain agreement rates between participants, the significance of that agreement, and the likelihood that the number of repetitions in each category occurred by chance. The most frequently used and statistically significant gestures formed the consensus vocabulary for conceptual design. The effect of the shape of the manipulated object on the gesture performed was also observed, as was whether the sequence of gestures participants proposed differed from established CAD solid-modelling practices. The vocabulary was evaluated by non-designer participants, both theoretically and in a VR environment, and the outcomes showed that the majority of gestures were appropriate and easy to perform. Participants selected their preferred gestures for each activity, and a variant of the vocabulary for conceptual design was created as an outcome; it aims to ensure that extensive training is not required, extending the ability to design beyond trained designers only.
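
    One common statistic for such elicitation data is the agreement rate of Vatavu and Wobbrock: the share of participant pairs that proposed the same gesture category for a referent. Whether the thesis uses exactly this formula is not stated here, so the snippet below is illustrative, with made-up data.

        # Agreement rate AR = sum_i |P_i|(|P_i|-1) / (|P|(|P|-1)) over categories P_i.
        from collections import Counter

        def agreement_rate(proposals):
            """proposals: gesture-category labels given by participants for one referent."""
            n = len(proposals)
            if n < 2:
                return 1.0
            agreeing_pairs = sum(k * (k - 1) for k in Counter(proposals).values())
            return agreeing_pairs / (n * (n - 1))

        # e.g. 44 participants proposing a gesture for "scale object" (invented labels):
        ar = agreement_rate(["spread"] * 30 + ["pull-corner"] * 10 + ["other"] * 4)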

    Understanding interaction mechanics in touchless target selection

    We use gestures frequently in daily life to interact with people, pets, or objects, but interacting with computers using mid-air gestures continues to challenge the design of touchless systems. Traditional approaches to touchless interaction focus on exploring gesture inputs and evaluating user interfaces. I shift the focus from gesture elicitation and interface evaluation to touchless interaction mechanics, and argue for a novel approach to generating design guidelines for touchless systems: using fundamental interaction principles instead of reactively adapting to the sensing technology. In five sets of experiments, I explore visual and pseudo-haptic feedback, motor intuitiveness, handedness, and perceptual Gestalt effects, studying in particular the interaction mechanics of touchless target selection. To that end, I introduce two novel interaction techniques: touchless circular menus, which allow command selection using directional strokes, and interface topographies, which use pseudo-haptic feedback to guide steering–targeting tasks. The results illuminate different facets of touchless interaction mechanics. For example, motor-intuitive touchless interactions explain how our sensorimotor abilities inform touchless interface affordances: we often make a holistic oblique gesture instead of several orthogonal hand gestures while reaching toward a distant display. Following the Gestalt theory of visual perception, we found that similarity between user interface (UI) components decreased user accuracy, while good continuity made users faster. Other findings include hemispheric asymmetry affecting the transfer of training between dominant and nondominant hands, and pseudo-haptic feedback improving touchless accuracy. The results of this dissertation contribute design guidelines for future touchless systems. Practical applications of this work include the use of touchless interaction techniques in domains such as entertainment, consumer appliances, surgery, patient-centric health settings, smart cities, interactive visualization, and collaboration.
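
    The directional-stroke selection behind the touchless circular menus reduces, at its core, to binning a mid-air stroke by its direction; the sector count and minimum stroke length below are assumptions for illustration, not the dissertation's parameters.

        # Hedged sketch: map a mid-air stroke to one of n radial commands.
        import math

        def stroke_to_command(start, end, n_sectors=8, min_len=0.05):
            """Stroke endpoints in normalized coords -> sector index, or None."""
            dx, dy = end[0] - start[0], end[1] - start[1]
            if math.hypot(dx, dy) < min_len:  # too short to count as a stroke
                return None
            angle = math.atan2(dy, dx) % (2 * math.pi)
            return int(angle // (2 * math.pi / n_sectors))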

    The tool space

    Visions of futuristic desktop workspaces have often incorporated large interactive surfaces that either integrate into or replace the prevailing desk setup of displays, keyboard, and mouse. Such visions often connote the distinct characteristics of direct touch interaction, e.g., transforming the desktop into a large touch screen that allows interacting with content using one's bare hands. However, the role of interactive surfaces in desktop computing need not be restricted to enabling direct interaction. Especially for prolonged interaction, the separation of visual focus and manual input has proven ergonomic and is usually supported by vertical monitors and separate (hence indirect) input devices placed on the horizontal desktop. If we want to maintain this ergonomically matured style of computing while introducing interactive desktop displays, the following question arises: how can and should this novel input and output modality affect prevailing interaction techniques? While touch input devices such as trackpads and graphics tablets have been used in desktop computing for decades, the dynamic rendering of content and the increasing physical dimensions of novel interactive surfaces open up new design opportunities for direct, indirect, and hybrid touch input techniques. Informed design decisions require careful consideration of the relationship between input sensing, visual display, and the applied interaction styles. Previous work in desktop computing has focused on understanding the dual-surface setup as a holistic unit that supports direct touch input and allows the seamless transfer of objects across horizontal and vertical surfaces. In contrast, this thesis assumes separate spaces for input (the horizontal input space) and output (the vertical display space) and contributes to the understanding of how interactive surfaces can enrich indirect input for complex tasks such as 3D modeling or audio editing. The contribution of this thesis is threefold. First, we present a set of case studies on user interface design for dual-surface computer workspaces. These case studies cover several application areas, such as gaming, music production and analysis, and collaborative visual layout, and comprise formative evaluations. On the one hand, they highlight the conflict that arises when the direct-touch interaction paradigm is applied to dual-surface workspaces; on the other hand, they indicate how deliberately avoiding established input devices (i.e., mouse and keyboard) leads to novel design ideas for indirect touch-based input. Second, we introduce our concept of the tool space, an interaction model for dual-surface workspaces derived from a theoretical argument and the previous case studies. The tool space dynamically renders task-specific input areas that enable spatial command activation and increase input bandwidth by leveraging multi-touch and two-handed input. We further present evaluations of two concept implementations, in the domains of 3D modeling and audio editing, which demonstrate the high degrees of control, precision, and sense of directness that can be achieved with our tools. Third, we present experimental results that inform the design of the tool space input areas. In particular, we contribute a set of design recommendations regarding two-handed indirect multi-touch input and the impact of input-area form factors on spatial cognition and navigation performance.
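
    The defining move of the tool space (input on the horizontal surface, output on the vertical display) implies an indirect mapping from a task-specific input area to a region or parameter on the display. A minimal absolute mapping, with all rectangles invented for illustration, might look like this:

        # Hedged sketch of an absolute indirect mapping between the two surfaces.
        def map_to_display(touch, input_area, display_area):
            """(x, y) touch in an input area -> point in a display region."""
            (ix, iy, iw, ih), (dx, dy, dw, dh) = input_area, display_area
            u = (touch[0] - ix) / iw         # normalize within the tool's area
            v = (touch[1] - iy) / ih
            return dx + u * dw, dy + v * dh  # re-project onto the display region

        cursor = map_to_display((350, 220), (100, 100, 400, 300), (0, 0, 1920, 1080))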
    • 

    corecore