10 research outputs found
Laid-Back, Touchless Collaboration around Wall-size Displays: Visual Feedback and Affordances
Abstract: To facilitate interaction and collaboration around ultra-high-resolution Wall-Size Displays (WSDs), post-WIMP interaction modes such as touchless and multi-touch have opened up unprecedented opportunities. Yet to fully harness this potential, we still need to understand the fundamental design factors behind successful WSD experiences. These include visual feedback for touchless interactions, novel interface affordances for at-a-distance, high-bandwidth input, and the technosocial ingredients that support laid-back, relaxed collaboration around WSDs. This position paper highlights our progress in a long-term research program that examines these issues and spurs new research directions. We recently completed a study investigating the properties of visual feedback in touchless WSD interaction, and we discuss some of our findings here. Our work exemplifies how research in WSD interaction calls for re-conceptualizing first principles of Human-Computer Interaction (HCI) to pioneer a suite of next-generation interaction environments.
When paper meets multi-touch : a study of multi-modal interactions in air traffic controls
When multiple modes of interaction are available, it is not obvious whether combining these technologies necessarily leads to a better user experience, and it can be difficult to determine which mode is most appropriate for each interaction. However, complex activities such as air traffic control require multiple interaction techniques and modalities. In this paper, we therefore study the technical challenges of adding finger detection to an augmented flight strip board used by air traffic controllers. We use our augmented strip board to evaluate interactions based on touch, digital pen and physical paper objects. From our user study, we find that users are able to quickly adapt to an interface that offers such a wide range of modalities. The availability of different modalities did not overburden the users, and they did not find it difficult to determine the appropriate modality to use for each interaction.
Indian Language Compatible Intelligent User Interfaces
The shift from implicit human-computer interaction to explicit, context-dependent knowledge processing, driven by the requirements of present-day users, has prompted scientists to develop Intelligent User Interfaces for a society that demands smart devices embedded in smart environments and dynamic activities. Context awareness deals with location- and situation-focused context, ranging from virtual car navigation to contexts involving traffic, climate, and the people connected with them, both immediately and afterwards.
Interacción humano-computador en escenarios educativos y artísticos. Kinect como propuesta viable (Human-computer interaction in educational and artistic scenarios: Kinect as a viable proposal)
The possibilities and tools offered by computer systems are becoming more advanced, thorough and comprehensive. With these advancements, the potential for interaction with modern computer systems has been expanding, taking routes and forms that a few years ago were not considered plausible. From the initial input-output schemes, through the use of punch cards, switches, light alerts and printing systems, to the development of the keyboard and mouse, and on to multi-touch and natural interaction schemes, human-computer interaction is a field of constant growth. However, there is a gap in terms of interaction schemes for artistic creation and teaching scenarios. This paper provides a brief overview of the requirements, characteristics and needs of a solution to this gap, and proposes the use of Kinect as a human-machine interaction device for the scenarios described, taking into account its accessibility, robustness, ease of use and modification, as well as the wide range of possibilities it offers.
When Paper Meets Multi-touch: A Study of Multi-modal Interactions in Air Traffic Control
Part 1: Long and Short Papers (Continued). For expert interfaces, it is not obvious whether providing multiple modes of interaction, each tuned to different sub-tasks, leads to a better user experience than providing a more limited set. In this paper, we investigate this question in the context of air traffic control. We present and analyze an augmented flight strip board offering several forms of interaction, including touch, digital pen and physical paper objects. We explore the technical challenges of adding finger detection to such a flight strip board and evaluate how expert air traffic controllers interact with the resulting system. We find that users are able to quickly adapt to the wide range of offered modalities. Users were not overburdened by the choice of different modalities, and did not find it difficult to determine the appropriate modality to use for each interaction.
Document type: Part of book or chapter of book
Multi-touch interaction for interface prototyping
Integrated master's thesis (Tese de mestrado integrado). Informatics and Computing Engineering (Engenharia Informática e Computação). Faculdade de Engenharia, Universidade do Porto. 201
Designing Discoverable Digital Tabletop Menus for Public Settings
Ease of use with digital tabletops in public settings is contingent on how well the system invites and guides interaction. The same can be said for the interface design and the individual graphical user interface elements of these systems. One such interface element is the menu. Before a menu can be used, however, it must first be discovered within the interface. Existing research on digital tabletop menu design does not address this issue of discovering or opening a menu. This thesis investigates how the interface and interaction of digital tabletops can be designed to encourage menu discoverability in the context of public settings.
A set of menu invocation designs, varying in the invocation element and the use of animation, is proposed. These designs are then evaluated through an observational study at a museum, observing users' interactions in a realistic public setting. Findings from this study support the use of discernible and recognizable interface elements – buttons – combined with animation to attract and guide users as a discoverable menu invocation design. Additionally, findings posit that when engaging with a public digital tabletop display, users transition through exploration and discovery states before becoming competent with the system. Finally, insights from this study point to a set of design recommendations for improving menu discoverability.
Software Usability
This volume delivers a collection of high-quality contributions to help broaden the minds of developers and non-developers alike when it comes to considering software usability. It presents novel research and experiences and disseminates new ideas that are accessible to people who might not be software makers but who are undoubtedly software users.
Understanding interaction mechanics in touchless target selection
Indiana University-Purdue University Indianapolis (IUPUI). We use gestures frequently in daily life: to interact with people, pets, or objects. But interacting with computers using mid-air gestures continues to challenge the design of touchless systems. Traditional approaches to touchless interaction focus on exploring gesture inputs and evaluating user interfaces. I shift the focus from gesture elicitation and interface evaluation to touchless interaction mechanics. I argue for a novel approach to generating design guidelines for touchless systems: using fundamental interaction principles instead of reactively adapting to the sensing technology. In five sets of experiments, I explore visual and pseudo-haptic feedback, motor intuitiveness, handedness, and perceptual Gestalt effects. In particular, I study the interaction mechanics of touchless target selection. To that end, I introduce two novel interaction techniques: touchless circular menus, which allow command selection using directional strokes, and interface topographies, which use pseudo-haptic feedback to guide steering-targeting tasks. Results illuminate different facets of touchless interaction mechanics. For example, motor-intuitive touchless interactions explain how our sensorimotor abilities inform touchless interface affordances: we often make a single holistic oblique gesture instead of several orthogonal hand gestures while reaching toward a distant display. Following the Gestalt theory of visual perception, we found that similarity between user interface (UI) components decreased user accuracy, while good continuity made users faster. Other findings include hemispheric asymmetry affecting the transfer of training between dominant and non-dominant hands, and pseudo-haptic feedback improving touchless accuracy. The results of this dissertation contribute design guidelines for future touchless systems.
Practical applications of this work include the use of touchless interaction techniques in various domains, such as entertainment, consumer appliances, surgery, patient-centric health settings, smart cities, interactive visualization, and collaboration.
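The touchless circular menus mentioned in this abstract select commands from directional strokes. As an illustration only, and not the dissertation's actual implementation, the core mapping from a stroke's direction to a menu sector might be sketched as follows (the function name, dead-zone parameter, and y-up mathematical coordinates are all assumptions for this sketch):

```python
import math

def select_sector(start, end, n_commands, dead_zone=30.0):
    """Map a directional stroke (start -> end, in y-up coordinates)
    to one of n_commands equal sectors of a circular menu.
    Strokes shorter than dead_zone units select nothing.
    Sector 0 is centered on the positive x-axis; sectors proceed
    counter-clockwise."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if math.hypot(dx, dy) < dead_zone:
        return None  # stroke too short to count as a deliberate selection
    angle = math.atan2(dy, dx) % (2 * math.pi)  # stroke direction in [0, 2*pi)
    sector_width = 2 * math.pi / n_commands
    # Offset by half a sector so sector 0 straddles angle 0.
    return int(((angle + sector_width / 2) // sector_width) % n_commands)

# A rightward stroke selects sector 0; in an 8-command menu,
# an upward stroke selects sector 2.
```

The dead zone is one common way such techniques distinguish an intentional stroke from sensor jitter near the menu's center.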
Perceptible affordances and feedforward for gestural interfaces: Assessing effectiveness of gesture acquisition with unfamiliar interactions
The move towards touch-based interfaces disrupts the established ways in which users manipulate and control graphical user interfaces. The predominant mode of interaction established by the desktop interface is to 'double-click' an icon in order to open an application, file or folder. Icons show users where to click, and their shape, colour and graphic style suggest how they respond to user action. In sharp contrast, in a touch-based interface, an action may require a user to form a gesture with a certain number of fingers, a particular movement, and in a specific place. Often, none of this is suggested in the interface.
This thesis adopts the approach of research through design to address the problem of how to inform the user about which gestures are available in a given touch-based interface, how to perform each gesture, and, finally, the effect of each gesture on the underlying system. Its hypothesis is that presenting automatic and animated visual prompts that depict touch and preview gesture execution will mitigate the problems users encounter when they execute commands within unfamiliar gestural interfaces. Moreover, the thesis claims the need for a new framework to assess the efficiency of gestural UI designs. A significant aspect of this new framework is a rating system that was used to assess distinct phases within the users’ evaluation and execution of a gesture.
In order to support the thesis hypothesis, two empirical studies were conducted. The first introduces the visual prompts in support of training participants in unfamiliar gestures and gauges participants' interpretation of their meaning. The second study consolidates the design features that yielded lower error rates in the first study and assesses different interaction techniques, such as the moment at which to display the visual prompt. Both studies demonstrate the benefits of providing visual prompts to improve user awareness of available gestures. In addition, both studies confirm the efficiency of the rating system in identifying the most common problems users have with gestures and possible design features to mitigate such problems.
The thesis contributes: 1) a gesture-and-effect model and a corresponding rating system that can be used to assess gestural user interfaces, 2) the identification of common problems users have with unfamiliar gestural interfaces and design recommendations to mitigate these problems, and 3) a novel design technique that improves user awareness of unfamiliar gestures within novel gestural interfaces.