1,365 research outputs found

    Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table

    We present the Rizzo, a multi-touch virtual mouse designed to provide the fine-grained interaction that information visualization requires on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch environment was difficult because the mouse emulation offered by touch surfaces is often insufficient to support full information visualization functionality. We present a unified design, combining many Rizzos, each designed not only to provide mouse capabilities but also to act as a zoomable lens that makes precise information access feasible. The Rizzos and the information visualizations all exist within a touch-enabled 3D window management system. Our approach permits touch interaction both with the 3D windowing environment and with the contents of the individual windows contained therein. We describe an implementation of our technique that augments the VisLink 3D visualization environment, demonstrating how to enable multi-touch capabilities on all visualizations written with the popular prefuse visualization toolkit.
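    The core idea of such a virtual mouse, scaling touch motion down before forwarding it as cursor motion so a coarse touch surface can drive a precise pointer, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the class name and the 0.25 gain are assumptions.

    ```python
    class VirtualMouse:
        """Sketch of a Rizzo-style virtual mouse: touch motion inside the
        widget is scaled down before being forwarded as mouse motion,
        giving fine-grained control on a coarse touch surface."""

        def __init__(self, cursor_x=0.0, cursor_y=0.0, gain=0.25):
            self.cursor_x = cursor_x
            self.cursor_y = cursor_y
            self.gain = gain  # < 1.0: the finger travels farther than the cursor

        def on_touch_drag(self, dx, dy):
            # Scale the raw touch displacement into a precise cursor move.
            self.cursor_x += dx * self.gain
            self.cursor_y += dy * self.gain
            return self.emit_mouse_move()

        def emit_mouse_move(self):
            # A real implementation would inject a platform mouse event here.
            return ("mouse_move", round(self.cursor_x, 3), round(self.cursor_y, 3))

    m = VirtualMouse()
    print(m.on_touch_drag(40, 0))  # 40 px of finger travel -> 10 px cursor move
    ```

    A gain below 1.0 is what makes the lens "zoomable": lowering it trades speed for precision when targets are small.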

    Factors influencing visual attention switch in multi-display user interfaces: a survey

    Multi-display User Interfaces (MDUIs) enable people to take advantage of the different characteristics of different display categories. For example, combining mobile and large displays within the same system enables users to interact with user interface elements locally while simultaneously having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires that users perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switch in MDUIs. Our analysis and taxonomy bring attention to the often ignored implications of visual attention switch and collect existing evidence to facilitate research and implementation of effective MDUIs.

    RealTimeChess: Lessons from a Participatory Design Process for a Collaborative Multi-Touch, Multi-User Game

    We report on a long-term participatory design process during which we designed and improved RealTimeChess, a collaborative but competitive game that is played using touch input by multiple people on a tabletop display. During the design process we integrated concurrent input from all players and pace control, allowing us to steer the interaction along a continuum between high-paced simultaneous and low-paced turn-based gameplay. In addition, we integrated tutorials for teaching interaction techniques, mechanisms to control territoriality, remote interaction, and alert feedback. Integrating these mechanisms during the participatory design process allowed us to examine their effects in detail, revealing, for instance, effects of the competitive setting on the perception of awareness as well as on territoriality. More generally, the resulting application provided us with a testbed to study interaction on shared tabletop surfaces and yielded insights important for other time-critical or attention-demanding applications.

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and in industrial settings. The rising demand for the digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.

    AN EXPLORATORY STUDY IN INTERACTIVE CAR CATALOGUE SYSTEM ON TABLETOP DISPLAY SYSTEM

    This report covers the implementation of a tabletop display for an interactive catalogue system in the car industry. The project is a proof of concept showing that multi-touch techniques are genuinely useful in the car industry, since users can directly manipulate the car catalogue by touch; this is most evident during car purchasing activities and road shows. The report focuses on the background of existing catalogues, which are discussed as being less interactive and low in usability. The prime objective of the project is to investigate whether a tabletop display adds and induces usability through user collaboration, enabling more than one user to move, resize, zoom, and rotate the car catalogue projected on the tabletop. The literature section details the architecture, design, and application components, together with findings and readings on multi-gesture techniques, natural user interfaces (NUI), and multi-touch development platforms. The methodology section covers the timeline and the way the project was carried out, with the Gantt chart and a flow chart of the event flow and task schedule attached. The discussion and results section describes the development of the project and its outcome, including how the multi-touch application was developed and integrated with all the components. The second-to-last section discusses the system's advantages and weaknesses, with recommendations for future opportunities; the recommendations take the system's weaknesses into account and suggest further improvements in the coming years. The final section concludes, discussing the hopes and key aspects achieved throughout the software development and its progress.

    Collaborative searching for video using the Físchlár system and a DiamondTouch table

    Fischlar-DT is one of a family of systems that support interactive searching and browsing through an archive of digital video information. Previous Fischlar systems have used a conventional screen, keyboard, and mouse interface, but Fischlar-DT operates using a horizontal, multi-user, touch-sensitive tabletop known as a DiamondTouch. We present the Fischlar-DT system partly from a systems perspective, but mostly in terms of how its design and functionality support collaborative searching. The contribution of the paper is thus the introduction of Fischlar-DT and a description of how design concerns for supporting collaborative search can be realised on a tabletop interface.

    Designing Hybrid Interactions through an Understanding of the Affordances of Physical and Digital Technologies

    Two recent technological advances have extended the diversity of domains and social contexts of Human-Computer Interaction: the embedding of computing capabilities into physical hand-held objects, and the emergence of large interactive surfaces, such as tabletops and wall boards. Both interactive surfaces and small computational devices usually allow for direct and space-multiplex input, i.e., for the spatial coincidence of physical action and digital output at multiple points simultaneously. Such a powerful combination opens novel opportunities for the design of what are considered in this work as hybrid interactions. This thesis explores the affordances of physical interaction as resources for interface design of such hybrid interactions. The hybrid systems elaborated in this work are envisioned to support specific social and physical contexts, such as collaborative cooking in a domestic kitchen, or collaborative creativity in a design process. In particular, different aspects of physicality characteristic of those specific domains are explored, with the aim of promoting skill transfer across domains. First, different approaches to the design of space-multiplex, function-specific interfaces are considered and investigated. Such design approaches build on related work on Graspable User Interfaces and extend the design space to direct touch interfaces such as touch-sensitive surfaces, in different sizes and orientations (i.e., tablets, interactive tabletops, and walls). These approaches are instantiated in the design of several experience prototypes, which are evaluated in different settings to assess the contextual implications of integrating aspects of physicality in the design of the interface. Such implications are observed both at the pragmatic level of interaction (i.e., patterns of users' behaviors on first contact with the interface) and in users' subjective responses.
The results indicate that the context of interaction affects the perception of the affordances of the system, and that some qualities of physicality, such as the 3D space of manipulation and relative haptic feedback, can affect the feeling of engagement and control. Building on these findings, two controlled studies are conducted to observe more systematically the implications of integrating some of the qualities of physical interaction into the design of hybrid ones. The results indicate that, although several aspects of physical interaction are mimicked in the interface, the interaction with digital media is quite different and seems to reveal existing mental models and expectations resulting from previous experience with the WIMP paradigm on the desktop PC.

    3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit

    We present 3DTouch, a novel wearable input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap for a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution for designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability; as a result, 3DTouch is conceptually less fatiguing to use over many hours than 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space of interaction techniques to build upon.
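    The relative positioning idea described above, integrating 2D displacements from an optical sensor after rotating them by the orientation the IMU reports, can be sketched as follows. This is a simplified illustration (only yaw is used, and all names are assumptions), not the paper's actual fusion pipeline.

    ```python
    import math

    def rotate_z(v, yaw):
        """Rotate a 3-vector about the z-axis by `yaw` radians."""
        x, y, z = v
        c, s = math.cos(yaw), math.sin(yaw)
        return (c * x - s * y, s * x + c * y, z)

    class RelativeTracker:
        """Sketch of 3DTouch-style relative positioning: the optical sensor
        reports 2D displacement in the fingertip's local plane, and the IMU
        supplies device orientation (reduced to yaw here, for brevity) used
        to rotate that displacement into world coordinates before integrating."""

        def __init__(self):
            self.pos = (0.0, 0.0, 0.0)

        def update(self, optical_dx, optical_dy, yaw):
            # World-frame displacement = orientation * local optical displacement.
            wx, wy, wz = rotate_z((optical_dx, optical_dy, 0.0), yaw)
            px, py, pz = self.pos
            self.pos = (px + wx, py + wy, pz + wz)
            return self.pos

    t = RelativeTracker()
    t.update(1.0, 0.0, 0.0)            # local "right" move while facing forward
    pos = t.update(1.0, 0.0, math.pi)  # same local move after a 180-degree turn
    print(pos)  # the two x-components cancel, leaving roughly the origin
    ```

    A full 9-DOF implementation would use the complete rotation (e.g. a quaternion from the IMU's accelerometer, gyroscope, and magnetometer) rather than yaw alone.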

    Relative and Absolute Mappings for Rotating Remote 3D Objects on Multi-Touch Tabletops

    The use of human fingers as an object selection and manipulation tool raises significant challenges when interacting with direct-touch tabletop displays. This is particularly an issue when manipulating remote objects in 3D environments, as finger presses can obscure distant objects that are rendered very small. Techniques to support remote manipulation either provide absolute mappings between finger presses and object transformations or rely on tools that support relative mappings to selected objects. This paper explores techniques to manipulate remote 3D objects on direct-touch tabletops using absolute and relative mapping modes. A user study was conducted to compare absolute and relative mappings for a rotation task. Overall results did not show a statistically significant difference between the two mapping modes in either task completion time or number of touches. However, the absolute mapping mode was found to be less efficient than the relative mapping mode when rotating a small object, and participants preferred relative mapping for small objects. Four mapping techniques were then compared for perceived ease of use and learnability. The touchpad, voodoo doll, and telescope techniques were found to be comparable for manipulating remote objects in a 3D scene, while a flying camera technique was considered too complex and required increased effort from participants. Participants preferred an absolute mapping technique augmented to support small-object manipulation, e.g. the voodoo doll technique.
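    The distinction between the two mapping modes compared above can be made concrete with a small sketch. Under an absolute mapping, the object's angle is set directly from the finger's position around the object's centre; under a relative mapping, only the change in finger angle is applied, which is why it remains usable on small, distant objects. Function names are illustrative, not the paper's.

    ```python
    import math

    def absolute_rotation(touch_x, touch_y, center_x, center_y):
        """Absolute mapping: the object's angle is set directly to the angle
        of the finger around the object's centre, so touching 'snaps' the
        object to the finger's position."""
        return math.atan2(touch_y - center_y, touch_x - center_x)

    def relative_rotation(current_angle, prev_touch_angle, new_touch_angle):
        """Relative mapping: only the change in finger angle is applied,
        independent of where on the surface the drag started."""
        return current_angle + (new_touch_angle - prev_touch_angle)

    # Absolute: a touch directly above the centre sets the angle to 90 degrees.
    print(absolute_rotation(0.0, 1.0, 0.0, 0.0))       # pi / 2
    # Relative: a 45-degree finger sweep adds 45 degrees to the current angle.
    print(relative_rotation(0.3, 0.0, math.pi / 4))    # 0.3 + pi / 4
    ```

    The trade-off the study observed follows from the sketch: the absolute mapping couples precision to the object's on-screen size, while the relative mapping does not.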

    GSI Demo: Multiuser gesture/speech interaction over digital tables by wrapping single user applications

    Most commercial software applications are designed for a single user with a keyboard and mouse over an upright monitor. Our interest is in exploiting these systems so they work over a digital table. Mirroring what people do when working over traditional tables, we want to allow multiple people to interact naturally with the tabletop application, and with each other, via rich speech and hand gesture interaction, as illustrated with Google Earth, Warcraft III, and The Sims on a digital table. In this paper, we describe our underlying architecture, GSI Demo. First, GSI Demo creates a run-time wrapper around existing single-user applications: it accepts and translates speech and gestures from multiple people into a single stream of keyboard and mouse inputs recognized by the application. Second, it lets people use multimodal demonstration, instead of programming, to quickly map their own speech and gestures to these keyboard/mouse inputs. For example, continuous gestures are trained by saying "Computer, when I do (one finger gesture), you do (mouse drag)". Similarly, discrete speech commands can be trained by saying "Computer, when I say (layer bars), you do (keyboard and mouse macro)". The end result is that end users can rapidly transform single-user commercial applications into a multi-user, multimodal digital tabletop system.
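    The wrap-and-translate architecture described above, learn a binding by demonstration, then replay it whenever the gesture or utterance is recognized, can be sketched as follows. This is a toy illustration of the idea; the class, method names, and event tuples are assumptions, not GSI Demo's API.

    ```python
    class GestureSpeechWrapper:
        """Sketch of a GSI-Demo-style run-time wrapper: multi-user speech and
        gesture events are translated into the single stream of keyboard/mouse
        inputs a legacy single-user application already understands."""

        def __init__(self):
            # (kind, token) -> list of low-level input events to replay.
            self.bindings = {}

        def train(self, kind, token, recorded_events):
            # "Computer, when I say/do <token>, you do <recorded_events>"
            self.bindings[(kind, token)] = recorded_events

        def dispatch(self, kind, token):
            # Translate one recognized gesture/speech token into app inputs;
            # unknown tokens produce no input.
            return self.bindings.get((kind, token), [])

    w = GestureSpeechWrapper()
    w.train("speech", "layer bars", [("key", "ctrl+b")])
    w.train("gesture", "one finger", [("mouse", "drag")])
    print(w.dispatch("speech", "layer bars"))  # [('key', 'ctrl+b')]
    ```

    Because all users' events funnel through one dispatcher, the wrapped application still sees only the single keyboard/mouse stream it was designed for, which is what lets unmodified software run on the table.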