
    Messy Tabletops: Clearing Up The Occlusion Problem

    When introducing interactive tabletops into the home and office, lack of space will often mean that these devices play two roles: interactive display and a place for putting things. Clutter on the table surface may occlude information on the display, preventing the user from noticing it or interacting with it. We present a technique for dealing with clutter on tabletops that finds a suitable unoccluded area of the display in which to show content. We discuss the implementation of this technique and some design issues that arose during implementation.
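
    The abstract does not give the authors' algorithm, but its core step, locating a suitable unoccluded area of the display, can be sketched as a search over candidate window positions against an occlusion mask. The sketch below is a minimal, assumed formulation in Python; the mask source (e.g. an overhead depth or diffuse-illumination camera) and all sizes are hypothetical, not taken from the paper.

```python
import numpy as np

def find_unoccluded_region(occlusion, w, h, step=8):
    """Return (x, y) of a w x h window with the least occlusion.

    `occlusion` is a 2D boolean array (True = covered by clutter).
    Brute-force sketch, not the authors' implementation.
    """
    # An integral image lets each window's occluded-pixel count be
    # computed in O(1) after a single pass over the mask.
    integral = np.pad(np.cumsum(np.cumsum(occlusion, 0), 1), ((1, 0), (1, 0)))
    best, best_xy = None, None
    H, W = occlusion.shape
    for y in range(0, H - h + 1, step):
        for x in range(0, W - w + 1, step):
            covered = (integral[y + h, x + w] - integral[y, x + w]
                       - integral[y + h, x] + integral[y, x])
            if best is None or covered < best:
                best, best_xy = covered, (x, y)
    return best_xy

# Example: a 480x640 surface with a cluttered patch in one corner.
mask = np.zeros((480, 640), dtype=bool)
mask[:200, :300] = True
print(find_unoccluded_region(mask, w=200, h=150))
```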

    Integrating 2D Mouse Emulation with 3D Manipulation for Visualizations on a Multi-Touch Table

    We present the Rizzo, a multi-touch virtual mouse that has been designed to provide fine-grained interaction for information visualization on a multi-touch table. Our solution enables touch interaction for existing mouse-based visualizations. Previously, this transition to a multi-touch environment was difficult because the mouse emulation of touch surfaces is often insufficient to provide full information visualization functionality. We present a unified design, combining many Rizzos that have been designed not only to provide mouse capabilities but also to act as zoomable lenses that make precise information access feasible. The Rizzos and the information visualizations all exist within a touch-enabled 3D window management system. Our approach permits touch interaction with both the 3D windowing environment and the contents of the individual windows contained therein. We describe an implementation of our technique that augments the VisLink 3D visualization environment to demonstrate how to enable multi-touch capabilities on all visualizations written with the popular prefuse visualization toolkit.
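
    As a rough illustration of the general idea of emulating mouse input from touch (not the Rizzo's actual design, which also acts as a zoomable lens inside a 3D window manager), the sketch below maps touches on a hypothetical virtual-mouse widget to relative cursor motion and button events. The widget geometry and event model are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class MouseEvent:
    dx: float = 0.0
    dy: float = 0.0
    button: str = ""        # "left", "right", or "" for motion only
    pressed: bool = False

class VirtualMouse:
    """Minimal touch-to-mouse emulation sketch (not the Rizzo itself)."""

    def __init__(self, pad_rect, button_zones):
        self.pad_rect = pad_rect          # (x, y, w, h) of the touch pad
        self.button_zones = button_zones  # {"left": rect, "right": rect}
        self._last = None

    def _inside(self, rect, x, y):
        rx, ry, rw, rh = rect
        return rx <= x < rx + rw and ry <= y < ry + rh

    def touch_down(self, x, y):
        # A touch on a button zone becomes a press; a touch on the pad
        # starts a relative-motion drag.
        for name, rect in self.button_zones.items():
            if self._inside(rect, x, y):
                return MouseEvent(button=name, pressed=True)
        if self._inside(self.pad_rect, x, y):
            self._last = (x, y)
        return None

    def touch_move(self, x, y):
        if self._last is None:
            return None
        dx, dy = x - self._last[0], y - self._last[1]
        self._last = (x, y)
        return MouseEvent(dx=dx, dy=dy)

    def touch_up(self, x, y):
        self._last = None
        for name, rect in self.button_zones.items():
            if self._inside(rect, x, y):
                return MouseEvent(button=name, pressed=False)
        return None

vm = VirtualMouse(pad_rect=(0, 0, 200, 150),
                  button_zones={"left": (0, 150, 100, 40),
                                "right": (100, 150, 100, 40)})
print(vm.touch_down(50, 60), vm.touch_move(60, 70))
```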

    Hand Occlusion on a Multi-Touch Tabletop

    We examine the shape of hand and forearm occlusion on a multi-touch table for different touch contact types and tasks. Individuals have characteristic occlusion shapes, but with commonalities across tasks, postures, and handedness. Based on this, we create templates for designers to justify occlusion-related decisions, and we propose geometric models capturing the shape of occlusion. A model using diffused illumination captures performed well when augmented with a forearm rectangle, as did a modified circle-and-rectangle model with ellipse "fingers", which is suitable when only X-Y contact positions are available. Finally, we describe the corpus of detailed multi-touch input data we generated, which is available to the community.
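
    The paper's circle-and-rectangle model with ellipse "fingers" is parameterised from fitted data; the sketch below only illustrates the general shape of such a model, rasterising a palm circle, a forearm rectangle, and a per-contact ellipse from X-Y contact points alone. All offsets and radii here are illustrative guesses, not the fitted parameters reported in the paper.

```python
import numpy as np

def occlusion_mask(contacts, surface=(768, 1024), angle_deg=65,
                   palm_dist=90, palm_r=60, forearm_w=80):
    """Rasterise a simple circle-plus-rectangle occlusion model with
    ellipse 'fingers', using only X-Y contact points.

    The palm is a circle offset from the contact centroid along the arm
    direction; the forearm is a half-infinite rectangle continuing in
    that direction. Sizes and offsets are illustrative only.
    """
    H, W = surface
    yy, xx = np.mgrid[0:H, 0:W]
    mask = np.zeros((H, W), dtype=bool)

    # Ellipse "finger" around each contact point.
    for cx, cy in contacts:
        mask |= ((xx - cx) / 14.0) ** 2 + ((yy - cy) / 22.0) ** 2 <= 1.0

    # Arm direction (toward the user's near edge), from the given angle.
    theta = np.deg2rad(angle_deg)
    dx, dy = np.sin(theta), np.cos(theta)

    # Palm circle, offset from the contact centroid along the arm.
    cx = np.mean([c[0] for c in contacts])
    cy = np.mean([c[1] for c in contacts])
    px, py = cx + palm_dist * dx, cy + palm_dist * dy
    mask |= (xx - px) ** 2 + (yy - py) ** 2 <= palm_r ** 2

    # Forearm rectangle: pixels within forearm_w/2 of the ray that
    # continues from the palm centre along the arm direction.
    along = (xx - px) * dx + (yy - py) * dy
    across = np.abs(-(xx - px) * dy + (yy - py) * dx)
    mask |= (along >= 0) & (across <= forearm_w / 2)

    return mask

print(occlusion_mask([(500, 300), (530, 290)]).sum(), "occluded pixels")
```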

    The effects of tool container location on user performance in graphical user interfaces

    A common way of organizing Windows, Icons, Menus, and Pointers (WIMP) interfaces is to group tools into tool containers, providing one visual representation. Common tool containers include toolbars and menus, as well as more complex tool containers, like Microsoft Office’s Ribbon, Toolglasses, and marking menus. The location of tool containers has been studied extensively in the past using Fitts’s Law, which governs selection time; however, selection time is only one aspect of user performance. In this thesis, I show that tool container location affects other aspects of user performance, specifically attention and awareness. The problem investigated in this thesis is that designers lack an understanding of the effects of tool container location on two important user performance factors: attention and group awareness. My solution is to provide an initial understanding of the effects of tool container location on these factors. In solving this problem, I developed a taxonomy of tool container location, and carried out two research studies. The two research studies investigated tool container location in two contexts: single-user performance with desktop interfaces, and group performance in tabletop interfaces. Through the two studies, I was able to show that tool container location does affect attention and group awareness, and to provide new recommendations for interface designers.

    Pen and paper techniques for physical customisation of tabletop interfaces


    Poking fun at the surface: exploring touch-point overloading on the multi-touch tabletop with child users

    In this paper, a collaborative game for children is used to explore touch-point overloading on a multi-touch tabletop. Understanding the occurrence of new interactional limitations, such as touch-point overloading in a multi-touch interface, is highly relevant for interaction designers working with emerging technologies. The game was designed for the Microsoft Surface 1.0, and during gameplay the number of simultaneous touch-points required gradually increases beyond the physical capacity of the users. Studies were carried out involving a total of 42 children (from two different age groups) playing in groups of 5-7, and all interactions were logged. From quantitative analysis of the interactions occurring during the game, together with our observations, we explore the impact of overloading and identify other salient findings. This paper also highlights the need for empirical evaluation of the physical and cognitive limitations of interaction with emerging technologies.
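
    The abstract does not describe the logging schema, but the quantity of interest here, how many touch points are active at once and when that demand exceeds what the group can physically supply, can be computed from a simple event log. The sketch below assumes a hypothetical (time, touch_id, down/up) format rather than the study's actual logs.

```python
def concurrent_touch_counts(log):
    """Given (time, touch_id, event) tuples with event in {"down", "up"},
    return a list of (time, number of simultaneously active touches).

    The log format is hypothetical; the study's actual schema is not
    described in the abstract.
    """
    active = set()
    counts = []
    for time, touch_id, event in sorted(log):
        if event == "down":
            active.add(touch_id)
        elif event == "up":
            active.discard(touch_id)
        counts.append((time, len(active)))
    return counts

# Toy log: simultaneous demand peaks at three active touch points.
log = [(0.0, 1, "down"), (0.4, 2, "down"), (0.9, 3, "down"),
       (1.2, 1, "up"), (1.5, 4, "down")]
for t, n in concurrent_touch_counts(log):
    print(f"t={t:.1f}s  active touches: {n}")
```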

    Group reaching over digital tabletops with digital arm embodiments

    In almost all collaborative tabletop tasks, groups require coordinated access to the shared objects on the table’s surface. The physical social norms of close-proximity interaction, built up over years of interacting around other physical bodies, cause people to avoid interfering with one another (e.g., avoiding grabbing the same object simultaneously). However, some digital tabletop situations require the use of indirect input (e.g., when using mice, and when supporting remote users). With indirect input, people are no longer physically embodied during their reaching gestures, so most systems provide digital embodiments, visual representations of each person, to give feedback both to the person who is reaching and to the other group members. Tabletop arm embodiments have been shown to support group interactions better than simple visual designs, providing awareness of actions to the group. However, researchers and digital tabletop designers know little about how the design of digital arm embodiments affects the fundamental group tabletop interaction of reaching for objects. Therefore, in this thesis, we evaluate how people coordinate their interactions over digital tabletops when using different types of embodiments. Specifically, in a series of studies, we investigate how the visual design (what they look like) and interaction design (how they work) of digital arm embodiments affect a group’s coordinative behaviours in an open-ended parallel tabletop task. We evaluated visual factors of size, transparency, and realism (through pictures and videos of physical arms), as well as interaction factors of input and augmentations (feedback of interactions), in both a co-located and a distributed environment. We found that visual design had little effect on a group’s ability to coordinate access to shared tabletop items, that embodiment augmentations are useful for supporting group coordinative actions, and that there are large differences when a person is not physically co-present. Our results represent an initial exploration of the design of digital arm embodiments, providing design guidelines for future researchers and designers to use when designing the next generation of shared digital spaces.

    SynTable: A Synthetic Data Generation Pipeline for Unseen Object Amodal Instance Segmentation of Cluttered Tabletop Scenes

    In this work, we present SynTable, a unified and flexible Python-based dataset generator built using NVIDIA's Isaac Sim Replicator Composer for generating high-quality synthetic datasets for unseen object amodal instance segmentation of cluttered tabletop scenes. Our dataset generation tool can render a complex 3D scene containing object meshes, materials, textures, lighting, and backgrounds. Metadata such as modal and amodal instance segmentation masks, occlusion masks, depth maps, bounding boxes, and material properties can be generated to automatically annotate the scene according to the users' requirements. Our tool eliminates the need for manual labeling in the dataset generation process while ensuring the quality and accuracy of the dataset. We discuss our design goals, framework architecture, and the performance of our tool. We demonstrate the use of a sample dataset generated using SynTable by ray tracing for training a state-of-the-art model, UOAIS-Net. The results show significantly improved performance in Sim-to-Real transfer when evaluated on the OSD-Amodal dataset. We offer this tool as an open-source, easy-to-use, photorealistic dataset generator for advancing research in deep learning and synthetic data generation.
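
    The abstract lists modal masks, amodal masks, and occlusion masks among the generated annotations. In amodal instance segmentation these are conventionally related: the occluded region is the amodal extent minus the visible (modal) extent. The sketch below shows that derivation plus an occlusion-rate figure; SynTable's exact definitions may differ.

```python
import numpy as np

def occlusion_annotation(amodal_mask, modal_mask):
    """Derive an occlusion mask and occlusion rate for one instance.

    amodal_mask: full (visible + hidden) extent of the object
    modal_mask:  visible extent only
    This is the conventional formulation for amodal instance
    segmentation, not necessarily SynTable's exact definition.
    """
    amodal = amodal_mask.astype(bool)
    modal = modal_mask.astype(bool)
    occlusion = amodal & ~modal                  # hidden part of the object
    rate = occlusion.sum() / max(amodal.sum(), 1)
    return occlusion, float(rate)

# Toy example: an object whose right half is hidden behind clutter.
amodal = np.zeros((8, 8), dtype=bool); amodal[2:6, 2:6] = True
modal = amodal.copy();                 modal[:, 4:] = False
occ, rate = occlusion_annotation(amodal, modal)
print(rate)   # 0.5
```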

    Using Haar-like feature classifiers for hand tracking in tabletop augmented reality

    In this paper we propose a hand-interaction approach for augmented reality tabletop applications. We detect the user’s hands using Haar-like feature classifiers and correlate their positions with the fixed markers on the table. This allows the user to move, rotate, and resize the virtual objects located on the table with their bare hands.
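
    Haar-like feature classifiers of this kind are commonly run through OpenCV's CascadeClassifier; the sketch below shows only the detection stage of such a pipeline on webcam frames. The hand cascade file name is a placeholder (OpenCV does not ship a hand cascade), and the marker correlation and object manipulation described in the abstract are omitted.

```python
import cv2

# A cascade trained on hand images is assumed to exist at this path;
# the filename is a placeholder, not a file bundled with OpenCV.
hand_cascade = cv2.CascadeClassifier("haar_hand.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale slides the classifier over an image pyramid and
    # returns bounding boxes (x, y, w, h) of detected hands.
    hands = hand_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5, minSize=(40, 40))
    for (x, y, w, h) in hands:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hand tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```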

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and industrial settings. The rising demand for digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces, and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and to define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and by the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.