
    Tangible user interfaces: past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real, non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation

    Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate, by providing dynamic physical affordances through shape change; to restrict, by guiding users with dynamic physical constraints; and to manipulate, by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints, and object actuation can create novel interaction possibilities.
    National Science Foundation (U.S.) Graduate Research Fellowship (Grant 1122374); Swedish Research Council (Fellowship); Blanceflor Foundation (Scholarship)
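The idea of a dynamic physical affordance — for example, raising a cluster of pins into a pressable "button" — can be illustrated with a small sketch. The grid size, pin travel of 25 mm, and `render_button` API below are assumptions for illustration, not the actual inFORM software:

```python
def render_button(width, height, cx, cy, radius, up):
    """Return a 2D pin-height map (mm) for a shape display, with a raised
    circular 'button' affordance centered at (cx, cy); all other pins stay
    flat. Setting up=False retracts the affordance entirely."""
    grid = [[0.0] * width for _ in range(height)]
    if not up:
        return grid
    for y in range(height):
        for x in range(width):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                grid[y][x] = 25.0  # raise these pins 25 mm to afford pressing
    return grid
```

Re-rendering the map each frame with a different center or radius is what makes the affordance dynamic rather than a fixed mechanical control.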

    Kinetic Blocks: Actuated Constructive Assembly for Interaction and Display

    Pin-based shape displays not only give physical form to digital information; they also have the inherent ability to accurately move and manipulate objects placed on top of them. In this paper we focus on such object manipulation: we present ideas and techniques that use the underlying shape change to give kinetic ability to otherwise inanimate objects. First, we describe the shape display's ability to assemble, disassemble, and reassemble structures from simple passive building blocks through stacking, scaffolding, and catapulting. A technical evaluation demonstrates the reliability of the presented techniques. Second, we introduce special kinematic blocks that are actuated and sensed through the underlying pins. These blocks translate vertical pin movements into other degrees of freedom, like rotation or horizontal movement. This interplay of the shape display with objects on its surface allows us to render otherwise inaccessible forms, like overhangs, and enables richer input and output.
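The conversion a kinematic block performs — vertical pin travel into rotation — can be sketched as a rack-and-pinion relation, where the pin acts as the rack and a geared wheel inside the block as the pinion. The function below is a hypothetical illustration of that geometry, not code from the paper:

```python
import math

def pinion_rotation_deg(pin_travel_mm, pinion_radius_mm):
    """A rack-and-pinion block converts vertical pin travel into rotation:
    the travel equals the arc length rolled along the pinion, so
    theta = s / r (radians), returned here in degrees."""
    return math.degrees(pin_travel_mm / pinion_radius_mm)
```

Under this model, a pin stroke equal to half the pinion's circumference turns the block's output shaft a full 180 degrees.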

    Sketched Reality: Sketching Bi-Directional Interactions Between Virtual and Physical Worlds with AR and Actuated Tangible UI

    This paper introduces Sketched Reality, an approach that combines AR sketching and actuated tangible user interfaces (TUIs) for bi-directional sketching interaction. Bi-directional sketching enables virtual sketches and physical objects to "affect" each other through physical actuation and digital computation. In existing AR sketching, the relationship between the virtual and physical worlds is only one-directional: while physical interaction can affect virtual sketches, virtual sketches have no return effect on the physical objects or environment. In contrast, bi-directional sketching interaction allows seamless coupling between sketches and actuated TUIs. In this paper, we employ tabletop-size small robots (Sony Toio) and an iPad-based AR sketching tool to demonstrate the concept. In our system, virtual sketches drawn and simulated on an iPad (e.g., lines, walls, pendulums, and springs) can move, actuate, collide with, and constrain physical Toio robots, as if virtual sketches and physical objects existed in the same space, through seamless coupling between AR and robot motion. This paper contributes a set of novel interactions and a design space of bi-directional AR sketching. We demonstrate a series of potential applications, such as tangible physics education, explorable mechanisms, tangible gaming for children, and in-situ robot programming via sketching.
    Comment: UIST 202
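The core of the virtual-to-physical direction — a sketched stroke constraining a real robot — can be shown with a toy 2D simulation. The time step, units, and restriction to vertical walls below are simplifying assumptions, not the paper's actual physics engine:

```python
def step_robot(pos, vel, walls, dt=0.1):
    """Advance a robot one time step, with pos and vel as (x, y) tuples.
    If the step would cross a sketched vertical wall at x = w, mirror the
    position across the wall and reverse vx, so the virtual stroke
    physically constrains the robot's motion."""
    nx, ny = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt
    vx, vy = vel
    for w in walls:
        if (pos[0] - w) * (nx - w) < 0:  # the wall was crossed this step
            nx = 2 * w - nx              # reflect position back across it
            vx = -vx                     # bounce: reverse horizontal velocity
    return (nx, ny), (vx, vy)
```

In a full system, the corrected position would be sent to the robot's motors each frame, closing the loop from sketch to actuation.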

    Mechanical constraints as common ground between people and computers

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (p. 143-149).
    This thesis presents a new type of human-computer interface based on mechanical constraints that combines some of the tactile feedback and affordances of mechanical systems with the abstract computational power of modern computers. The interface is based on a tabletop interaction surface that can sense and move small objects on top of it. Computation is merged with dynamic physical processes on the tabletop that are exposed to and modified by the user in order to accomplish his or her task. The system places mechanical constraints and mathematical constraints on the same level, allowing users to guide simulations and optimization processes by constraining the motion of physical objects on the interaction surface. The interface provides ample opportunities for improvisation by allowing the user to employ a rich variety of everyday physical objects as interface elements. Subjects in an evaluation were more effective at solving a complex spatial layout problem using this system than with either of two alternative interfaces that did not feature actuation.
    James McMichael Patten. Ph.D.
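One way to put a mechanical constraint and a mathematical one "on the same level" is to treat both as projections of an object's requested position onto an allowed set. The circular-guide example below is a hypothetical illustration of that idea, not code from the thesis:

```python
import math

def project_to_ring(x, y, cx, cy, r):
    """Constrain a dragged tabletop object to a circular guide: project
    the requested position (x, y) onto the ring of radius r centered at
    (cx, cy). A physical ring placed on the table and a software-defined
    constraint can share this same projection step."""
    dx, dy = x - cx, y - cy
    d = math.hypot(dx, dy) or 1.0  # avoid division by zero at the center
    return cx + dx * r / d, cy + dy * r / d
```

Because the constraint is just a projection, the same simulation loop can mix physical guides sensed on the surface with purely mathematical ones.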

    Mid-Air tangible interaction enabled by computer controlled magnetic levitation

    Thesis (S.M.) -- Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 83-86).
    This thesis presents a concept of mid-air tangible interaction and a system called ZeroN that was developed to enable it. Through this research, I extend tabletop tangible interaction modalities, which have been confined to 2D surfaces, into the 3D space above the surface. Users are invited to place and move a levitated object in mid-air, which is analogous to placing objects on 2D surfaces. For example, users can place a physical object that represents the sun above physical objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. To achieve these interaction scenarios, we developed ZeroN, a new tangible interface element that can be levitated and moved freely by computer in three-dimensional space. In doing so, ZeroN serves as a tangible representation of a 3D coordinate of the virtual world through which users can see, feel, and control computation. Our technological development includes a magnetic and mechanical control system that can levitate and actuate a permanent magnet in 3D space. This is combined with an optical tracking and display system that projects images on the levitating object. In this thesis, I present interaction techniques and applications developed in the context of this system. Finally, I discuss initial observations and implications, and outline future development and challenges.
    by Jinha Lee. S.M.
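Magnetic levitation of this kind is open-loop unstable, so a feedback controller must continually adjust the coil current from the tracked height. The PD controller below is a generic sketch of that idea; the gains, bias current, and clamp range are invented for illustration and are not ZeroN's actual control parameters:

```python
def levitation_current(target_h, h, dh_dt, kp=120.0, kd=18.0, bias=0.45):
    """One step of a PD height controller for a levitated magnet:
    coil current = bias holding current + proportional correction on the
    height error - damping on vertical velocity, clamped to a safe range."""
    i = bias + kp * (target_h - h) - kd * dh_dt
    return max(0.0, min(i, 2.0))  # clamp to an assumed [0, 2] A coil limit
```

Run at a high rate against the optical tracker's height estimate, such a loop holds the magnet at the user-placed position; moving the setpoint then actuates the object through space.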

    TangiBoard: a toolkit to reduce the implementation burden of tangible user interfaces in education

    The use of Tangible User Interfaces (TUIs) as an educational technology has gained sustained interest over the years, with common agreement on their innate ability to engage and intrigue students in active-learning pedagogies. While encouraging results have been obtained in research, the widespread adoption of TUI architectures is still hindered by a myriad of implementation burdens imposed by current toolkits. To this end, this paper presents an innovative TUI toolkit, TangiBoard, which enables the deployment of an interactive TUI system using low-cost, presently available educational technology. Apart from curtailing the setup costs and technical expertise required for adopting TUI systems, the toolkit provides an application framework to facilitate system calibration and integration with GUI applications. This is enabled by a robust computer vision application that tracks a contributed passive marker set, providing a range of tangible interactions to TUI frameworks. The effectiveness of this toolkit was evaluated by computer systems developers against alternative toolkits for TUI design. Open-source versions of the TangiBoard toolkit, together with marker sets, are provided online under a research license.
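The basic step of tracking a passive marker can be sketched without any vision library: threshold a grayscale frame and average the bright pixels' coordinates. This is a deliberately minimal stand-in; a real toolkit like the one described would use robust fiducial detection (e.g., ArUco-style markers) rather than a single centroid:

```python
def marker_centroid(image, threshold=128):
    """Locate one bright passive marker in a grayscale frame, given as a
    list of rows of pixel intensities: threshold the frame, then return
    the mean (x, y) of the bright pixels, or None if none are found."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

Feeding each camera frame through such a tracker and mapping centroids to on-screen coordinates is what lets physical tokens drive a GUI application.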

    ShapeBots: Shape-changing Swarm Robots

    We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions. The modular design of each actuator enables various shapes and geometries of self-transformation. We illustrate potential application scenarios and discuss how this type of interface opens up possibilities for the future of ubiquitous and distributed shape-changing interfaces.
    Comment: UIST 201
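The data-visualization use mentioned above — a swarm rendering a physical bar chart with its extendable actuators — can be sketched as a simple mapping from data values to per-robot extension lengths. The linear scaling scheme and the 20 cm maximum taken from the abstract are the only assumptions here:

```python
def extensions_for_bars(values, max_ext_cm=20.0):
    """Map non-negative data values to per-robot actuator extensions (cm)
    so a line of robots renders a physical bar chart: the largest value
    gets the full 20 cm stroke, and the rest scale linearly."""
    top = max(values) or 1.0  # guard against an all-zero dataset
    return [max_ext_cm * v / top for v in values]
```

Each robot would then drive its vertical actuator to the computed length, so the swarm collectively embodies the dataset.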