7,960 research outputs found

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, in both academic and industrial settings. The rising demand for digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting the core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and to define opportunities and challenges for novel touch- and gesture-based interactions between humans and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and by the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.

    Exploring the Multi-touch Interaction Design Space for 3D Virtual Objects to Support Procedural Training Tasks

    Multi-touch interaction has the potential to be an important input method for realistic training in 3D environments. However, multi-touch interaction has not been explored much for 3D tasks, especially when trying to leverage realistic, real-world interaction paradigms. A systematic inquiry into what realistic gestures look like for 3D environments is required to understand how users translate real-world motions to multi-touch motions. Once those gestures are defined, it is important to see how they can be leveraged to enhance training tasks. To explore the interaction design space for 3D virtual objects, we began by conducting a first study of user-defined gestures. From this work we identified a taxonomy and design guidelines for 3D multi-touch gestures, and found that perspective view plays a role in the chosen gesture. We also identified a desire to use pressure on capacitive touch screens. Since the best way to implement pressure still required investigation, our second study evaluated two different pressure estimation techniques in two different scenarios. Once we had a taxonomy of gestures, we wanted to examine whether implementing these realistic multi-touch interactions in a training environment provided training benefits. Our third study compared multi-touch interaction to standard 2D mouse interaction and to actual physical training, and found that multi-touch interaction performed better than the 2D mouse and as well as physical training. This study showed that multi-touch training using a realistic gesture set can perform as well as training on the actual apparatus. One limitation of the first training study was that the user's perspective was constrained, which allowed us to focus on isolating the gestures. Since users can change their perspective in a real-life training scenario, and thereby gain spatial knowledge of components, we wanted to see whether allowing users to alter their perspective helped or hindered training. Our final study compared training with Unconstrained multi-touch interaction, training with Constrained multi-touch interaction, and training on the actual physical apparatus. Results show that the Unconstrained multi-touch and Physical groups had significantly better performance scores than the Constrained multi-touch group, with no significant difference between the Unconstrained multi-touch and Physical groups. Our results demonstrate that allowing users more freedom to manipulate objects as they would in the real world benefits training. In addition to the research already performed, we propose several avenues for future research into the interaction design space for 3D virtual objects that we believe will be of value to researchers and designers of 3D multi-touch training environments.
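
    The abstract does not name the two pressure estimation techniques evaluated; one common approach on capacitive hardware, which lacks a true force sensor, is to approximate pressure from the size of the finger's contact patch. A minimal sketch of that idea in Python (the TouchSample fields and area thresholds are illustrative assumptions, not values from the study):

```python
import math
from dataclasses import dataclass

@dataclass
class TouchSample:
    """One frame of capacitive touch data (illustrative fields)."""
    major_radius_mm: float  # major axis of the contact ellipse
    minor_radius_mm: float  # minor axis of the contact ellipse

def estimate_pressure(sample: TouchSample,
                      rest_area_mm2: float = 40.0,
                      max_area_mm2: float = 120.0) -> float:
    """Map contact-ellipse area to a normalized pressure in [0, 1].

    Pressing harder flattens the fingertip and enlarges the contact
    patch reported by many capacitive controllers, so the patch area
    serves as a proxy for force.
    """
    area = math.pi * sample.major_radius_mm * sample.minor_radius_mm
    normalized = (area - rest_area_mm2) / (max_area_mm2 - rest_area_mm2)
    return min(1.0, max(0.0, normalized))

# A light touch yields a small patch; a firm press yields a larger one.
print(estimate_pressure(TouchSample(4.0, 3.5)))  # small patch -> ~0.05
print(estimate_pressure(TouchSample(6.5, 5.5)))  # large patch -> ~0.90
```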

    An Inertial Device-based User Interaction with Occlusion-free Object Handling in a Handheld Augmented Reality

    Augmented Reality (AR) is a technology that merges virtual objects with real environments in real time. The interaction between the end user and the AR system has always been a frequently discussed topic, and handheld AR is a newer approach that delivers enriched 3D virtual objects when a user looks through the device’s video camera. Among the most widely adopted handheld devices today are smartphones, which combine powerful processors, cameras for capturing still images and video, and a range of sensors capable of tracking the user’s location, orientation, and motion. These modern smartphones offer a sophisticated platform for implementing handheld AR applications. However, handheld displays often inherit interaction metaphors developed for head-mounted displays, which can be a poor fit for handheld hardware. This paper therefore proposes a real-time inertial device-based interaction technique for 3D object manipulation and explains the methods used for selection, holding, translation, and rotation. The technique aims to overcome a key limitation of 3D object manipulation by letting the user hold the device with both hands, without needing to stretch out one hand to manipulate the 3D object. The paper also recaps previous work in the fields of AR and handheld AR. Finally, it provides experimental results and offers new metaphors for manipulating 3D objects with handheld devices.
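
    The abstract does not detail how the inertial data drives the manipulation; a minimal sketch, assuming a gyroscope that reports angular velocity in rad/s, of how device rotation while "holding" could be transferred to the selected virtual object (the function, Euler-angle representation, and sampling rate below are illustrative assumptions):

```python
import math

def integrate_gyro(orientation_deg, gyro_rad_s, dt_s):
    """Accumulate device angular velocity onto a held object's Euler angles.

    orientation_deg: current (pitch, yaw, roll) of the virtual object, degrees
    gyro_rad_s:      angular velocity (x, y, z) reported by the device IMU
    dt_s:            time elapsed since the previous sensor sample
    """
    return tuple(
        angle + math.degrees(rate) * dt_s
        for angle, rate in zip(orientation_deg, gyro_rad_s)
    )

# While the object is "held", each IMU sample nudges its orientation,
# so tilting the two-handed device rotates the object in place.
obj = (0.0, 0.0, 0.0)
for sample in [(0.0, 0.5, 0.0)] * 10:  # ~0.5 rad/s of yaw for 10 frames
    obj = integrate_gyro(obj, sample, dt_s=1 / 60)
print(obj)  # yaw has advanced by roughly 4.8 degrees
```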

    AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures

    AirConstellations supports a unique semi-fixed style of cross-device interactions via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations where the users can bring multiple devices together in-air - with 2-5 armatures poseable in 7DoF within the same workspace - to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as for minimally disruptive device formations, easier physical transitions, and balancing "seeing and being seen" in remote work.
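
    The abstract describes proximity and relative orientation driving context-sensitive formations; a minimal sketch of that idea, with each armature's 7DoF pose simplified to position plus yaw, and with thresholds and formation labels that are illustrative assumptions rather than values from the paper:

```python
import math
from dataclasses import dataclass

@dataclass
class DevicePose:
    """Simplified armature pose: position in meters plus screen yaw in degrees."""
    x: float
    y: float
    z: float
    yaw_deg: float

def classify_formation(a: DevicePose, b: DevicePose,
                       near_m: float = 0.35,
                       aligned_deg: float = 20.0) -> str:
    """Classify two devices into an ad-hoc formation from proximity
    and relative orientation (thresholds are illustrative)."""
    dist = math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))
    # Smallest absolute difference between the two screen headings.
    delta = abs((a.yaw_deg - b.yaw_deg + 180.0) % 360.0 - 180.0)
    if dist <= near_m and delta <= aligned_deg:
        return "tiled"        # side by side, facing the same way
    if dist <= near_m:
        return "layered"      # close together but angled apart
    return "independent"      # far apart: treat as separate workspaces

print(classify_formation(DevicePose(0, 0, 1, 0), DevicePose(0.2, 0, 1, 5)))  # tiled
```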