518 research outputs found

    Designing information feedback within hybrid physical/digital interactions

    Whilst digital and physical interactions were once treated as separate design challenges, there is a growing need to consider them together to enable the creation of hybrid digital/physical experiences. For example, digital games can now include physical objects with digital properties, or digital objects with physical properties, either of which may provide input, output, or in-game information in various combinations. In this paper we consider how users perceive and understand interactions that include physical/digital objects through the design of a novel game, which allows us to consider: i) the character of the space or spaces in which we interact; ii) how users perceive the objects' operation; and iii) how we can design such objects to extend the bandwidth of information provided to the user/player. The prototype served as the focus of a participatory design workshop in which players experimented with, and discussed, physical ways of representing virtual in-game information. The results provide a framing for designers approaching information feedback in this domain, and highlight the need for further design research.

    SurfaceCast: Ubiquitous, Cross-Device Surface Sharing

    Real-time online interaction is the norm today. Tabletops and other dedicated interactive surface devices with direct input and tangible interaction can enhance remote collaboration and open up new interaction scenarios based on mixed physical/virtual components. However, they are available only to a small subset of users, as they usually require identical bespoke hardware for every participant, are complex to set up, and need custom scenario-specific applications. We present SurfaceCast, a software toolkit designed to merge multiple distributed, heterogeneous end-user devices into a single, shared mixed-reality surface. Supported devices include regular desktop and laptop computers, tablets, and mixed-reality headsets, as well as projector-camera setups and dedicated interactive tabletop systems. This device-agnostic approach provides a fundamental building block for exploring a far wider range of usage scenarios than previously feasible, including future clients built with our provided API. In this paper, we discuss the software architecture of SurfaceCast, present a formative user study and a quantitative performance analysis of our framework, and introduce five example application scenarios which we enhance through the multi-user and multi-device features of the framework. Our results show that the hardware- and content-agnostic architecture of SurfaceCast runs on a wide variety of devices with sufficient performance and fidelity for real-time interaction.
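    The toolkit's client API is not reproduced in this listing, so the sketch below is purely illustrative: a minimal Python client for a SurfaceCast-style shared surface, in which every name (SurfaceClient, the newline-delimited JSON wire format, the host and port) is an assumption rather than the toolkit's actual interface. It illustrates the device-agnostic idea: any client that can send normalized input and receive surface frames can join the shared canvas.

```python
# Hypothetical sketch of a device-agnostic SurfaceCast-style client.
# All names (SurfaceClient, the wire format, the server address) are
# illustrative assumptions, not the toolkit's actual API.
import json
import socket

class SurfaceClient:
    """Connects a local device to a shared mixed-reality surface."""

    def __init__(self, host: str, port: int, device_name: str):
        self.sock = socket.create_connection((host, port))
        self._send({"type": "hello", "device": device_name})

    def _send(self, msg: dict) -> None:
        # Newline-delimited JSON keeps the wire format device-agnostic.
        self.sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

    def send_touch(self, x: float, y: float, pressed: bool) -> None:
        # Coordinates are normalized to [0, 1] so that clients with
        # different resolutions map onto the same shared canvas.
        self._send({"type": "touch", "x": x, "y": y, "pressed": pressed})

    def receive_frame(self) -> dict:
        """Block until the server pushes the next composited surface state."""
        buf = b""
        while not buf.endswith(b"\n"):
            chunk = self.sock.recv(4096)
            if not chunk:  # connection closed by server
                raise ConnectionError("server closed the surface session")
            buf += chunk
        return json.loads(buf)

if __name__ == "__main__":
    # Requires a running server at this (hypothetical) address.
    client = SurfaceClient("surfacecast.example.org", 9000, "tablet-01")
    client.send_touch(0.5, 0.5, pressed=True)
```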

    Supporting Collaborative Learning in Computer-Enhanced Environments

    As computers have expanded into almost every aspect of our lives, the ever-present graphical user interface (GUI) has begun facing its limitations. Demanding its own share of attention, a GUI moves some of the user's focus away from the task, particularly when the task is 3D in nature or requires collaboration. Researchers are therefore exploring other means of human-computer interaction. Individually, some of these new techniques show promise, but it is the combination of multiple approaches into larger systems that will allow us to more fully replicate our natural behavior within a computing environment. The more capable computers become of understanding our varied natural behavior (speech, gesture, etc.), the less we need to adjust our behavior to conform to computers' requirements. Such capabilities are particularly useful where children are involved, and make using computers in education all the more appealing. Described herein are two approaches to, and implementations of, educational computer systems that work not by user manipulation of virtual objects but by user manipulation of physical objects within the environment. These systems demonstrate how new technologies can promote collaborative learning among students, thereby enhancing both the students' knowledge and their ability to work together to achieve even greater learning. With these systems, the horizon of computer-facilitated collaborative learning has been expanded; this expansion includes the identification of issues for general and special education students and suggested applications in a variety of domains.

    Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration

    We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users' body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.
    National Science Foundation (U.S.), Graduate Research Fellowship Program (Grant No. 1122374)
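    As a rough illustration of the shape-transmission concept, the following Python sketch maps a captured depth image onto the pin heights of a remote shape display; the grid size, pin travel, and depth range are assumed values for illustration, not parameters from the paper.

```python
# Minimal sketch of shape transmission: a captured depth map is
# downsampled and quantized to the pin resolution of a remote shape
# display. Grid size and travel range are illustrative assumptions.
import numpy as np

PIN_ROWS, PIN_COLS = 24, 24   # assumed actuator grid
PIN_TRAVEL_MM = 100.0         # assumed vertical pin travel

def depth_to_pin_heights(depth_mm: np.ndarray,
                         near_mm: float, far_mm: float) -> np.ndarray:
    """Map a dense depth image onto discrete pin heights."""
    # Block-average the depth image down to the pin grid.
    h, w = depth_mm.shape
    bh, bw = h // PIN_ROWS, w // PIN_COLS
    blocks = depth_mm[:bh * PIN_ROWS, :bw * PIN_COLS]
    blocks = blocks.reshape(PIN_ROWS, bh, PIN_COLS, bw).mean(axis=(1, 3))
    # Nearer surfaces (e.g. a reaching hand) rise higher on the display.
    t = np.clip((far_mm - blocks) / (far_mm - near_mm), 0.0, 1.0)
    return t * PIN_TRAVEL_MM

# Stand-in depth frame; a real system would read from a depth camera.
frame = np.random.uniform(600, 1200, size=(480, 640))
heights = depth_to_pin_heights(frame, near_mm=600, far_mm=1200)
```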

    tCAD: a 3D modeling application on a depth enhanced tabletop computer

    Tabletop computers featuring multi-touch input and object tracking are a common platform for research on Tangible User Interfaces (also known as Tangible Interaction). However, such systems are confined to sensing activity on the tabletop surface, disregarding the rich and relatively unexplored interaction canvas above it. This dissertation contributes tCAD, a 3D modeling tool combining fiducial marker tracking, finger tracking, and depth sensing in a single system. It presents the technical details of how these features were integrated, attesting to their viability through the design, development, and early evaluation of the tCAD application. A key aspect of this work is a description of the interaction techniques enabled by merging tracked objects with direct user input on and above a table surface.
    Universidade da Madeira
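    To make the sensor combination concrete, here is a hedged Python sketch of how the three input channels named in the abstract (fiducial markers, finger tracking, depth sensing) might be fused into a single event stream; the data structures and the hover threshold are illustrative assumptions, not tCAD's implementation.

```python
# Hypothetical fusion of on-surface and above-surface input channels
# into one event stream. All types and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Marker:            # tangible object tracked via fiducial marker
    object_id: int
    x: float
    y: float
    angle: float

@dataclass
class Hand:              # hand/finger tracked by the depth camera
    x: float
    y: float
    height_mm: float     # distance above the tabletop surface

def classify_hand(hand: Hand) -> str:
    """Route input to on-surface touch or above-surface interaction."""
    return "touch" if hand.height_mm < 10.0 else "mid_air"

def fuse(markers: list[Marker], hands: list[Hand]) -> list[tuple]:
    """Merge both channels into a single ordered event stream."""
    events = []
    for m in markers:
        events.append(("object_moved", m.object_id, m.x, m.y, m.angle))
    for h in hands:
        events.append((classify_hand(h), h.x, h.y, h.height_mm))
    return events

print(fuse([Marker(1, 0.2, 0.3, 45.0)], [Hand(0.5, 0.5, 120.0)]))
```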

    Freeform 3D interactions in everyday environments

    Personal computing is continuously moving away from traditional mouse and keyboard input as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world, and focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which follows a similar goal of reducing the gap between the real and the virtual, but predominantly focuses on output, overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output and seamless 3D input, these have mostly been looked at separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while the input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restrictive when interacting in the real world. NUI and AR can be seen as highly complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low-latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and meaningfully combine the positive qualities attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness, and mobility in various physical settings. There are a number of technical challenges that arise when designing a mixed NUI/AR system, which I address in this work: What can we capture, and how? How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.
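    The three questions above suggest a capture/sense/render loop. The following Python sketch shows one hypothetical shape such a mixed NUI/AR loop could take; every class and method here is an illustrative stub, not an interface from the thesis.

```python
# Illustrative stubs for a combined NUI/AR frame: capture the environment,
# sense the user's hands, and render output registered to the real world.
class DepthCamera:
    def capture_scene(self) -> dict:
        return {"surfaces": []}        # reconstructed 3D geometry (AR capture)

class HandTracker:
    def poll(self) -> list:
        return []                      # dexterous freehand 3D input (NUI side)

class Projector:
    def render(self, content: dict, registered_to: dict) -> None:
        pass                           # spatially registered output (AR side)

def frame(camera: DepthCamera, hands: HandTracker, out: Projector) -> None:
    scene = camera.capture_scene()     # what can we capture?
    gestures = hands.poll()            # sensing the user
    content = {"highlights": gestures} # representing the real in the virtual
    out.render(content, registered_to=scene)  # coupling input and output

frame(DepthCamera(), HandTracker(), Projector())
```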

    MVC-3D: Adaptive Design Pattern for Virtual and Augmented Reality Systems

    In this paper, we present the MVC-3D design pattern for developing virtual and augmented (or mixed) reality interfaces that use new types of sensors and modalities and implement specific algorithms and simulation models. The proposed pattern extends the classic MVC pattern by enriching the View component (the interactive View, iV) and adding a specific Library component (L). Results obtained while developing augmented reality interfaces show that the complexity of the M, iV, and C components is reduced; the complexity increases only in the Library component. This helps programmers keep their models well structured even as interface complexity grows. The proposed design pattern is also used in a design process called "MVC-3D in the loop", which enables a seamless evolution from an initial prototype to the final system.
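    A minimal structural sketch of the pattern, as the abstract describes it, is given below: a classic Model and Controller, an enriched interactive View (iV), and a Library (L) that absorbs sensor and algorithm complexity. All class and method names are assumptions for illustration, not the paper's actual interfaces.

```python
# Structural sketch of the MVC-3D idea: sensor/algorithm complexity
# lives in the Library (L); the interactive View (iV) both renders 3D
# output and captures input; M and C stay close to classic MVC.
class Library:
    """Wraps sensors, tracking algorithms, and simulation models (L)."""
    def read_pose(self) -> tuple:
        return (0.0, 0.0, 0.0)          # e.g. a tracked marker pose

class Model:
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)

class InteractiveView:
    """The enriched View (iV): renders 3D output and captures input."""
    def __init__(self, library: Library):
        self.library = library
    def render(self, model: Model) -> None:
        print("render object at", model.pose)
    def poll_input(self) -> tuple:
        # New sensors and modalities plug into L, not into iV itself.
        return self.library.read_pose()

class Controller:
    def __init__(self, model: Model, view: InteractiveView):
        self.model, self.view = model, view
    def update(self) -> None:
        self.model.pose = self.view.poll_input()
        self.view.render(self.model)

Controller(Model(), InteractiveView(Library())).update()
```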

    inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation

    Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate, by providing dynamic physical affordances through shape change; to restrict, by guiding users with dynamic physical constraints; and to manipulate, by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints, and object actuation can create novel interaction possibilities.
    National Science Foundation (U.S.), Graduate Research Fellowship (Grant 1122374); Swedish Research Council (Fellowship); Blanceflor Foundation (Scholarship)
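    The three mediation roles can be pictured as operations on a pin heightmap. The Python sketch below is a hedged illustration with an assumed grid size and assumed heights, not inFORM's actual rendering pipeline: a raised button facilitates, a ridge restricts, and lifting pins under a passive object manipulates.

```python
# Sketch of the three mediation roles on a pin-based shape display.
# Grid size and heights are illustrative assumptions.
import numpy as np

pins = np.zeros((24, 24))  # assumed pin grid, heights in mm

def raise_button(cx: int, cy: int, r: int = 2, height: float = 15.0) -> None:
    """Facilitate: pop up a pressable physical button (dynamic affordance)."""
    ys, xs = np.ogrid[:pins.shape[0], :pins.shape[1]]
    pins[(xs - cx) ** 2 + (ys - cy) ** 2 <= r * r] = height

def raise_ridge(row: int, height: float = 25.0) -> None:
    """Restrict: a wall that physically constrains hand movement."""
    pins[row, :] = height

def lift_object(x: int, y: int, height: float = 40.0) -> None:
    """Manipulate: actuate pins beneath a passive tangible object."""
    pins[y, x] = height

raise_button(6, 6)   # affordance invites touch
raise_ridge(12)      # constraint guides the hand
lift_object(18, 18)  # actuation moves a physical object
```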