Projector-Based Augmentation
Projector-based augmentation approaches hold the potential of combining the advantages of well-established spatial virtual reality and spatial augmented reality. Immersive, semi-immersive and augmented visualizations can be realized in everyday environments – without the need for special projection screens and dedicated display configurations. Limitations of mobile devices, such as low resolution and small field of view, focus constraints, and ergonomic issues can be overcome in many cases by the utilization of projection technology. Thus, applications that do not require mobility can benefit from efficient spatial augmentations. Examples range from edutainment in museums (such as storytelling projections onto natural stone walls in historical buildings) to architectural visualizations (such as augmentations of complex illumination simulations or modified surface materials in real building structures). This chapter describes projector-camera methods and multi-projector techniques that aim at correcting geometric aberrations, compensating local and global radiometric effects, and improving focus properties of images projected onto everyday surfaces.
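The local radiometric compensation mentioned in the abstract can be sketched in a simplified per-pixel form: if a camera has estimated the surface albedo and the ambient light at each projector pixel, the projector input that best reproduces a desired image follows from inverting the linear image-formation model. The function name, the toy values, and the single-channel linear model are illustrative assumptions, not the chapter's actual method.

```python
import numpy as np

def compensate_radiometric(desired, albedo, ambient, max_out=1.0):
    """Per-pixel radiometric compensation for projection on a textured surface.

    Assumes the linear model: observed = albedo * projected + ambient.
    Solving for the projector input and clipping to its output range gives
    the compensated image; dark surface patches saturate the projector.
    """
    projected = (desired - ambient) / np.maximum(albedo, 1e-6)
    return np.clip(projected, 0.0, max_out)

# Toy patch: uniform target brightness, but the lower row of the surface
# is darker (albedo 0.5), so compensation clips against projector limits.
desired = np.full((2, 2), 0.8)
albedo = np.array([[1.0, 1.0], [0.5, 0.5]])
ambient = np.full((2, 2), 0.1)
out = compensate_radiometric(desired, albedo, ambient)
```

The clipped lower row illustrates why multi-projector setups help: overlapping projectors add radiometric headroom on dark or saturated surface regions.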
Proxemic Flow: Dynamic Peripheral Floor Visualizations for Revealing and Mediating Large Surface Interactions
Interactive large surfaces have recently become commonplace for interactions in public settings. The fact that people can engage with them and the spectrum of possible interactions, however, often remain invisible and can be confusing or ambiguous to passersby. In this paper, we explore the design of dynamic peripheral floor visualizations for revealing and mediating large surface interactions. Extending earlier work on interactive illuminated floors, we introduce a novel approach for leveraging floor displays in a secondary, assisting role to aid users in interacting with the primary display. We illustrate a series of visualizations with the illuminated floor of the Proxemic Flow system. In particular, we contribute a design space for peripheral floor visualizations that (a) provides peripheral information about tracking fidelity with personal halos, (b) makes interaction zones and borders explicit for easy opt-in and opt-out, and (c) gives cues inviting spatial movement or possible next interaction steps through wave, trail, and footstep animations. We demonstrate our proposed techniques in the context of a large surface application and discuss important design considerations for assistive floor visualizations.
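The explicit interaction zones and borders described in point (b) could be driven by a simple distance-based classifier over tracked positions. The zone names and border distances below are illustrative assumptions, not values from the Proxemic Flow paper.

```python
def classify_zone(distance_m, borders=(1.5, 3.0)):
    """Map a tracked person's distance from the display to a floor zone.

    Crossing a border would trigger the corresponding floor visualization
    (e.g. lighting the border line to signal opt-in to interaction).
    Border distances here are hypothetical, not the paper's values.
    """
    interaction_border, notification_border = borders
    if distance_m <= interaction_border:
        return "interaction"
    if distance_m <= notification_border:
        return "notification"
    return "ambient"

# A passerby walking toward the display crosses zones in order:
path = [5.0, 2.4, 0.8]
zones = [classify_zone(d) for d in path]
```

Rendering a visible line at each border makes opt-in and opt-out an explicit, physical act of stepping across, which is the design intent the abstract describes.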
Low-cost efficient interactive whiteboard
In this paper we present a low-cost, efficient Interactive Whiteboard that, by fusing depth and video information provided by a low-cost depth camera, is able to detect and track user movements.
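The depth side of such a fusion can be sketched as background subtraction against the board plane: a fingertip touching the board sits a small, bounded distance in front of the captured background depth. The function name, thresholds, and toy values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def detect_touch_mask(depth, background, touch_min=0.005, touch_max=0.03):
    """Pixels where an object sits just in front of the board plane.

    depth and background are per-pixel sensor distances in meters.
    Objects far in front of the board (a hovering hand) are rejected
    by the upper threshold; sensor noise is rejected by the lower one.
    """
    diff = background - depth  # positive where something occludes the board
    return (diff > touch_min) & (diff < touch_max)

# Toy 1x3 row: bare board, a fingertip 2 cm in front, a hand 10 cm in front.
background = np.array([[1.00, 1.00, 1.00]])
depth = np.array([[1.00, 0.98, 0.90]])
mask = detect_touch_mask(depth, background)
```

The video stream would then refine these candidate regions (e.g. confirming skin-colored blobs), which is where fusing the two modalities pays off.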
deForm: An interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch
We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable surface of interaction. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools, through capacitive sensing on top of the input surface. Finally we motivate our device through a number of sample applications
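The three-phase structured light scanning that deForm relies on recovers a wrapped phase per pixel from three fringe images shifted by 120 degrees; after unwrapping, phase maps to surface height. The formula below is the standard three-step phase-shifting relation, shown as a sketch rather than the authors' exact pipeline; the simulated intensities are illustrative.

```python
import numpy as np

def three_phase_wrapped_phase(i1, i2, i3):
    """Wrapped phase from three sinusoidal fringe images shifted by 120 deg.

    With i_k = mean + modulation * cos(phase + offsets), the standard
    three-step relation recovers the phase at every pixel in closed form.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Simulate one pixel: mean intensity 0.5, modulation 0.3, true phase 0.5 rad.
phi = 0.5
i1 = 0.5 + 0.3 * np.cos(phi - 2 * np.pi / 3)
i2 = 0.5 + 0.3 * np.cos(phi)
i3 = 0.5 + 0.3 * np.cos(phi + 2 * np.pi / 3)
recovered = three_phase_wrapped_phase(i1, i2, i3)
```

Because the relation is closed-form per pixel, it runs in real time on full camera frames, which is what makes the interactive malleable surface feasible.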
KAVE - Kinect Cave: design, tools and comparative analysis with other VR technologies
Virtual reality has been delivered through many different forms and iterations. One of them is the CAVE. CAVE systems have developed over the years, but they still have prohibitive costs and are rather complex to implement. In this thesis we propose our own low-cost CAVE system – comprised of details about the setup as well as calibration software that was developed to help achieve the goals of this thesis – and compare it to other low-cost CAVEs found in the literature. This thesis also encompasses a presence study that was performed as a result of assessing the resulting CAVE. This study compared CAVE, PC and Head-Mounted Display in terms of presence and workload through the use of validated questionnaires found in the literature. The resulting data showed the HMD induced a higher sense of presence than the CAVE, and the CAVE induced a higher sense of presence than the PC. Regarding workload, the data also showed no statistically meaningful differences between the three technologies except for the physical demand of performing a task in the CAVE compared to performing the same task on the PC.
3D (embodied) projection mapping and sensing bodies : a study in interactive dance performance
This dissertation identifies the synergies between physical and virtual environments when designing for immersive experiences in interactive dance performances. The integration of virtual information in physical space is transforming our interactions and experiences with the world. By using the body and creative expression as the interface between real and virtual worlds, dance performance creates a privileged framework to research and design interactive mixed reality environments and immersive augmented architectures. The research is primarily situated in the fields of visual art and interaction design. It combines performance with transdisciplinary fields and intertwines practice with theory. The theoretical and conceptual implications involved in designing and experiencing immersive hybrid environments are analyzed using the reality–virtuality continuum. These theories helped frame the ways augmented reality architectures are achieved through the integration of dance performance with digital software and reception displays. They also helped identify the main artistic affordances and restrictions in the design of augmented reality and augmented virtuality environments for live performance. These pervasive media architectures were materialized in three field experiments, the live dance performances. Each performance was created in three different stages of conception, design and production. The first stage was to “digitize” the performer’s movement and brain activity to the virtual environment and our system. This was accomplished through the use of depth sensor cameras, 3D motion capture, and brain computer interfaces. The second stage was the creation of the computational architecture and software that aggregates the connections and mapping between the physical body and the spatial dynamics of the virtual environment. This process created real-time interactions between the performer’s behavior and motion and the real-time generative computer 3D graphics. 
Finally, the third stage consisted of the output modality: 3D projector-based augmentation techniques were adopted in order to overlay the virtual environment onto physical space. This thesis proposes and lays out theoretical, technical, and artistic frameworks between 3D digital environments and moving bodies in dance performance. By sensing the body and the brain with the 3D virtual environments, new layers of augmentation and interactions are established, and ultimately this generates mixed reality environments for embodied improvisational self-expression.