304 research outputs found
A mixed reality telepresence system for collaborative space operation
This paper presents a Mixed Reality system that integrates a telepresence system with an application to improve collaborative space exploration. The system combines free-viewpoint video with immersive projection technology to support non-verbal communication, including eye gaze, inter-personal distance and facial expression. Importantly, these can be interpreted together as people move around the simulation, maintaining natural social distance. The application is a simulation of Mars, within which the collaborators must come to agreement over, for example, where the Rover should land and go.
The first contribution is the creation of a Mixed Reality system supporting contextualization of non-verbal communication. Two technological contributions are prototyping a technique to subtract a person from a background that may contain physical objects and/or moving images, and a lightweight texturing method for multi-view rendering that balances visual and temporal quality. A practical contribution is the demonstration of pragmatic approaches to sharing space between display systems of distinct levels of immersion. A research tool contribution is a system that allows comparison of conventionally authored and video-based reconstructed avatars, within an environment that encourages exploration and social interaction. Aspects of system quality, including the communication of facial expression and end-to-end latency, are reported.
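The abstract does not describe how the person/background subtraction works; as an illustration only, here is a minimal sketch of the generic running-average baseline (function names and thresholds are hypothetical, and the thesis's actual technique additionally copes with physical objects and moving imagery behind the person):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running-average update of the background model."""
    return (1 - alpha) * background + alpha * frame.astype(np.float32)

def foreground_mask(frame, background, threshold=25.0):
    """Boolean mask of pixels whose colour departs from the background
    model by more than `threshold` in any channel."""
    diff = np.abs(frame.astype(np.float32) - background)
    return diff.max(axis=-1) > threshold

# Toy 4x4 RGB scene: a flat grey background with one bright "person" pixel.
background = np.full((4, 4, 3), 100.0, dtype=np.float32)
frame = background.copy()
frame[1, 2] = [200.0, 200.0, 200.0]
mask = foreground_mask(frame, background)   # True only at (1, 2)
```

Per-pixel thresholding like this fails exactly where the thesis's contribution lies (moving imagery behind the subject), which is why a more robust model is needed in practice.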
Situated Displays in Telecommunication
In face-to-face conversation, numerous cues of attention, eye contact, and gaze direction provide important channels of information. These channels create cues that include turn taking, establish a sense of engagement, and indicate the focus of conversation. However, some subtleties of gaze can be lost in common videoconferencing systems, because the single perspective view of the camera does not preserve the spatial characteristics of the face-to-face situation. In particular, in group conferencing, the 'Mona Lisa effect' makes all observers feel that they are being looked at when the remote participant looks at the camera. In this thesis, we present designs and evaluations of four novel situated teleconferencing systems, which aim to improve the teleconferencing experience. Firstly, we demonstrate the effectiveness of a spherical video telepresence system in allowing a single observer at multiple viewpoints to accurately judge where the remote user is placing their gaze. Secondly, we demonstrate the gaze-preserving capability of a cylindrical video telepresence system, this time for multiple observers at multiple viewpoints. Thirdly, we demonstrate the further improvement of a random-hole autostereoscopic multiview telepresence system in conveying gaze by adding stereoscopic cues. Lastly, we investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. The results show that the spherical avatar telepresence system can be viewed qualitatively similarly from all angles and demonstrate how trust can be altered depending on how one views the avatar. Together these demonstrations motivate further study of novel display configurations and suggest parameters for the design of future teleconferencing systems.
Video based reconstruction system for mixed reality environments supporting contextualised non-verbal communication and its study
This thesis presents a system to capture, reconstruct and render the three-dimensional form of people and objects of interest in such detail that the spatial and visual aspects of non-verbal behaviour can be communicated. The system supports live distribution and simultaneous rendering in multiple locations, enabling the apparent teleportation of people and objects. Additionally, the system allows for the recording of live sessions and their playback in natural time with free viewpoint. It utilises components of a video-based reconstruction and a distributed video implementation to create an end-to-end system that can operate in real time on commodity hardware. The research addresses the specific challenges of spatial and colour calibration, segmentation and overall system architecture, overcoming the technical barriers and the domain-specific knowledge otherwise required to set up the system and generate avatars of consistently high quality. Applications of the system include, but are not limited to, telepresence, where the computer-generated avatars used in Immersive Collaborative Virtual Environments can be replaced with ones that are faithful to the people they represent, and supporting researchers in their study of human communication, such as gaze, inter-personal distance and facial expression. The system has been adopted in other research projects and is integrated with a mixed reality application where, during a live link-up, a three-dimensional avatar is streamed to multiple end-points across different countries.
3D (embodied) projection mapping and sensing bodies: a study in interactive dance performance
This dissertation identifies the synergies between physical and virtual environments when designing for immersive experiences in interactive dance performances. The integration of virtual information in physical space is transforming our interactions and experiences with the world. By using the body and creative expression as the interface between real and virtual worlds, dance performance creates a privileged framework to research and design interactive mixed reality environments and immersive augmented architectures. The research is primarily situated in the fields of visual art and interaction design. It combines performance with transdisciplinary fields and intertwines practice with theory. The theoretical and conceptual implications involved in designing and experiencing immersive hybrid environments are analyzed using the reality-virtuality continuum. These theories helped frame the ways augmented reality architectures are achieved through the integration of dance performance with digital software and reception displays. They also helped identify the main artistic affordances and restrictions in the design of augmented reality and augmented virtuality environments for live performance. These pervasive media architectures were materialized in three field experiments, the live dance performances. Each performance was created in three different stages of conception, design and production. The first stage was to "digitize" the performer's movement and brain activity into the virtual environment and our system. This was accomplished through the use of depth-sensor cameras, 3D motion capture, and brain-computer interfaces. The second stage was the creation of the computational architecture and software that aggregates the connections and mapping between the physical body and the spatial dynamics of the virtual environment. This process created real-time interactions between the performer's behavior and motion and the real-time generative computer 3D graphics.
Finally, the third stage consisted of the output modality: 3D projector-based augmentation techniques were adopted in order to overlay the virtual environment onto physical space. This thesis proposes and lays out theoretical, technical, and artistic frameworks between 3D digital environments and moving bodies in dance performance. By sensing the body and the brain with the 3D virtual environments, new layers of augmentation and interaction are established, ultimately generating mixed reality environments for embodied improvisational self-expression.
A technical account behind the development of a reproducible low-cost immersive space to conduct applied user testing
Both laboratory and field experiments are flawed in their appropriateness for Human-centered design (HCD) user testing. Simulated Task Environments (STEs) offer a viable alternative, enabling researchers to recreate realistic conditions and immersive environments whilst controlling variables under laboratory conditions. This paper details the design process and technicalities used by a multi-disciplinary HCD research team to develop a reproducible low-cost immersive STE called the Perceptual Experience Laboratory (PEL). The research and development of the PEL in its three distinct stages is outlined to share the lessons learnt for the benefit of researchers and practitioners. In its current form, cylindrical media is surface-mapped onto a bespoke 2m-high, 200° video wall to deliver seamless 12K enhanced field-of-view content around the user, visually recreating environments not normally accessible to researchers. The staging area can be configured with props and multisensory cues, simulating an in-context approach for HCD product testing. Additionally, immersive and realistic soundscapes are created via a 20.4 audio system equipped with spatial panners that provide directional sound. A growing number of commercial and academic research projects have been delivered using the PEL, with research validating the user-testing environment; its ongoing success has attracted research and enterprise capital investment to advance immersive capabilities.
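The cylindrical surface-mapping can be pictured as a linear correspondence between pixel columns and viewing angles around the user. A minimal sketch, assuming a 12288-pixel-wide canvas for the "12K" content (the exact resolution and mapping are not stated in the abstract; the function names are illustrative):

```python
WALL_FOV_DEG = 200.0       # horizontal coverage of the video wall (from the paper)
WALL_WIDTH_PX = 12288      # assumed pixel width of the "12K" content

def column_to_azimuth(x_px):
    """Pixel column on the cylindrical wall -> azimuth in degrees,
    with 0 at the centre of the 200-degree arc."""
    return (x_px / WALL_WIDTH_PX - 0.5) * WALL_FOV_DEG

def azimuth_to_column(az_deg):
    """Inverse mapping: azimuth in degrees -> pixel column."""
    return (az_deg / WALL_FOV_DEG + 0.5) * WALL_WIDTH_PX

# The centre column faces straight ahead; the wall edges sit at +/-100 degrees.
```

A real installation would additionally correct for projector blending and the viewer's position inside the cylinder, but the linear column-to-angle relation is the core of wrapping flat media around a curved wall.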
Shadows, touch and digital puppeteering: a media archaeological approach
Aims
The practical aim of this research project is to create a multi-touch digital puppetry system that simulates shadow theatre environments and translates gestural acts of touch into live and expressive control of virtual shadow figures. The research is focussed on the qualities of movement achievable through the haptics of single and multi-touch control of the digital puppets in the simulation. An associated aim is to create a collaborative environment where multiple performers can control dynamic animation and scenography, and create novel visualisations and narratives.
The conceptual aim is to link traditional and new forms of puppetry, seeking cultural significance in the "remediation" of old forms that avail themselves of new haptic resources and collaborative interfaces.
The thesis evaluates related prior art where traditional worlds of shadow performance meet new media, digital projection and 3D simulation, in order to investigate how changing technical contexts transform the potential of shadows as an expressive medium.
Methodology
The thesis uses cultural analysis of relevant documentary material to contextualise the practical work by relating the media archaeology of 2D puppetry (shadows, shadowgraphs and silhouettes) to landmark work in real-time computer graphics and performance animation. The survey considers the work of puppeteers, animators, computer graphics specialists and media artists.
Through practice and an experimental approach to critical digital creativity, the study provides practical evidence of multiple iterations of controllable physics-based animation delivering expressive puppet motion through touch and multiuser interaction. Video sequences of puppet movement and written observational analysis document the intangible aspects of animation in performance. Through re-animation of archival shadow puppets, the study presents an emerging artistic media archaeological method. The major element of this method has been the restoration of a collection of Turkish Karagöz Shadow puppets from the Institut International de la Marionnette (Charleville, France) into a playable digital form.
Results
The thesis presents a developing creative and analytical framework for digital shadow puppetry. It proposes a media archaeological method for working creatively with puppet archives that unlocks the kinetic and expressive potential of restored figures. The interaction design introduces novel approaches to puppetry control systems, using spring networks with objects under physics simulation that demonstrate emergent expressive qualities. The system facilitates a dance of agency¹ between puppeteer and digital instrument. The practical elements have produced several software iterations and a tool-kit for generating elegant, nuanced multi-touch shadow puppetry. The study presents accidental discoveries: serendipitous benefits of open-ended practical exploration. For instance, the extensible nature of the control system means novel input, other than touch, can provide exciting potential for accessible user interaction, e.g. with gaze duration and eye direction. The study also identifies limitations, including the rate of software change and obsolescence, the scope of physics-based animation and failures of simulation.
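The abstract mentions spring networks under physics simulation without giving details; as a hypothetical sketch of the idea, a single damped spring can pull a 2D puppet node toward a touch point, so the figure follows the finger with weight and lag rather than snapping (all names and constants here are illustrative, not the thesis's implementation):

```python
def step_spring(pos, vel, target, k=40.0, damping=6.0, mass=1.0, dt=0.01):
    """One semi-implicit Euler step of a damped spring pulling a 2D
    puppet node at `pos` toward the touch point `target`."""
    fx = k * (target[0] - pos[0]) - damping * vel[0]
    fy = k * (target[1] - pos[1]) - damping * vel[1]
    vel = (vel[0] + fx / mass * dt, vel[1] + fy / mass * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# Drag a node from the origin toward a touch at (1, 0): it converges
# smoothly, which is what lends the motion its puppet-like weight.
pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(2000):
    pos, vel = step_spring(pos, vel, (1.0, 0.0))
```

A network of such springs linking the joints of a figure, each driven by one touch point, gives the kind of emergent, slightly unruly motion the thesis attributes to its control system.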
Originality/value
The work has historical value in that it documents and begins a media archaeology of digital puppetry, an animated phenomenon of increasing academic and commercial interest. The work is of artistic value providing an interactive approach to making digital performance from archival material in the domain of shadow theatre. The work contributes to the electronic heritage of existing puppetry collections.
The study establishes a survey of digital puppetry, setting a research agenda for future studies. Work may proceed to digitise, rig and create collaborative and web-mediated touch-based motion control systems for 2D and 3D puppets. The present study thus provides a solid platform to restore past performances and create new work from old, near-forgotten forms.
¹ Following Andrew Pickering, puppetry is "a temporally extended back-and-forth dance of human and non-human agency in which activity and passivity on both sides are reciprocally intertwined". PICKERING, A. 2010. Material Culture and the Dance of Agency. In: BEAUDRY, M. C. & HICKS, D. (eds.) Oxford Handbook of Material Culture Studies. Oxford University Press.
Proceedings of the 2nd European conference on disability, virtual reality and associated technologies (ECDVRAT 1998)
The proceedings of the conference.
- âŠ