906 research outputs found

    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present REVERIE, a distributed and scalable framework that provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, experiences that will feel much closer to meeting face to face than what conventional teleconferencing systems offer.

    Reinventing a teleconferencing system

    Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 2001. Includes bibliographical references (p. 67-71). In looking forward to more natural communication, we can anticipate that the teleconferencing system of the future will enable participants at distant locations to share the same virtual space. The visual object of each participant can be transmitted to the other sites and rendered from an individual perspective. This thesis presents an effort, X-Conference, to reinvent a teleconferencing system toward the concept of "3-D Virtual Teleconferencing." Several aspects are explored. A multiple-camera calibration approach is implemented and employed to effectively blend the real view and the virtual view. An individualized 3-D head object is built semi-automatically by mapping real texture onto a globally modified generic model. Head motion parameters are extracted by tracking artificial and/or facial features. Without using an articulation model, facial animation is partially achieved through texture displacement. UDP/IP multicast and TCP/IP unicast are both utilized to implement the networking scheme. by Xin Wang. S.M.

    Recognizing Facial Expression using PCA and Genetic Algorithm

    This paper presents an efficient method for recognizing facial expressions in video. The work proposes a highly efficient facial expression recognition system using PCA optimized by a genetic algorithm. Reduced computational time and comparable accuracy in terms of its ability to recognize expressions correctly are the benchmarks of this work. Video sequences contain more information than still images and capture much more activity during expression actions, hence they are now a common research subject. We use PCA, a statistical method, to reduce dimensionality and extract features, applying covariance analysis to generate the eigen-components of the images. The eigen-components used as feature input are then optimized by a genetic algorithm to reduce the computation cost.
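The PCA-plus-GA pipeline described in this abstract can be illustrated in a few lines. The sketch below is a hypothetical toy reconstruction, not the paper's code: the synthetic data, array shapes, the variance-based fitness proxy, and all GA parameters (population size, mutation rate, number of generations) are assumptions for illustration only; a real system would score fitness by classification accuracy on labeled expression data.

```python
# Toy sketch: PCA eigen-components of flattened "face" images, followed by a
# minimal genetic algorithm that selects a subset of components by bit-mask.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 40 flattened images of 8x8 = 64 pixels each.
X = rng.normal(size=(40, 64))
X_centered = X - X.mean(axis=0)

# Eigen-decomposition of the covariance matrix gives the eigen-components.
cov = np.cov(X_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending order
order = np.argsort(eigvals)[::-1]          # sort descending by variance
components = eigvecs[:, order[:16]]        # keep the top 16 components
features = X_centered @ components         # (40, 16) projected features

def fitness(mask):
    # Proxy fitness: variance retained by the selected components.
    return features[:, mask.astype(bool)].var() if mask.any() else 0.0

# Minimal GA: each individual is a bit-mask over the 16 components.
pop = rng.integers(0, 2, size=(20, 16))
for _ in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]            # fittest half survives
    cut = rng.integers(1, 15)                          # one-point crossover
    children = np.concatenate([
        np.concatenate([parents[:5, :cut], parents[5:, cut:]], axis=1),
        np.concatenate([parents[5:, :cut], parents[:5, cut:]], axis=1)])
    flip = rng.random(children.shape) < 0.05           # mutation
    children = np.where(flip, 1 - children, children)
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected components:", np.flatnonzero(best))
```

The GA here only trims the component set after PCA; the dimensionality reduction itself is unchanged, which is what keeps the computation cost low.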

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling and reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. The research I focus on lies in two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of performing teleoperation using mixed reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which utilizes a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference for the human subject and that of the displayed image. The advantage of this approach lies in the fact that users need no wearable device, providing minimal intrusiveness and accommodating the users' eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents and by user research in military reconnaissance and similar domains. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view of reference. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves a motion sensor, projector, cameras and a robotic arm. Given the purpose of the system, calibration accuracy must be kept within millimeter level. Follow-up research on the HRD focuses on high-accuracy 3D reconstruction of the replica via commodity devices for better alignment of the video frames, since conventional 3D scanners either lack depth resolution or are very expensive. We propose a structured-light-based 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of our proposed algorithm. To compensate for the lack of synchronization between the local station and the remote station caused by latency introduced during data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a linear equation group with a smoothing coefficient ranging from 0 to 1. This predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
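The 1-step-ahead predictor with a smoothing coefficient in [0, 1] can be sketched as a simple blend of the newest command with the previous prediction. This is an assumed minimal form for illustration, not the thesis's exact formulation; the function name, `alpha` value, and command sequence are all hypothetical.

```python
# Sketch of a 1-step-ahead predictor: blend the operator's latest command
# with the previous prediction to mask sensing/communication latency.
def predict_next(command, previous_prediction, alpha):
    """Return the predicted next state; alpha in [0, 1] trades
    responsiveness (alpha -> 1) against smoothness (alpha -> 0)."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * command + (1.0 - alpha) * previous_prediction

# Usage: the remote robot tracks predicted targets while the true
# commands arrive late over the network.
commands = [0.0, 1.0, 1.0, 1.0, 0.5]
pred = 0.0
trajectory = []
for c in commands:
    pred = predict_next(c, pred, alpha=0.6)
    trajectory.append(round(pred, 4))
print(trajectory)  # -> [0.0, 0.6, 0.84, 0.936, 0.6744]
```

With alpha near 1 the prediction follows commands almost immediately; with alpha near 0 it lags but filters out jitter, which is the trade-off the cost-function formulation in the abstract would optimize.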

    Situated Displays in Telecommunication

    In face-to-face conversation, numerous cues of attention, eye contact, and gaze direction provide important channels of information. These channels create cues that include turn taking, establish a sense of engagement, and indicate the focus of conversation. However, some subtleties of gaze can be lost in common videoconferencing systems, because the single perspective view of the camera does not preserve the spatial characteristics of the face-to-face situation. In particular, in group conferencing, the `Mona Lisa effect' makes all observers feel that they are being looked at when the remote participant looks at the camera. In this thesis, we present designs and evaluations of four novel situated teleconferencing systems, which aim to improve the teleconferencing experience. Firstly, we demonstrate the effectiveness of a spherical video telepresence system in allowing a single observer at multiple viewpoints to accurately judge where the remote user is placing their gaze. Secondly, we demonstrate the gaze-preserving capability of a cylindrical video telepresence system, now for multiple observers at multiple viewpoints. Thirdly, we demonstrate the further improvement that a random hole autostereoscopic multiview telepresence system brings to conveying gaze by adding stereoscopic cues. Lastly, we investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. The results show that the spherical avatar telepresence system can be viewed qualitatively similarly from all angles and demonstrate how trust can be altered depending on how one views the avatar. Together these demonstrations motivate the further study of novel display configurations and suggest parameters for the design of future teleconferencing systems.

    A mixed reality telepresence system for collaborative space operation

    This paper presents a Mixed Reality system that results from the integration of a telepresence system and an application to improve collaborative space exploration. The system combines free viewpoint video with immersive projection technology to support non-verbal communication, including eye gaze, inter-personal distance and facial expression. Importantly, these can be interpreted together as people move around the simulation, maintaining natural social distance. The application is a simulation of Mars, within which the collaborators must come to agreement over, for example, where the Rover should land and go. The first contribution is the creation of a Mixed Reality system supporting contextualization of non-verbal communication. Two technological contributions are prototyping a technique to subtract a person from a background that may contain physical objects and/or moving images, and a lightweight texturing method for multi-view rendering which provides balance in terms of visual and temporal quality. A practical contribution is the demonstration of pragmatic approaches to sharing space between display systems of distinct levels of immersion. A research tool contribution is a system that allows comparison of conventionally authored and video-based reconstructed avatars, within an environment that encourages exploration and social interaction. Aspects of system quality, including the communication of facial expression and end-to-end latency, are reported.

    Materialising contexts: virtual soundscapes for real-world exploration

    © 2020, The Author(s). This article presents the results of a study based on a group of participants’ interactions with an experimental sound installation at the National Science and Media Museum in Bradford, UK. The installation used audio augmented reality to attach virtual sound sources to a vintage radio receiver from the museum’s collection, with a view to understanding the potential of this technology for promoting exploration and engagement within museums and galleries. We employ a practice-based design ethnography, including a thematic analysis of our participants’ interactions with spatialised interactive audio, and present an identified sequence of interactional phases. We discuss how audio augmented artefacts can communicate with and engage visitors beyond their traditional confines of line-of-sight, and how visitors can be drawn to engage further, beyond the realm of their original encounter. Finally, we provide evidence of how contextualised and embodied interactions, along with authentic audio reproduction, evoked personal memories associated with our museum artefact, and how this can promote interest in the acquisition of declarative knowledge. Additionally, through the adoption of a functional and theoretical aura-based model, we present ways in which this could be achieved, and, overall, we demonstrate a material object’s potential role as an interface for engaging users with, and contextualising, immaterial digital audio archival content.

    Comparing Mixed Reality Agent Representations: Studies in the Lab and in the Wild

    Mixed-reality systems provide a number of different ways of representing users to each other in collaborative scenarios. There is an obvious tension between using media such as video for remote users and representing them as avatars. This paper includes two experiments (total n = 80) on user trust when participants are exposed to two of three different user representations in an immersive virtual reality environment that also acts as a simulation of typical augmented reality scenarios: full-body video, head-and-shoulders video, and an animated 3D model. These representations acted as advisors in a trivia quiz. By evaluating trust through advisor selection and self-report, we found only minor differences between representations, but a strong effect of perceived advisor expertise. Unlike prior work, we did not find that the 3D model scored poorly on trust, perhaps as a result of greater congruence within an immersive context.