
    Content Format and Quality of Experience in Virtual Reality

    In this paper, we investigate three forms of virtual reality content production and consumption: 360° stereoscopic video, a 3D environment combined with a video billboard for dynamic elements, and a fully 3D-rendered scene. On one hand, video-based techniques facilitate the acquisition of content, but they can limit the user's experience since the content is captured from a fixed point of view. On the other hand, 3D content allows for point-of-view translation, but real-time photorealistic rendering is not trivial and comes at high production and processing costs. We compare the two extremes with an approach that combines dynamic video elements with a 3D virtual environment. We discuss the advantages and disadvantages of these systems and present the results of a user study with 24 participants. In the study, we evaluated the quality of experience, including presence, simulation sickness, and participants' assessment of content quality, for three versions of a cinematic segment with two actors. We found that, in this context, mixing video and 3D content produced the best experience.
    Comment: 25 pages
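The hybrid condition above hinges on a video billboard: a flat quad textured with the actor footage that is kept facing the viewer inside the 3D scene. The paper does not give an implementation; as an illustration, the common yaw-only billboarding (which keeps filmed people upright) can be sketched as follows — the function name and NumPy usage are my own assumptions:

```python
import numpy as np

def billboard_yaw(quad_pos, camera_pos):
    """Yaw angle (radians) that rotates a video quad to face the camera.

    Rotating only about the vertical (Y) axis keeps filmed actors
    upright, the usual choice for video billboards of people.
    """
    d = np.asarray(camera_pos, dtype=float) - np.asarray(quad_pos, dtype=float)
    # atan2 over the horizontal (X, Z) components; height is ignored.
    return float(np.arctan2(d[0], d[2]))

# A camera straight ahead of the billboard (+Z) needs no rotation.
print(billboard_yaw((0, 0, 0), (0, 1.7, 5.0)))  # 0.0
```

Applying this angle each frame keeps the video quad oriented toward the viewer as they move through the 3D environment.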

    Omnidirectional camera pose estimation and projective texture mapping for photorealistic 3D virtual reality experiences

    Modern applications in virtual reality require that the environment can be experienced as if it were real. In applications that deal with real scenarios, it is important to acquire both the three-dimensional (3D) structure and the visual detail of the scene so that users can achieve a good immersive experience. The purpose of this paper is to illustrate a method for obtaining a mesh with a high-quality texture by combining a raw 3D mesh model of the environment with 360° images. The main outcome is a mesh with a high level of photorealistic detail, enabling both good depth perception, thanks to the mesh model, and high visualization quality, thanks to the 2D resolution of modern omnidirectional cameras. The fundamental step toward this goal is the correct alignment between the 360° camera and the 3D mesh model. For this reason, we propose a method with two steps: 1) estimate the 360° camera's pose within the current 3D environment; 2) project the high-quality 360° image on top of the mesh. After describing the method, we outline its validation in two virtual reality scenarios, a mine and a city environment, which allows us to compare the achieved results with the ground truth.
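Once the camera pose from step 1 is known, step 2 reduces, per mesh vertex, to the standard equirectangular camera model. A minimal sketch of that projection — the function name, argument layout, and world-to-camera convention are assumptions, not the paper's exact formulation:

```python
import numpy as np

def project_equirect(point_world, cam_pos, cam_R, width, height):
    """Map a 3D mesh vertex to (u, v) pixels in an equirectangular
    360-degree image, given the omnidirectional camera pose.

    cam_R is assumed to be the 3x3 world-to-camera rotation matrix.
    """
    # Express the vertex in the camera frame.
    p = cam_R @ (np.asarray(point_world, float) - np.asarray(cam_pos, float))
    lon = np.arctan2(p[0], p[2])                # azimuth in (-pi, pi]
    lat = np.arcsin(p[1] / np.linalg.norm(p))   # elevation in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width       # longitude -> column
    v = (0.5 - lat / np.pi) * height            # latitude  -> row
    return u, v
```

Sampling the image at (u, v) for every vertex (or per fragment) yields the projective texture; visibility still has to be resolved separately, e.g. with a depth test against the mesh.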

    Enhanced life-size holographic telepresence framework with real-time three-dimensional reconstruction for dynamic scene

    Three-dimensional (3D) reconstruction captures and reproduces a 3D representation of a real object or scene, and 3D telepresence lets a user feel the presence of a remote user transferred as a digital representation. Holographic display is one alternative that discards the restriction of wearable hardware: it uses light diffraction to present 3D images to viewers. However, capturing a life-size or full-body human in real time remains challenging because it involves a dynamic scene: the object to be reconstructed is constantly moving and changing shape, and it requires multiple capturing views. The volume of life-size data multiplies when more depth cameras are used, which raises computation time, especially for dynamic scenes, and transferring high-volume 3D images over a network in real time can introduce lag and latency. Hence, the aim of this research is to enhance a life-size holographic telepresence framework with real-time 3D reconstruction for dynamic scenes. Three stages were carried out. In the first stage, real-time 3D reconstruction with the Marching Square algorithm is combined with the data acquisition of dynamic scenes captured by a life-size setup of multiple Red Green Blue-Depth (RGB-D) cameras. The second stage transmits the data acquired from the multiple RGB-D cameras in real time and performs double compression for the life-size holographic telepresence. The third stage evaluates the life-size holographic telepresence framework integrated with the real-time 3D reconstruction of dynamic scenes. The findings show that enhancing the framework with real-time 3D reconstruction reduces computation time and improves the 3D representation of the remote user in a dynamic scene. With double compression, the life-size 3D representation remains smooth, and the delay and latency during frame synchronization in remote communication are minimized.
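The "double compression" stage is described only at a high level. As an illustration, a two-stage scheme for one depth frame might pair a lossy quantization with a lossless coder; everything below (the quantization step, the choice of zlib) is an assumption for the sketch, not the thesis's actual codec:

```python
import zlib
import numpy as np

def double_compress(depth_mm, quant_step=4):
    """Two-stage ('double') compression sketch for one depth frame:
    1) lossy quantization to coarser depth steps (drops low-order bits),
    2) lossless zlib over the quantized bytes.
    quant_step and zlib are illustrative assumptions, not the thesis's codec.
    """
    q = (np.asarray(depth_mm, dtype=np.uint16) // quant_step).astype(np.uint16)
    return zlib.compress(q.tobytes(), 6)

def double_decompress(blob, shape, quant_step=4):
    q = np.frombuffer(zlib.decompress(blob), dtype=np.uint16).reshape(shape)
    return q * quant_step  # approximate depth in mm

frame = np.full((480, 640), 1234, dtype=np.uint16)   # synthetic flat depth map
blob = double_compress(frame)
print(len(blob) < frame.nbytes)  # True: far smaller than the raw frame
```

The lossy stage trades a bounded depth error (here at most quant_step millimetres) for bytes that the lossless stage can shrink much further, which is the general motivation for chaining two compressors before network transfer.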

    Understanding Context to Capture when Reconstructing Meaningful Spaces for Remote Instruction and Connecting in XR

    Recent technological advances are enabling HCI researchers to explore interaction possibilities for remote XR collaboration using high-fidelity reconstructions of physical activity spaces. However, the creation of these reconstructions often lacks user involvement, with an overt focus on capturing sensory context that does not necessarily augment an informal social experience. This work seeks to understand the social context that can be important to reconstruct when enabling XR applications for informal instructional scenarios. Our study involved the evaluation of an XR remote-guidance prototype by 8 intergenerational groups of closely related gardeners using reconstructions of personally meaningful spaces in their gardens. Our findings contextualize physical objects and areas with various motivations related to gardening and detail perceptions of XR that might affect the use of reconstructions for remote interaction. We discuss implications for user involvement to create reconstructions that better translate real-world experience, encourage reflection, incorporate privacy considerations, and preserve shared experiences with XR as a medium for informal intergenerational activities.
    Comment: 26 pages, 5 figures, 4 tables

    Bonding Over Distances: Building Social Presence Using Mixed Reality for Transnational Families

    Sparked by the frustrations experienced in transnational family communication and inspired by an interest in exploring the potential of a mixed reality (MR) future landscape, this study investigates the primary research question: how can we use mixed reality to build social presence for transnational family communication? This study reviews literature and contextual works from relevant fields, including presence and social presence, mixed reality, transnational relationships (inter-family and human-space relationships), and technology for social presence for transnational families, and then situates itself at the intersection of these categories. Utilizing the Research through Design methodology and paired user-testing methods, this study describes 4 iterative MR prototypes for building social presence for transnational families, highlighting each prototype's relation to a secondary research question, exploration goals, features, performance evaluation, and takeaways for the next iteration. It then documents and analyzes data collected from in-depth user-testing sessions with 6 transnational family pairs totaling 12 participants, each pair with one member living locally (in Toronto) and the other overseas. Quantitative and qualitative data were collected from different components of the user testing, including observation notes from paired-up live connection sessions for collaborative tasks, interviews, and online surveys. This study contributes to theory at the intersection of social presence, mixed reality research, transnational family relationships, and human-space relationships. The mixed reality prototypes, design frameworks, and evaluation criteria for designing mixed reality spaces to build social presence for transnational families also carry significance for design practice.

    Concepts and methods to support the development and evaluation of remote collaboration using augmented reality

    Remote Collaboration using Augmented Reality (AR) shows great potential to establish common ground in physically distributed scenarios where team members need to achieve a shared goal. However, most research efforts in this field have been devoted to experimenting with the enabling technology and proposing methods to support its development. As the field evolves, evaluation and characterization of the collaborative process become an essential, but difficult, endeavor in understanding the contributions of AR. In this thesis, we conducted a critical analysis to identify the main limitations and opportunities of the field, situating its maturity and proposing a roadmap of important research actions. Next, a human-centered design methodology was adopted, involving industrial partners to probe how AR could support their needs during remote maintenance. These outcomes were combined with methods from the literature into an AR prototype, which was evaluated through a user study. From this, the necessity became clear of reflecting deeply on the dimensions that influence, and must be considered in, Collaborative AR. Hence, a conceptual model and a human-centered taxonomy were proposed to foster the systematization of perspectives. Based on the proposed model, an evaluation framework for contextualized data gathering and analysis was developed, supporting the design and execution of distributed evaluations in a more informed and complete manner. To instantiate this vision, the CAPTURE toolkit was created, providing an additional perspective based on selected dimensions of collaboration and pre-defined measurements to obtain in-situ data about them, which can be analyzed using an integrated visualization dashboard. The toolkit successfully supported evaluations of several team members during tasks of remote maintenance mediated by AR, showing its versatility and potential in eliciting a comprehensive characterization of the added value of AR in real-life situations, and establishing itself as a general-purpose solution, potentially applicable to a wider range of collaborative scenarios.

    Removing spatial boundaries in immersive mobile communications

    Despite a worldwide trend towards mobile computing, current telepresence experiences focus on stationary desktop computers, limiting how, when, and where the researched solutions can be used. In this thesis I demonstrate that mobile phones are a capable platform for future research, showing the effectiveness of the communications made possible by their inherent portability and ubiquity. I first describe a framework upon which future systems can be built, which allows two distant users to explore one of several panoramic representations of the local environment by reorienting their devices. User experiments demonstrate this framework's ability to induce a sense of presence within the space and between users, and show that capturing this environment live provides no significant benefit over constructing it incrementally. This discovery enables a second application that allows users to explore a three-dimensional representation of their environment. Each user's position is shown as an avatar, with live facial capture to facilitate natural communication. Either user may also see the full environment by occupying the same virtual space. This application is also evaluated and shown to provide efficient communication to its users, offering a novel untethered experience not possible on stationary hardware, despite the limited computational power of mobile devices.
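The panoramic framework above maps the phone's orientation to a window into the shared panorama. With an equirectangular panorama that mapping is only a few lines; the degree-based convention below is an assumption for illustration, not the thesis's implementation:

```python
def viewport_center(yaw_deg, pitch_deg, pano_w, pano_h):
    """Centre pixel of the visible window in an equirectangular panorama
    for a device orientation given as yaw/pitch in degrees.

    Yaw wraps around the full 360-degree strip; pitch spans +/-90
    degrees from the horizon to the poles.
    """
    u = ((yaw_deg % 360.0) / 360.0) * pano_w   # heading -> column
    v = (0.5 - pitch_deg / 180.0) * pano_h     # tilt    -> row
    return u, v

# Facing the panorama seam, looking at the horizon.
print(viewport_center(0, 0, 4096, 2048))  # (0.0, 1024.0)
```

Feeding this with the phone's orientation sensors each frame lets two users independently look around the same captured space, which is the interaction the user experiments evaluate.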