
    Smart 360-Degree Photography for Enhancing Construction Progress Reporting

    Periodic construction progress reports are essential to project evaluation and review, and they affect stakeholder communication, transparency, and trust. While conventional pictures and videos (captured data) are currently the norm in supporting progress reporting, their use is not always efficient. Commercial products now make it possible to integrate 360-degree photography into progress reports; however, there is a shortage of academic studies that assess the effectiveness of such tools. The goal of this research is to develop and test a user-friendly framework for progress reporting that integrates 360-degree photography. The research began by collecting information from construction experts to determine the methods currently used for progress reporting and the level of adoption of 360-degree photography in the MENA region. An innovative framework integrating 360-degree photography was then developed. To evaluate its effectiveness, a 3-month pilot study was conducted in which the framework was applied to three ongoing construction projects in Egypt. A thorough analysis of meeting, correspondence, and interview transcripts from before, during, and after the technology's use indicates that the proper use of 360-degree photography in progress reports has a positive impact on overall coordination, transparency, trust, and the division of responsibility between project parties. The obstacles to adopting such a framework, and recommendations on how to overcome them, are also discussed so that future researchers can further improve the progress-reporting process.
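    The abstract does not specify the framework's internal structure; as a rough illustration only, the Python sketch below shows one plausible way to attach 360-degree captures to periodic report entries so that the same viewpoint can be compared across reporting periods. All class names, fields, and values are hypothetical, not taken from the paper.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Capture360:
        """One 360-degree photo tied to a fixed location on site (hypothetical model)."""
        location_id: str      # e.g., a grid reference or room number
        captured_on: date
        image_uri: str        # link to the equirectangular image

    @dataclass
    class ProgressEntry:
        """A single reporting-period entry for one work package (hypothetical model)."""
        work_package: str
        period: str           # e.g., "2023-W12"
        percent_complete: float
        captures: list[Capture360] = field(default_factory=list)

        def add_capture(self, capture: Capture360) -> None:
            """Attach a 360-degree capture to this reporting period."""
            self.captures.append(capture)

    # Revisiting the same location each period lets stakeholders compare
    # identical viewpoints across successive reports.
    entry = ProgressEntry("Structural works - Level 3", "2023-W12", 42.5)
    entry.add_capture(Capture360("L3-GridB4", date(2023, 3, 21),
                                 "https://example.com/pano/L3-B4-w12.jpg"))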

    Videoscapes: Exploring Unstructured Video Collections


    Navigating Immersive and Interactive VR Environments With Connected 360° Panoramas

    Emerging research is expanding the use of 360-degree spherical panoramas of real-world environments in 360 VR experiences beyond video and image viewing. However, most of these experiences are strictly guided, with few opportunities for interaction or exploration. There is a desire to develop experiences built on cohesive virtual environments created with 360 VR that allow a choice in navigation, rather than scripted experiences with limited interaction. Unlike standard VR, with the freedom afforded by synthetic graphics, 360 VR poses challenges in designing appropriate user interfaces (UIs) for navigation within the limitations of fixed assets. To address this gap, we designed RealNodes, a software system that presents an interactive and explorable 360 VR environment, and we developed four visual guidance UIs for 360 VR navigation. The results of a pilot study showed that the choice of UI had a significant effect on task completion times, with one of our methods, Arrow, performing best. Arrow also exhibited positive but non-significant trends in average preference, user engagement, and simulator-sickness measures. RealNodes, the UI designs, and the pilot-study results contribute preliminary information to inspire future investigation of how to design effective explorable scenarios in 360 VR and visual guidance metaphors for navigation in applications using 360 VR environments.
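    As a rough illustration of the core idea behind connected 360-degree panoramas, the Python sketch below models an environment as a graph of panorama nodes and computes which neighbor transitions fall inside the current field of view, i.e., where an Arrow-style cue could be drawn. The names and structure are assumptions for illustration, not the RealNodes implementation.

    from dataclasses import dataclass, field
    import math

    @dataclass
    class PanoNode:
        name: str
        image_uri: str                       # equirectangular 360 image
        neighbors: dict[str, float] = field(default_factory=dict)
        # neighbor name -> yaw (radians) at which its transition appears

        def link(self, other: "PanoNode", yaw: float) -> None:
            """Connect two panoramas bidirectionally; the reverse edge
            points back along the opposite heading."""
            self.neighbors[other.name] = yaw
            other.neighbors[self.name] = (yaw + math.pi) % (2 * math.pi)

    def visible_waypoints(node: PanoNode, view_yaw: float,
                          fov: float = math.radians(90)) -> list[str]:
        """Return neighbors whose transition direction falls inside the
        current field of view, i.e., candidates for a navigation cue."""
        def ang_diff(a: float, b: float) -> float:
            return abs((a - b + math.pi) % (2 * math.pi) - math.pi)
        return [n for n, yaw in node.neighbors.items()
                if ang_diff(yaw, view_yaw) <= fov / 2]

    lobby = PanoNode("lobby", "lobby.jpg")
    hall = PanoNode("hall", "hall.jpg")
    lobby.link(hall, yaw=math.radians(30))
    print(visible_waypoints(lobby, view_yaw=0.0))   # ['hall']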

    Capture4VR: From VR Photography to VR Video


    Exploring the impact of 360° movie cuts in users' attention

    Virtual Reality (VR) has grown since the first devices for personal use became available on the market. However, the production of cinematographic content for this new medium is still in an early, exploratory phase. The main reason is that cinematographic language in VR is still under development, and we still need to learn how to tell stories effectively in it. A key element in traditional film editing is the use of different cutting techniques to transition seamlessly from one sequence to another, and a fundamental aspect of these techniques is the placement of, and control over, the camera. VR content creators, however, do not have full control of the camera: users in VR can freely explore the 360° of the scene around them, which potentially leads to very different experiences. While this is desirable in certain applications such as VR games, it may hinder the experience in narrative VR. In this work, we perform a systematic analysis of users' viewing behavior across cut boundaries while watching professionally edited, narrative 360° videos. We extend previous metrics for quantifying user behavior to support more complex and realistic footage, and we introduce two new metrics that allow us to measure users' exploration in a variety of complex scenarios. From this analysis, (i) we confirm that previous insights derived for simple content hold for professionally edited content, and (ii) we derive new insights that could influence VR content creation, informing creators about the impact of different cuts on audience behavior.
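    The paper's exact metrics are not reproduced here, but the following Python sketch illustrates the kind of measures such an analysis relies on: the time viewers take to re-fixate a region of interest after a cut, and a circular-dispersion proxy for how widely they explore. Both formulations are generic assumptions, not the authors' definitions.

    import math

    def time_to_refixate(yaws: list[float], times: list[float],
                         cut_time: float, roi_yaw: float,
                         threshold: float = math.radians(15)) -> float | None:
        """Seconds from the cut until the viewing yaw first falls within
        `threshold` of the region of interest; None if it never does."""
        for t, yaw in zip(times, yaws):
            if t < cut_time:
                continue
            # smallest angular difference on the circle
            diff = abs((yaw - roi_yaw + math.pi) % (2 * math.pi) - math.pi)
            if diff <= threshold:
                return t - cut_time
        return None

    def angular_dispersion(yaws: list[float]) -> float:
        """Circular dispersion of yaw samples (0 = everyone looks one way,
        approaching 1 = directions spread uniformly), a simple exploration proxy."""
        c = sum(math.cos(y) for y in yaws) / len(yaws)
        s = sum(math.sin(y) for y in yaws) / len(yaws)
        return 1.0 - math.hypot(c, s)   # 1 minus the mean resultant length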

    Balancing User Experience for Mobile One-to-One Interpersonal Telepresence

    The COVID-19 virus disrupted all aspects of our daily lives, and though the world is finally returning to normalcy, the pandemic has shown how ill-prepared we are to support social interactions when expected to remain socially distant. Family members missed major life events of their loved ones; face-to-face interactions were replaced with video chat; and the technologies used to facilitate interim social interactions caused an increase in depression, stress, and burnout. We clearly need better solutions to address these issues, and one promising avenue is Interpersonal Telepresence, an interaction paradigm in which two people can share mobile experiences and feel as if they are together, even though geographically distributed. In this dissertation, we posit that this paradigm has significant value in one-to-one, asymmetrical contexts, where one user can live-stream their experiences to another who remains at home. We present a review of the recent Interpersonal Telepresence literature, highlighting research trends and opportunities that require further examination. Specifically, we show how current telepresence prototypes do not meet the social needs of the streamer, who often feels socially awkward when using obtrusive devices. To address this negative finding, we present a qualitative co-design study in which end users worked together to design their ideal telepresence systems, overcoming the value tensions that naturally arise between Viewer and Streamer. As expected, virtual reality techniques are desired to provide immersive views of the remote location; however, our participants noted that the devices facilitating this interaction need to be hidden from the public eye. This suggests that 360° cameras should be used, but with the lenses embedded in wearable systems, which might affect the viewing experience. We therefore present two quantitative studies examining the effects of camera placement and height on the viewing experience, in an effort to understand how to better design telepresence systems. We found that camera height is not a significant factor: wearable cameras do not need to be positioned at the viewer's natural eye level, and the streamer can place them according to their own needs. Lastly, we present a qualitative study in which we deploy a custom interpersonal telepresence prototype built on the co-design findings. Our participants preferred our prototype over simple video chat, even though it somewhat increased their sense of self-consciousness. Participants indicated that they have their own preferences even for simple design decisions, such as the style of hat, and we as a community need to consider ways to allow customization in our devices. Overall, our work contributes new knowledge to the telepresence field and helps system designers focus on the features that truly matter to users, in an effort to let people have richer experiences and virtually bridge the distance to their loved ones.

    A navigation paradigm driven classification for video-based rendering techniques

    The use of videos as input to a rendering process (video-based rendering, VBR) has recently attracted growing interest, adding many new challenges, and solutions, to classical image-based rendering (IBR). Although the general goal of VBR is shared across applications, approaches differ widely in methodology, setup, and data representation. Previous attempts at classifying VBR techniques used external aspects as classification parameters, providing little insight into the underlying similarities between works and failing to define clear lines of research. We found that the navigation paradigm chosen for a VBR application is ultimately the deciding factor for several details of a VBR technique. Based on this observation, this article presents the state of the art in video-based rendering and its relations and dependencies to the data representations and image-processing techniques used. We present a novel taxonomy for VBR applications with the navigation paradigm as the topmost classification attribute and methodological aspects further down the hierarchy. The different view-generation methodologies, capture baselines, and data representations found in the body of work are described, and their relation to the chosen classification scheme is discussed.
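    As a rough illustration of what a navigation-paradigm-first taxonomy could look like in practice, the Python sketch below places the paradigm at the top level and hangs methodological aspects beneath it. The category names are illustrative placeholders, not the article's actual taxonomy.

    from enum import Enum

    class NavigationParadigm(Enum):
        FIXED_VIEWPOINT = "fixed viewpoint"      # e.g., panoramic video
        CONSTRAINED_PATH = "constrained path"    # viewpoint moves along the captured baseline
        FREE_VIEWPOINT = "free viewpoint"        # novel views anywhere in a volume

    # paradigm -> methodological aspects further down the hierarchy
    TAXONOMY: dict[NavigationParadigm, dict[str, list[str]]] = {
        NavigationParadigm.FREE_VIEWPOINT: {
            "data representation": ["depth + texture", "volumetric"],
            "view generation": ["warping", "ray interpolation"],
        },
        NavigationParadigm.CONSTRAINED_PATH: {
            "data representation": ["light-field slices"],
            "view generation": ["view interpolation along the baseline"],
        },
    }

    def classify(paradigm: NavigationParadigm) -> dict[str, list[str]]:
        """Look up the methodological choices tied to a navigation paradigm."""
        return TAXONOMY.get(paradigm, {})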