14 research outputs found

    Data compression and transmission aspects of panoramic videos

    Panoramic videos are an effective means of representing static or dynamic scenes along predefined paths. They allow users to change their viewpoints interactively at points in time or space defined by the paths. High-resolution panoramic videos, while desirable, consume a significant amount of storage and transmission bandwidth, and they make real-time decoding computationally very intensive. This paper proposes efficient data compression and transmission techniques for panoramic videos. A high-performance MPEG-2-like compression algorithm, which takes into account the random-access requirements and the redundancies of panoramic videos, is proposed. The transmission aspects of panoramic videos over cable networks, local area networks (LANs), and the Internet are also discussed. In particular, an efficient advanced delivery sharing scheme (ADSS) for reducing repeated transmission and retrieval of frequently requested video segments is introduced. This protocol was verified by constructing an experimental VOD system consisting of a video server and eight Pentium 4 computers. Using the synthetic panoramic video Village at 197 kb/s and 7 f/s, nearly two-thirds of the memory access and transmission bandwidth of the video server were saved under normal network traffic.
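The abstract does not spell out how ADSS shares deliveries, so the following is only a toy sketch of the general idea behind serving frequently requested segments from server memory rather than refetching them for every client. The class, the LRU eviction policy and all names are assumptions for illustration, not the paper's protocol.

```python
from collections import OrderedDict

class SegmentCache:
    """Toy segment cache: popular video segments are served from memory
    instead of being fetched from the backing store for every client."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # segment_id -> data (LRU order)
        self.store_fetches = 0       # how often the backing store is hit

    def fetch_from_store(self, segment_id):
        self.store_fetches += 1
        return f"data-{segment_id}"  # placeholder payload

    def request(self, segment_id):
        if segment_id in self.cache:
            self.cache.move_to_end(segment_id)   # mark as recently used
            return self.cache[segment_id]
        data = self.fetch_from_store(segment_id)
        self.cache[segment_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)       # evict least recently used
        return data

# Eight clients all requesting the same popular segment: the store is
# touched once; the other seven requests are served from memory.
cache = SegmentCache(capacity=16)
for _ in range(8):
    cache.request("village-seg-0")
print(cache.store_fetches)  # 1
```

Savings of this kind (repeated retrievals absorbed before they reach storage or the network) are what the abstract's two-thirds figure refers to, though the real scheme operates at the delivery-protocol level rather than as a simple in-memory cache.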

    Simulation of High-Visual Quality Scenes in Low-Cost Virtual Reality

    With the increasing popularity of virtual reality, many video games and virtual experiences with high visual quality have been developed recently. Virtual reality with a high-quality representation of scenes is still an experience tied to high-cost devices. Low-cost virtual reality solutions using mobile devices do exist, but in those cases the visual quality of the presented virtual environments must be simplified to run on mobile devices with limited hardware capabilities. In this work, we present a novel Image-Based Rendering technique for low-cost virtual reality. We have conducted a performance evaluation on three mobile devices with different hardware characteristics. Results show that our technique represents high-visual-quality virtual environments with considerably better performance than traditional rendering solutions. Workshop: WCGIV – Computación Gráfica, Imágenes y Visualización (Red de Universidades con Carreras en Informática).

    Spherical Image Processing for Immersive Visualisation and View Generation

    This research presents a study of processing panoramic spherical images for immersive visualisation of real environments and the generation of in-between views from two acquired views. For visualisation based on one spherical image, the surrounding environment is modelled by a unit sphere mapped with the spherical image, and the user is then allowed to navigate within the modelled scene. For visualisation based on two spherical images, a view generation algorithm is developed for modelling an indoor man-made environment, and new views can be generated at an arbitrary position with respect to the existing two. This allows the scene to be modelled using multiple spherical images and the user to move smoothly from one sphere-mapped image to another by passing through generated in-between sphere-mapped images.
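The unit-sphere model described above amounts to mapping each pixel of a panoramic image to a direction on the unit sphere. As a minimal sketch, assuming an equirectangular panorama with longitude spanning the width and latitude spanning the height (conventions not taken from the thesis):

```python
import math

def pixel_to_sphere(u, v, width, height):
    """Map pixel (u, v) of an equirectangular panorama to a unit
    direction vector. Assumes longitude in [-pi, pi] left to right
    and latitude in [pi/2, -pi/2] top to bottom."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The centre pixel looks straight ahead along +z.
print(pixel_to_sphere(512, 256, 1024, 512))  # (0.0, 0.0, 1.0)
```

A viewer then renders the scene by intersecting each view ray with this sphere and sampling the mapped image, which is what allows free rotation of the viewpoint within one panorama.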

    An investigation into web-based panoramic video virtual reality with reference to the virtual zoo.

    Panoramic image Virtual Reality (VR) is a 360-degree image that has been interpreted as a kind of VR allowing users to navigate, view, hear and have remote access to a virtual environment. Panoramic video VR builds on this, where filming is done in the real world to create a highly dynamic and immersive environment. This is proving to be a very attractive technology and has introduced many possible applications, but it still presents a number of challenges, considered in this research. An initial literature survey identified limitations in panoramic video to date: the technology (e.g. filming and stitching) and the design of effective navigation methods. In particular, there is a tendency for users to become disoriented during way-finding. In addition, an effective interface design to embed contextual information is required. The research identified the need for a controllable test environment in order to evaluate the production of the video and the optimal way of presenting and navigating within the scene. Computer Graphics (CG) simulation scenes were developed to establish a method of capturing, editing and stitching the video under controlled conditions. In addition, a novel navigation method, named the “image channel”, was proposed and integrated within this environment. This replaced hotspots: the traditional navigational jumps between locations. Initial user testing indicated that the production was appropriate and significantly improved user perception of position and orientation over jump-based navigation. The interface design combined with the environment view alone was sufficient for users to understand their location without the need to augment the view with an on-screen map. After establishing optimal methods for building and improving the technology, the research looked for a natural, complex and dynamic real environment for testing.
The web-based virtual zoo (World Association of Zoos and Aquariums) was selected as an ideal production: its purpose is to allow people to get close to animals in their natural habitat, and it created particular interest in developing a system for knowledge delivery, raising protection concerns and entertaining visitors: all key roles of a zoo. The design method established from CG was then used to develop a film rig and production unit for filming a real animal habitat: the Formosan rock monkey in Taiwan. A web-based panoramic video of this was built and tested through user-experience testing and expert interviews. The results were essentially identical to those from the prototype environment and validated the production; the site also succeeded in attracting repeat visits. The research has contributed new knowledge in improving the production process, in improving presentation and navigation within panoramic videos through the proposed Image Channel method, and in demonstrating that a web-based virtual zoo can be improved, using this technology, to help address the considerable pressures of animal extinction and habitat degradation that affect humans. Directions for further study were identified. The research was sponsored by Taiwan’s Government, and Twycross Zoo UK was a collaborator.

    3D panoramic imaging for virtual environment construction

    The project is concerned with the development of algorithms for the creation of photo-realistic 3D virtual environments, overcoming problems in mosaicing, colour and lighting changes, correspondence search speed, and correspondence errors due to lack of surface texture. A number of related new algorithms have been investigated for image stitching, content-based colour correction and efficient 3D surface reconstruction. All of the investigations were undertaken using multiple views from normal digital cameras, web cameras and a "one-shot" panoramic system. In the process of 3D reconstruction, a new interest-point-based mosaicing method, a new interest-point-based colour correction method, a new hybrid feature- and area-based correspondence constraint, and a new structured-light-based 3D reconstruction method have been investigated. The major contributions and results can be summarised as follows:
    • A new interest-point-based image stitching method has been proposed and investigated. The robustness of interest points has been tested and evaluated; interest points proved robust to changes in lighting, viewpoint, rotation and scale.
    • A new interest-point-based method for colour correction has been proposed and investigated. Linear and linear-plus-affine colour transforms proved more accurate than traditional diagonal transforms in matching colours across panoramic images.
    • A new structured-light-based method for correspondence-based 3D reconstruction has been proposed and investigated. The method has been shown to increase the accuracy of the correspondence search in areas with low texture. Correspondence speed has also been increased with a new hybrid feature- and area-based correspondence search constraint.
    • Based on the investigation, a software framework has been developed for image-based 3D virtual environment construction. The GUI supports importing images, colour correction, mosaicing, 3D surface reconstruction, texture recovery and visualisation.
    • 11 research papers have been published.
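The linear and linear-plus-affine colour transforms mentioned above can be fitted by least squares from matched colour samples in an overlap region. The following is a generic sketch of that idea, not the thesis's actual algorithm; the sample data and function names are invented for illustration.

```python
import numpy as np

def fit_colour_transform(src, dst):
    """Fit a linear-plus-affine colour transform [A | b] mapping src
    RGB samples (N, 3) onto dst RGB samples (N, 3) by least squares.
    A diagonal transform (per-channel gain only) is the weaker baseline;
    a full 3x3 matrix A also models channel mixing, and b adds offsets."""
    X = np.hstack([src, np.ones((len(src), 1))])  # append 1 for the offset
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)   # M has shape (4, 3)
    return M

def apply_colour_transform(M, pixels):
    X = np.hstack([pixels, np.ones((len(pixels), 1))])
    return X @ M

# Synthetic check: recover a known channel-mixing transform exactly.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
A = np.array([[0.9, 0.05, 0.0], [0.0, 1.1, 0.0], [0.1, 0.0, 0.95]])
b = np.array([0.02, -0.01, 0.03])
dst = src @ A + b
M = fit_colour_transform(src, dst)
corrected = apply_colour_transform(M, src)
print(np.allclose(corrected, dst))  # True
```

On real panoramas the samples would come from corresponding interest points in the overlap between two images, and the fitted transform would be applied to one image before blending.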

    Composing with Frames and Spaces: Cinematic Virtual Reality as an Audiovisual Compositional Practice

    This project offers a creative investigation into the medium of Cinematic Virtual Reality (CVR), identifying the distinguishing characteristics of the medium as they relate to the technical, thematic and aesthetic language the creative practitioner has access to. Drawing primarily on CVR as a cinematic construct, this investigation focuses on two key concepts that differentiate CVR from fixed-frame media: frames (the window in which the virtual world is composed and navigated by the viewer) and spaces (the relationship between the viewer and the surrounding virtual environment). The creative portfolio explores many different possible implementations of creative thought in CVR, bringing the world of contemporary electronic and electroacoustic music into the audiovisual medium of CVR. The ideas of frames and spaces are used to structure a discussion of the creative portfolio, allowing this PhD to document the act of composing audiovisual works in CVR that are conceived from the unique communicative properties of the medium.

    The Post-Screen Through Virtual Reality, Holograms and Light Projections

    Screens are ubiquitous today. They display information; present image worlds; are portable; connect to mobile networks; mesmerize. However, contemporary screen media also seek to eliminate the presence of the screen and the visibility of its boundaries. As the image becomes increasingly indistinguishable from the viewer’s actual surroundings, this unsettling shift prompts re-examination not only of what the screen is, but also of how the screen demarcates and what it stands for in relation to our understanding of our realities in, outside and against images. Through case studies drawn from three media technologies – Virtual Reality, holograms and light projections – this book develops new theories of the surfaces on and spaces in which images are displayed today, interrogating critical lines between art and life, virtuality and actuality, truth and lies. What we have today is not just the contestation of the real against illusion or the unreal, but the disappearance of difference itself and a gluttony of the unreal, both of which connect to current politics of distorted truth values and corrupted terms of information. The Post-Screen Through Virtual Reality, Holograms and Light Projections: Where Screen Boundaries Lie is thus about not only where the image’s borders and demarcations are established, but also the screen boundary as the instrumentation of today’s intense virtualizations that do not tell the truth. In all this, a new imagination for images emerges, with a new space for cultures of presence and absence, definitions of object and representation, and understandings of dis- and re-placement – the post-screen.

    Low Latency Rendering with Dataflow Architectures

    The research presented in this thesis concerns latency in VR and synthetic environments. Latency is the end-to-end delay experienced by the user of an interactive computer system, between their physical actions and the perceived response to those actions. It is a product of the various processing, transport and buffering delays present in any current computer system. For many computer-mediated applications latency is distracting but not critical to the utility of the application. Synthetic environments, on the other hand, attempt to facilitate direct interaction with a digitised world. Direct interaction here implies the formation of a sensorimotor loop between the user and the digitised world: the user makes predictions about how their actions affect the world, and sees these predictions realised. By facilitating the formation of this loop, the synthetic environment allows users to directly sense the digitised world, rather than the interface, and induces perceptions such as that of the digital world existing as a distinct physical place. This has many applications for knowledge transfer and efficient interaction through the use of enhanced communication cues. The complication is that the formation of the sensorimotor loop that underpins this is highly dependent on the fidelity of the virtual stimuli, including latency. The main research questions we ask are how the characteristics of dataflow computing can be leveraged to improve the temporal fidelity of the visual stimuli, and what implications this has for other aspects of fidelity. Secondarily, we ask what effects latency itself has on user interaction. We test the effects of latency on physical interaction at levels previously hypothesised but unexplored. We also test for a previously unconsidered effect of latency on higher-level cognitive functions.
To do this, we create prototype image generators for interactive systems and virtual reality using dataflow computing platforms. We integrate these into real interactive systems to gain practical experience of the real, perceptible benefits of alternative rendering approaches, and of the implications when they are subject to the constraints of real systems. We quantify the differences between our systems and traditional systems using latency and objective image fidelity measures. We use our novel systems to perform user studies into the effects of latency. Our high-performance apparatuses allow experimentation at latencies lower than previously tested in comparable studies. The low-latency apparatuses are designed to minimise what is currently the largest delay in traditional rendering pipelines, and we find that the approach is successful in this respect. Our 3D low-latency apparatus achieves lower latencies and higher fidelities than traditional systems, though the conditions under which it can do so are highly constrained. We do not foresee dataflow computing shouldering the bulk of the rendering workload in the future, but rather facilitating the augmentation of the traditional pipeline with a very high-speed local loop, such as an image distortion stage. Our latency experiments revealed that many predictions about the effects of low latency should be re-evaluated, and that experimenting in this range requires great care.
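The abstract's claim that display buffering, not raw processing, dominates end-to-end delay can be illustrated with a toy accounting model. This is only a sketch under assumed conventions (a double-buffered, vsynced pipeline whose finished frame waits for the next vertical blank and is then scanned out over one refresh interval); it is not the thesis's measurement method.

```python
import math

def end_to_end_latency(stage_ms, refresh_hz, vsync=True):
    """Toy end-to-end latency model: the sum of pipeline stage delays,
    plus, when double-buffered with vsync, the wait for the next
    vertical blank and one refresh interval of scanout."""
    total = sum(stage_ms)
    if vsync:
        frame = 1000.0 / refresh_hz
        # the finished frame waits for the next vblank...
        total = math.ceil(total / frame) * frame
        # ...and is then scanned out over one refresh interval
        total += frame
    return total

# 2 ms input sampling + 5 ms rendering at 60 Hz: only 7 ms of actual
# work, but display buffering pushes the total to roughly 33.3 ms.
print(end_to_end_latency([2.0, 5.0], 60))               # ≈ 33.3
print(end_to_end_latency([2.0, 5.0], 60, vsync=False))  # 7.0
```

In this model, shrinking the processing stages alone barely moves the total, which is why a very high-speed local loop that bypasses the buffered stages (as proposed above) is the more effective lever.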