
    Fusing Multimedia Data Into Dynamic Virtual Environments

    In spite of the dramatic growth of virtual and augmented reality (VR and AR) technology, content creation for immersive and dynamic virtual environments remains a significant challenge. In this dissertation, we present our research in fusing multimedia data, including text, photos, panoramas, and multi-view videos, to create rich and compelling virtual environments. First, we present Social Street View, which renders geo-tagged social media in its natural geo-spatial context provided by 360° panoramas. Our system takes into account visual saliency and uses maximal Poisson-disc placement with spatiotemporal filters to render social multimedia in an immersive setting. We also present a novel GPU-driven pipeline for saliency computation in 360° panoramas using spherical harmonics (SH). Our spherical residual model can be applied to virtual cinematography in 360° videos. We further present Geollery, a mixed-reality platform that renders an interactive mirrored world in real time with three-dimensional (3D) buildings, user-generated content, and geo-tagged social media. Our user study identified several use cases for these systems, including immersive social storytelling, cultural exploration, and crowd-sourced tourism. We next present Video Fields, a web-based interactive system to create, calibrate, and render dynamic videos overlaid on 3D scenes. Our system renders dynamic entities from multiple videos using early and deferred texture sampling. Video Fields can be used for immersive surveillance in virtual environments. Furthermore, we present the VRSurus and ARCrypt projects, which explore the applications of gesture recognition, haptic feedback, and visual cryptography for virtual and augmented reality. Finally, we present our work on Montage4D, a real-time system for seamlessly fusing multi-view video textures with dynamic meshes. We use geodesics on meshes with view-dependent rendering to mitigate spatial occlusion seams while maintaining temporal consistency. Our experiments show significant enhancement in rendering quality, especially for salient regions such as faces. We believe that Social Street View, Geollery, Video Fields, and Montage4D will greatly facilitate several applications such as virtual tourism, immersive telepresence, and remote education.
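
    The maximal Poisson-disc placement mentioned above spaces media items so that no two rendered cards crowd each other. As a rough illustration (not the dissertation's code), the sketch below implements Bridson's Poisson-disc sampling over a 2D equirectangular domain; the panorama dimensions and minimum radius r are hypothetical parameters.

        import math
        import random

        def poisson_disc(width, height, r, k=30, rng=random):
            """Bridson's algorithm: a maximal set of points at least r apart."""
            cell = r / math.sqrt(2)  # each grid cell holds at most one sample
            cols, rows = math.ceil(width / cell), math.ceil(height / cell)
            grid = [[None] * cols for _ in range(rows)]

            def cell_of(p):
                return (min(int(p[1] // cell), rows - 1),
                        min(int(p[0] // cell), cols - 1))

            def far_enough(p):
                gy, gx = cell_of(p)
                for y in range(max(0, gy - 2), min(rows, gy + 3)):
                    for x in range(max(0, gx - 2), min(cols, gx + 3)):
                        q = grid[y][x]
                        if q and (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 < r * r:
                            return False
                return True

            first = (rng.uniform(0, width), rng.uniform(0, height))
            gy, gx = cell_of(first)
            grid[gy][gx] = first
            samples, active = [first], [first]
            while active:
                base = rng.choice(active)
                for _ in range(k):  # throw k candidate darts around an active sample
                    ang = rng.uniform(0, 2 * math.pi)
                    rad = rng.uniform(r, 2 * r)
                    p = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
                    if 0 <= p[0] < width and 0 <= p[1] < height and far_enough(p):
                        gy, gx = cell_of(p)
                        grid[gy][gx] = p
                        samples.append(p)
                        active.append(p)
                        break
                else:  # no candidate fit: this neighbourhood is saturated
                    active.remove(base)
            return samples

        # e.g. card anchors on a 4096 x 2048 equirectangular panorama
        anchors = poisson_disc(4096, 2048, r=300)

    A real implementation on a sphere would measure distances in angle rather than in equirectangular pixels, which distort near the poles; the grid-accelerated rejection logic stays the same.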

    Visual communication in urban planning and urban design

    This report documents the current status of visual communication in urban design and planning. Visual communication is examined through discussion of standalone and network media, specifically concentrating on visualisation on the World Wide Web (WWW). Firstly, we examine the use of Solid and Geometric Modelling for visualising urban planning and urban design. This report documents and compares examples of the use of Virtual Reality Modelling Language (VRML) and proprietary WWW-based Virtual Reality modelling software. Examples include the modelling of Bath and Glasgow using both VRML 1.0 and 2.0. A review is carried out on the use of Virtual Worlds and their role in visualising urban form within multi-user environments. The use of Virtual Worlds is developed into a case study of the possibilities and limitations of Virtual Internet Design Arenas (ViDAs), an initiative undertaken at the Centre for Advanced Spatial Analysis, University College London. The use of Virtual Worlds and their development towards ViDAs is seen as one of the most important developments in visual communication for urban planning and urban design since the development plan. Secondly, photorealistic media in the process of communicating plans is examined. The process of creating photorealistic media is documented, and examples of the Virtual Streetscape and Wired Whitehall Virtual Urban Interface System are provided. The conclusion is drawn that although the use of photorealistic media on the WWW provides a way to visually communicate planning information, its use is limited. The merging of photorealistic media and solid geometric modelling is reviewed in the creation of Augmented Reality. Augmented Reality is seen to provide an important step forward in the ability to quickly and easily visualise urban planning and urban design information. Thirdly, the role of visual communication of planning data through GIS is examined in terms of desktop, three-dimensional, and Internet-based GIS systems. The evolution to Internet GIS is seen as a critical component in the development of virtual cities which will allow urban planners and urban designers to visualise and model the complexity of the built environment in networked virtual reality. Finally, a viewpoint is put forward of the Virtual City, linking Internet GIS with photorealistic multi-user Virtual Worlds. At present there are constraints on how far virtual cities can be developed, but a view is provided on how these networked virtual worlds are developing to aid visual communication in urban planning and urban design.

    Scalable 3D video of dynamic scenes

    In this paper we present a scalable 3D video framework for capturing and rendering dynamic scenes. The acquisition system is based on multiple sparsely placed 3D video bricks, each comprising a projector, two grayscale cameras, and a color camera. Relying on structured light with complementary patterns, texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections and synchronized camera exposures. Using space-time stereo on the acquired pattern images, high-quality depth maps are extracted, whose corresponding surface samples are merged into a view-independent, point-based 3D data structure. This representation allows for effective photo-consistency enforcement and outlier removal, leading to a significant decrease in visual artifacts and high rendering quality using EWA volume splatting. Our framework and its view-independent representation allow for simple and straightforward editing of 3D video. In order to demonstrate its flexibility, we show compositing techniques and spatiotemporal effects.
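
    A minimal sketch of one step in such a pipeline, back-projecting a brick's depth map into a shared world-space point cloud; the pinhole intrinsics K and camera-to-world pose (R, t) are assumed conventions here, not the paper's exact formulation.

        import numpy as np

        def depth_to_points(depth, K, R, t):
            """Back-project a depth map into world-space surface samples.

            depth : (H, W) metric depth along the camera z-axis
            K     : (3, 3) pinhole intrinsics
            R, t  : camera-to-world rotation (3, 3) and translation (3,)
            """
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3)
            rays = pix @ np.linalg.inv(K).T        # camera-space rays with z = 1
            pts_cam = rays * depth.reshape(-1, 1)  # scale each ray by its depth
            return pts_cam @ R.T + t               # rotate and translate to world

    Concatenating the outputs from all bricks yields the view-independent point set, on which photo-consistency checks can then vote out outliers as the paper describes.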

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates deteriorate performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
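
    The model computes heading neurally across the magnocellular pathway; as a purely geometric point of reference (not the model's mechanism), heading under pure observer translation corresponds to the focus of expansion of the flow field, which a least-squares sketch like the following recovers.

        import numpy as np

        def focus_of_expansion(x, y, u, v):
            """Least-squares focus of expansion of a translational flow field.

            x, y : (N,) image coordinates of flow samples
            u, v : (N,) optic-flow components at those samples

            Under pure translation every flow vector points away from the
            FoE, so each sample constrains it to lie on the flow line:
                u * (foe_y - y) = v * (foe_x - x)
            """
            A = np.stack([-v, u], axis=1)  # coefficients of (foe_x, foe_y)
            b = u * y - v * x
            foe, *_ = np.linalg.lstsq(A, b, rcond=None)
            return foe  # image point the observer is heading toward

    Rotational flow breaks the pure-translation assumption, which is consistent with the degradation at faster simulated rotation rates noted above.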

    Creating and controlling visual environments using BonVision.

    Real-time rendering of closed-loop visual environments is important for next-generation understanding of brain function and behaviour, but is often prohibitively difficult for non-experts to implement and is limited to few laboratories worldwide. We developed BonVision as easy-to-use open-source software for the display of virtual or augmented reality, as well as standard visual stimuli. BonVision has been tested on humans and mice, and is capable of supporting new experimental designs in other animal models of vision. As the architecture is based on the open-source Bonsai graphical programming language, BonVision benefits from native integration with experimental hardware. BonVision therefore enables easy implementation of closed-loop experiments, including real-time interaction with deep neural networks, and communication with behavioural and physiological measurement and manipulation devices.
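
    BonVision itself is programmed graphically in Bonsai rather than driven from Python, so the following is only a language-agnostic illustration of what closed loop means here: a (hypothetical) position reading updates the stimulus on every frame.

        import time

        def run_closed_loop(read_position, draw_grating, fps=60.0, gain=1.0):
            """Toy closed loop: the subject's position drives stimulus phase.

            read_position and draw_grating are hypothetical callables standing
            in for the tracking hardware and the renderer; BonVision wires the
            equivalent nodes together graphically.
            """
            frame = 1.0 / fps
            phase = 0.0
            last = read_position()
            while True:
                start = time.perf_counter()
                pos = read_position()
                phase += gain * (pos - last)  # stimulus follows the subject
                last = pos
                draw_grating(phase)
                # sleep off the rest of the frame to hold the update rate
                time.sleep(max(0.0, frame - (time.perf_counter() - start)))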