
    Exploring the Design Space of Immersive Urban Analytics

    Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices, such as HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of methods for combining 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view under given circumstances by considering the visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, based on existing work, possible future research opportunities are explored and discussed. Comment: 23 pages, 11 figures

    Visualizing the dynamics of London's bicycle hire scheme

    Visualizing flows between origins and destinations can be straightforward when dealing with small numbers of journeys or simple geographies. Representing flows as lines embedded in geographic space has commonly been used to map transport flows, especially when geographic patterns are important, as they are when characterising cities or managing transportation. However, for larger numbers of flows, this approach requires careful design to avoid problems of occlusion, salience bias and information overload. Driven by the requirements identified by users and managers of the London Bicycle Hire scheme, we present three methods of representing bicycle hire use and travel patterns. Flow maps with curved flow symbols are used to show overviews of flow structure. Gridded views of docking station locations that preserve geographic relationships are used to explore docking station status over space and time in a graphically efficient manner. Origin-destination maps that visualise the OD matrix directly while maintaining geographic context are used to provide visual details on demand. We use these approaches to identify changes in travel behaviour over space and time, to aid station rebalancing, and to provide a framework for incorporating travel modelling and simulation.
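
    The abstract's flow maps draw each origin-destination pair as a curved symbol so that opposing flows remain distinguishable. As a rough illustration of that idea (a minimal sketch, not the authors' implementation; the coordinates and flow weights below are hypothetical), each flow can be rendered as a quadratic Bézier curve bowed to one side of the straight line between its endpoints:

```python
# Minimal sketch of curved origin-destination flow symbols: each OD pair
# is a quadratic Bezier curve bowed to one side, so opposing flows
# A->B and B->A do not overlap. Line width encodes flow volume.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch

def draw_flow(ax, origin, dest, weight, bend=0.2):
    o, d = np.asarray(origin, float), np.asarray(dest, float)
    mid = (o + d) / 2
    normal = np.array([-(d - o)[1], (d - o)[0]])  # perpendicular to the OD line
    ctrl = mid + bend * normal                    # control point bows the curve
    path = Path([o, ctrl, d], [Path.MOVETO, Path.CURVE3, Path.CURVE3])
    ax.add_patch(PathPatch(path, fill=False, lw=0.5 + 3 * weight, alpha=0.6))

# Hypothetical docking-station coordinates and normalized flow volumes.
flows = [((0.1, 0.2), (0.8, 0.7), 0.9),   # A -> B, heavy
         ((0.8, 0.7), (0.1, 0.2), 0.4)]   # B -> A, lighter, bows the other way
fig, ax = plt.subplots()
for o, d, w in flows:
    draw_flow(ax, o, d, w)
ax.set(xlim=(0, 1), ylim=(0, 1), aspect="equal")
plt.show()
```

    Because the curve's offset direction flips when origin and destination swap, the two directions of travel between a pair of stations occupy different sides of the connecting line, which is one way to mitigate the occlusion the abstract mentions.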

    Conditional Image-to-Video Generation with Latent Flow Diffusion Models

    Conditional image-to-video (cI2V) generation aims to synthesize a new plausible video starting from an image (e.g., a person's face) and a condition (e.g., an action class label like smile). The key challenge of the cI2V task lies in the simultaneous generation of realistic spatial appearance and temporal dynamics corresponding to the given image and condition. In this paper, we propose an approach for cI2V using novel latent flow diffusion models (LFDM) that synthesize an optical flow sequence in the latent space based on the given condition to warp the given image. Compared to previous direct-synthesis-based works, our proposed LFDM can better synthesize spatial details and temporal motion by fully utilizing the spatial content of the given image and warping it in the latent space according to the generated temporally-coherent flow. The training of LFDM consists of two separate stages: (1) an unsupervised learning stage to train a latent flow auto-encoder for spatial content generation, including a flow predictor to estimate latent flow between pairs of video frames, and (2) a conditional learning stage to train a 3D-UNet-based diffusion model (DM) for temporal latent flow generation. Unlike previous DMs operating in pixel space or a latent feature space that couples spatial and temporal information, the DM in our LFDM only needs to learn a low-dimensional latent flow space for motion generation, and is thus more computationally efficient. We conduct comprehensive experiments on multiple datasets, where LFDM consistently outperforms prior art. Furthermore, we show that LFDM can be easily adapted to new domains by simply fine-tuning the image decoder. Our code is available at https://github.com/nihaomiao/CVPR23_LFDM. Comment: CVPR 2023
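
    The central operation this abstract describes is warping the given image's latent map with a generated flow field. The sketch below illustrates that warping step alone, under assumed tensor shapes and names; it is not taken from the released CVPR23_LFDM code. A latent map z is resampled at locations displaced by a per-pixel flow, using bilinear interpolation:

```python
# Sketch of warping a latent feature map with a flow field, the core
# operation the abstract describes. Shapes and names are illustrative,
# not taken from the released CVPR23_LFDM code.
import torch
import torch.nn.functional as F

def warp_latent(z, flow):
    """z: (B, C, H, W) latent map; flow: (B, 2, H, W) displacements in pixels."""
    B, _, H, W = z.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(z.device)  # (2, H, W), (x, y)
    coords = base.unsqueeze(0) + flow                         # sample locations
    # grid_sample expects coordinates normalized to [-1, 1].
    gx = 2 * coords[:, 0] / (W - 1) - 1
    gy = 2 * coords[:, 1] / (H - 1) - 1
    grid = torch.stack((gx, gy), dim=-1)                      # (B, H, W, 2)
    return F.grid_sample(z, grid, align_corners=True)

z = torch.randn(1, 64, 32, 32)    # latent of the given image
flow = torch.zeros(1, 2, 32, 32)  # zero flow -> identity warp
assert torch.allclose(warp_latent(z, flow), z, atol=1e-5)
```

    Per the abstract, a warp of this kind would be applied frame by frame with the temporally coherent flow sequence produced by the diffusion model, and the warped latents then decoded into video frames.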

    Algorithms for Correspondence Estimation and Image Interpolation for Photorealistic Image Synthesis

    Free-viewpoint video is a new form of visual medium that has received considerable attention in the last 10 years. Most systems reconstruct the geometry of the scene, thus restricting themselves to synchronized multi-view footage and Lambertian scenes. In this thesis we follow a different approach and describe contributions to a purely image-based end-to-end system operating on sparse, unsynchronized multi-view footage. In particular, we focus on dense correspondence estimation and the synthesis of in-between views. In contrast to previous approaches, our correspondence estimation is specifically tailored to the needs of image interpolation; our multi-image interpolation technique advances the state of the art by dispensing with the conventional blending step in favor of solving a labeling problem. Both algorithms are put to work in an image-based free-viewpoint video system, and we demonstrate their applicability to space-time visual effects production as well as to stereoscopic content creation.
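
    The abstract contrasts conventional blending with a labeling formulation. The toy sketch below illustrates only that distinction (it is not the thesis' actual energy-minimization machinery, and the per-pixel matching costs are hypothetical): instead of cross-fading two warped source images, each output pixel is copied from exactly one source.

```python
# Toy illustration of interpolation by per-pixel labeling vs. blending.
# Not the thesis' actual method: the matching costs here are hypothetical.
import numpy as np

def blend(warp_a, warp_b, t):
    """Conventional cross-fade; ghosts wherever correspondences disagree."""
    return (1 - t) * warp_a + t * warp_b

def label_select(warp_a, warp_b, cost_a, cost_b):
    """Assign each output pixel to the source with the lower matching cost."""
    pick_a = cost_a <= cost_b                        # boolean label map
    return np.where(pick_a[..., None], warp_a, warp_b)

h, w = 4, 4
warp_a = np.random.rand(h, w, 3)  # source A, warped to the in-between view
warp_b = np.random.rand(h, w, 3)  # source B, warped to the same view
cost_a = np.random.rand(h, w)     # hypothetical per-pixel matching costs
cost_b = np.random.rand(h, w)
out = label_select(warp_a, warp_b, cost_a, cost_b)
```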

    3D Cadastres Best Practices, Chapter 5: Visualization and New Opportunities

    This paper discusses the opportunities offered by 3D visualization to improve the understanding and analysis of cadastral data. It first introduces the rationale for having 3D visualization functionality in the context of cadastre applications. Second, it outlines some basic concepts in 3D visualization, specifically addressing the visualization pipeline as a classification schema for understanding the steps leading to 3D visualization; this section also presents a brief review of current 3D standards and technologies. Next, it summarizes the progress made in recent years in 3D cadastral visualization, highlighting user requirements, data and semiotics, and platforms as the main areas of work in the development of 3D cadastre visualization. This review can be read both as an attempt to structure and emphasise best practices in the domain of 3D cadastre visualization and as an inventory of issues that still need to be tackled. Finally, by reviewing advances and trends in 3D visualization, the paper initiates a discussion and a critical analysis of the benefit of applying these new developments to the cadastre domain. This final section discusses enhanced 3D techniques such as dynamic transparency and cutaways, 3D generalization, 3D visibility models, 3D annotation, 3D data and web platforms, augmented reality, immersive virtual environments, 3D gaming, interaction techniques, and time.

    Capture, Reconstruction, and Representation of the Visual Real World for Virtual Reality

    We provide an overview of the concerns, current practice, and limitations for capturing, reconstructing, and representing the real world visually within virtual reality. Given that our goals are to capture, transmit, and depict complex real-world phenomena to humans, these challenges cover the opto-electro-mechanical, computational, informational, and perceptual fields. Practically producing a system for real-world VR capture requires navigating a complex design space and pushing the state of the art in each of these areas. As such, we outline several promising directions for future work to improve the quality and flexibility of real-world VR capture systems.