
    Virtual Exploration of Underwater Archaeological Sites: Visualization and Interaction in Mixed Reality Environments

    This paper describes the ongoing developments in Photogrammetry and Mixed Reality for the VENUS European project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu). The main goal of the project is to provide archaeologists and the general public with virtual and augmented reality tools for exploring and studying deep underwater archaeological sites that are out of reach of divers. These sites have to be reconstructed in terms of environment (seabed) and content (artifacts) by performing bathymetric and photogrammetric surveys on the real site and matching points between geolocalized pictures. The idea behind using Mixed Reality techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine but differ drastically in how they present information: the general-public activities emphasize the visual and auditory realism of the reconstruction, while the archaeologists' activities emphasize functional aspects focused on the cargo study rather than realism, which led to the development of two parallel VR demonstrators. This paper focuses on several key points developed for the reconstruction process as well as on issues in both VR demonstrators (archaeological and general public). The first key point concerns the densification of seabed points obtained through photogrammetry in order to obtain a high-quality terrain reproduction. The second concerns the development of the Virtual and Augmented Reality (VR/AR) demonstrators for archaeologists, designed to exploit the results of the photogrammetric reconstruction. The third concerns the development of the VR demonstrator for the general public, aimed at creating awareness of both the artifacts that were found and the process by which they were discovered, by recreating the dive from ship to seabed.
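    The densification step turns the sparse, geolocalized seabed points recovered by photogrammetry into a dense terrain suitable for high-quality reproduction. The sketch below only illustrates that idea with a generic gridding approach (SciPy interpolation over a hypothetical point cloud); it is an assumption-laden stand-in, not the VENUS densification itself.

```python
# Hypothetical sketch: densify a sparse seabed point cloud onto a regular
# grid so it can drive a terrain mesh. This is NOT the VENUS pipeline;
# it only illustrates the gridding step with SciPy interpolation.
import numpy as np
from scipy.interpolate import griddata

def densify_seabed(points_xyz: np.ndarray, resolution: float) -> tuple:
    """points_xyz: (N, 3) array of geolocalized (x, y, depth) samples."""
    xy, z = points_xyz[:, :2], points_xyz[:, 2]
    # Build a dense target grid covering the survey footprint.
    (xmin, ymin), (xmax, ymax) = xy.min(axis=0), xy.max(axis=0)
    gx, gy = np.meshgrid(np.arange(xmin, xmax, resolution),
                         np.arange(ymin, ymax, resolution))
    # Cubic interpolation inside the convex hull; nearest-neighbour
    # fallback where cubic interpolation returns NaN near the borders.
    gz = griddata(xy, z, (gx, gy), method="cubic")
    holes = np.isnan(gz)
    gz[holes] = griddata(xy, z, (gx, gy), method="nearest")[holes]
    return gx, gy, gz

# Usage: a dense 0.5 m grid from a sparse synthetic survey.
sparse = np.random.default_rng(0).uniform(0, 100, size=(500, 3))
gx, gy, gz = densify_seabed(sparse, resolution=0.5)
```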

    CONSISTENT MULTI-VIEW TEXTURING OF DETAILED 3D SURFACE MODELS


    TwinTex: Geometry-aware Texture Generation for Abstracted 3D Architectural Models

    Coarse architectural models are often generated at scales ranging from individual buildings to whole scenes for downstream applications such as Digital Twin City, Metaverse, LODs, etc. Such piece-wise planar models can be abstracted as twins of dense 3D reconstructions. However, these models typically lack realistic texture relative to the real building or scene, making them unsuitable for vivid display or direct reference. In this paper, we present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy. Our method addresses most challenges that occur in such twin texture generation. Specifically, for each primitive plane, we first select a small set of photos with greedy heuristics that consider photometric quality, perspective quality and facade texture completeness. Then, different levels of line features (LoLs) are extracted from the set of selected photos to generate guidance for later steps. With the LoLs, we employ optimization algorithms to align texture with geometry from local to global. Finally, we fine-tune a diffusion model with a multi-mask initialization component and a new dataset to inpaint the missing regions. Experimental results on many buildings, indoor scenes and man-made objects of varying complexity demonstrate the generalization ability of our algorithm. Our approach surpasses state-of-the-art texture mapping methods in terms of high-fidelity quality and reaches human-expert production quality with much less effort. Project page: https://vcc.tech/research/2023/TwinTex. Accepted to SIGGRAPH Asia 2023.
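    As a rough illustration of the greedy photo selection described above, the hypothetical sketch below scores candidate photos for one plane by photometric quality, perspective quality and newly added coverage, then greedily picks the best until a budget is reached. The Photo structure, scoring terms and weights are illustrative assumptions, not the actual TwinTex formulation.

```python
# Hypothetical sketch of greedy photo selection for one planar primitive:
# repeatedly pick the photo whose weighted score (photometric quality,
# perspective quality, added coverage) is highest. All weights and
# scoring proxies here are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Photo:
    sharpness: float        # photometric-quality proxy, in [0, 1]
    view_angle_cos: float   # cos(angle between view ray and plane normal)
    coverage: set           # plane cells this photo observes

def greedy_select(photos: list, n_cells: int, k: int = 4,
                  w_photo: float = 1.0, w_persp: float = 1.0,
                  w_cover: float = 2.0) -> list:
    selected, covered = [], set()
    for _ in range(k):
        def gain(p: Photo) -> float:
            # Coverage term rewards cells not yet seen by selected photos.
            new_cells = len(p.coverage - covered) / n_cells
            return (w_photo * p.sharpness
                    + w_persp * p.view_angle_cos
                    + w_cover * new_cells)
        best = max((p for p in photos if p not in selected),
                   key=gain, default=None)
        if best is None or gain(best) <= 0:
            break
        selected.append(best)
        covered |= best.coverage
    return selected
```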

    Real-time color projection for 3D models

    In this work, we present a solution for the interactive visualization of virtual objects composed of a 3D model and a set of calibrated photographs. Our approach selects, projects and blends the photos based on a few criteria in order to improve the perception of detail while maintaining interactive performance. It works as a dynamic texture-map generator: for each new view position and direction, the best combination of the photos is sought. The main advantage of our technique is that it preserves the original photo information as faithfully as possible. Furthermore, the proposed method was compared with a popular texture mapping technique; our method produced fewer artifacts in general and handled large datasets with non-uniformly distributed cameras better.
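    As a minimal illustration of such view-dependent blending, the hypothetical sketch below weights each calibrated photo by how closely its viewing direction agrees with the current camera and normalizes the weights used to blend the projected colors. The purely angular criterion is an assumption; the dissertation combines several selection criteria.

```python
# Hypothetical sketch of view-dependent photo blending: weight each
# calibrated photo by the alignment between its viewing direction and
# the current camera, then normalize. Illustrative only; not the
# dissertation's full criteria set.
import numpy as np

def blend_weights(view_dir: np.ndarray, photo_dirs: np.ndarray,
                  sharpness: float = 8.0) -> np.ndarray:
    """view_dir: (3,) unit vector; photo_dirs: (M, 3) unit vectors."""
    # Cosine of the angle between the current view and each photo.
    cos = photo_dirs @ view_dir
    # Keep only front-facing photos and sharpen the falloff so the
    # best-aligned photos dominate the blend.
    w = np.clip(cos, 0.0, None) ** sharpness
    total = w.sum()
    return w / total if total > 0 else w

# Usage: per-frame blend weights for three photos around the object.
dirs = np.array([[0, 0, 1], [0.6, 0, 0.8], [1, 0, 0]], dtype=float)
w = blend_weights(np.array([0.0, 0.0, 1.0]), dirs)
```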