
    Smart Cities: Inverse Design of 3D Urban Procedural Models with Traffic and Weather Simulation

    Urbanization, the demographic transition from rural to urban areas, has changed how we envision and share the world. A century ago only about one-fourth of the population lived in cities; today more than half does, and this share is expected to keep growing in the near future. Creating more sustainable, accessible, safe, and enjoyable cities has therefore become an imperative.

    Material Recognition Meets 3D Reconstruction: Novel Tools for Efficient, Automatic Acquisition Systems

    For decades, the accurate acquisition of geometry and reflectance properties has represented one of the major objectives in computer vision and computer graphics, with many applications in industry, entertainment and cultural heritage. Reproducing even the finest details of surface geometry and surface reflectance has become a ubiquitous prerequisite in visual prototyping, advertisement or digital preservation of objects. However, today's acquisition methods are typically designed for only a rather small range of material types. Furthermore, there is still a lack of accurate reconstruction methods for objects with more complex surface reflectance behavior beyond diffuse reflectance. In addition to accurate acquisition techniques, the demand for creating large quantities of digital content also pushes the focus towards fully automatic and highly efficient solutions that allow masses of objects to be acquired as fast as possible. This thesis is dedicated to the investigation of basic components that allow an efficient, automatic acquisition process. We argue that such an efficient, automatic acquisition can be realized when material recognition "meets" 3D reconstruction, and we demonstrate that reliably recognizing the materials of the considered object allows a more efficient geometry acquisition. The main objectives of this thesis are therefore the development of novel, robust geometry acquisition techniques for surface materials beyond diffuse surface reflectance, and the development of novel, robust techniques for material recognition. In the context of 3D geometry acquisition, we introduce an improvement of structured light systems, which are capable of robustly acquiring objects ranging from diffuse surface reflectance to even specular surface reflectance with a sufficient diffuse component. We demonstrate that the resolution of the reconstruction can be increased significantly for multi-camera, multi-projector structured light systems by using overlapping patterns projected under different projector poses. As the reconstructions obtained by such triangulation-based techniques still contain high-frequency noise due to inaccurately localized correspondences established between images acquired under different viewpoints, we furthermore introduce a novel geometry acquisition technique that complements the structured light system with additional photometric normals and results in significantly more accurate reconstructions. In addition, we present a novel method to acquire the 3D shape of mirroring objects with complex surface geometry. The aforementioned investigations on 3D reconstruction are accompanied by the development of novel tools for reliable material recognition, which can be used in an initial step to recognize the present surface materials and, hence, to efficiently select the subsequently applied acquisition techniques based on these classified materials. In the scope of this thesis, we therefore focus on material recognition both for scenarios with controlled illumination, as given in lab environments, and for scenarios with natural illumination, as encountered in photographs of typical daily-life scenes. Finally, based on the techniques developed in this thesis, we provide novel concepts towards efficient, automatic acquisition systems.
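
    The thesis abstract above mentions complementing triangulation-based structured light with photometric normals. As a hedged illustration of that general idea only (not the thesis's actual pipeline), the sketch below recovers per-pixel, albedo-scaled normals by classical Lambertian photometric stereo, i.e. a least-squares solve over images taken under known directional lights; the function name and array layout are assumptions for illustration.

        # Minimal Lambertian photometric stereo sketch (illustrative only, not the
        # thesis's pipeline). Assumes grayscale images of a static object taken from
        # a fixed camera under known directional lights and purely diffuse reflectance.
        import numpy as np

        def photometric_stereo(images, light_dirs):
            """images: (K, H, W) float array; light_dirs: (K, 3) unit light vectors.
            Returns per-pixel unit normals (H, W, 3) and albedo (H, W)."""
            I = np.asarray(images, dtype=np.float64)
            K, H, W = I.shape
            L = np.asarray(light_dirs, dtype=np.float64)       # (K, 3)
            I = I.reshape(K, -1)                               # (K, H*W)
            # Solve L @ G = I in the least-squares sense; G = albedo * normal.
            G, *_ = np.linalg.lstsq(L, I, rcond=None)          # (3, H*W)
            albedo = np.linalg.norm(G, axis=0)                 # (H*W,)
            normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
            return normals.T.reshape(H, W, 3), albedo.reshape(H, W)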

    Error-Concealing Image-Based Rendering Methods (Fehlerkaschierte Bildbasierte Darstellungsverfahren)

    Creating photo-realistic images has been one of the major goals in computer graphics since its early days. Instead of modeling the complexity of nature with standard modeling tools, image-based approaches aim at exploiting real-world footage directly, as it is photo-realistic by definition. A drawback of these approaches has always been that the composition or combination of different sources is a non-trivial task, often resulting in annoying visible artifacts. In this thesis we focus on different techniques to diminish visible artifacts when combining multiple images in a common image domain. The results are either novel images, when dealing with the composition of multiple images, or novel video sequences rendered in real time, when dealing with video footage from multiple cameras.
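
    Since the thesis is about diminishing visible artifacts when combining multiple images in a common image domain, the sketch below shows a common baseline for that problem setting, a feathered (cross-fade) composite of two pre-registered images, offered only as an illustration and not as one of the thesis's methods; the function name and the horizontal-overlap assumption are mine.

        # Feathered compositing baseline (illustrative only): two already-registered
        # images covering the same canvas are cross-faded over a known overlap band
        # of columns so that the seam between them is not visible as a hard edge.
        import numpy as np

        def feather_blend(left, right, overlap_start, overlap_end):
            """left, right: (H, W, 3) float images; seam lies in columns [overlap_start, overlap_end)."""
            H, W, _ = left.shape
            alpha = np.zeros(W)
            alpha[:overlap_start] = 1.0                        # take the left image only
            alpha[overlap_start:overlap_end] = np.linspace(    # linear cross-fade
                1.0, 0.0, overlap_end - overlap_start)
            alpha[overlap_end:] = 0.0                          # take the right image only
            weight = alpha[None, :, None]                      # broadcast over rows and channels
            return weight * left + (1.0 - weight) * right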

    Medical image synthesis using generative adversarial networks: towards photo-realistic image synthesis

    This work addresses photo-realism in synthetic medical images. We introduce StencilGAN, a modified, perceptually aware generative adversarial network that synthesizes images based on overlaid labelled masks. This technique can be a promising solution to the scarcity of resources in the healthcare sector.
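
    The abstract describes a generative adversarial network conditioned on overlaid labelled masks. The published StencilGAN architecture is not given here, so the following PyTorch sketch only illustrates the generic idea of mask conditioning (one-hot label masks concatenated with the input as extra channels); the class name, layer sizes, and usage are assumptions, not the paper's network.

        # Generic mask-conditioned generator sketch (assumed for illustration; not
        # the published StencilGAN architecture).
        import torch
        import torch.nn as nn

        class MaskConditionedGenerator(nn.Module):
            def __init__(self, n_labels, img_channels=1, base=64):
                super().__init__()
                in_ch = img_channels + n_labels            # image/noise input + label masks
                self.net = nn.Sequential(
                    nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(base, img_channels, 3, padding=1), nn.Tanh(),
                )

            def forward(self, noise_img, label_masks):
                # noise_img: (B, img_channels, H, W); label_masks: (B, n_labels, H, W)
                return self.net(torch.cat([noise_img, label_masks], dim=1))

        # Example call (hypothetical shapes):
        #   g = MaskConditionedGenerator(n_labels=4)
        #   fake = g(torch.randn(2, 1, 128, 128), torch.zeros(2, 4, 128, 128))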

    The Virtual Video Camera: A System for Viewpoint Synthesis in Arbitrary, Dynamic Scenes (Die Virtuelle Videokamera: ein System zur Blickpunktsynthese in beliebigen, dynamischen Szenen)

    The Virtual Video Camera project strives to create free-viewpoint video from casually captured multi-view data. Multiple video streams of a dynamic scene are captured with off-the-shelf camcorders, and the user can re-render the scene from novel perspectives. In this thesis the algorithmic core of the Virtual Video Camera is presented. This includes the algorithm for image correspondence estimation as well as the image-based renderer. Furthermore, its application in the context of an actual video production is showcased, and the rendering and image processing pipeline is extended to incorporate depth information.
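
    The abstract names dense image correspondence estimation and image-based rendering as the algorithmic core. As a simplified stand-in (not the Virtual Video Camera renderer), the sketch below synthesizes an in-between image from two frames using OpenCV's Farnebäck optical flow and backward warping, ignoring occlusions; the function name and parameters are assumptions.

        # Crude correspondence-based in-between image synthesis (illustrative only):
        # estimate dense flow from frame 0 to frame 1, warp both frames towards the
        # intermediate time t, then cross-fade. Occlusions and flow re-sampling are ignored.
        import cv2
        import numpy as np

        def interpolate_view(img0, img1, t=0.5):
            g0 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
            g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)   # (H, W, 2)
            h, w = g0.shape
            xs, ys = np.meshgrid(np.arange(w), np.arange(h))
            # Backward-warp frame 0 forward by t and frame 1 backward by (1 - t).
            w0 = cv2.remap(img0, (xs - t * flow[..., 0]).astype(np.float32),
                                 (ys - t * flow[..., 1]).astype(np.float32), cv2.INTER_LINEAR)
            w1 = cv2.remap(img1, (xs + (1 - t) * flow[..., 0]).astype(np.float32),
                                 (ys + (1 - t) * flow[..., 1]).astype(np.float32), cv2.INTER_LINEAR)
            return ((1 - t) * w0 + t * w1).astype(img0.dtype)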

    Automatic 3D Facial Performance Acquisition and Animation using Monocular Videos

    Facial performance capture and animation is an essential component of many applications such as movies, video games, and virtual environments. Video-based facial performance capture is particularly appealing as it offers the lowest cost and the potential use of legacy sources and uncontrolled videos. However, it is also challenging because of complex facial movements at different scales, ambiguity caused by the loss of depth information, and a lack of discernible features on most facial regions. Unknown lighting conditions and camera parameters further complicate the problem. This dissertation explores video-based 3D facial performance capture systems that use a single video camera, overcome the aforementioned challenges, and produce accurate and robust reconstruction results. We first develop a novel automatic facial feature detection/tracking algorithm that accurately locates important facial features across the entire video sequence, which are then used for 3D pose and facial shape reconstruction. The key idea is to combine the respective powers of local detection, spatial priors for facial feature locations, Active Appearance Models (AAMs), and temporal coherence for facial feature detection. The algorithm runs in real time and is robust to large pose and expression variations and occlusions. We then present an automatic high-fidelity facial performance capture system that works on monocular videos. It uses the detected facial features along with multilinear facial models to reconstruct 3D head poses and large-scale facial deformation, and uses per-pixel shading cues to add fine-scale surface details such as emerging or disappearing wrinkles and folds. We iterate the reconstruction procedure on large-scale facial geometry and fine-scale facial details to improve the accuracy of the facial reconstruction. We further improve the accuracy and efficiency of the large-scale facial performance capture by introducing a local-binary-feature-based 2D feature regression and a convolutional-neural-network-based pose and expression regression, and complement it with an efficient 3D eye gaze tracker to achieve real-time 3D eye gaze animation. We have tested our systems on various monocular videos, demonstrating their accuracy and robustness under a variety of uncontrolled lighting conditions and across significant shape differences between individuals.
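
    One step described above is reconstructing 3D head poses from detected 2D facial features. The sketch below shows that standard building block with OpenCV's perspective-n-point solver and a sparse 3D face template; it is an assumed simplification, not the dissertation's multilinear-model fitting, and the template points, focal length, and function name are placeholders the caller must supply.

        # Head-pose estimation from tracked 2D landmarks via PnP (standard building
        # block, shown for illustration; the dissertation's reconstruction is richer).
        import cv2
        import numpy as np

        def estimate_head_pose(landmarks_2d, template_3d, focal_length, image_size):
            """landmarks_2d: (N, 2) pixel coords; template_3d: (N, 3) model coords;
            image_size: (height, width). Returns success flag, 3x3 rotation, translation."""
            cx, cy = image_size[1] / 2.0, image_size[0] / 2.0
            K = np.array([[focal_length, 0, cx],
                          [0, focal_length, cy],
                          [0, 0, 1]], dtype=np.float64)
            dist = np.zeros(4)                      # assume negligible lens distortion
            ok, rvec, tvec = cv2.solvePnP(template_3d.astype(np.float64),
                                          landmarks_2d.astype(np.float64),
                                          K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
            R, _ = cv2.Rodrigues(rvec)              # axis-angle to 3x3 rotation matrix
            return ok, R, tvec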

    Digital Multispectral Map Reconstruction Using Aerial Imagery

    Advances in the field of computer vision have enabled faster and more accurate photogrammetry techniques. Structure from Motion (SfM) is a photogrammetric technique for the digital spatial reconstruction of objects from a sequence of images. Unmanned Aerial Vehicle (UAV) platforms make it possible to acquire high-fidelity imagery for environmental mapping and have therefore become a widely adopted survey method. The combination of SfM with recent improvements in UAV platforms grants greater flexibility and applicability, opening the path to a remote sensing technique that aims to replace more traditional, laborious approaches often associated with high monetary costs. The continued development of digital reconstruction software and advances in computer processing allow for a more affordable and higher-resolution solution compared with traditional methods. The present work proposes a digital reconstruction algorithm based on images taken by a UAV platform, inspired by the work made available by the open-source project OpenDroneMap. The aerial images are fed into the computer vision program and several operations are applied to them, including feature detection and matching, point cloud reconstruction, meshing, and texturing, resulting in a final product that represents the surveyed site. Additionally, the study found that an implementation addressing the processing of thermal images was not integrated in the work of OpenDroneMap; its pipeline was therefore altered to allow the reconstruction of thermal maps without sacrificing the resolution of the final model. Standard methods for processing thermal images require a larger image footprint (the area of ground captured in a frame), because such images lack invariant features and a larger footprint increases the number of features present in each frame; however, this capture strategy lowers the resolution of the final product. The algorithm was developed using open-source libraries, and to validate the obtained results the model was compared with data obtained from commercial products such as Pix4D. Because the current pandemic made a field study for the comparison and assessment of our results impossible, the models were instead validated by verifying that their geographic localization was correct and by visually assessing the generated maps.
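
    The abstract lists feature detection and matching as the first stage of the reconstruction pipeline. The sketch below illustrates that stage with ORB features and a brute-force matcher in OpenCV; it is only a stand-in, since OpenDroneMap's pipeline uses its own detectors and matchers, and the function name here is an assumption.

        # Feature detection and matching for one image pair (illustrative stand-in
        # for the first stage of an SfM pipeline). Matched keypoints would then be
        # passed on to pose estimation and triangulation.
        import cv2

        def match_image_pair(path_a, path_b, max_features=4000):
            img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
            img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
            orb = cv2.ORB_create(nfeatures=max_features)
            kp_a, des_a = orb.detectAndCompute(img_a, None)
            kp_b, des_b = orb.detectAndCompute(img_b, None)
            # Hamming distance suits ORB's binary descriptors; cross-check keeps
            # only mutually best matches.
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
            return kp_a, kp_b, matches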

    3D Digitization of an Heritage Masterpiece - A Critical Analysis on Quality Assessment


    Doctor of Philosophy

    Volumetric parameterization is an emerging field in computer graphics, where volumetric representations that have a semi-regular tensor-product structure are desired in applications such as three-dimensional (3D) texture mapping and physically-based simulation. At the same time, volumetric parameterization is also needed in the Isogeometric Analysis (IA) paradigm, which uses the same parametric space for representing geometry, simulation attributes and solutions. One of the main advantages of the IA framework is that the user gets feedback directly as attributes of the NURBS model representation, which can represent geometry exactly, avoiding both the need to generate a finite element mesh and the need to reverse engineer the simulation results from the finite element mesh back into the model. Research in this area has largely been concerned with the quality of the analysis and simulation results, assuming the existence of a high-quality volumetric NURBS model that is appropriate for simulation. However, there are currently no generally applicable approaches to generating such a model or to visualizing the higher-order smooth isosurfaces of the simulation attributes, either as part of current Computer Aided Design or Reverse Engineering systems and methodologies. Furthermore, even though the mesh generation pipeline is circumvented in the concept of IA, the quality of the model still significantly influences the analysis result. This work presents a pipeline to create, analyze and visualize NURBS geometries. Based on the concept of analysis-aware modeling, it focuses in particular on methodologies to decompose a volumetric domain into simpler pieces based on appropriate midstructures while respecting other relevant interior material attributes. The domain is decomposed such that a tensor-product-style parameterization can be established on the subvolumes, where the parameterization matches along subvolume boundaries. The volumetric parameterization is optimized using gradient-based nonlinear optimization algorithms, and data-fitting methods are introduced to fit trivariate B-splines to the parameterized subvolumes with a guaranteed order of accuracy. A visualization method is then proposed that allows isosurfaces of attributes, such as the results of analysis, embedded in the NURBS geometry to be inspected directly. Finally, the various methodologies proposed in this work are demonstrated on complex representations arising in practice and research.
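
    The abstract mentions data-fitting methods that fit trivariate B-splines to the parameterized subvolumes. As an illustration of the underlying building block only (not the thesis's optimizer or its guaranteed-accuracy fitting), the sketch below performs a univariate least-squares B-spline fit with SciPy; tensor-product (trivariate) fitting repeats this idea per parametric direction. The function name and knot placement are assumptions.

        # Univariate least-squares B-spline fit (building block for tensor-product
        # fitting; illustrative only). Uniform interior knots are assumed.
        import numpy as np
        from scipy.interpolate import make_lsq_spline

        def fit_bspline_1d(u, values, n_interior_knots=10, degree=3):
            """u: sorted parameter values in [0, 1]; values: samples to approximate."""
            interior = np.linspace(0.0, 1.0, n_interior_knots + 2)[1:-1]
            # Open knot vector: end knots repeated (degree + 1) times.
            knots = np.r_[[0.0] * (degree + 1), interior, [1.0] * (degree + 1)]
            return make_lsq_spline(u, values, knots, k=degree)

        # Example (hypothetical data): spline = fit_bspline_1d(np.linspace(0, 1, 200), samples)
        #                              value  = spline(0.37)   # evaluate the fitted curve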