
    Scalable Realtime Rendering and Interaction with Digital Surface Models of Landscapes and Cities

    Interactive, realistic rendering of landscapes and cities differs substantially from classical terrain rendering. Due to the sheer size and detail of the data that need to be processed, real-time rendering (i.e., more than 25 images per second) is only feasible with level-of-detail (LOD) models. Even the design and implementation of efficient, automatic LOD generation is ambitious for such out-of-core datasets, considering the large number of scales covered in a single view and the need to maintain screen-space accuracy for realistic representation. Moreover, users want to interact with the model based on semantic information, which needs to be linked to the LOD model. In this thesis I present LOD schemes for the efficient rendering of 2.5D digital surface models (DSMs) and 3D point clouds, a method for the automatic derivation of city models from raw DSMs, and an approach allowing semantic interaction with complex LOD models. The hierarchical LOD model for digital surface models is based on a quadtree of precomputed, simplified triangle-mesh approximations. The proposed model is shown to allow real-time rendering of very large and complex models with pixel-accurate detail, and the necessary preprocessing is scalable and fast. For 3D point clouds, I introduce an LOD scheme based on an octree of hybrid plane-polygon representations. For each LOD, the algorithm detects planar regions in an adequately subsampled point cloud and models them as textured rectangles. Rendering the resulting hybrid model is an order of magnitude faster than comparable point-based LOD schemes. To automatically derive a city model from a DSM, I propose a constrained mesh simplification. Apart from the geometric distance between the simplified and original models, it evaluates constraints based on detected planar structures and their mutual topological relations. The resulting models are much less complex than the original DSM but still represent the characteristic building structures faithfully. Finally, I present a method to combine semantic information with complex geometric models. My approach links semantic entities to geometric entities on the fly via coarser proxy geometries that carry the semantic information. Thus, semantic information can be layered on top of complex LOD models without an explicit attribution step. All findings are supported by experimental results which demonstrate the practical applicability and efficiency of the methods.
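
    A minimal sketch of the kind of screen-space-error test that drives rendering from such a quadtree of precomputed mesh approximations is shown below. The node layout, error metric and one-pixel threshold are illustrative assumptions, not the data structures used in the thesis.

    from dataclasses import dataclass, field
    from typing import List, Optional
    import math

    @dataclass
    class QuadNode:
        center: tuple            # (x, y, z) world-space center of the tile
        radius: float            # bounding-sphere radius of the tile
        geometric_error: float   # max deviation of this LOD from the full-resolution DSM
        children: List["QuadNode"] = field(default_factory=list)

    def screen_space_error(node: QuadNode, eye: tuple, fov_y: float, viewport_h: int) -> float:
        """Project the node's geometric error to pixels at the viewer's distance."""
        dist = max(1e-6, math.dist(node.center, eye) - node.radius)
        pixels_per_unit = viewport_h / (2.0 * dist * math.tan(fov_y / 2.0))
        return node.geometric_error * pixels_per_unit

    def select_lod(node: QuadNode, eye: tuple, fov_y: float, viewport_h: int,
                   max_error_px: float = 1.0, out: Optional[list] = None) -> list:
        """Collect tiles whose projected error stays within one pixel (pixel accuracy)."""
        if out is None:
            out = []
        if not node.children or screen_space_error(node, eye, fov_y, viewport_h) <= max_error_px:
            out.append(node)          # this approximation is accurate enough on screen
        else:
            for child in node.children:
                select_lod(child, eye, fov_y, viewport_h, max_error_px, out)
        return out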

    A volume filtering and rendering system for an improved visual balance of feature preservation and noise suppression in medical imaging

    Preserving or enhancing salient features whilst effectively suppressing noise-derived artifacts and extraneous detail have been two consistent yet competing objectives in volumetric medical image processing. Illustrative techniques (and methods inspired by them) can help to enhance and, if desired, isolate the depiction of specific regions of interest whilst retaining overall context. However, highlighting or enhancing specific features can have the undesirable side effect of highlighting noise. Second-derivative-based methods can be employed effectively in both the rendering and volume-filtering stages of a visualisation pipeline to enhance the depiction of feature detail whilst minimising noise-based artifacts. We develop a new 3D anisotropic-diffusion PDE for an improved balance of feature retention and noise reduction; furthermore, we present a feature-enhancing visualisation pipeline that can be applied to multiple modalities and has been shown to be particularly effective in the context of 3D ultrasound.
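
    As a point of reference for the anisotropic-diffusion filtering mentioned above, the sketch below implements one explicit time step of classical Perona-Malik diffusion on a 3D volume; the paper's own PDE differs, and the conductance function, kappa and dt here are illustrative assumptions.

    import numpy as np

    def diffusion_step(volume: np.ndarray, kappa: float = 0.1, dt: float = 0.1) -> np.ndarray:
        """One explicit update: smooth homogeneous regions, preserve strong gradients.

        kappa is an edge threshold tuned for intensities normalised to [0, 1].
        """
        out = volume.astype(np.float64).copy()
        update = np.zeros_like(out)
        # Finite-difference gradients toward each of the six axis-aligned neighbours.
        for axis in range(3):
            fwd = np.diff(out, axis=axis, append=np.take(out, [-1], axis=axis))
            bwd = -np.diff(out, axis=axis, prepend=np.take(out, [0], axis=axis))
            for grad in (fwd, bwd):
                c = np.exp(-(grad / kappa) ** 2)   # conductance: small across strong edges
                update += c * grad
        return out + dt * update

    # Usage on a noisy synthetic volume:
    vol = np.random.rand(32, 32, 32)
    for _ in range(10):
        vol = diffusion_step(vol)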

    Temporally Coherent Video Stylization

    The transformation of video clips into stylized animations remains an active research topic in Computer Graphics. A key challenge is to reproduce the look of traditional artistic styles whilst minimizing distracting flickering and sliding artifacts, i.e. maintaining temporal coherence. This chapter surveys the spectrum of available video stylization techniques, focusing on algorithms that encourage the temporally coherent placement of rendering marks, and discusses the trade-offs necessary to achieve coherence. We begin with flow-based adaptations of stroke-based rendering (SBR) and texture advection capable of painting video. We then chart the development of the field, and its fusion with Computer Vision, to deliver coherent mid-level scene representations. These representations enable the rotoscoping of rendering marks onto temporally coherent video regions, enhancing the diversity and temporal coherence of stylization. In discussing coherence, we formalize the problem of temporal coherence in terms of three defined criteria, and compare and contrast video stylization techniques using these criteria.
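
    The flow-based SBR approaches surveyed above share a simple core mechanism: rendering marks are advected along optical flow between frames so that they appear attached to the scene. The sketch below shows that step under illustrative assumptions (a dense forward flow field in pixels and a random respawn rule for marks that leave the frame); it is not any single published algorithm.

    import numpy as np

    def advect_strokes(strokes: np.ndarray, flow: np.ndarray) -> np.ndarray:
        """Move each stroke anchor (x, y) by the flow vector sampled at its position.

        strokes: (N, 2) float array of pixel positions.
        flow:    (H, W, 2) forward optical flow in pixels, e.g. from any flow estimator.
        """
        h, w, _ = flow.shape
        xs = np.clip(strokes[:, 0].round().astype(int), 0, w - 1)
        ys = np.clip(strokes[:, 1].round().astype(int), 0, h - 1)
        moved = strokes + flow[ys, xs]            # follow the scene motion
        # Respawn strokes that drift outside the frame to keep coverage roughly uniform.
        outside = ((moved[:, 0] < 0) | (moved[:, 0] >= w) |
                   (moved[:, 1] < 0) | (moved[:, 1] >= h))
        moved[outside] = np.column_stack([np.random.uniform(0, w, outside.sum()),
                                          np.random.uniform(0, h, outside.sum())])
        return moved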

    Filtering Techniques for Low-Noise Previews of Interactive Stochastic Ray Tracing

    Progressive stochastic ray tracing is increasingly used in interactive applications; examples include interactive design reviews and digital content creation. This dissertation aims at advancing this development. First, two filtering techniques are presented that can generate fast and reliable previews of global illumination solutions. Second, a system architecture is presented that supports exchangeable rendering back-ends in distributed rendering systems.
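
    The dissertation's specific filters are not detailed in this abstract; as a hedged illustration of what a low-noise preview of a progressive stochastic renderer can look like, the sketch below keeps a per-pixel running mean and variance of the samples and smooths only where the variance is still high. The blending rule and blur width are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    class ProgressivePreview:
        def __init__(self, height: int, width: int):
            self.n = 0
            self.mean = np.zeros((height, width, 3))
            self.m2 = np.zeros((height, width, 3))   # running sum of squared deviations

        def add_sample(self, frame: np.ndarray) -> None:
            """Welford update with one new stochastic rendering pass (H, W, 3)."""
            self.n += 1
            delta = frame - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (frame - self.mean)

        def preview(self) -> np.ndarray:
            """Blend the raw estimate with a blurred one where variance is still high."""
            var = self.m2 / max(self.n - 1, 1)
            weight = np.clip(var.mean(axis=2, keepdims=True) * 10.0, 0.0, 1.0)
            blurred = gaussian_filter(self.mean, sigma=(2, 2, 0))
            return (1.0 - weight) * self.mean + weight * blurred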

    Toward a Perceptually-relevant Theory of Appearance

    Two approaches are commonly employed in Computer Graphics to design and adjust the appearance of objects in a scene. A full 3D environment may be created, through geometrical, material and lighting modeling, then rendered using a simulation of light transport; appearance is then controlled in ways similar to photography. A radically different approach consists in providing 2D digital drawing tools to an artist, who, with enough talent and time, will be able to create images of objects having the desired appearance; this is obviously strongly similar to what traditional artists do, with the computer being a mere modern drawing tool. In this document, I present research projects that have investigated a third approach, whereby pictorial elements of appearance are explicitly manipulated by an artist. On the one hand, such an alternative approach offers direct control over appearance, with novel applications in vector drawing, scientific illustration, special effects and video games. On the other hand, it provides a modern method for putting our current knowledge of the perception of appearance to the test, as well as for suggesting new models of human vision along the way.

    Photorealistic rendering: a survey on evaluation

    This article is a systematic collection of existing methods and techniques for evaluating rendering research in the field of computer graphics. The motivation for this study was the difficulty of selecting appropriate methods for evaluating and validating the specific results reported by many researchers. This difficulty lies in the large number of available methods and the lack of robust discussion of them. To address this problem, the features of well-known methods are critically reviewed to provide researchers with background on evaluating different styles in the photorealistic-rendering part of computer graphics. There are many ways to evaluate research; this article uses a classification and systematisation method. After reviewing the features of the different methods, their future is also discussed. Finally, some pointers are given to likely future issues in evaluating research on realistic rendering. It is expected that this analysis will help researchers to overcome the difficulties of evaluation not only in research, but also in applications.

    A system for image-based modeling and photo editing

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Architecture, 2002. Includes bibliographical references (p. 169-178). By Byong Mok Oh.
    Traditionally in computer graphics, a scene is represented by geometric primitives composed of various materials and a collection of lights. Recently, techniques for modeling and rendering scenes from a set of pre-acquired images have emerged as an alternative approach, known as image-based modeling and rendering. Much of the research in this field has focused on reconstructing and rerendering from a set of photographs, while little work has been done to address the problem of editing and modifying these scenes. On the other hand, photo-editing systems, such as Adobe Photoshop, provide a powerful, intuitive, and practical means to edit images. However, these systems are limited by their two-dimensional nature. In this thesis, we present a system that extends photo editing to 3D. Starting from a single input image, the system enables the user to reconstruct a 3D representation of the captured scene and edit it with the ease and versatility of 2D photo editing. The scene is represented as layers of images with depth, where each layer is an image that encodes both color and depth. A suite of user-assisted tools, based on a painting metaphor, is employed to extract layers and assign depths. The system enables editing from different viewpoints, extracting and grouping of image-based objects, and modifying the shape, color, and illumination of these objects. As part of the system, we introduce three powerful new editing tools. These include two new clone brushing tools: the non-distorted clone brush and the structure-preserving clone brush. They permit copying of parts of an image to another via a brush interface, but alleviate distortions due to perspective foreshortening and object geometry. The non-distorted clone brush works on arbitrary 3D geometry, while the structure-preserving clone brush, a 2D version, assumes a planar surface, but has the added advantage of working directly in 2D photo-editing systems that lack depth information. The third tool, a texture-illuminance decoupling filter, discounts the effect of illumination on uniformly textured areas by decoupling large- and small-scale features via bilateral filtering. This tool is crucial for relighting and changing the materials of the scene. There are many applications for such a system, for example architectural, lighting and landscape design, entertainment and special effects, games, and virtual TV sets. The system allows the user to superimpose scaled architectural models into real environments, or to quickly paint a desired lighting scheme of an interior, while being able to navigate within the scene for a fully immersive 3D experience. We present examples and results of complex architectural scenes, 360-degree panoramas, and even paintings, where the user can change viewpoints, edit the geometry and materials, and relight the environment.
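
    A minimal sketch of the texture-illuminance decoupling idea described above: a bilateral filter estimates the large-scale (illuminance-like) layer of the luminance, and the per-pixel ratio keeps the small-scale texture. Splitting by a ratio and the particular filter parameters used here are illustrative assumptions rather than the thesis' exact formulation.

    import numpy as np
    import cv2

    def decouple(image_bgr: np.ndarray):
        """Return (illuminance, texture) layers with illuminance * texture ~= luminance."""
        img = image_bgr.astype(np.float32) / 255.0
        luminance = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Edge-preserving estimate of large-scale shading variation.
        illuminance = cv2.bilateralFilter(luminance, d=9, sigmaColor=0.1, sigmaSpace=15)
        texture = luminance / np.maximum(illuminance, 1e-4)   # small-scale detail
        return illuminance, texture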

    Using Appearance for Efficient Rendering and Editing of Captured Scenes

    Computer graphics strives to render synthetic images identical to real photographs. Multiple rendering algorithms have been developed over the better part of the last half-century. Traditional algorithms use 3D assets manually generated by artists to render a scene. While the initial scenes were quite simple, the field has developed complex representations of geometry, material and lighting: the three basic components of a 3D scene. Generating such complex assets is hard and requires significant time and skill from professional 3D artists. In addition to asset generation, the rendering algorithms themselves involve complex simulation techniques to solve for global light transport in a scene, which costs further time. As capturing photographs became easier, image-based rendering (IBR) emerged as an alternative to traditional rendering. Using captured images as input is much faster than generating traditional scene assets. Initial IBR algorithms focused on creating a scene model from the input images to interpolate or warp them and enable free-viewpoint navigation of captured scenes. Over time the scene models became more complex, and using a geometric proxy computed from the input images became an integral part of IBR. Today, meshes reconstructed with Structure-from-Motion (SfM) and Multi-view Stereo (MVS) techniques are widely used in IBR, even though they introduce significant artifacts due to noisy reconstruction. In this thesis we first propose a novel image-based rendering algorithm which focuses on rendering a captured scene with good quality at interactive frame rates. We study the artifacts of previous IBR algorithms and propose an algorithm that builds upon previous work to remove them. The algorithm utilizes surface appearance in order to treat view-dependent regions differently from diffuse regions. Our Hybrid-IBR algorithm performs favorably against classical and modern IBR approaches for a wide variety of scenes in terms of quality and/or speed. While IBR provides solutions to render a scene, editing it is hard: editing a scene requires estimating its geometry, material appearance and illumination. As our second contribution, we explicitly estimate scene-scale material parameters from a set of captured photographs to enable scene editing. While commercial photogrammetry solutions recover diffuse texture to aid 3D artists in generating material assets manually, we aim to automatically create material texture atlases from captured images of a scene. We take advantage of the visual cues provided by the multi-view observations and feed them to a Convolutional Neural Network (CNN) to obtain material maps for each view. Using the predicted maps, we create multi-view consistent material texture atlases by aggregating the information in texture space. Using our automatically generated material texture atlases, we demonstrate relighting and object insertion in real scenes. Learning-based tasks require large amounts of varied data to be learned efficiently. Training on synthetic datasets is the norm, but rendering large datasets with traditional rendering is time-consuming and offers limited variability. We propose a new neural-rendering-based approach that learns a neural scene representation with variability and uses it to generate large amounts of data on the fly at a significantly faster rate. We demonstrate the advantage of using neural rendering over traditional rendering in terms of dataset generation speed as well as for learning auxiliary tasks given the same computational budget.
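
    A minimal sketch of the texture-space aggregation step behind the material texture atlases described above: per-view CNN predictions are averaged per texel with per-view weights. How texels map to pixels in each view and how the weights (visibility, viewing angle) are obtained are illustrative assumptions.

    import numpy as np

    def aggregate_atlas(per_view_maps, per_view_weights, texel_to_pixel, atlas_shape):
        """Weighted per-texel average of per-view material predictions.

        per_view_maps:    list of (H, W, C) material maps predicted for each view.
        per_view_weights: list of (H, W) confidence/visibility weights for each view.
        texel_to_pixel:   list of (T, 2) integer pixel coords of each atlas texel in that
                          view, with (-1, -1) marking texels not visible in the view.
        atlas_shape:      (T, C) shape of the flattened output atlas.
        """
        atlas = np.zeros(atlas_shape)
        total_w = np.zeros((atlas_shape[0], 1))
        for maps, weights, coords in zip(per_view_maps, per_view_weights, texel_to_pixel):
            visible = coords[:, 0] >= 0
            xs, ys = coords[visible, 0], coords[visible, 1]
            w = weights[ys, xs][:, None]
            atlas[visible] += w * maps[ys, xs]
            total_w[visible] += w
        return atlas / np.maximum(total_w, 1e-6)   # multi-view consistent weighted mean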

    A workflow for designing stylized shading effects

    In this report, we describe a workflow for designing stylized shading effects on a 3D object, targeted at technical artists. Shading design, the process of making the illumination of an object in a 3D scene match an artist's vision, is usually a time-consuming task because of the complex interactions between materials, geometry, and the lighting environment. Physically based methods tend to provide an intuitive and coherent workflow for artists, but they are of limited use in the context of non-photorealistic shading styles. On the other hand, existing stylized shading techniques are either too specialized or require considerable hand-tuning of unintuitive parameters to give a satisfactory result. Our contribution is to separate the design process of an individual shading effect into three independent stages: control of its global behavior on the object, addition of procedural details, and colorization. Inspired by the formulation of existing shading models, we expose different shading behaviors to the artist through parametrizations that have a meaningful visual interpretation. Multiple shading effects can then be composited to obtain complex dynamic appearances. The proposed workflow is fully interactive, with real-time feedback, and allows the intuitive exploration of stylized shading effects while keeping coherence under varying viewpoints and light configurations. Furthermore, our method makes use of deferred shading, making it easy to integrate into existing rendering pipelines.
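
    A minimal sketch of the three-stage decomposition described above for a single effect: a light-dependent parametrization sets the global behavior, a procedural term adds detail, and a color ramp performs the colorization. The wrapped-diffuse parametrization, sine-based detail and discrete ramp are illustrative assumptions, not the report's actual parametrizations; several such per-effect outputs could then be composited to build more complex appearances.

    import numpy as np

    def shade(normals, positions, light_dir, ramp):
        """normals, positions: (H, W, 3) arrays; light_dir: (3,); ramp: (K, 3) color lookup."""
        l = light_dir / np.linalg.norm(light_dir)
        # Stage 1 -- global behavior: a wrapped diffuse parametrization in [0, 1].
        u = np.clip(0.5 * (normals @ l) + 0.5, 0.0, 1.0)
        # Stage 2 -- procedural detail: perturb the parametrization with a cheap noise term.
        detail = 0.05 * np.sin(40.0 * positions[..., 0]) * np.sin(40.0 * positions[..., 1])
        u = np.clip(u + detail, 0.0, 1.0)
        # Stage 3 -- colorization: index a discrete color ramp (toon-like quantization).
        idx = np.minimum((u * (len(ramp) - 1)).round().astype(int), len(ramp) - 1)
        return ramp[idx]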