385 research outputs found

    Visual Human-Computer Interaction

    Non-photorealistic rendering: a critical examination and proposed system.

    In the first part of this work, the emergent field of Non-Photorealistic Rendering is explored from a cultural perspective, in order to establish a clear understanding of what Non-Photorealistic Rendering (NPR) ought to be in its mature form and so provide goals and an overall infrastructure for future development. This thesis claims that unless we understand and clarify NPR's relationship with other media (photography, photorealistic computer graphics and traditional media), we will continue to manufacture "new solutions" to computer-based imaging which are confused and naive in their goals. Such solutions will be rejected by the art and design community and generally condemned as novelties of little cultural worth (i.e. they will not sell). This is achieved by critically reviewing published systems that are naively described as non-photorealistic or "painterly" systems. Current practices and techniques are criticised in terms of their limited ability to articulate meaning in images, and solutions to this problem are given. A further argument claims that NPR, while being similar to traditional "natural media" techniques in certain respects, is fundamentally different in others. This similarity has led NPR to sometimes be proposed as "painting simulation", something it can never be. Methods for avoiding this position are proposed. The similarities and differences to painting and drawing are presented, and NPR's relationship to its other counterpart, Photorealistic Rendering (PR), is then delineated. It is shown that NPR is paradigmatically different from other forms of representation: it is not an "effect", but something fundamentally different. The benefits of NPR in its mature form are discussed in the context of architectural representation and design in general, in conjunction with consultations with designers and architects. From this consultation a "wish-list" of capabilities is compiled by way of a requirements capture for a proposed system. A series of computer-based experiments resulting in the systems "Expressive Marks" and "Magic Painter" is carried out; these practical experiments add further understanding to the problems of NPR. The exploration concludes with a prototype system, "Piranesi", which is submitted as a good overall solution to the problem of NPR. In support of this written thesis are:
    • The Expressive Marks system
    • The Magic Painter system
    • The Piranesi system (which includes the EPixel and Sketcher systems)
    • A large portfolio of images generated throughout the exploration

    Towards Interactive Photorealistic Rendering

    Surface analysis and visualization from multi-light image collections

    Multi-Light Image Collections (MLICs) are stacks of photos of a scene acquired with a fixed viewpoint and varying surface illumination, providing large amounts of visual and geometric information. Over the last decades, a wide variety of methods have been devised to extract information from MLICs, and their usefulness has been shown in different application domains that support daily activities. In this thesis, we present methods that leverage MLICs for surface analysis and visualization. First, we provide background information: acquisition setups, light calibration, and the application areas where MLICs have been successfully used to support daily analysis work. Next, we discuss the use of MLICs for surface visualization and analysis and the tools available to support that analysis. Here, we cover methods that support direct exploration of the captured MLIC, methods that generate relightable models from an MLIC, non-photorealistic visualization methods that rely on MLICs, and methods that estimate normal maps from MLICs, and we point out visualization tools used for MLIC analysis. In chapter 3, we propose novel benchmark datasets (RealRTI, SynthRTI and SynthPS) that can be used to evaluate algorithms relying on MLICs, and we discuss the available benchmarks for validating photometric algorithms, which can also be used to validate other MLIC-based algorithms. In chapter 4, we evaluate the performance of different photometric stereo algorithms using SynthPS for cultural heritage applications; RealRTI and SynthRTI have been used to evaluate the performance of the (Neural)RTI method. Then, in chapter 5, we present a neural-network-based RTI method, NeuralRTI, a framework for pixel-based encoding and relighting of RTI data. Using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relit from novel directions, particularly in the case of challenging glossy materials. Finally, in chapter 6, we present a method for detecting cracks on the surface of paintings from multi-light image acquisitions, which can also be applied to single images, and we conclude our presentation.
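
    As an illustration of the pixel-wise encoding and relighting idea behind NeuralRTI (chapter 5), the sketch below shows a small autoencoder in PyTorch. It is a hedged, hypothetical example rather than the architecture from the thesis: the number of captured light directions, the 9-value per-pixel code, the layer widths and the ELU activations are all assumptions made for illustration.

    # Minimal sketch of pixel-wise RTI encoding/relighting with an autoencoder.
    # Assumptions (not taken from the thesis): 50 captured light directions,
    # a 9-float per-pixel code, and small fully connected layers.
    import torch
    import torch.nn as nn

    N_LIGHTS = 50      # captured light directions per pixel (assumed)
    CODE_SIZE = 9      # compressed per-pixel representation (assumed)

    class PixelEncoder(nn.Module):
        """Compresses the per-pixel stack of RGB samples into a short code."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_LIGHTS * 3, 64), nn.ELU(),
                nn.Linear(64, 32), nn.ELU(),
                nn.Linear(32, CODE_SIZE),
            )
        def forward(self, samples):          # samples: (batch, N_LIGHTS * 3)
            return self.net(samples)

    class PixelRelighter(nn.Module):
        """Decodes a per-pixel code plus a novel light direction into RGB."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(CODE_SIZE + 2, 32), nn.ELU(),   # 2 = (lx, ly) light direction
                nn.Linear(32, 32), nn.ELU(),
                nn.Linear(32, 3),                         # relit RGB
            )
        def forward(self, code, light_dir):  # light_dir: (batch, 2)
            return self.net(torch.cat([code, light_dir], dim=1))

    # Training would minimise reconstruction error over the captured directions;
    # at display time only the per-pixel codes and the decoder are needed.
    encoder, decoder = PixelEncoder(), PixelRelighter()
    samples = torch.rand(4096, N_LIGHTS * 3)              # a batch of pixels
    relit = decoder(encoder(samples), torch.rand(4096, 2))

    In such a design, only the compact per-pixel codes and the small decoder need to be kept for interactive relighting, which is where the compression benefit of this kind of approach comes from.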

    Higher level techniques for the artistic rendering of images and video

    Implementing non-photorealistic rendering enhancements with real-time performance

    We describe quality and performance enhancements, which work in real time, to all well-known non-photorealistic rendering (NPR) styles for use in an interactive context. These include comic rendering, sketch rendering, hatching and painterly rendering, but we also attempt and justify a widening of the established definition of what is considered NPR. In the individual chapters, we identify typical stylistic elements of the different NPR styles. We list problems that need to be solved in order to implement the various renderers. Standard solutions available in the literature are introduced and in all cases extended and optimised. In particular, we extend the lighting model of the comic renderer to include a specular component and introduce multiple inter-related but independent geometric approximations which greatly improve rendering performance. We implement two completely different solutions to random perturbation sketching, solve temporal coherence issues for coal sketching and find an unexpected use for 3D textures to implement hatch-shading. The textured brushes of painterly rendering are extended with properties such as stroke direction and texture, motion, paint capacity, opacity and emission, making them more flexible and versatile. Brushes are also provided with a minimal amount of intelligence, so that they can help maximise screen coverage. We furthermore devise a completely new NPR style, which we call super-realistic, and show how sample images can be tweened in real time to produce an image-based six-degree-of-freedom renderer performing at roughly 450 frames per second. Performance values for our other renderers all lie between 10 and over 400 frames per second on home PC hardware, justifying our real-time claim. A large number of sample screenshots, illustrations and animations demonstrate the visual fidelity of our rendered images. In essence, we successfully achieve our goals of increasing the creative, expressive and communicative potential of individual NPR styles, increasing the performance of most of them, adding original and interesting visual qualities, and exploring new techniques or existing ones in novel ways.
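
    The extension of the comic renderer's lighting model with a specular component can be illustrated with a short per-pixel sketch. This is a hypothetical NumPy example rather than the thesis implementation: the quantisation bands, the highlight cutoff and the Blinn-Phong lobe are illustrative assumptions.

    # Sketch of a comic ("toon") lighting model with an added specular term.
    # Thresholds, band values and the shininess exponent are assumptions.
    import numpy as np

    def comic_shade(normal, light_dir, view_dir, shininess=32.0):
        """Quantised diffuse term plus a hard-edged specular highlight."""
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        v = view_dir / np.linalg.norm(view_dir)

        # Quantise N.L into a few bands (the classic comic-shading look).
        ndotl = max(np.dot(n, l), 0.0)
        diffuse = 1.0 if ndotl > 0.5 else (0.6 if ndotl > 0.2 else 0.3)

        # Specular component: a Blinn-Phong lobe clamped to a hard highlight.
        h = (l + v) / np.linalg.norm(l + v)
        spec = max(np.dot(n, h), 0.0) ** shininess
        highlight = 1.0 if spec > 0.4 else 0.0

        return min(diffuse + highlight, 1.0)   # scalar intensity for this pixel

    print(comic_shade(np.array([0.0, 0.0, 1.0]),
                      np.array([0.3, 0.2, 1.0]),
                      np.array([0.0, 0.0, 1.0])))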

    Path space analysis for compositing shadows and lights

    The production of 3D animated motion pictures now relies on physically realistic rendering techniques that simulate light propagation within each scene. In this context, 3D artists must leverage lighting effects to support staging, deploy the film's narrative, and convey its emotional content to viewers. However, the equations that model the behaviour of light leave little room for artistic expression. In addition, editing illumination by trial and error is tedious due to the long render times that physically realistic rendering requires. To remedy these problems, most animation studios resort to compositing, where artists rework a frame by combining multiple layers exported during rendering. These layers can contain geometric information about the scene, or isolate a particular lighting effect. The advantage of compositing is that interactions take place in real time and are based on conventional image-space operations. Our main contribution is the definition of a new type of layer for compositing, the shadow layer. A shadow layer contains the amount of energy lost in the scene due to the occlusion of light rays by a given object. Compared to existing tools, our approach presents several advantages for artistic editing. First, its physical meaning is straightforward: when a shadow layer is added to the original image, any shadow created by the chosen object disappears. In comparison, a traditional shadow matte stores the fraction of occluded rays at each pixel, grayscale information that can only serve as an approximation to guide compositing operations. Second, shadow layers are compatible with global illumination: they record the energy lost from secondary light sources, scattered at least once in the scene, whereas current methods only consider primary sources. Finally, we demonstrate that three different renderers overestimate illumination when an artist disables the shadows of an object; our definition corrects this flaw. We present a prototype implementation of shadow layers obtained from a few modifications of path tracing, the main rendering algorithm in production. It exports the original image and any number of shadow layers associated with different objects in a single rendering pass, with roughly 15% additional rendering time in scenes containing complex geometry and multiple participating media. Optional parameters are also offered to the artist to fine-tune the rendering of shadow layers.
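
    The additive property stated above (adding the shadow layer to the original image removes the chosen object's shadows) can be made concrete with a small compositing sketch. This is an illustration of the described behaviour with hypothetical array names, not the authors' renderer or compositing code.

    # Sketch of shadow-layer compositing: the layer stores, per pixel, the
    # radiance lost because a chosen object blocks light paths, so adding it
    # back removes that object's shadows. Array names and values are illustrative.
    import numpy as np

    height, width = 540, 960
    original = np.random.rand(height, width, 3)                     # beauty pass (radiance)
    shadow_layer_pillar = np.random.rand(height, width, 3) * 0.1    # energy lost to "pillar"

    # Remove the pillar's shadows entirely: add the full layer back.
    no_pillar_shadow = original + shadow_layer_pillar

    # Artistic edits stay meaningful: lighten the shadow by 40%, or tint
    # the re-injected energy before adding it.
    lightened = original + 0.4 * shadow_layer_pillar
    warm_tint = original + shadow_layer_pillar * np.array([1.0, 0.9, 0.7])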

    Interactive Ray Tracing Infrastructure

    In this thesis, I present an approach to developing interactive ray tracing infrastructures for artists. An advantage of ray tracing is that it provides essential global illumination (GI) effects such as reflection, refraction and shadows, which are important for artistic applications. My approach relies on the massively parallel computing power of the Graphics Processing Unit (GPU), which can achieve interactive rendering by providing computation several orders of magnitude faster than conventional Central Processing Unit (CPU) based rendering. GPU-based rendering makes real-time manipulation possible, which is also essential for artistic applications. Based on this approach, I have developed an interactive ray tracing infrastructure as a proof of concept. Using this infrastructure, artists can interactively manipulate shading and lighting effects through the provided Graphical User Interface (GUI) with input controls. Additionally, I have developed a data-communication link between my ray tracing infrastructure and commercial modeling and animation software, which extends the level of interactivity beyond the infrastructure itself. The infrastructure can also be extended to build 3D dynamic environments that achieve any specific art style while providing global illumination effects. It has already been used to create an interactive 3D environment that emulates a given artwork with reflections and refractions.
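
    The global illumination effects listed above (reflection, refraction, shadows) all come down to tracing secondary rays, which is the per-ray work a GPU parallelises across many pixels. The following is a minimal CPU-side sketch of a hard-shadow test against a sphere, written only to illustrate that per-ray work; the scene values are assumed and the code is not part of the thesis infrastructure.

    # Minimal sketch of the per-ray work a GPU ray tracer parallelises:
    # intersect a shadow ray with a sphere to decide if a point is lit.
    # Scene values are illustrative assumptions.
    import numpy as np

    def hit_sphere(origin, direction, center, radius):
        """Return the nearest positive ray parameter t, or None if missed."""
        oc = origin - center
        b = 2.0 * np.dot(oc, direction)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c                  # direction is unit length, so a = 1
        if disc < 0.0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 1e-4 else None

    def in_shadow(point, light_pos, blocker_center, blocker_radius):
        """Cast a shadow ray from the shaded point towards the light."""
        to_light = light_pos - point
        dist = np.linalg.norm(to_light)
        direction = to_light / dist
        t = hit_sphere(point, direction, blocker_center, blocker_radius)
        return t is not None and t < dist

    point = np.array([0.0, 0.0, 0.0])
    light = np.array([0.0, 5.0, 0.0])
    print(in_shadow(point, light, np.array([0.0, 2.0, 0.0]), 0.5))   # True: blocked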

    Designing Digital Art and Communication Tools Inspired by Traditional Craft

    Production of 3D animated short films in Unity 5: can game engines replace the traditional methods?

    In 3D animation cinema, the elements of a scene are created by artists using computer software. To generate the final result, the three-dimensional models must be converted (rendered) into two-dimensional images (frames) that will later be joined together and edited into a video format. 3D animation films have traditionally been rendered using pre-rendering engines, a time-consuming and expensive process that usually requires multiple computers rendering at the same time (render farms), and renders may need to be repeated if the results are not ideal. Videogames, on the other hand, are reactive applications where the player may take different courses of action that generate distinct results. In those cases, the engine must wait for the player's input before it calculates the following frames. To allow for fast calculations in real time, 3D game developers use game engines that incorporate real-time rendering methods, which can generate images much faster than the pre-rendering engines mentioned above. To generate a large number of frames per second, the entire scene must be optimized in order to reduce the number of necessary calculations. That optimization is achieved by using techniques, practices and tools that are not commonly used by animation cinema professionals. Because of this need for optimization, videogames have always had a lower graphic quality than animated films, where each frame is rendered separately and takes as long as necessary to obtain the required result. Physically Based Rendering (PBR) is one of the methods incorporated by some rendering engines to generate physically accurate results, using calculations that follow the laws of physics as they apply in the real world and creating more realistic images that require less effort, not only from the artist but also from the equipment. The incorporation of PBR in game engines allowed high-quality graphics to be generated in real time, gradually closing the visual quality gap between videogames and animated cinema. Recently, game engines such as Unity and Unreal Engine started to be used, mostly by the companies that created the engines as a proof of concept, for rendering 3D animated films. This could lead to changes in the animation cinema production methods of studios that, until now, have used traditional pre-rendering methods.
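
    The "physically accurate results" that PBR refers to come from evaluating an analytic reflectance model at every pixel, fast enough for real time. Below is a hedged sketch of one common choice, a Cook-Torrance specular term with a GGX distribution, written in plain Python for readability; the shading models actually used by Unity and Unreal Engine differ in their details and run as GPU shaders.

    # Illustrative Cook-Torrance / GGX specular term, the kind of physically
    # based model real-time engines evaluate per pixel. Plain Python for
    # readability; real engines implement this in GPU shader code.
    import math

    def ggx_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0=0.04):
        a2 = (roughness * roughness) ** 2

        # D: GGX normal distribution (how many microfacets face the half vector).
        d = a2 / (math.pi * ((n_dot_h * n_dot_h * (a2 - 1.0) + 1.0) ** 2))

        # G: Smith / Schlick-GGX geometric shadowing-masking term.
        k = (roughness + 1.0) ** 2 / 8.0
        g = (n_dot_l / (n_dot_l * (1.0 - k) + k)) * \
            (n_dot_v / (n_dot_v * (1.0 - k) + k))

        # F: Schlick Fresnel approximation.
        f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

        return (d * g * f) / (4.0 * n_dot_l * n_dot_v + 1e-7)

    # Example: a fairly rough dielectric, lit and viewed near the surface normal.
    print(ggx_specular(0.9, 0.8, 0.95, 0.7, roughness=0.5))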